EliteLux

Was the story about the watch in the hood of an old man in 1968 true? Exploring the facts and the fiction.

Alright, let me tell you about this little thing I worked on recently. Had this phrase stuck in my head: watch in the hood of an old man in 1968. Don’t ask me where it came from, just popped in there one day while I was tinkering with some image generation stuff. Seemed specific enough to be a challenge, you know? So, I figured, why not try and make it?

Getting Started – Firing up the AI

So, I pulled up one of those AI image tools – you know, the ones everyone’s talking about: Stable Diffusion, Midjourney, all that jazz. I mostly mess around with a local setup – gives me more control, or so I tell myself. Anyway, I typed in the basic prompt: old man looking at watch, leaning on car hood, 1968 photograph style. Something simple to kick things off.

First results? Meh. Got an old man, yeah. Got a car, sometimes. But the ‘hood’ part? The AI kept getting confused. Sometimes it was the car’s hood, sometimes the man was wearing a hoodie sweatshirt, which felt kinda wrong for the ’68 vibe I wanted. And the watch? Often just a blur on the wrist, or sometimes, hilariously, the AI would try to stick a giant clock face onto the car hood itself. Gotta love the literal interpretations sometimes.

The Struggle is Real – Refining the Image

Spent a good hour or two just tweaking words. Added stuff like detailed vintage wristwatch, worried expression, looking down at wrist. Tried specifying the car type, like 1960s American sedan, hoping that would ground the scene better. Also threw in negative prompts like -hoodie, -jacket hood, -clock on car to steer it away from the nonsense.
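Since I kept shuffling the same fragments around, here’s a rough sketch of how you might organize that kind of prompt tweaking. The build_prompt helper is purely illustrative – it’s not part of any image tool’s API – but most tools accept a positive prompt and a negative prompt as plain comma-separated strings, which is all this builds.

```python
# Hypothetical helper for assembling prompt strings; not any tool's API.
def build_prompt(subject, details, negatives):
    """Join prompt fragments into a (positive, negative) string pair."""
    positive = ", ".join([subject] + details)
    negative = ", ".join(negatives)
    return positive, negative

positive, negative = build_prompt(
    subject="old man looking at watch, leaning on car hood",
    details=[
        "detailed vintage wristwatch",
        "worried expression",
        "looking down at wrist",
        "1960s American sedan",
    ],
    negatives=["hoodie", "jacket hood", "clock on car"],
)
# negative is now "hoodie, jacket hood, clock on car"
```

Keeping the fragments in lists like this makes it painless to swap one detail in or out between generations instead of retyping the whole prompt.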

Got closer. Started getting images where the man was actually leaning on a car hood, sometimes even looking towards his wrist. But getting that watch visible and clear? That was the real pain. The AI seemed to struggle with hands and small details like watches back then (seems better now, but this was a bit ago). It often looked like a smudge, or the hand was mangled. You know how AI hands can be sometimes… nightmare fuel.

Getting the 1968 Feel

This part was actually kinda fun. I focused on adding terms like black and white, Kodachrome, Ektachrome, film grain, slightly faded. Tried to get that specific look of old photos. Sometimes it worked too well, making the image blurry. Other times it nailed the atmosphere. The cars started looking more period-correct, the clothes too. Less hoodies, more fedoras or just regular jackets.
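To compare those period looks side by side, the simplest trick is sweeping a few style suffixes over the same base prompt. This little loop is just my own habit for batching variants, not a feature of any particular tool:

```python
# Sweep film-style suffixes over one base prompt to compare looks.
base = "old man looking at watch, leaning on car hood, 1968"
styles = [
    "black and white, film grain",
    "Kodachrome, slightly faded",
    "Ektachrome, film grain",
]

variants = [f"{base}, {style}" for style in styles]
for v in variants:
    print(v)  # feed each variant to the image tool in turn
```

Generating one image per variant from the same seed makes it obvious which style term is actually doing the work.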

I remember thinking about my own old family photos, trying to capture that specific light, that texture. It’s funny how you try to explain a feeling, a ‘vibe’, to a machine using just words. It’s like talking to a very literal-minded alien sometimes.

Finally Getting Somewhere

After a whole lot of generating, discarding, and tweaking, I started getting results that felt… right. Or at least, close enough. Found one image where the composition was good: an older guy, maybe late 50s or 60s, leaning against the hood of a classic-looking car. He wasn’t staring directly at the watch, but his gaze was down, towards his wrist, and you could actually make out something there that looked like a watch. It wasn’t perfect – crystal-clear detail wasn’t quite there – but the suggestion was strong. The overall mood, the lighting, the slight graininess: it felt like 1968. It had that heavy feeling, like the guy had something on his mind.

Was it exactly what was in my head? Nah, never is, right? But the process itself, wrestling with the tool, trying to translate that weird phrase into pixels, that was the interesting part. It’s like chipping away at stone, you don’t always know exactly what you’ll end up with. This time, I got this moody shot of a moment that feels like it could have happened. Good enough for a practice run.
