INTRO

Hey there, DTC operators. Every week I dig through the AI noise and hype to bring you real, actionable, pragmatic AI information. While GPT-5 hit last week, that's not actually why this week was exciting.

  • Everyone is losing their mind over GPT-5, and we find out why

  • We walk through a newly rolled-out AI image tool that generates people amazingly well (and no one is talking about it)

Overhyped

Everyone Loses Their Mind Over GPT-5

Unless you’re living in a fallout shelter hiding from our impending AI overlords, you know that last week, GPT-5 dropped. Most people seem to be losing their minds over it - some positively, some negatively.

Before we get to how it’s shaping up, one thing that was immediately obvious to me was Jony Ive’s involvement. The whole event felt very Apple, right down to the sudden naming of GPT-5, GPT-5 Mini, and GPT-5 Nano. OpenAI clearly has a soft spot for the iPod.

The Good: GPT-5 Is Better At Analysis

At Raleon, we released our updated DTC AI Leaderboard, which rated all the most recent models against DTC-specific tasks (you can read about our process if interested).

GPT-5 consistently outperformed all other models on pure reasoning tasks. Two examples:

I wanted to see how GPT-5 would handle analyzing performance marketing data. I gave each model context about the brand and historical data for Q3, then asked it to analyze why Q3 ROAS dropped 23% despite traffic being up 15%.

Claude Sonnet 4 gave me surface-level, obvious insights and recommendations like “your website load speed may need to be optimized”. My other favorite was “try some different creative”.

GPT-5 actually connected the dots between some product changes, shopping behaviors, and attribution.
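If you want to run this kind of head-to-head yourself, here’s a minimal sketch of how I’d structure the prompt using OpenAI’s Python SDK. The brand context and numbers are made-up placeholders for illustration, not real client data:

```python
# Minimal sketch of the ROAS-analysis test. All metrics and brand context
# below are made-up illustration values; swap in your own exports.
def build_roas_prompt(brand_context: str, q3_metrics: dict) -> str:
    """Assemble the prompt: context first, raw numbers second, then a
    pointed question so the model has to connect the dots."""
    metric_lines = "\n".join(f"- {k}: {v}" for k, v in q3_metrics.items())
    return (
        f"Brand context: {brand_context}\n\n"
        f"Q3 performance data:\n{metric_lines}\n\n"
        "Q3 ROAS dropped 23% despite traffic being up 15%. Analyze why, "
        "connecting product changes, shopping behavior, and attribution. "
        "Avoid generic advice like 'improve site speed'."
    )

prompt = build_roas_prompt(
    brand_context="DTC skincare brand, ~$2M/yr, Meta + Google ads",
    q3_metrics={
        "ROAS": "2.1 (down 23% QoQ)",
        "Sessions": "+15% QoQ",
        "AOV": "$48 (down from $61)",
        "CVR": "1.9% (flat)",
    },
)
print(prompt)

# To actually send it (requires `pip install openai` and an API key):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-5",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(resp.choices[0].message.content)
```

The trick is putting the raw numbers in front of the model and explicitly banning the lazy answers - that’s what separated the two models in my test.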

Second, I tested GPT-5 in our new Flow Analyzer Agent, which is coming out next week. Raleon’s Flow Analyzer Agent performs a highly detailed analysis of your flow performance, compares it with other data in Raleon, and then gives you specific tasks to improve performance. Given the level of reasoning needed, I tested GPT-5 and Claude Sonnet 4 against a pet brand’s “Welcome Flow”.

Claude Sonnet 4 and 3.7 generated very usable results with a few surface-level suggestions.

Where I was surprised was with GPT-5: it got more detailed and seemed to have a better contextual understanding. Here are the suggestions from GPT-5 driving our agent through the same Welcome Flow analysis.

The above came after it did a full analytical breakdown of the flow and where subscribers are dropping off.

The challenge is that while GPT-5’s reasoning has improved, it’s at the cost of its creativity.

The Bad: GPT-5 Writes Like a Corporate Robot

GPT-5 is noticeably worse at copywriting and creative work. Not only are em-dashes still around, but it has lost all personality (and this is where some of the broader outrage comes from). Here’s an example from 12 subject line tests for a skincare brand:

GPT-5:

"Unlock Your Skin's Natural Radiance with Our Scientifically-Formulated Solutions" 🤮

Claude Sonnet 3.7:

“Your skin at 3am vs 3pm (and how to fix it)”

I know. That Sonnet subject is money. That belongs in an ad.

Going back over the launch, it’s clearer why GPT-5 is in fact not as creative.

“GPT‑5 is the strongest coding model we’ve ever released. It outperforms o3 across coding benchmarks and real-world use cases, and has been fine-tuned to shine in agentic coding products like Cursor, Windsurf, GitHub Copilot, and Codex CLI. GPT‑5 impressed our alpha testers, setting records on many of their private internal evals.”

OpenAI @ GPT-5 Launch

The focus on reasoning and coding is likely an attempt to stop Anthropic from continuing to steal enterprise market share (two years ago OpenAI was at 50%; they’re now at 25%).

The Ugly: Slooooooow

Using GPT-5 in Regular and Thinking mode is like getting stuck behind a car going 10 miles under the speed limit. Find the gas pedal already.

GPT-5 Thinking is so slow it’s actually killing my workflow. On average, GPT-5 Thinking takes five minutes to respond to a prompt. Worse, if that result wasn’t helpful, the next attempt costs another five minutes. The result is I’m spending 15 - 20 minutes on tasks with Thinking.

Compare that to o3 (which I loved), which averaged about 60 seconds per response. That is a much more reasonable back-and-forth workflow.

I am sure that, like o3, its speed will improve over time. In the meantime, though, I’ll be chatting with Claude to help get work done. GPT-5 is still worth your time, just know it’s going to take some adjusting (that’s why I linked OpenAI’s new prompt guide in the rabbit holes).

Actually Useful

This AI Image Tool Recreates Human Models (And Nobody’s Talking About It)

Last week, while everyone was arguing about whether GPT-5 was actually better than 4o, Ideogram quietly dropped the biggest image generation breakthrough for DTC brands since 4o image generation: Ideogram Character.

I spent six hours testing it. I believe it solves one of the biggest pain points in creating great images: taking an existing human model your DTC brand already uses and creating variations of them that actually look real.

How Does Ideogram Character Work?

First you take a photo of your model and upload it to Ideogram. Here’s one I pulled from Jones Road Beauty’s website.

And you’re done. Ideogram Character can then generate that exact same person across different scenes, poses, and settings while keeping their face, hair, and identity locked in.

But there is a catch. I’ve found some of the “magic” in getting Character to work well is in how you apply the “mask”.

The Magic Is in Mask Setup

Here's the part nobody's explaining well: Ideogram Character lets you edit masks to control what stays locked and what can change.

When you first upload a photo to Character, it automatically applies a very tight mask to the image. It’s Character’s way of trying to home in on the face (to produce the true character).

Want to keep the face but change the hair? Adjust the mask. Want to preserve skin tone but allow different expressions? Fine-tune the identity controls. I spent 45 minutes just playing with mask settings. It’s like having a slider between “clone this exact person” and “why does my skincare model now look like she sells essential oils at 2am on QVC?”

Thankfully, Character made it easy to edit the mask. Here’s how I adjusted the default mask for our photo above.

Making the mask adjustments is very simple - just click and drag. You can see I widened the mask to make sure it caught hand details and hair. Below is another example from a test with a True Classic model. I unmasked a lot more because I really wanted the full identity of the model.

Pro tip: Upload front-facing or three-quarter view shots with good lighting. I tried a moody side-lit photo of a skincare model and it generated her evil twin instead.

Generate a New Character Image

The last step is just prompting a new character image. The prompt makes some difference, but you honestly don’t need to be a prompting genius to make it work.

For instance, I used the simple prompt: “Put our character in a minivan, but looking happy” (yes, I did imply that people who drive minivans wish they weren’t).

From a basic prompt like that, Character produces an amazingly high-quality result.

For the particularly astute, you’ve noticed that the car the character is in is not, in fact, a minivan. That’s OK - I chalk that up to my intentionally lazy prompting. When I was more specific in other tests, the surroundings were much better (like the woman in a bathroom, next to a sink, below).
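If you’d rather script this than click through the UI, Ideogram also offers a developer API. Fair warning: the endpoint and field names in this sketch are my assumptions for illustration, so check Ideogram’s current API docs before wiring anything up.

```python
# Sketch of scripting Character-style generations instead of using the UI.
# NOTE: the endpoint path and field names below are ASSUMPTIONS for
# illustration only; consult Ideogram's API documentation for the real ones.
import json

API_URL = "https://api.ideogram.ai/generate"  # assumed endpoint

def build_character_request(prompt: str, reference_image: str) -> dict:
    """Pair a scene prompt with the reference photo of your model."""
    return {
        "prompt": prompt,
        "character_reference": reference_image,  # hypothetical field name
        "rendering_speed": "DEFAULT",            # hypothetical field name
    }

payload = build_character_request(
    prompt="Put our character in a minivan, smiling, natural light",
    reference_image="jones_road_model.jpg",
)
print(json.dumps(payload, indent=2))

# To send it (requires `pip install requests` and your API key):
# import requests
# resp = requests.post(API_URL, headers={"Api-Key": "YOUR_KEY"},
#                      json=payload, timeout=60)
# print(resp.json())
```

Scripting it like this is how you’d batch out a dozen scene variations from one reference photo instead of prompting them one at a time.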

Where This Actually Helps DTC Brands

Character is going to help brands reduce their creative costs. Instead of spending $15k on a photoshoot to get your human model into 47 different scenarios, you can shoot once for less and let AI handle the rest. There are a few places where this can be immediately useful today:

Static ad creative: Instead of needing a model in 90 different poses, you can capture a few, and then easily use Character to try out different moods and settings in your creative like ads or landing pages. Test different ads to see what resonates with the audience quickly before booking another shoot.

Landing pages: Take that same model from the static ad, and create a nice variation of them (or a few for A/B tests) on a landing page to keep it fresh but consistent. 

Email campaigns: Generate your model in different seasonal scenarios from one photoshoot. I tested this with a jewelry brand - same model wearing their pieces in coffee shops, offices, and date nights.
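For the skeptics, here’s the back-of-envelope math on that cost claim. Every number below is an assumption I made up for illustration - your shoot and generation costs will differ:

```python
# Back-of-envelope creative-cost comparison. All figures are illustrative
# assumptions, not quotes from any vendor or brand.
full_shoot_cost = 15_000      # traditional shoot covering every scenario
scenarios_needed = 47
base_shoot_cost = 3_000       # assumed: one smaller shoot for reference photos
cost_per_ai_image = 0.10      # assumed per-generation cost

traditional_per_asset = full_shoot_cost / scenarios_needed
ai_total = base_shoot_cost + scenarios_needed * cost_per_ai_image
ai_per_asset = ai_total / scenarios_needed

print(f"Traditional: ${traditional_per_asset:.2f} per asset")
print(f"Shoot once + AI: ${ai_per_asset:.2f} per asset")
```

Even if my assumed numbers are off by a wide margin, the per-asset gap is large enough that the shoot-once-and-extend approach wins.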

Reality Check

While Ideogram Character is powerful, it has its limitations. For instance, if you look at the “goop” shot on the right above, Ideogram doesn’t perform well at keeping products consistent or swapping them in. For that, your best bet is to pair Ideogram Character with Flux Kontext.

But for the first time, I can see a clear path where DTC brands do one comprehensive photoshoot with their key human models and products, then extend that investment across months of campaigns using AI. I’d predict that in less than 12 months, we’ll have AI models that fully enable this, and new SaaS apps with the right workflow to bring it all together.

This Week’s Rabbit Holes

And that's it for this week's edition. GPT-5 writes like a corporate committee while Ideogram just cracked part of the creative consistency problem that's been plaguing DTC brands since AI image generation started.

Thanks for reading!

Have a topic you want me to cover specifically or a question? Reply to this email! I read every reply.
