Operational Convergence
8 min read · 12 May 2025

LoRAs Aren't Dead: Why Fine-Tuning Still Matters in the GPT-4o Landscape

GPT-4o has shifted the AI world, but brand personalisation, refinement, and control haven't changed. LoRAs aren't just surviving - they're more essential than ever.

James Pierechod

Founder, Visual Content Consultancy

TL;DR

  • GPT-4o improves consistency at a technical level but can't capture your brand's unique visual style without specific training - LoRAs fill that gap
  • LoRAs are evolving from technical fine-tuning into modular design libraries - the creative DNA of your brand in the age of AI-assisted creation
  • Foundation models provide the canvas, LoRAs provide the distinctive style - they're complementary, not competing

The rumours of their death are greatly exaggerated

I’ve seen a few people call out the “death of LoRAs” due to the launch of GPT-4o image generators and the availability of their new API. The AI world just shifted, but brand personalisation, refinement, and resultant expectations haven’t.

The launch of GPT-4o has been nothing short of revolutionary. With its remarkable consistency in character generation and improved visual capabilities, many are questioning whether custom fine-tuning approaches like LoRAs (Low-Rank Adaptations) still have a place in our production AI toolbox. As someone who’s spent years building private AI infrastructures for brands and agencies, I can confidently say: LoRAs aren’t just surviving - they’re more essential than ever.

Why branded AI still needs customisation post GPT-4o

Let’s address the elephant in the room: GPT-4o has dramatically improved consistency in visual generation. This same technical advancement that’s made consistent character generation possible is now extending into motion and video. It’s impressive, but it doesn’t make custom fine-tuning obsolete - quite the opposite.

Warning - we’re about to get geeky.

Brand specificity still requires dedicated training

While general models like GPT-4o excel at broad capabilities, they can't capture your brand's unique visual style, content specifics, and context without specific training.

When working with production studios integrating AI, I’ve repeatedly seen general models fail to capture nuanced brand guidelines consistently across thousands of visual assets. Even with perfect prompting, a general model can’t match the precision of a single ControlNet or LoRA trained specifically on your brand assets.

  • Assess the current visual identity guidelines
  • Identify the unique visual elements that define the brand
  • Create a structured dataset of your brand-approved content
  • Consider this the foundation of your LoRA training strategy
  • Refresh and revise this dataset regularly
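As a concrete sketch, the dataset steps above can start as something as simple as a JSONL manifest of brand-approved assets and captions. Everything here is illustrative - the `acmebrand` trigger token, file paths, and field names are assumptions, not any particular trainer's required format:

```python
import json

def build_brand_dataset(assets, out_path="brand_dataset.jsonl"):
    """Write a JSONL manifest of brand-approved assets for LoRA training.

    Each asset is a dict with an image path, a caption, and tags for the
    brand elements it demonstrates (palette, typography, composition, ...).
    """
    records = []
    for asset in assets:
        records.append({
            "image": asset["image"],
            # A trigger token lets the trained LoRA be invoked explicitly.
            "caption": f"acmebrand style, {asset['caption']}",
            "tags": asset.get("tags", []),
            "approved": True,  # only brand-approved content goes in
        })
    with open(out_path, "w") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
    return records

assets = [
    {"image": "assets/hero_01.png",
     "caption": "product hero shot on teal background",
     "tags": ["palette", "composition"]},
    {"image": "assets/social_02.png",
     "caption": "lifestyle photo, soft natural light",
     "tags": ["lighting"]},
]
records = build_brand_dataset(assets)
```

The "refresh and revise" step then becomes regenerating this manifest as new approved assets land, rather than retraining from an ever-staler snapshot.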

Private data domains remain critical for competitive advantage

The technical shift powering GPT-4o’s improved consistency doesn’t eliminate the need for private, secure AI environments tailored to branded content. Piling everything into a single LLM or AI agent and expecting it to pass IT or cybersecurity review is naive.

  • Map and categorise the proprietary content
  • Consider which content types need specialised generation parameters (think like a 3D modeller)
  • Begin building a private vector database of your key assets
  • Integrate these with defined prompting, multi-agent RAG layers, closed-loop MCPs, and semantic databases (aligned to brand context - like seasonality, audience specificity, etc.)
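To make the vector-database idea concrete, here's a minimal in-memory sketch with metadata filters for brand context such as seasonality. A production setup would use a managed vector database and real embedding models; the asset names, two-dimensional embeddings, and fields here are purely hypothetical:

```python
import math

class BrandVectorStore:
    """Toy in-memory vector store over brand assets (illustrative only)."""

    def __init__(self):
        self.items = []  # list of (embedding, metadata) pairs

    def add(self, embedding, metadata):
        self.items.append((embedding, metadata))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    def query(self, embedding, top_k=3, **filters):
        # Metadata filters encode brand context: seasonality, audience, etc.
        candidates = [
            (self._cosine(embedding, emb), meta)
            for emb, meta in self.items
            if all(meta.get(k) == v for k, v in filters.items())
        ]
        candidates.sort(key=lambda c: c[0], reverse=True)
        return [meta for _, meta in candidates[:top_k]]

store = BrandVectorStore()
store.add([1.0, 0.0], {"asset": "summer_campaign.png", "season": "summer"})
store.add([0.0, 1.0], {"asset": "winter_campaign.png", "season": "winter"})
hits = store.query([0.9, 0.1], top_k=1, season="summer")
```

The metadata filter is the important design choice: it's what lets the RAG layer retrieve only the assets that match the current campaign's brand context, rather than whatever is merely visually similar.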

Pipeline integration demands customised approaches

For serious production pipelines (I’m talking about global brands with internal and agency capacities), the integration of AI into existing workflows requires flexible database solutions that a single general model won’t provide.

Video workflows don’t work exactly the same way as photography, and CGI doesn’t work like 2D animation. There’s huge crossover between all of these - but each relies on different stages to get there.

For video production teams, I’ve worked on defining these differences and factoring out the common features, so that integrating autoregressive models with virtual production technologies creates innovative “mixed media” marketing solutions that wouldn’t be possible with off-the-shelf tools. These integrations require specialised fine-tuning to match existing production pipelines.

  • Audit the current content production workflow per asset classification
  • Identify where automation could reduce or remove bottlenecks
  • Be open to restructuring the traditional approach (photography to digital scanning for example)
  • Evaluate which steps require specialised model training
  • Consider a hybrid approach that leverages both general models and custom LoRAs
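The hybrid approach in the last step can start as a simple routing layer: if an asset class has a trained LoRA, apply it on top of the foundation model; otherwise fall back to the general model alone. The registry entries and LoRA names below are hypothetical:

```python
def route_generation(request, lora_registry):
    """Pick a generation path for a request.

    Uses a custom LoRA when the asset class has one registered,
    falling back to the bare foundation model otherwise.
    """
    lora = lora_registry.get(request["asset_class"])
    return {
        "model": "foundation",
        "lora": lora,  # None means "general model, no brand adapter"
        "prompt": request["prompt"],
    }

# Hypothetical registry mapping asset classes to trained brand LoRAs.
lora_registry = {
    "product_photography": "acme-product-v3",
    "social_video_frames": "acme-motion-v1",
}

job = route_generation(
    {"asset_class": "product_photography",
     "prompt": "hero shot, teal background"},
    lora_registry,
)
```

A routing table like this also makes the audit step measurable: any asset class still falling through to the bare foundation model is a candidate for specialised training.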

The future of LoRAs - design and brand libraries for the AI age

Rather than making LoRAs obsolete, advancements in foundation models are transforming how we use this fine-tuning. Think about it this way - LoRAs are evolving into ‘modular design libraries’ that encode brand identity, style, seasonality, and tone across a consistent content generation pipeline.

These adaptable modules will become the standard for how brands maintain consistency across AI-generated content.

Think of LoRAs not as technical fine-tuning but as the creative DNA of your brand in the age of AI-assisted creation.

As someone deeply involved in designing and implementing these systems, I see LoRAs becoming more accessible and powerful, essentially functioning as the art direction layer in AI-generated content. They’re the bridge between powerful foundation models and the ability to control and repeat unique brand expression with automation.

The complementary relationship

The reality is that foundation models like GPT-4o and specialised fine-tuning techniques like LoRAs aren’t competing - they’re complementary. As I often tell my clients, foundation models provide the canvas, but LoRAs provide the distinctive style.

Fine-tuning is analogous to art direction in traditional creative processes. While GPT-4o improves consistency at a technical level, it doesn’t eliminate the need for a “marketeer’s eye” that LoRAs can encode. We’re adding fine-tuning to improve both the context and practical aspects of consistency.

What this means for your 2025 AI production strategy

The integration of AI into video production workflows is accelerating. For marketing leaders, CMOs, agency producers, and production companies, this creates both challenges and opportunities. Your 2025 AI strategy should:

  1. Categorise what you have properly - AI is great with unstructured and structured data, but artistic structure is different
  2. Invest in your brand’s AI foundation assets - use the structure to build a wider foundation
  3. Build modular LoRAs for different content types, products, NPD, seasonality, and campaigns
  4. Create private vector databases of your brand assets
  5. Develop hybrid workflows that leverage both general AI and custom models
  6. Refresh your models regularly - DON’T rely solely on previously generated data to train new models
  7. Think of AI as a creative partner, not a programme or app

The age of AI art direction is just beginning

The launch of GPT-4o doesn’t signal the death of LoRAs - it marks the beginning of a more sophisticated approach to fine-tuning. Just as we don’t use stock photography for all our visual content, we shouldn’t rely solely on general AI models for our creative production.

Hybrid production training is the key to controlling and implementing these frameworks effectively. If we rely solely on AI-generated data, we lose control of the context.

The real competitive advantage lies in how you blend powerful foundation models with your unique brand expression through targeted fine-tuning, with updates and refinement from hybrid production approaches. Video, photography, and CGI aren’t dead either - they’re used in conjunction with AI to amplify the messaging.

LoRAs and custom embedding frameworks aren’t just surviving in the GPT-4o era - they’re becoming the essential tools for brands that want to operate effectively in an AI-generated landscape.

Common questions


Does GPT-4o make LoRAs obsolete?

No. GPT-4o dramatically improves consistency in visual generation, but general models can't capture nuanced brand guidelines consistently across thousands of assets. LoRAs encode your specific brand identity, style, and context - they're the art direction layer that sits on top of foundation models.

What are LoRAs in practical terms?

Low-Rank Adaptations - lightweight fine-tuning modules that encode specific visual styles, brand guidelines, or content parameters. Think of them as modular design libraries that maintain brand consistency across AI-generated content. They sit between the foundation model and each generation as a stylistic control function.
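For the technically curious, the core of a LoRA is a low-rank update to a frozen weight matrix: W' = W + alpha * (B x A), where only the small matrices B and A are trained and shipped. A toy pure-Python illustration of that update (the numbers are arbitrary, and real LoRAs operate on much larger matrices inside attention layers):

```python
def apply_lora(W, A, B, alpha=1.0):
    """Return W + alpha * (B @ A), the low-rank update at the heart of LoRA.

    W is d_out x d_in (frozen base weights); B is d_out x r and A is r x d_in,
    with rank r much smaller than d_out or d_in, so only B and A need to be
    trained and stored.
    """
    d_out, d_in = len(W), len(W[0])
    r = len(A)
    delta = [[alpha * sum(B[i][k] * A[k][j] for k in range(r))
              for j in range(d_in)] for i in range(d_out)]
    return [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]

# Rank-1 update of a tiny 2x2 weight matrix.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.5]]   # d_out x r
A = [[1.0, 1.0]]     # r x d_in
W_adapted = apply_lora(W, A, B)
```

Because the base weights W stay frozen, you can keep many B/A pairs - one per brand, season, or campaign - and swap them in and out, which is exactly what makes the "modular design library" framing work.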

How do LoRAs fit into a production pipeline?

They're part of a hybrid approach. Foundation models handle the broad capabilities, LoRAs handle brand-specific consistency, and private vector databases store your proprietary assets. Different content types (video, photography, CGI, 2D animation) need different pipeline stages but LoRAs provide the consistent brand layer across all of them.

What should my 2025 AI production strategy include?

Categorise your existing assets properly, invest in brand AI foundation assets, build modular LoRAs for different content types and campaigns, create private vector databases, develop hybrid workflows that leverage both general AI and custom models, refresh regularly, and treat AI as a creative partner - not a programme.

Want to discuss this?

If this resonates with a challenge you're facing, let's talk.

Book a conversation