The lawsuit we’ve all been waiting for
The lawsuit we’ve all been waiting for has finally arrived.
Not the reaction you might expect from me - but this is a key moment for the creative and AI communities. Let’s unpack why.
Disney and Universal have filed what could be the most significant visual AI copyright case to date, suing MidJourney for what they’re calling a “bottomless pit of plagiarism”. Well, a cursory scroll through the gallery section might indicate why.

As someone who’s been working commercially in GenAI for years - and can STILL remember MidJourney V1 - I’ve been talking about this since late 2022, and I’ll tell you this for a fact: it was inevitable.
The platform’s evolution from early experimental outputs to today’s sophisticated image generation has been amazing to witness. The creative possibilities it unlocks are genuinely awe-inspiring. I’ve seen it spark ideas, inform new strategic directions, drive huge personal innovations, and accelerate creative exploration in ways that traditional media, research, inspiration, and education simply couldn’t ever match. I am grateful for MidJourney.
Fast forward to 2025
MidJourney is a hugely established entity now. Its creative flexibility, adaptability, and even control have been extremely useful for creatives the world over.
This is inspiration on a unique and previously unseen level. Its discovery, keyword analysis, stylistic illustration, directorial references, and creative ‘genre’ visualisation are like NOTHING we’ve ever been able to see or use before. Ever.
But with this ubiquity (trust me, “every creative knows MidJourney”) comes risk. And the risk extends beyond obvious character reproduction. Legal teams on both sides of the matter have been chiming in saying this isn’t a “slam dunk” for either side, with “terms of service provisions and basic fair use analysis” requiring this landmark court resolution. I doubt this will be resolved quickly, and this leaves users in an uncertain position.
But in commercial content generation, uncertainty is not a sustainable strategy.

This distinction between creative exploration and commercial deployment is exactly what this lawsuit will help clarify. MidJourney has democratised access to sophisticated image generation, but democratisation of tools doesn’t automatically democratise the legal frameworks governing their commercial use.
For agencies, brands, and production companies, this isn’t just another tech industry lawsuit. It’s a fundamental shift that will redefine how creatives create, protect, and commercialise visual content in an AI-first world.
The $300M question - what this lawsuit really means
The numbers tell a story that should concern every content creator. MidJourney generated $300 million last year alone (yep, £236m), and is floating a video service launch this month.

Meanwhile, Disney’s legal team isn’t mincing words: “piracy is piracy, and the fact that it’s done by an AI company does not make it any less infringing”.
This case centres on fundamental questions I’ve been raising in every AI implementation discussion: where does inspiration end and infringement begin? The lawsuit alleges MidJourney produces images that are essentially copies of copyrighted characters - Darth Vader, Elsa, Spider-Man - “just in new locations or with a new background. These aren’t transformed in a creative or imaginative way”.
But here’s what makes this different from previous copyright battles: the scale and speed of AI generation. Traditional infringement cases involve individual instances. These AI platforms can generate millions of infringing images at unprecedented speed and scale. How do protection and precedent deal with this new speed? And what about volume?
The Getty lawsuit influence
Getty Images has been hugely proactive in defending artists’ rights. They banned AI-generated uploads in 2022 and created a “socially responsible” image generator that rewards artists with micro-licensing payments. On March 15th 2025 they submitted their own RFI response to the White House on Safeguarding Creativity: Respecting IP in the US.
They’ve been vocally suing Stability AI in the landmark 2023 case - Getty vs. Stability AI - for training its Stable Diffusion model on over 12 million Getty images without permission or compensation. Getty alleged that Stability AI ignored licensing options.
Stability argues that training AI on freely scraped images is “fair use” under copyright law. They and other AI companies are pushing for legal rulings that would safeguard AI development, citing national security and economic benefits. They claim that paying artists for training data would slow innovation and put them at a disadvantage compared to countries like China with less strict copyright laws - which is 100% true. This is what we’ve all been saying for years.
It’s like Google’s defence when they scanned entire book libraries: ‘We’re not reproducing the books, we’re creating a searchable index that helps people discover content.’ Stability AI was essentially arguing the same - “We’re not copying images, we’re creating a statistical model that helps generate new content.”
Google won, by the way, because they were able to prove that the books were not being published or released for free, and that these books were being ‘discovered’ by new readers and the attribution was heading back to them. They also said they WEREN’T selling a service that would create new books from the collection. However, the underlying technology for semantic understanding of how these books were ‘built’ - the keywords, long-tail and short-tail language and its relationships - was (in part) a foundation for how Google works, and supported their commercial dominance for decades.

Getty Images’ own CEO Craig Peters states that fighting AI copyright battles is “extraordinarily expensive.” Getty has spent millions on a single case against Stability AI. Due to the high costs, Getty won’t be able to pursue every infringement.
If we want to look at more like-for-like “content generation” cases, how about Google again, with YouTube? They were sued by Viacom back in 2007 over liability for copyrighted content appearing on the site. YouTube’s defence was that they couldn’t monitor every upload and relied on takedown notices.
But what if YouTube had been training an AI on copyrighted videos to generate ‘new’ content? That’s essentially what Stability did - they didn’t just host content, they used it to train a generative model.
And this is part of the key reason for Disney’s action here, and why this is landscape-defining in legislation, precedent, and future “appropriate” equitable control.
What’s beyond a ‘fair use’ claim? An AI crossroads
What’s fascinating is Disney’s nuanced position. They’re not anti-AI - they’re “optimistic about how AI can be used responsibly as a tool to further human creativity”. This distinction is crucial for understanding the future landscape.
The entertainment industry has already embraced AI (well, sort of). In VFX, AI has been a staple in MANY workflows for the past 15+ years. Look at rotoscoping, de-aging, upscaling, relighting, colour grading, tracking.
What’s the difference? Controlled, licensed, and owned enhancement and implementation.
In the production industry we’ve known this for years. No visible brands, no obvious product placements, no brand logos that influence or subconsciously detract from the demographic fit for the content.
But this doesn’t mean no AI.
This aligns perfectly with what I’ve been advocating and applying as a consultant: moving away from “platform-specific AI generation” and towards ownable, modular frameworks.
As I’ve always said, “we’ll see brands, agencies, and production teams lean further into AI systems (not just tools) - utilising AI architecture across more ownable and attributable models”.
What are ‘owned frameworks’ powered by AI?
Here’s what we need to do to deploy visual AI frameworks.
Transition from third-party AI platforms to controlled environments
Start by moving away from the void that is “unattributable provenance” - places where our control of the output is driven only by prompt, visual style, or simplistic training personalisations. There, we are using more of the stylistic choices of the underlying model than of the brand or artist.
It’s like building a brand, campaign, or style solely on royalty-free stock content or templates. This might be licensable and easy, but it’s not unique or distinctive, because everyone else is using the same stuff.
Develop proprietary training datasets with clear licensing provenance
If you’re using this content commercially, you will NEED more control - lighting, compositions, campaign assets. Imagine this phase as ‘living brand guidelines’. This is called “fine-tuning”.
The more control we can achieve when generating content consistently with the style of YOUR brand vs someone else’s, the more effective this will be in the long run. This comes from using your OWN content as part of the base, licensing styles or content from stock sites, or creating and commissioning original content for the purpose of training these fine-tuned datasets.
These fine-tuned components sit between the open-source model and each generation, or act as a stylistic control function on an existing workflow. They are GREAT - iterative and dialable. It’s like being in a flexible photoshoot where you can ‘dial in’ the factors that define the photo - after the shoot.
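To make that tangible, here’s a minimal sketch in Python using the open-source diffusers library: an open base model with a brand-owned style adapter (a LoRA fine-tuned only on owned or licensed imagery) layered on top, and the strength of that style dialled per generation. The model name, adapter path, and settings are placeholders for illustration, not recommendations.

```python
# A minimal sketch: a brand-owned style adapter (LoRA) layered over an open base model.
# Model IDs, paths, and settings are illustrative placeholders only.
import torch
from diffusers import StableDiffusionPipeline

# 1. Load an openly available base model - the general-purpose "engine".
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example open-weight model
    torch_dtype=torch.float16,
).to("cuda")

# 2. Layer on a fine-tuned adapter trained ONLY on owned or licensed brand assets.
#    This is the dialable "living brand guidelines" component.
pipe.load_lora_weights("./brand_style_lora")  # hypothetical local adapter weights

# 3. Generate, dialling the strength of the brand style like exposure on a shoot.
image = pipe(
    prompt="hero product shot, studio lighting, brand colour palette",
    cross_attention_kwargs={"scale": 0.7},  # 0.0 = base model only, 1.0 = full brand style
).images[0]

image.save("campaign_concept_v1.png")
```

The dial is the point: the base model supplies the general engine, while the weighting on your own adapter controls how much of the output is ‘yours’ - and that weighting is something you can document per asset.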

Build hybrid workflows that maintain creative control and legal compliance
This is where creators, agencies, and production companies will sit in the new content pipeline. Creating workflows and frameworks that support AI generation, but in an ownable and controllable way. Augmenting their generations to specificity, and leading the creative vision using AI as an accelerator of the creation process.
We may not need as many production shoots, CGI worlds, or location visits to generate content, but the creativity and control remains in the hands of the creatives.
Risk assessment - what creators need to know NOW
If you’re currently using MidJourney or similar platforms for distributable client work, you need immediate risk mitigation strategies. This is where controlled iteration (super versioning) becomes critical. Instead of generating completely new images each time, you need AI systems that allow for the following (there’s a rough sketch of what this looks like in practice after the list):
- Documented creative lineage - capture every asset’s transparent development trail, dialled in using the workflows above. Then make sure this remains compliant with each market’s mandates on AI law. Expect these to adapt, and be ready for it
- Transformative validation - clear evidence of creative transformation. How has this changed from the inputs? Can you PROVE the originality?
- Rights verification - confirmed licensing for all source materials, using fine-tuning to define how much of the output derives from your own licensed material versus the base model
- Version control - ability to trace and modify creative decisions using a strong creative iteration controller driven by the AI workflow, to define what has changed and where
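To show what that documentation could look like in practice, here’s a rough sketch of a per-asset provenance record as a simple Python dataclass. The field names and structure are my own illustration, not an industry standard - the point is that lineage, rights, and version history travel with every generated asset, written automatically by the workflow rather than by hand.

```python
# A rough sketch of a per-asset provenance record - field names are illustrative,
# not an industry standard. Every generated asset carries its own documented
# lineage, rights trail, and version history.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib
import json

@dataclass
class SourceAsset:
    uri: str            # where the training/reference asset lives in your DAM
    licence: str        # e.g. "owned", "licensed-stock", "commissioned"
    rights_expiry: str  # ISO date, or "perpetual"

@dataclass
class ProvenanceRecord:
    asset_id: str
    base_model: str                      # the open-weight model and version used
    brand_adapter: str                   # the fine-tuned component applied on top
    prompt: str
    sources: list[SourceAsset] = field(default_factory=list)
    transformation_notes: str = ""       # human-written evidence of creative transformation
    created_at: str = ""
    parent_version: str | None = None    # previous asset_id, so lineage is traceable

    def fingerprint(self) -> str:
        """Stable hash of the record, usable as a version identifier."""
        payload = json.dumps(self.__dict__, default=lambda o: o.__dict__, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()[:12]

record = ProvenanceRecord(
    asset_id="campaign_concept_v1",
    base_model="open-weight-model-x.y",        # placeholder
    brand_adapter="brand_style_lora_2025_06",  # placeholder
    prompt="hero product shot, studio lighting, brand colour palette",
    sources=[SourceAsset("dam://shoots/2024/hero_01.tif", "owned", "perpetual")],
    transformation_notes="Composition, palette and lighting direction defined by the creative lead.",
    created_at=datetime.now(timezone.utc).isoformat(),
)
print(record.fingerprint())
```

A record like this is cheap to produce at generation time and near-impossible to reconstruct after the fact - which is exactly why it belongs inside the workflow, not as an afterthought.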
Your DAM needs upgrading
This is a guess. But I have worked on several DAMs (digital asset management systems) for small and global clients, and most are a mess - but also a treasure trove of potential training data to help with all of the phases above.
The traditional “15 HDDs and a subscription to a rubbish cloud asset storage facility” approach is not viable for the volume that’s coming.
You need an AI-integrated DAM ecosystem built to categorise content heritage, including automatic tagging, object detection, and future rights and licensing management.
These are part of the new “living brand guidelines” - and the fuel for more scalable and legal AI generation pipelines.
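As a flavour of what “automatic tagging” can mean on ingest, here’s a minimal sketch using zero-shot image classification (an open CLIP model via the Hugging Face transformers library). The tag list, threshold, and model choice are assumptions for illustration - in practice the taxonomy would come from your brand guidelines and rights framework, and the results would be written into each asset’s DAM metadata alongside its provenance record.

```python
# A minimal sketch of automatic tagging on DAM ingest, using zero-shot image
# classification (CLIP via Hugging Face transformers). The tag list, threshold,
# and model choice are illustrative assumptions, not recommendations.
from transformers import pipeline

tagger = pipeline(
    "zero-shot-image-classification",
    model="openai/clip-vit-base-patch32",  # example open model
)

# Candidate tags would normally come from your taxonomy / brand guidelines.
CANDIDATE_TAGS = [
    "product photography", "lifestyle", "studio lighting",
    "outdoor location", "illustration", "logo present", "people",
]

def tag_asset(image_path: str, threshold: float = 0.15) -> list[str]:
    """Return the candidate tags that score above a confidence threshold."""
    results = tagger(image_path, candidate_labels=CANDIDATE_TAGS)
    return [r["label"] for r in results if r["score"] >= threshold]

# Example ingest step: tags get written back into the asset's DAM metadata.
print(tag_asset("campaign_concept_v1.png"))
```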
The bigger picture - preparing for the post-lawsuit landscape
Whatever the outcome, this Disney case will establish new precedents for AI training data, fair use in the digital age, and corporate liability for AI-generated content.
This represents the inflection point I’ve been predicting. The era of “cobbling together an AI workflow” without strategic, analytical support is done. It’s not the time to dismiss AI either - it’s time to prepare for the adaptable future appropriately.
- Regular auditing of, and team education on, all AI-powered creative processes (careful with employees using shadow AI)
- Continuous legal reviews of current AI platform usage and terms of service
- Investment in controlled AI infrastructure rather than platform dependencies (all will change, don’t rely on one)
- Development of proprietary training approaches using owned or licensed content
- Enhanced documentation protocols for all AI-assisted creative work
The future belongs to those of us who treat AI as a sophisticated creative instrument requiring ongoing calibration, not just a magic black box that spits out content. The sweet spot is “augmenting” creative expertise while maintaining security, balance, and quality.
Beyond risk to competitive advantage
Let’s be serious here. This lawsuit isn’t just about legal compliance, ethical AI, or any other creative industry development - it’s about competitive positioning and revenue retention. Brands and agencies that proactively address these challenges will have significant advantages when the legal landscape settles.

The cat is truly out of the bag. The question isn’t whether AI will transform creative industries - it’s whether your content or business will still be effective in 6-12 months.
The tools that seemed revolutionary yesterday may become liabilities tomorrow. But an adaptable strategic framework built today will determine your competitive positioning for the future.