What Producers Need to Know About Generative AI and Legal Concerns


The 1+1 newsletter

by Nick Dorra

The Legal Considerations Around Using Generative AI Tools in Your Projects

I get asked a lot about the legal aspects of AI animation tools. So here’s a quick breakdown of the key considerations, and how they may impact your choice of tools or impose limits on your project.

DISCLAIMER: I am not a lawyer, and this article does not constitute legal advice. This article is intended to highlight issues you should research and/or discuss with your professional legal counsel.

Setting the Scene: AI Tools and Legal Frameworks

With many AI- and machine-learning-enabled tools, the legal framework is straightforward: you pay for the service, it has not been trained on questionable source material, and you have full rights to the results.

However, Generative AI tools (such as Sora, VEO2, Kling, etc.) are a different story. The legal landscape here is still evolving, which creates uncertainty. For example, many broadcasters currently don’t allow Generative AI tools in the productions they commission. This means you may be restricted to using these tools in projects you plan to distribute or monetise independently.

With that said, let’s look at this in some more detail:

The Three Perspectives for Generative AI Tools

When considering Generative AI tools, the legal concerns fall into three key areas:

1.) The Inputs

2.) The Outputs

3.) The Training Data

Each has its own ramifications, so let’s take a closer look.

The Inputs

The data you submit to the AI tool (text prompts, images, video) is usually called the Input. You should review the tool’s Terms of Service (TOS) closely, since some of them include terms that give the service provider perpetual, non-exclusive rights to the material you upload, or give the provider the right to use your uploaded material to train their subsequent models. In some services these terms apply only to the free tier and can be removed if you get paid access, but you should always verify this case by case.

The Outputs

Similarly, whatever the AI tool produces is usually called the Output. Here too, reviewing the TOS is a must, since you might be granting the service provider rights to your Outputs just by using the tool. Again, this term is often dropped if you get paid access, but you should verify it for every tool you use.

Additionally, copyright law adds complexity here: at least in the US, copyright can only be claimed for work created by a human. This means that images created by AI image generators may not qualify for copyright ‘as is’. In animation, for example, combining AI-generated clips, adding sound, and making deliberate creative decisions would usually result in a copyrightable work.

The Training Data

This is a hot potato, and for good reason. Many Gen-AI tools have been trained on data scraped off the internet, without regard to whether the practice is legal. While the issue is being litigated, such models remain widely available.

It is usually very hard to find out what data a Gen-AI tool has been trained on. As a rule of thumb, unless the provider clearly states that the model has only been trained on consensual data (as Adobe does for its Firefly AI suite), you can assume that it has been trained on scraped data. Then you need to use your own judgement: am I okay using this tool, in this particular case, even if it has been trained on scraped material?

Where this gets interesting is that, at the time of writing, many service providers are training models on a) public domain data [LINK], b) licensed data (I know several large animation studios that license their content libraries to service providers via brokers for this purpose), and c) synthetic data (providers creating video content in Unreal Engine and then using it to train their models). The market is so big that it is only a matter of time before more of these ‘consensual’ services come onto the market.

Keep Up To Date!

Generative AI tools are powerful, but they come with legal considerations that can’t be ignored. Staying informed, reading the fine print, and consulting a lawyer are essential steps to protect yourself and your projects - and remember that TOS can and do change regularly!

I’ve been keeping a close eye on this rapidly evolving landscape for a while now, and regularly advise clients on navigating these challenges. If you have any questions or thoughts around this, just hit reply to this email and I’m happy to help where I can!

- Nick


Nick Dorra

Say hi 👋 on LinkedIn
🤝 Book a meeting via video or f2f to chat more
