Like all sectors, the movie industry has been both affected by and exploring potential uses of generative AI.
As regards the former, movie studios have detected the unauthorized use of their protected content in the training of AI models and initiated litigation over it. For example:
- Disney and Universal have recently filed a complaint in the US District Court for the Central District of California, in which they claim that Midjourney seeks to reap their creative investment by selling an AI image-generating service “that functions as a virtual vending machine, generating endless unauthorized copies of Disney’s and Universal’s copyrighted works”;
- Warner Bros. Discovery has also sued Midjourney in the same court, claiming that the defendant company “thinks it is above the law” by selling “a commercial subscription service […], powered by artificial intelligence (“AI”) technology, that was developed using illegal copies of Warner Bros. Discovery’s copyrighted works”, including Superman, Batman, Wonder Woman, Flash, Tweety, Bugs Bunny, and Scooby-Doo;
- Another lawsuit has been filed against MiniMax.
Somewhat related to this, the recent unveiling of OpenAI’s GPT-4o and the resulting possibility of creating images in the style of Studio Ghibli’s animated productions has raised questions related to the lawfulness of model training with content whose use has not been authorized by relevant rightholders.
Movie industry workers have not stood still either. They have taken action not only against the alleged unauthorized appropriation of their personal attributes (including their voices) by model developers, but also against studios over the perceived threat that the adoption and integration of AI within the industry poses to their professional livelihoods. The latter was, for example, a key issue in the 2023 Writers Guild of America and SAG-AFTRA strikes targeting movie studios (here and here), as well as in the 2024-2025 SAG-AFTRA strike involving video game actors.
Some movie directors have taken positions on the use of AI that have been described as displaying a “pro-human vehemence”. The ever-increasing implementation of AI, including the use of actors’ avatars for advertising and marketing purposes, has also resulted in new types of contracts, the fairness of which is the subject of discussion.
The use of AI by movie studios is indeed growing overall (with guidance on AI-assisted content production also being released publicly), whether to achieve results usually obtained through special effects (as was recently the case in Netflix’s 2025 series The Eternaut), to save time and costs, or to alter an actor’s accent or appearance. Accent alteration was used, for example, for Adrien Brody’s spoken Hungarian in the 2025 Oscar-winning movie The Brutalist; appearance alteration, in the form of de-aging actors playing the same characters across several decades, was implemented in the Robert Zemeckis-directed 2024 movie Here, starring Tom Hanks and Robin Wright.
On the one hand, some have stressed the opportunities presented by the implementation of AI, including by advancing claims, like those made by AI video studio The Dor Brothers, that AI tools “are actually a purer form of expression, offering the most direct link between the artist’s brain and the end result, without the compromises required in large productions or the constraints that come with complex shoots”.
It is probably along these lines (as well as in the aftermath of the Oscar won by Adrien Brody for his AI-altered performance in The Brutalist) that the Academy of Motion Picture Arts and Sciences decided in spring 2025 that, while consideration of human involvement remains key, movies containing parts generated with the aid of AI are eligible for Oscar nominations and awards. AI-assisted films now also have dedicated contests, such as the Reply AI Film Festival, during whose most recent edition director Gabriele Muccino referred to the implementation of generative AI in movies as a shift comparable to that caused by the introduction of synchronized sound.
On the other hand, critics have emphasised the potential displacement of industry workers, including workers employed in technical roles and younger and emerging actors.
New paper
Against the background illustrated above, a new academic paper commissioned by 4iP Council and just published in the Computer Law & Security Review maps and critically evaluates the main legal issues facing the development, deployment, and use of AI models from a movie industry perspective.
The principal objective is to survey the main legal issues at the intersection of copyright, AI development, and the movie industry, rather than to undertake an in-depth discussion of each matter individually. The analysis is conducted with regard to EU and UK copyright law and is divided into three parts:
- Input/AI training: By considering relevant legal restrictions applicable to the training of AI models on protected audiovisual content, the border between lawful unlicensed uses and restricted uses is drawn;
- Protectability of AI-generated outputs: Turning to the output generation phase, the protectability of such outputs is considered next, by focusing in particular on the requirements of authorship and originality under EU and UK copyright law;
- Legal risks and potential liability stemming from the use of third-party AI models for output generation: Still concerning the output generation phase, the legal issues that might arise from the use of AI models that ‘regurgitate’ third-party training data at output generation are considered, alongside the question of style protection (and imitation) under copyright.
The main conclusions are as follows:
- Input/AI training: Insofar as model training on third-party protected content is concerned, no exception under EU/UK law fully covers the entirety of these processes. As a result, absent legislative reform, the establishment of a licensing framework appears unavoidable for such activities to be deemed lawful;
- Protectability of AI-generated outputs: The deployment of AI across various phases of the creative process does not render the resulting content unprotectable, provided that human involvement and control remain significant throughout, with the result that AI is relied upon as a tool that aids – rather than replaces – the creativity of industry workers;
- Legal risks and potential liability stemming from the use of third-party AI models for output generation: The use of AI models that generate infringing outputs, whether by regurgitating input data or imitating style, may trigger the application of exclusive rights under copyright and related rights. The resulting liability may rest with the user of such models as well as with the model developer/provider. This means that contractual terms excluding any such liability may ultimately be found unenforceable against users and ineffective against rightholders.
Webinar and where to read more
The main findings of the paper will be discussed during a webinar organized and hosted by 4iP Council on 30 October. You can find further information and register here.
The paper is available to read on the website of the Computer Law & Security Review here.
[Originally published on The IPKat on 28 September 2025]