We sat down with Nicola Hartley (founder and CEO of Mint & Co, a leading legal and business affairs consultancy for the creative industries) and Henry Priestley (partner at Russells, a leading entertainment law firm) to discuss the impact of generative AI on rights management.
Nicola, how have you seen generative AI impact your clients’ rights management practices?
NH: Generally, when clients engage us to audit the licensed rights used in their business, or to review rights and clearances in content they have created, we review their agreements to map out the rights granted by territory, scope, duration, purpose, number of uses and so on. From a chain-of-title/right-to-exploit perspective, it has been relatively straightforward to identify gaps in rights and other potential issues. With generative AI now being adopted across many client businesses, there is an extra layer of due diligence: depending on what the software is used for, and how the AI model has been trained, there may be latent IP infringement issues that the client isn’t aware of and which are extremely difficult to identify.
Content creators, for instance, are routinely asked to give warranties, backed by an indemnity, to the company commissioning the content, as well as to onward distributors/licensees, that everything contained in the content is ‘fully cleared’ – i.e. that everything in the content is either owned or licensed IP and that nothing in it will breach third-party rights or result in claims. If creators have used generative AI, the T&Cs of some AI products will not support their safely giving such warranties, so proper due diligence on the AI products is important. Errors and omissions insurers, who routinely insure content creators against IP claims, are currently grappling with what they are, and are not, willing to cover in relation to generative AI-produced elements of content. Creators need to be careful not to leave themselves exposed to risk, and should think carefully about the impact on the value of their content from a sales and licensing perspective if distributors and licensees will not take the risk of buying content that has not been properly cleared.
HP: We’re seeing similar issues, particularly on the transactional side: investors in businesses that have adopted generative AI have ramped up their due diligence, particularly in the wake of the many high-profile US legal cases in which rights holders are suing AI companies, and the Getty Images v Stability AI case in England. As Nicola says, ownership and use of content created by generative AI software has to be considered on a case-by-case basis against the AI company’s T&Cs; if the relevant software has been trained on unlicensed third-party content, this raises potential IP infringement issues and goes to the warranties that vendors can give. Equally, we are seeing investors ask questions about businesses that aren’t currently using generative AI, and the potential threat to a target’s business model from competitors that have adopted and are deploying AI technology.
That’s an interesting development – and how are you seeing companies dealing with the opportunities and threats presented by generative AI ‘day to day’?
NH: Many of our clients instruct us to help them negotiate the suite of commercial agreements required to drive their creative content, products and experiences across traditional and new media. We expect to see more clauses relating to generative AI introduced into brands’ and media organisations’ standard terms – for example, the right to use content to train AI, or prescriptive language (and in some cases outright prohibitions) around the use of generative AI in content creation. We definitely see value for many businesses in being able to track which content has been created, in whole or in part, using AI.
In this respect, platforms like Medialake can play a crucial role by providing a robust framework for tracking and verifying the rights associated with AI-generated content, thereby mitigating potential legal risks and enhancing transparency.
HP: Whilst some media companies might currently see generative AI as a threat, those that own their own content (or can readily identify and ring-fence owned content from what they don’t own) could have a competitive advantage over AI companies that have trained and developed their software on third-party content. AI companies that use third-party content to train their models face a choice between taking licences (which may not be given, or could be expensive if they are) and risking copyright infringement claims for use of content without a licence (see the Getty Images v Stability AI case as an example). By contrast, provided they have properly cleared the rights in the first place, media companies could be sitting on their own valuable training data sets, which they could use to develop their own AI tools without legal or licensing issues or costs. We therefore see significant potential value in being able to efficiently and effectively audit digital assets in the context of generative AI.
So to sum up, what should clients be thinking about?
NH: Embrace change! Generative AI is here to stay and can add significant value to businesses in the creative industries. My suggestion to clients is to do your homework and, if using generative AI in some capacity is right for your business, only work with suppliers whom you trust and who can demonstrate that they have built their model on licensed (not infringing) content.
HP: I couldn’t have said it better myself!