What is the potential impact of Getty’s lawsuit against Stability AI on the music industry?


MBW Views is a series of exclusive op/eds from eminent music industry people… with something to say. 

In the following op/ed, Jonathan Coote, an associate and music lawyer at Bray & Krais Solicitors, examines the potential impact of Getty’s lawsuit against Stability AI on the music industry and suggests that there is a need for “urgent clarification on the law relating to AI training in the UK”.


From allegations that AI-music platform Suno has potentially been trained on copyrighted recordings, to artists calling on tech platforms to stop infringing and devaluing their rights, there is still profound concern in the industry that copyright-protected music is being used without a licence or due compensation to train AI tools that could replicate (and ultimately replace) human-created music.

But what is the legal framework to prevent such training?

In the UK, the current law appears to prohibit the training of AI tools on copyright works for commercial purposes.

The government initially declined to adopt the more lenient EU “opt-out” approach during the Brexit transition period, but then sought to radically transform the law in 2022 by attempting to bring in an extremely permissive exception.

This plan was quickly discarded after being described as “music laundering” by rightsholders. The government’s next initiative was to establish a working group to develop a system under which AI companies could obtain a “fair licence”.

Perhaps unsurprisingly, given the diametrically opposed interests of tech companies and rightsholders, these talks failed.

Unlike in the US, where “fair use” defences may consider the wider impact on the public, the UK’s copyright defences are narrow and technical.

This means that the application of the current law (notably created before large language models and generative AI) may be inadequate to deal with the economic, creative and social consequences of AI. Without further legislation, this could create unhelpful precedents.

Whilst the restriction on training under UK law is clear on its face, the outcomes of actual cases are not, as they may be highly fact-specific. There is one major case that could set the direction of UK law on these issues: stock-image site Getty Images has sued Stability AI in both the US and England for training its models on Getty’s image database. In the English courts, Stability AI has failed to strike out the case, but its defence highlights some of the many difficulties in bringing a successful infringement claim.

To start with, arguments that an individual output is infringing are inherently difficult to prove, as an output must reproduce a “substantial part” of a particular work, and AI tools are generally designed with safeguards to prevent this. Moreover, Getty must show that Stability AI committed an infringing act in the UK. In its defence, Stability AI denies this, claiming that the relevant developers were abroad and that any alleged copying occurred on a foreign cloud server. Getty also argues that the model in question, Stable Diffusion, is an imported infringing “article”, a concept conventionally applied to physical goods such as pirated CDs. The effectiveness of these arguments remains to be seen.

There are also a number of highly fact-specific elements to Getty’s case which cannot be relied upon by many in the music industry. Firstly, Getty has managed to reproduce distorted watermarks in output images, demonstrating that its images were clearly used in training (a difficult task, given the “black-box” nature of most AI tools) and consequently providing grounds for trade mark infringement. Secondly, because Getty has compiled a stock-image library, it can also claim infringement of its “database rights”, an avenue unlikely to be available to most rightsholders. Given the case’s technical and legal complexities, even if it does ultimately lead to a decision, it is unlikely to establish a universal and definitive precedent.

With AI-generated music already being distributed on streaming services, music companies and musicians are understandably worried about its proliferation. Yet whilst the music industry has vocally called out AI companies for “music laundering”, it has, with one notable exception, been reluctant to bring proceedings.

Furthermore, the music industry is perhaps more worried than other industries about the impact of deepfakes, given recent astonishing developments in voice-cloning technology. Whilst many instinctively feel that this should be unlawful, the UK technically has no specific image or likeness rights (even though such rights are widely licensed) that would clearly restrict the outputs of such tools. As such, the most effective way to bring an action may be to claim that the training process itself was infringing. The issue of training is therefore absolutely vital for the industry.

The regulatory landscape is rapidly evolving. The US is potentially bringing in legislation to create a new “digital representation” right to tackle deepfakes and, across the Channel, the EU AI Act is set to increase the transparency required of AI companies operating in the EU with respect to their training data, and may also extend its “opt-out” restrictions extra-territorially.

There are a number of positive initiatives on the horizon in the UK: an All-Party Parliamentary Group is investigating the impact of AI on the music industry, and a bill in the House of Lords is bringing real regulation to Parliament, including crucial provisions on transparency.

Given the potential inadequacies of the current law, though, the UK government urgently needs to capitalise on this work to provide much-needed clarity.
