Pixelated predicaments: Getty vs. Stability AI may unleash a wave of mass claims

05 February 2025

Getty Images (US) Inc and Ors v Stability AI Ltd [2025] EWHC 38 (Ch)

The UK's proposed copyright reforms, aimed at mandating AI developers to disclose the content used in training their models, could lead to a surge in mass litigation from content creators. However, it appears the courts may be attempting to stem this wave early this year, following a procedural ruling against a US company's attempt to represent thousands of content creators alleging copyright infringement by AI developer Stability AI.

Case Background

Getty Images (Getty) brought a claim against Stability AI, a UK-based open-source generative AI company. It alleged that the company had scraped millions of copyrighted images from Getty's websites without consent to train its AI model, Stable Diffusion. Getty argued that both the training process and the outputs of Stable Diffusion amounted to copyright infringement. Key claims included:

1. Training and Development Claim: Stability AI allegedly downloaded and used Getty's copyrighted works in the UK during the training and development of Stable Diffusion.

2. Communication to the Public: The model reproduced substantial parts of the copyright works when users generated outputs based on prompts tied to Getty content.

3. Secondary Infringement: Stability AI's importation of pre-trained Stable Diffusion software into the UK constituted secondary infringement under UK copyright law.

Stability AI sought to dismiss the training and secondary infringement claims, arguing insufficient evidence of UK-based activity and that UK law only applies to tangible items. The High Court rejected this, allowing Getty's claims to proceed, including an amended claim targeting Stable Diffusion's "image-to-image" feature, which allegedly enabled near-identical reproductions of copyrighted works.

Getty claimed that Stability AI had infringed the rights of over 50,000 photographers and content creators who have exclusively licensed their works to Getty for decades. With Getty's support, US-based Thomas M Barwick Inc., one of those rights holders, attempted to bring a representative action on behalf of this group. Under the Civil Procedure Rules, such an action is only permitted if all group members share the same interest in the claim.

Decision on representative claim

The High Court ruled in favour of Stability AI’s application under rule 19.8(2) of the Civil Procedure Rules, preventing Thomas M Barwick Inc. from representing the group. Stability AI argued that the claimants' approach assumed the works in question had been used to train the AI model—a fact that could only be confirmed at trial. The judge also noted there was no definitive list of copyrighted works used in the training process, making it even harder to identify who should be included in the group.

The High Court judge concluded that these issues, particularly the flawed class definition, did not justify permitting the representative action. An alternative proposal to allow claims without joining other affected creators was also rejected. The judge noted that Stability AI could face multiple lawsuits from other licensors, and the court required proper evidence before considering such an approach.

Despite these procedural barriers, the judge suggested the difficulties were not insurmountable, leaving the door open for the claimants to reapply. The court also noted that a representative action could still prove practical and might be addressed during the upcoming trial proceedings in June.

Context

The challenges faced in the Getty v. Stability AI case, particularly the difficulty in identifying class members due to the absence of clear records of the training data, highlight a broader issue in the intersection of copyright and artificial intelligence. This gap could be addressed by one of the proposals currently under consultation by the UK government, which aims to require AI developers to be more transparent about the data used to train their models.

If implemented, such transparency obligations would compel developers to disclose detailed information about the sources of their training data, including copyrighted material. This would directly address issues like those seen in the Getty case, where uncertainty about which works were used hindered the formation of a clearly defined class of claimants.

Requiring transparency would streamline the process for rights holders to verify whether their works were used without authorization, making it easier to form representative actions. Additionally, it would provide courts with more concrete evidence of alleged infringements, reducing the reliance on speculative claims and lengthy investigations. Ultimately, this reform could significantly improve accountability for AI developers while ensuring stronger protections for creators and their intellectual property.

Conclusion

The intersection of AI development and copyright law is a rapidly evolving and contentious area, as evidenced by the Getty v. Stability AI case. While the High Court's decision to block the representative action highlights the procedural challenges in pursuing mass copyright claims, it also underscores the limitations of the current legal framework in addressing the complexities of AI-driven infringement.

The ongoing UK government consultation on copyright and AI offers a potential pathway to mitigate these issues through enhanced transparency obligations for AI developers. Such reforms could bridge the gap between technological innovation and intellectual property rights, providing clearer guidance for courts, developers, and rights holders alike. As the trial proceedings continue, the outcomes of this case and the government’s proposals may collectively shape the future balance between fostering AI innovation and safeguarding creative content.

This article was co-authored by Gabriella Rasiah.