Nov 6 2025

Picture Imperfect: framing the legal boundaries in Getty Images v Stability AI

The long-awaited High Court decision in Getty Images v Stability AI has been handed down.  It is 205 pages long.

Overall, Getty Images has been successful in its trade mark infringement claim, albeit with a narrow scope.  The main takeaway is that Getty Images failed in its secondary copyright infringement claim.

Whilst this is a historic ruling, many elements of how generative AI will interact with IP laws in the UK remain unaddressed, as the claims the judgment considers were largely pared back, with various claims withdrawn before trial.

Below is a summary of what was decided.

 

What was the claim

The claim relates to the millions of high-quality photos and videos owned or licensed by Getty Images and the allegation that they were used unlawfully to train the Stable Diffusion model.

The Stable Diffusion model in fact comprises various iterations of a "text-to-image" generative AI.  Users input prompts and the system generates a synthetic image based on the data on which it has been trained.

 

What was decided

The Trade Mark Infringement Claim (paras 134 – 538)

Various Getty Images and iStock watermarks appeared on generated images.  Getty Images brought a trade mark infringement action under s.10(1), s.10(2), and s.10(3) of the Trade Marks Act 1994 ("TMA").

Stability argued that it was not using the relevant signs, but instead only providing the model.  However, the evidence that, in certain circumstances, users could not prevent the appearance of a watermark was persuasive, and the judge rejected Stability's attempt to put responsibility onto the user.

Certain generated watermarks were identical or similar to the registered marks, although not the Getty Images marks, and so the s.10(1) claim failed in respect of those marks.

The judge held that Stable Diffusion outputs were "synthetic image outputs" which encompassed "images".

Getty Images was successful to an extremely limited extent.  Under s.10(1) TMA, Getty Images succeeded in respect of iStock watermarks generated by users of v1.x (in so far as the Models were accessed via DreamStudio and/or the Developer Platform).  Notably, the judge stated that "Given the way in which the case has been advanced, it is impossible to know how many (or even on what scale) watermarks have been generated in real life that would fall into a similar category".

In respect of s.10(2), a likelihood of confusion was found on the basis that consumers would assume a connection (i.e. some form of licence from the Getty Images database).

The claim under s.10(3) TMA was dismissed.

 

Secondary Copyright Infringement Claim (paras 543 – 609)

Under UK copyright law there is a difference between primary and secondary infringement.  Primary infringement relates, broadly, to reproductions of copyright works, whereas secondary infringement relates to involvement in, and dealings with, such works "downstream".

The primary infringement claim was dropped by Getty Images in advance of the trial on the basis that there was no evidence that training or development of Stable Diffusion took place in the UK.

Getty Images contended that Stable Diffusion is an infringing copy on the basis that the making of its model weights would have constituted infringement of the Copyright Works had it been carried out in the UK (“the Secondary Infringement Claim”).

The court had to decide if Stable Diffusion was capable of being an "article" for the purposes of copyright law and whether it was an "infringing copy".  

Getty Images argued that the legislative definition of "infringing copy" was sufficiently broad to encompass an article whose creation involved copyright infringement.

However, importantly, Stable Diffusion does not itself store the data on which it was trained.  This was persuasive to the judge, who held that the relevant statutory sections were concerned with infringing copies, and not with a process which (while it may involve acts of infringement) ultimately produces an article which is not an infringing copy.

Many may consider significant the admissions that the training of the Model involved the reproduction of the Copyright Works, both locally and in cloud computing resources; however, it appears that this did not take place in the UK.

Interestingly, the High Court’s finding aligns with the two District Court judges in the US who found the use of copyrighted material to be a fair use under US copyright law.  As one judge stated, the use for training was “quintessentially transformative”, a key factor in upholding the fair use defence.  Getty Images has stated that: “We will be taking forward findings of fact from the UK ruling in our US case.”

 

What was not decided

The "Training and Development Claim" was abandoned on the basis that there was no evidence that training and development of Stable Diffusion took place in the UK.

The "Outputs Claim" was also abandoned because “the prompts which it was alleged had been used to generate the examples of infringing output from the Model in evidence in these proceedings have been blocked by Stability such that the relief to which Getty Images would have been entitled in respect of their allegations of primary infringement of copyright has now been substantially achieved.”

The Database Rights Infringement Claim was also dropped.

As a result, two key elements for creatives and the AI industry, namely the lawfulness of training in the UK and, separately, that of producing allegedly infringing outputs, remain undecided.

 

Stobbs comments

Whilst this is a historic decision, it did not provide the clarity that many were hoping for.  It may well be construed as a win for the AI industry, as it failed to prohibit many of Stability AI’s business practices to which Getty Images objected, and it certainly highlights how model training can avoid being caught by UK copyright legislation where the defendant undertakes its activities in a different jurisdiction.  However, it leaves important questions unanswered, namely: if more activity had taken place in the UK, would the result have been the same?  Getty Images may yet decide to appeal the decision to the Court of Appeal.  This open question, combined with ongoing litigation in several other major jurisdictions, makes it more important than ever to stay abreast of the developing legal landscape in order to avoid any pitfalls.

Certainly, greater clarity on the extent of, and limitations on, protection for creative works against unauthorised use by AI companies is much needed.  Getty Images has urged “governments, including the UK, to establish stronger transparency rules” to avoid the need for further “costly legal battles” in the future.

Interested in how AI interacts with the IP landscape?  Then please sign up to our AI Seminar on Wednesday 12 November.

Fancy more of this sent straight into your inbox?

Sign up to our mailing list.