It’s almost ChatGPT’s first birthday. In celebration, this article is a scrapbook of what it’s made over the last year (in particular, who owned it in the US and the UK) – and a prediction about just how crazy the next year might get.
But first things first: does it really matter who owns the billions of weird and wonderful creations being generated by AI models around the world every day?
In short, yes. Each of the possible options raises its own issues.
Let’s look first at what’s happening across the pond. The recent decisions from the US Copyright Office confirm that the key to copyright protection is human involvement in, and ultimate creative control over, the work at issue. No human creative control = no protectable work.
It is worth noting that, in the Théâtre D’opéra Spatial case, the review board was clear that if a work containing AI material also contained enough human authorship to support a copyright claim, the Office would register the human’s contributions. It then relied on the fact that author Jason Allen had “refused” to limit his claim to exclude the non-human authorship elements as a reason to refuse protection. This approach suggests a reluctance to distinguish between the artist’s creative process, as evidenced by the engineering of a set of prompts, and the tool used to execute that creative process. Consider the (imperfect) parallel of a still life, arranged by an artist and photographed by a camera. The approach taken in the Théâtre D’opéra Spatial case treats the AI model as a creative agent in its own right, rather than a tool by which a creative process is carried out. This strikes me as a distinction which (although it may in many cases provide the simple, or indeed the correct, answer) requires rather more nuance.
Which brings us to the UK perspective – a nice reminder of how nuance can lead to chaos. The UK Copyright, Designs and Patents Act expressly provides that copyright can exist in a computer-generated literary, dramatic, musical or artistic work, and that the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken. Perhaps this is a better balance – protection is available, but only where the creation arrangements are sufficient to merit copyright protection.
However, it puts the UK courts in a difficult position. Before assessing the creativity inherent in the arrangements needed to create the work, they have to decide what those arrangements actually are. Are they the collating, tagging, coding and testing arrangements necessary to create the AI model, or the arrangement of a series of prompts to create a new work imagined by the user of that model? Where the UK courts end up on this will make a real difference: the first option puts ownership in the hands of the big providers such as Meta, Google, OpenAI and Anthropic; the second opens up the possibility of AI outputs being owned by users.
Both of these options have significant possible drawbacks. The concentration of power over creative works (cf. the good old days when we liberally plastered all of our creative content across YouTube, Facebook, Reddit, Instagram…) is what made training generative AI models possible in the first place. Putting a new era of digital creative works back into the hands of the platform providers would understandably prompt a huge outcry. The concerns around AI models being a tool to create copyright works are equally well placed. In the new world of AI integration into our everyday systems, where a simple prompt is enough to create an image with multiple independent points of nuance (hi, DALL·E 3), perhaps the low level of originality required under UK law is an insufficient control to prevent the mess of overlapping monopoly rights which could result.
A quick Google search tells me that the terrible twos are characterised by defiant behaviour, including saying “no,” hitting, kicking, biting and ignoring rules. In all honesty, that sounds about right for the next year – and maybe it’s necessary. Whilst the US model fails to recognise the effort that goes into executing complex creative processes using AI tools, the UK model fails to recognise the possibility of executing simple processes using AI tools which nonetheless produce complex creative results. If we still want to protect and incentivise human intellectual creation in this new AI context, we need to recognise that our existing frameworks do not adequately equip us for the task.
We have to evolve copyright law before we break it.