2023 has been a year where lawmakers have been scrambling to catch up with advances in artificial intelligence.
On the 30th October, G7 Leaders announced that they had agreed International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers. These principles and the voluntary Code of Conduct are intended to complement the EU AI Act, which is thought to be in the final stages of development.
Early November saw the UK host the AI Safety Summit 2023 where 28 governments, including the EU and UK, signed the ‘Bletchley Declaration’ to agree a common understanding of the global opportunities and risks posed by AI. This document highlights the need for international collaboration to uniformly regulate AI and ensure that new (‘frontier’) technology is developed in a safe and responsible manner.
Most recently, on the 9th December, the European Parliament and the Council reached a historic provisional agreement on the draft EU AI Act. The European Parliament is now expected to vote on the AI Act proposals in early 2024, but the new legislation will not take effect until at least 2025.
The capabilities of AI span many diverse applications, and continue to evolve into more.
To try to deal with the extending reach of AI into our daily lives, the draft Act adopts a risk-based approach (similar in some respects to the GDPR) to allow the safe use of AI for legitimate purposes such as law enforcement, policing and fair business practices, while preventing abuses.
I’ve discussed these general principles before so I'll turn now more specifically to the issues concerning generative AI and intellectual property.
A number of transparency requirements around AI have been proposed in the draft Act to regulate generative AI tools that, on user instructions (called ‘prompts’), produce content (‘outputs’). These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.
So why are these transparency requirements necessary? What intellectual property could be used – or abused – in the process? Could the content or ‘outputs’ that AI creates actually be considered intellectual property too?
Inputs... the Training Datasets
Firstly, foundation models require datasets from which to ‘learn’, acquiring the data they then use to assemble outputs. Acquiring data for such models is important to AI developers, but there are several lawsuits now pending in which copyright owners believe their materials have been misused for training, without permission or licence.
The EU, UK and US are largely aligned in their overall views about data mining, but the clarity and permissiveness of the text and data mining exceptions do vary from jurisdiction to jurisdiction, and it is not yet sufficiently clear whether or when the law permits the use of copyright materials for AI training.
For the developers of generative AI tools, it is important that there is total clarity around whether copyright exceptions might apply and, if so, when.
Artists and creators equally want to know how their works can be protected – should a licence or royalty system apply to works used in training datasets? What about the use of already-pirated data? A further question is where the burden of proof should lie, and where liability falls, if infringement is alleged to have occurred.
Outputs... the Generated Results
Secondly, generative AI by its nature can blur the line between what is human ‘creation’ and what is AI ‘creation’. This gives rise to questions around whether generative AI output could qualify as protectable intellectual property – can it be protected by copyright, design rights or even trade marks, for example?
Also, what about the possibility of infringement – where a substantial part of another original work is reproduced – at this ‘output’ stage? This question could be one of the most difficult for copyright lawyers, as the subjectivity inherent in determining whether a reproduction is actually ‘infringing’ makes predicting and preventing such infringements a formidable challenge.
The Impact of AI in business and in innovation
Apart from the IP-in and IP(?)-out matters discussed above, AI tools are becoming so powerful that their more sophisticated abilities and as-yet-unforeseen capacities are a cause of many concerns. The questions here are near-endless, but here are a few samples to consider:
Transparency
Even being able to investigate AI concerns presupposes disclosure that content has been generated by AI, as well as full disclosure of the copyright-protected content used to train the AI tools. But with claims abounding that huge volumes of content have already been illegally ‘scraped’ from the internet for training purposes, and creative groups of musicians, artists and authors alike feeling robbed and aggrieved, will the current requirements to label AI-generated content go far enough?
What can we reasonably expect?
The proposed new EU legislation will, if passed into law:
Useful links
European Parliament Press Release “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI” (9 Dec 2023)
The Exception for Text and Data Mining (TDM) in the Proposed Directive on Copyright in the Digital Single Market – Technical Aspects (2018) https://www.europarl.europa.eu/RegData/etudes/BRIE/2018/604942/IPOL_BRI(2018)604942_EN.pdf
Link to more information and to download the Guiding Principles: https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-guiding-principles-advanced-ai-system
Link to more information and to download the Code of Conduct: https://digital-strategy.ec.europa.eu/en/library/hiroshima-process-international-code-conduct-advanced-ai-systems