December 14, 2023
2023 - What an AI year! A reflection on the past year, and what the provisional AI Act might mean for intellectual property in years to come

2023 has been a year where lawmakers have been scrambling to catch up with advances in artificial intelligence.  

On the 30th October, G7 Leaders announced that they had agreed International Guiding Principles on Artificial Intelligence (AI) and a voluntary Code of Conduct for AI developers. These principles and the voluntary Code of Conduct are intended to complement the EU AI Act, which is thought to be in the final stages of development.

Early November saw the UK host the AI Safety Summit 2023 where 28 governments, including the EU and UK, signed the ‘Bletchley Declaration’ to agree a common understanding of the global opportunities and risks posed by AI. This document highlights the need for international collaboration to uniformly regulate AI and ensure that new (‘frontier’) technology is developed in a safe and responsible manner.

Most recently, on the 9th December, the European Parliament and the Council reached a historic provisional agreement on the draft EU AI Act. The European Parliament is expected to vote on the AI Act proposals in early 2024, but the new legislation will not take effect until at least 2025.

The capabilities of AI span many diverse applications, and continue to evolve into more. 

To try to deal with the extending reach of AI into our daily lives, the draft Act adopts a risk-based approach (similar in some respects to the GDPR) to allow the safe use of AI for legitimate purposes such as law enforcement, policing and fair business practices, while preventing abuses such as:

  • Manipulating human behaviour;
  • Exploiting human vulnerabilities;
  • Enabling discrimination;
  • Facilitating criminal activities.

I’ve discussed these general principles before so I'll turn now more specifically to the issues concerning generative AI and intellectual property.  

A number of transparency requirements around AI have been proposed in the draft Act to regulate generative AI tools that, on user instructions (called ‘prompts’), produce content (‘outputs’). These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.  

So why are these transparency requirements necessary? What intellectual property could be used - or abused – in the process?  Could the content or ‘outputs’ that AI creates actually be considered intellectual property too?  


Inputs... the Training Datasets

Firstly, foundation models require datasets from which to 'learn', acquiring the data they then use to assemble outputs. Acquiring data for such models is important to AI developers, but several lawsuits are now pending in which copyright owners believe their materials have been misused for training, without permission or license.

The EU, UK and US are largely aligned in their overall views about data mining, but the clarity and permissiveness of the text and data mining exceptions vary from jurisdiction to jurisdiction, and it is not yet sufficiently clear whether or when the law permits the use of copyright materials for AI training.

For the developers of generative AI tools, it is important to have total clarity on whether, and when, copyright exceptions might apply.

Artists and creators equally want to know how their works can be protected - should a license or royalty system apply to works used in training datasets? What about the use of already-pirated data? Another question that arises here is where the burden of proof should lie, and where liability falls, if infringement is alleged to have occurred.


Outputs... the Generated Results 

Secondly, generative AI by its nature can blur the line between what is human 'creation' and what is AI 'creation'. This gives rise to questions around whether generative AI output could qualify as protectable "intellectual property" - can it be protected by copyright, design rights or even trade marks, for example?

Also, what about the possibility of infringement - if there is reproduction of a substantial part of another original work - at this 'output' stage? This question could be one of the most difficult for copyright lawyers, as the subjectivity inherent in determining whether a reproduction is actually 'infringing' makes predicting and preventing such infringements a formidable challenge.


The Impact of AI in business and in innovation

Apart from the IP-in and IP(?)-out matters discussed above, AI tools are becoming so powerful that their more sophisticated abilities and as-yet-unforeseen capacities are a cause of many concerns. The questions here are near-endless, but here are a few samples to consider:

  • The speed at which AI can assess and apply learning from large datasets far outstrips the ability of humans.  Will most innovation in future depend, at least in part, on AI-generated solutions? Who will own the IP and how will disputes over ownership be settled?
  • Could AI mimic the works of artists to create works that deliberately avoid copyright infringement, and that compete with, and even outperform (in speed or value) the works of the artist themselves?
  • Could criminals using AI without regard for law or ethics outperform legitimate businesses who are staying within the parameters of the law?
  • Could ordinary people in their ordinary lives be manipulated by others, using AI tools to create influence by stealth? 



Even being able to investigate AI concerns presupposes disclosure that content has been generated by AI, as well as full disclosure of the copyright-protected content used to train the AI tools. But with claims abounding that huge volumes of content have already been illegally 'scraped' from the internet for training purposes, and creative groups of musicians, artists and authors alike feeling robbed and aggrieved, will the current requirements to label AI-generated content go far enough?


What can we reasonably expect?

The proposed new EU legislation will, if passed into law:

  • Prohibit outright certain activities that are deemed a threat to human rights or democracy;
  • Set specific transparency obligations for all foundation models, forcing developers to document the modelling and training process accurately;
  • Insist that information is provided to downstream users of AI tools, and also to downstream users of AI outputs (so both companies and users will need to be clear as to when something is generated by AI);
  • Set up a system of risk assessments, such as might prevent AI tools from being used to disseminate illegal or harmful content, enable criminal behaviours, or cause reasonably foreseeable detrimental effects or damaging outcomes to human society;
  • Ensure that higher-capability foundation models undergo more rigorous scrutiny, such as regular external vetting, compliance controls by independent auditors, and a risk mitigation system established before market launch;
  • Provide consumers with a right to lodge complaints;
  • Impose fines for violations of the Act of up to 35 million euros or 7% of global turnover, depending on the infringement and the size of the company.

Useful links

European Parliament Press Release “Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI” (9 Dec 2023)

The Exception for Text and Data Mining (TDM) in the Proposed Directive on Copyright in the Digital Single Market-Technical Aspects (2018)

Link to more information and to download the Guiding Principles:

Link to more information and to download the Code of Conduct:



Found this article interesting today?
Send us your thoughts: