September 26, 2023
AI-lluminating The Legal Landscape: byte-sized considerations for AI

As innovation in AI-powered technologies continues to reshape our reality - AI is no longer a buzzword, but part of lived experience - I have been mulling over how the legal landscape might evolve alongside the technology.

Questions about who owns AI's algorithms and creations, how we ensure there is enough fresh data to stop the internet being overwhelmed by AI-generated content, and what responsibility online platforms bear for AI's creations are becoming increasingly complex.

There are a few questions left to answer, as we see how the relationship between innovation and regulation plays out...


Things I've been thinking about (#1): What happens if the AI-generated code in your software can't be protected in key markets abroad?

It's often the case that copyright works starting life in one country are granted protection by other countries. But that protection is on the other country's terms - if your work doesn't meet their criteria, that's tough. (That's a very lo-fi summary of the Berne Convention, if you were wondering.)

So - if the UK recognises copyright in computer-generated works, but the US and EU don't - certain elements of a work that is protectable in the UK could end up unprotected and unenforceable abroad.

We're all having a nice time debating the theory of AI copyright ownership - but sooner rather than later this patchwork protection will start to impact rightsholders.


Things I've been thinking about (#2): What happens if we "pollute" the internet with an abundance of AI-generated content?

Turns out it has some interesting (and potentially scary) implications for the training of AI models. Train AI models on AI-generated data and you see a significant reduction in the quality and diversity of outputs - researchers have started calling this "model collapse".

Sometimes, training on synthetic data is a conscious choice to increase productivity and profit. Worryingly, it can also be an unintentional side effect of the internet being increasingly flooded with AI-generated content. Play that second scenario forward and online content could inevitably trend toward the most commonly occurring images, opinions, poetic phrases and coding solutions. Not ideal.
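To make the compounding effect concrete, here's a minimal toy sketch in Python. It stands a deliberately simple Gaussian "model" in for a generative AI system (an assumption for illustration only - real models are far richer, but the feedback loop is the same): each generation is trained solely on samples produced by the previous one.

```python
import numpy as np

# Toy sketch of "model collapse": fit a simple model (a Gaussian) to data,
# sample a fresh dataset from the fitted model, refit on that, and repeat.
# Each generation sees only the previous generation's synthetic output.

rng = np.random.default_rng(seed=42)

SAMPLE_SIZE = 50    # small samples make the compounding error visible
GENERATIONS = 200

# Generation 0: "human-created" data.
data = rng.normal(loc=0.0, scale=1.0, size=SAMPLE_SIZE)

for gen in range(GENERATIONS + 1):
    mu, sigma = data.mean(), data.std(ddof=1)
    if gen % 25 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation's "internet" is entirely synthetic: sample from
    # the current fit, so estimation error compounds round after round.
    data = rng.normal(loc=mu, scale=sigma, size=SAMPLE_SIZE)
```

Run it and the fitted standard deviation tends to drift toward zero: the model progressively forgets the tails of the original distribution, which is exactly the loss of diversity described above.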

Looks like AI itself is pointing toward the need for fresh data. Human-created data. We might even need to start paying to use it.


Things I've been thinking about (#3): Should online platforms have increased responsibility when it comes to AI-generated content?

Earlier this month, author and publishing commentator Jane Friedman found a whole host of AI-generated books on Amazon purporting to be written by her. She had a nightmare getting them taken down, because she didn't have a trade mark covering her publishing name (many authors don't). Dozens of authors have encountered the same situation across various online platforms - most often Amazon, due to its "direct publish" feature.

What if online platforms had more responsibility (or fewer defences) when dealing with AI-generated content? The US has already considered this. In June this year a bill was proposed that would remove the notoriously broad immunity granted to online platforms under Section 230 in the context of re-publishing AI-generated content. The UK and the EU have both recently issued new laws in this area, but they haven't been expressly updated for AI. In particular, the EU AI Act expressly says that it doesn't affect the new EU regime.

If laws changed so that online platforms were more exposed in this area, they might be forced to engage with the type of content they republish. That could mean screening for AI-generated content and applying more stringent verification methods before distributing it (a sketch of what that might look like follows). Or it could mean a takedown process for AI content that doesn't rely on IP rights and defamation laws - laws which weren't designed for this context and are often tricky to apply without legal advice.
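For illustration only, here's a rough Python sketch of what a hypothetical pre-publication screening step might look like. Every name, function and threshold below is invented - this is not any real platform's API - but it captures the shape of the obligation: flag likely AI-generated content, then verify the claimed author before distribution, with no trade mark required.

```python
from dataclasses import dataclass

# Hypothetical pre-publication screening pipeline (illustrative only).

@dataclass
class Submission:
    author_id: str
    claimed_author_name: str
    text: str

# Toy stand-in for a registry of verified author names.
VERIFIED_NAMES = {("acct-123", "Jane Friedman")}

def ai_likelihood(text: str) -> float:
    """Stand-in heuristic for a real AI-content detector model."""
    return 0.95 if "as an ai language model" in text.lower() else 0.10

def identity_verified(author_id: str, claimed_name: str) -> bool:
    """Is this account entitled to publish under the claimed author name?
    Crucially, this check doesn't require the author to own a trade mark."""
    return (author_id, claimed_name) in VERIFIED_NAMES

def screen(sub: Submission, threshold: float = 0.9) -> str:
    # Likely-AI content triggers stricter verification *before* distribution,
    # instead of leaving authors to post-hoc IP or defamation claims.
    if ai_likelihood(sub.text) >= threshold and not identity_verified(
        sub.author_id, sub.claimed_author_name
    ):
        return "hold-for-review"
    return "publish"

print(screen(Submission("acct-999", "Jane Friedman",
                        "As an AI language model, I crafted this thriller...")))
# -> hold-for-review
```

The point isn't the detector (current ones are unreliable); it's where the check sits - before publication, rather than after the damage is done.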

AI made deepfakes easy to generate. Online service providers made them easy to publish (unless the content is outright illegal, platforms will generally carry it). The laws which could help aren't exactly user-friendly. That's a pretty toxic equation unless something changes.

Tags
Trademarks /  Tech /  Designs & Copyright
