
AI is testing the limits of copyright law

Image credit: Cagkan Sayin / Shutterstock

Copyright is the latest battleground for AI. Technology companies want easy access to large quantities of data to train their AI models. Content creators say their intellectual property is being used without permission or recompense.

Most existing intellectual property laws were not written with AI in mind. Now, the rules of engagement are being tested on multiple fronts in several high-profile lawsuits.

The New York Times claims that OpenAI’s ChatGPT copies source data in order to learn and generate new content. It goes a step further, claiming that ChatGPT has reproduced source data verbatim in its output.

As a general rule under existing copyright law, permission would be required to make use of publishers’ content in this way. OpenAI has argued that AI development will be impossible under such restrictions.

The creation of services such as ChatGPT hinges on large amounts of high-quality data being readily available to train them, and the company argues that restricting access risks stalling the progress of a technology that has become an integral part of society.

This debate is unfolding not just in the US but elsewhere in the world. The Daily Mail was reported to be gearing up to take on Google over the use of its articles to train Google’s chatbot, Bard.

It was also announced last month that Getty Images’ claim against Stability AI, over its use of copyrighted images to train ‘Stable Diffusion’, will proceed to trial. Further litigation is stemming from uncertainty about how IP law applies to the output of AI models.

The UK’s Supreme Court recently ruled that an AI machine cannot be named as the inventor in a UK patent application, as the inventor must be human.

In short, the lineup of cases is growing, with The New York Times’ lawsuit among the most significant to date.

The outcome could cause a ripple effect in the industry.

AI’s next race

Fears that AI is advancing too quickly for legal regulation to keep up aren’t new. The safe and ethical development of AI has dominated public discourse in recent years, with the first global AI Safety Summit and a landmark EU agreement to regulate AI both taking place at the end of last year.

Today, debates surrounding the legal rules governing AI development and deployment are expanding beyond safety, and broader questions about how AI fits within existing legal systems are coming to the fore.

It seems the AI free-for-all, in which eagerness to explore AI’s potential has outweighed the desire to argue over intellectual property issues, is coming to an end.

It has now been replaced by a new era of litigation against big-tech companies, over the right to develop AI platforms in the way they have up until now.

Against this backdrop, could we be about to see AI companies gearing up for a new race, this time to license the data needed to develop their AI platforms, while compensating rights holders for the data they use?

If the market accepts the position – contrary to what OpenAI believes – that training AI models requires a licence to use the training data, this could concentrate AI further in the hands of big-tech companies, which can afford to enter into such reportedly high-value agreements for source data.

If the data upon which an AI machine is trained becomes a closed ecosystem within an elite club, there will inevitably be bias in the output of the machines.

A world of AI gatekeepers would be something of a step backwards, reminiscent of the pre-internet age in which record labels and publishers controlled the variety and accessibility of consumable media such as books and CDs.

What next for AI and copyright?

Regulation will play a key role in shaping the landscape. AI technology crosses geopolitical boundaries, meaning that, to be effective, regulation will need to be a collaborative effort between nations.

That presents significant challenges and suggests that progress on regulation will be slow and piecemeal – and, importantly, may struggle to keep pace with developments in the technology.

This week, the UK shelved a code that would set out rules for training AI using copyright materials. The failure of industry executives to agree on a voluntary code of practice underscores how thorny an issue it is.

It’s clear that intellectual property rights and AI innovation sit at odds with one another in many ways, and this may prove to be a tough conflict to resolve.

Joshua Little is a partner at Marriott Harrison.