Toby Bond, associate at Bird & Bird, explains the intellectual property challenges being faced by businesses using artificial intelligence.
Many sectors are set to undergo significant disruption as a result of artificial intelligence. In addition to disrupting business models, AI will also disrupt our legal framework for the creation and exploitation of intellectual property rights, and give rise to new IP challenges for those seeking to develop and deploy AI systems.
IP Generated by AI
AI systems are already being used to generate content capable of attracting IP protection. Working out exactly who owns the IP rights in this content will become increasingly important, especially when it comes to licensing or enforcing those rights.
Ownership of copyright is determined by reference to the “author” of a work. Where an AI system is used by a human only as a tool for creating a work, the human using the system will clearly be considered the author. However, what about situations where the AI is more instrumental in creating the work, e.g. the AI is fed a few basic inputs by a human and then goes on to create something which is much more than the sum of those inputs?
UK legislation provides that where a work “is generated by a computer in circumstances such that there is no human author of the work” the author will be “the person by whom the arrangements necessary for the creation of the work are undertaken”. This provision doesn’t leave any room for the AI itself to be considered the author.
Identifying the author of a work generated by an AI system will therefore be a two-stage process. First, look to see if there is a human author of the work; second, if one can’t be found, look for the person “by whom the arrangements necessary for the creation of the work are undertaken”. However, it is easy to foresee disputes arising at each stage where works are generated by an AI system, e.g. who made the arrangements: the person who built the core AI system, or the person who trained it?
Patent applications need to identify one or more named inventors. This requirement is understood to require the identification of a human being. Currently this requirement should not be a problem. Although AI systems can be used to assist in R&D, there will almost always be a human responsible for the actual inventive concept. Candidates include those involved in developing the AI system, defining the problem for it to solve, and/or reviewing the output from the system, e.g. to identify anything potentially patentable. If multiple individuals were involved at each stage and there was a dispute about who should be named, it would be for the relevant court or intellectual property office to determine who was the actual inventor or inventors.
Developing AI systems
Developing AI systems also gives rise to all the usual IP issues associated with developing software products. Contracts covering development work will obviously need to specify who will own the resulting system. However, the nature of AI systems also brings additional IP challenges.
Who owns the system?
Unlike a traditional software development situation, where every line of code is attributable to a human author, with a machine learning system large sections of the code will have been generated automatically as a result of the training process. Parties entering agreements to develop machine learning systems will need to think carefully about how the IP rights in the resulting system are going to be owned and licensed, and ensure this is recorded in their agreements. Traditional models of deciding ownership based on who writes the code may no longer work.
A common part of developing an AI system is training it using large datasets, which the system can use to improve its decision-making. It will therefore be important to consider who owns the IP in the dataset used to train the system.
One common misapprehension is that data available for free online can be re-used for any purpose. This generally isn’t the case; website terms and conditions along with copyright and other IP right protections will often prevent such data being used to train a machine learning system. Using data without permission presents a potential liability risk which could hinder the development and commercialisation of the system.