Robot employee

Alexandra Mizzi, an associate in the employment team at law firm Howard Kennedy, explores the problems that are arising, and could arise, as a result of increasing automation in the workplace.

It seems every day brings a new dire warning about the effect that robots will have on the job market. With Deloitte estimating that 11 million UK jobs are at high risk of automation within the next 20 years, the impact will be felt across the economy, including many high-skilled and professional roles. “I for one welcome our robot overlords” may become less of a B-movie trope and more of a corporate motto.

Although most experts agree it will still be some time before robots can convincingly simulate human intelligence, they are undoubtedly growing both more sophisticated and more ubiquitous, and the calls for a legal framework governing their use are becoming harder for governments to ignore. The next decade is likely to see a rash of legislative attempts to regulate robotics. While security, privacy and safety are likely to top the priority list, some legislators are already beginning to grapple with more abstract, even philosophical, issues such as the legal status and rights of robots.

EU developments

On 15th February this year, the European Parliament passed a resolution calling for robots to be granted a limited form of legal personhood, enabling them to be parties to litigation as defendants and (in theory) as claimants, and to enter into contracts. Although the concept of robots being deemed “electronic persons” attracted predictably sensationalist tabloid coverage, the aim is really to ensure that manufacturers cannot escape liability for damage caused by a robot by arguing that it acted as an autonomous agent. Likewise, it would ensure that a party could not evade its contractual obligations by using a robot to negotiate or execute a contract on its behalf, something that may well become common practice for simple supplier contracts on standardised terms.

Perhaps more significantly, the EU Parliament also called for a framework to be developed for the ethical design, production and use of robots. These proposals are now being examined by the EU Commission.

Robots in the workplace

A key area of focus for any regulatory framework will be the use of robots in the workplace. There is a whole raft of legal and ethical issues that would need to be addressed, including:

  • Should employees be told when they are interacting with robots and how robots will be used in their workplace? It’s likely that many companies will use AI for business functions such as accounts and simple HR – and it may not be immediately obvious to employees that they aren’t dealing with a human employee. Should employers be required to tell them?
  • Who should be held liable for any harm caused by robots to staff – the employer or the manufacturer? Employers are generally vicariously liable for wrongful acts of their staff (such as harassment of other staff), but as a robot isn’t a legal person (and so cannot be primarily liable for its own actions) this would not apply to robot misbehaviour. The EU Parliament’s proposals might address this gap in legal protection.
  • If a robot learns bad behaviour from its human colleagues, should those individuals be held personally liable for “bullying by bot”? Much AI depends on learning from interaction with humans, and humans can be unreliable teachers. In 2016, Microsoft launched a chatbot called Tay on Twitter, designed to mimic a 19-year-old girl. Within hours of interacting with other Twitter users, the bot had become an expletive-ridden defender of genocide, a less than ideal colleague.

Rights for robots?

Should the law go a step further and grant robots basic employment rights? On the face of it, this seems rather ridiculous: why should a machine have rights? After all, the word “robot” itself derives from a Czech term for forced labour. And it’s true that the argument for robot rights is hard to make when looking at the kind of workplace robotics that is a realistic short-term possibility: the idea of a mechanised arm or AI program having the right to go on strike seems bizarre.

But fast-forward to the future beloved of science fiction, where robots look and sound convincingly human, and it doesn’t seem quite so far-fetched. Even if you remain sceptical about the idea of robots having legal rights, a workplace where some workers could be mistreated with impunity would be a rather unattractive one. Human employees (assuming there are any left) might feel very uncomfortable seeing their robot colleagues abused in the workplace – and if employers were able to treat robots harshly without redress, it’s not hard to see how that could spill over into treating human staff badly too.

Sceptics might also like to remember that it’s not so very long ago that the concept of human employees having basic rights was entirely alien – the first employment law cases talk about the contract between “master and servant” in language which makes us rather queasy today. Perhaps future generations will look at our unease with the idea of robots’ rights with the same horror which we reserve for Victorian workhouses and child labour.