The UK could lead in AI, but only if ethical norms are set


The House of Lords has pegged the UK as a potential world leader in the development of artificial intelligence, according to a new report.

The report, entitled “AI in the UK: Ready, Willing and Able” and produced by the House of Lords Artificial Intelligence Committee, considered the economic, ethical and social implications of advances in the technology. It stated that AI has great potential to boost the UK’s economy, solve problems and improve productivity – but only if the technology is handled with care and its ethical implications are considered. 

“The prejudices of the past must not be unwittingly built into automated systems, and such systems must be carefully designed from the beginning,” the report reads. Research has already shown that algorithmic bias can seriously distort how AI systems behave. The report warns that unethical AI could perpetuate prejudice against certain groups – a concern shared by pioneers such as Joy Buolamwini, who campaigns against bias in machine learning.

To guard against this, the report encouraged hiring a more diverse range of AI specialists and establishing a universal code of AI ethics. 

Kriti Sharma, VP of AI at software provider Sage, commented on this: “This step will be critical to ensuring we are building safe and ethical AI – but we need to think carefully about their practical application and the split of responsibility between business and Government.”

Matt Walmsley, EMEA Director at Vectra – a company which uses AI to detect and respond to cyberattacks – agrees that AI has socio-political dimensions as well as technical ones, and questions how those risks should be managed. 

“To influence and control an AI instance, its creator can program an ‘operating principle’ to replicate a chosen moral framework. This has potential to limit or stop the AI from what is considered ‘doing harm’. Human moral frameworks are, however, dynamic. This raises the question of who should choose and revise these ‘morals’ – should it be the user, the AI’s creator, the government or another legislative body?

“If AI ‘learns’ via observations, then the data set becomes fundamental to the outcome, and any bias on the input will certainly affect the algorithmic models created, and the output behaviour as a result,” he added. We can already see this happening: when different people in the same place search for the same keyword, they are likely to be served differing results based on their previous searches and online behaviour. 
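Walmsley’s point can be illustrated with a toy model. The sketch below is a hypothetical example (not drawn from the report): a naive classifier is “trained” on deliberately skewed historical decisions, and the skew in the input data reappears directly in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed "historical" training data:
# (group, outcome) pairs in which past decisions favoured group "A".
history = (
    [("A", "hired")] * 8 + [("A", "rejected")] * 2
    + [("B", "hired")] * 2 + [("B", "rejected")] * 8
)

# A naive model that simply counts past outcomes per group...
counts = defaultdict(Counter)
for group, outcome in history:
    counts[group][outcome] += 1

def predict(group):
    """Predict the majority historical outcome for this group."""
    return counts[group].most_common(1)[0][0]

# ...and so reproduces the bias baked into its training data.
print(predict("A"))  # -> hired
print(predict("B"))  # -> rejected
```

Nothing in the model is explicitly prejudiced; the bias enters entirely through the data it observes, which is exactly why the report stresses careful system design from the beginning.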

Besides bias, another concern around AI is the amount of data it consumes. The report argues that “Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape.” Data privacy is a crucial concern, but Walmsley argued that AI could still deliver real benefits, particularly in cybersecurity. 

“AI is currently being used to combat cybersecurity adversaries by analysing digital communications in real time and spotting the hidden signals that identify nefarious behaviour – a task that is simply beyond humans alone. AI augments human capabilities, enabling security analysts to quickly identify, understand and respond in the case of a data breach.”

AI and jobs

On the other hand, there has been a fair amount of discussion about how AI could cause job losses. The report was fairly positive on the matter, welcoming the government’s National Retraining Scheme to help people find work as AI makes some jobs redundant while creating new ones.

Walmsley feels concerns in this area are overstated. “Our tendency to anthropomorphise AI technology perhaps comes from the widespread influence of science fiction. AI in today’s workplace is more ‘Robocop’ than ‘Terminator’s SkyNet’.

“It augments human capabilities so that systems can operate at speeds and scales that humans alone cannot. In this context, moral risk is extremely low,” he added.

Colin Lobley, CEO of the Cyber Security Challenge UK, agrees that AI could benefit the workforce:

“AI has long been tipped to be a job killer, but in the security industry it has the potential to open up opportunities to a much wider cross-section of society.”

He continued: “With AI and machine learning, a lot of tasks can be automated, allowing analysts and security professionals to focus on the tasks that require the human touch – assessing flaws, mitigating damage caused by breaches and the like.”

The report included practical advice to help manage the rapid development of AI, including educating children about the technology. There is still a long way to go in clarifying the ins and outs of AI and in resolving the ethical uncertainties that surround it.

Still, Sharma said now is the time for government and business to act: “Once in a generation a new technology arrives that changes everything. For ours, it is artificial intelligence (AI). As we approach this crossroads we need to ensure industry is ready to pivot and take advantage of the productivity gains that will be delivered through the automation of mundane, repetitive tasks – using AI to free businesses up to focus on what’s important.”

Sharma praised the suggestion of an SME fund to support schemes for UK SMEs working with AI. “It will provide much-needed education and clarity about how adoption of this technology will supercharge the growth of all industries,” she said.

To conclude, the report stresses that the UK can only truly position itself as a leader in AI if companies and the government work together to establish international norms on the design, development, regulation and deployment of the technology.