This article, co-authored by Harvey Lewis, research director at Deloitte UK, unveils the possibilities presented by artificial intelligence.
In the 2015 film “Ex Machina”, the character Nathan Bateman, an archetypal eccentric billionaire, suggests that “one day the AIs are going to look back on us the same way we look at fossil skeletons on the plains of Africa. An upright ape living in dust with crude language and tools, all set for extinction.”
Given the surge of interest in artificial intelligence (AI) in recent years, fueled by big data and ever more sophisticated algorithms and hardware, it should come as no surprise that famous entrepreneurs and even eminent scientists in the real world are asking whether computers could one day threaten the survival of humankind.
Governments and businesses around the world are continuing to invest billions of pounds in the technology.
Consider Google’s major acquisitions in AI, including robotics companies and the British machine learning startup, DeepMind, whose Go-playing AI computer, AlphaGo, defeated the world champion.
Or IBM’s ongoing commitment of $1bn towards Watson, its cognitive computing platform that can understand natural language and scour millions of documents of text in seconds.
And a number of other technology companies are developing AI-powered ‘chatbots’ and virtual assistants, beginning what some are calling a “crucial paradigm shift in how we think about using the Internet”.
The impact of AI and other technologies on employment is forecast to be profound. Deloitte recently estimated that as many as 36% of jobs in the UK are at risk of being automated.
Retail, transportation, hospitality and manufacturing industries are set to be hardest hit.
The consequence of all this excitement, though, is that AI is now perceived to have almost magical powers of analysis and insight.
Yet it is rarely defined in terms that describe its relative immaturity, its rather more prosaic applications to business, or even its mistakes.
Yes, AI and deep learning can classify objects in photographs better than humans can, but those same algorithms can also be easily deceived into believing that a school bus is an ostrich.
Far away from the flickering lights of the silver screen, the all-singing, all-dancing ‘killer app’ – if you’ll excuse the expression – has yet to emerge.
So how should we talk about AI and where does the technology’s business potential really lie? What further challenges will it face in the future?
A point of view
For Deloitte, a useful definition of AI is the theory and development of computer systems able to perform tasks that normally require human intelligence.
Examples include tasks such as visual perception, speech or handwriting recognition, diagnosis – in humans as well as other machines – and translation between languages.
Defining AI in terms of the tasks that humans do rather than how humans think allows us to discuss its practical applications.
Distinguishing between the broad field of AI and the discrete technologies that emanate from the field – what we refer to as cognitive technologies – means that we can marry technology to discrete tasks.
For instance, the technologies valuable to businesses and public sector organisations include speech recognition and translation, computer vision, machine learning, natural language processing, optimisation, planning and scheduling and autonomous systems.
· Computer vision has diverse applications in healthcare, for instance, where companies are using AI to analyse medical images to improve the diagnosis and treatment of diseases.
· Machine learning systems, which automatically discover patterns in data, are being used in myriad applications where data is being generated in ever-greater quantities, such as fraud screening, sales forecasting, inventory management, price discounting, oil and gas exploration, marketing, drug discovery and public health.
· Natural language processing enables companies to discover insights and hidden value in reams of unstructured text, for example by analysing call centre transcripts or customer feedback about a particular product or service, automating discovery in civil litigation and even automatically writing news stories.
· Speech recognition systems focus on automatically transcribing human speech and have applications ranging from medical dictation to banking security.
· Optimisation automates complex decisions and trade-offs about limited resources and is being used in predictive policing, for example, while planning and scheduling systems devise a sequence of actions to meet goals and observe constraints, such as task scheduling for a workforce.
The benefits to organisations go beyond the simple cost savings typically associated with ‘automation’. Cognitive technologies applied to discrete tasks allow machines to work alongside humans rather than simply instead of them, augmenting human intelligence and enabling faster actions and decisions, better outcomes, greater efficiency and scale, while also driving innovation in product and service development. Technology, it turns out, has a habit of creating more human jobs than it destroys.
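To make the pattern-discovery idea behind fraud screening concrete, here is a minimal sketch – not Deloitte’s methodology, and using invented transaction amounts – of a rule that flags anomalous values by how far they deviate from the rest of the data:

```python
import statistics

def flag_outliers(amounts, threshold=3.0):
    """Flag transactions whose amount deviates from the mean by more than
    `threshold` standard deviations -- a crude stand-in for the pattern-
    discovery step a fraud-screening system automates at scale."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) > threshold * stdev]

# Illustrative transaction amounts in pounds; 9,800 is the planted anomaly.
transactions = [12.5, 30.0, 18.75, 25.0, 9800.0, 22.0, 15.0]
print(flag_outliers(transactions, threshold=2.0))  # → [9800.0]
```

Real systems learn far richer patterns than a single threshold, but the principle is the same: the machine surfaces the suspicious cases so that human investigators can focus their attention where it matters.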
Road to adoption
The road to greater business adoption of AI is not necessarily a smooth one, though.
Some cognitive technologies are advancing faster than others.
In some cases, the advances are so rapid that the laws and norms of society cannot keep up.
So although task-oriented cognitive technologies are unlikely to decide arbitrarily that humans are an irrelevance, they will certainly challenge many of the fundamental principles upon which our society is based as they become increasingly hungry for more data, more processing power and more connectivity.
This accumulation of data will also present new opportunities to cyber criminals seeking to disrupt individuals and institutions, potentially increasing the risks we all face as new technologies are introduced.
The reality is that artificial intelligence will become more, not less, pervasive in the future as more applications are identified and relevant cognitive technologies are created and enhanced.
Its use will lead to questions that cut to the heart of debates about privacy and employment.
Our recommendation, therefore, is that enterprises should start today to develop greater awareness of these technologies and seek opportunities to pilot them.