Forget the red pill, take a chill pill

This month (31 March) marks 20 years since ‘The Matrix’ hit cinemas in the US (11 June in the UK), revolutionising the way we view the power of machines and our relationship with technology. Kevin Joyner, director of planning and insight at Croud, explores how far Artificial Intelligence has come in the past 20 years.

When we’re asked to think of a world dominated by Artificial Intelligence, we probably imagine ourselves living inside The Matrix: wired up to liquid-filled pods while machines determine and operate every aspect of our lives. In the film – one of my favourites, 20 years old this year – we’ve lost a battle with machines, scorched the earth in an attempt to defend it, and become slaves – anaesthetised by a simulated world. Scary, right?

With AI now embedded in much of how we live and what we do – whether in healthcare, banking, online shopping or our trusty voice assistants – we might easily imagine that we’re not far from that world already, and that life as we know it will never be the same again.

But let’s take a moment to look at what would need to happen just to put us on the path to the Matrix. Firstly, we’d need general AI that can adapt to any task and imagine anything – absolutely anything. We’d need tremendous advances in robotics, and a machine that can have goals other than the ones given to it by humanity – a machine that can determine its own purpose.

In contrast – in reality, today – although we’re seeing more processes driven by machine learning, this normally means training models on huge volumes of data, feeding them a limited number of factors, and retrieving mostly accurate mathematical predictions. The training data is cleaned and prepared through laborious human effort – which, funnily enough, is intended to free human time in the long run: to be creative, and to do the things only humans can do. In short, there is an enormous gulf between fiction and the most common applications of AI today.
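To make that gulf concrete, here is a minimal sketch of the workflow described above – prepare a small, cleaned dataset with a limited number of factors, fit a model, and retrieve mostly accurate predictions. The figures are invented for illustration; a real project would use a proper library and far larger volumes of data.

```python
# One factor (say, weekly ad spend) and an outcome (say, conversions).
# All numbers are hypothetical, standing in for laboriously cleaned data.
spend = [1.0, 2.0, 3.0, 4.0, 5.0]
conversions = [2.1, 3.9, 6.2, 8.1, 9.8]

# "Training": fit a straight line by ordinary least squares.
n = len(spend)
mean_x = sum(spend) / n
mean_y = sum(conversions) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(spend, conversions))
         / sum((x - mean_x) ** 2 for x in spend))
intercept = mean_y - slope * mean_x

# "Prediction": mostly accurate, never a guarantee.
def predict(x):
    return intercept + slope * x

print(predict(6.0))
```

A single narrow capability like this – one factor in, one prediction out – is the everyday reality of machine learning, a long way from a machine that sets its own goals.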

Less scared? While machine learning is already involved in many aspects of how our society functions, that role has long been filled by what we used to call – more simply – automation. Now the automation is just slightly more intelligent, giving us better predictive power across a broader range of applications. Setting out to predict something often no longer requires as much bespoke maths. Instead, we prepare data, tune models, and train a machine to have the single capability we’ve planned for it.

For instance, we’ve used machine learning to help predict the performance improvements we’ll achieve by undertaking digital marketing tasks. We’ve developed a model that recommends individual tasks for a client’s Google Ads account. The machine doesn’t actually perform the tasks; rather, it helps us to plan our resources so that we do the highest-performing, most profitable work first. It’s technology that identifies the work that needs doing, then prioritises it intelligently, based on proven performance. It helps us to be even better at the jobs that only we can do.
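The prioritisation step can be sketched very simply. The task names, uplift figures and effort estimates below are invented for illustration – they are not Croud’s actual model outputs – but they show the idea of ordering work by predicted value for effort:

```python
# Hypothetical sketch: rank digital marketing tasks so the most
# profitable work is scheduled first. All figures are invented.
tasks = [
    {"name": "tidy negative keywords",      "predicted_uplift": 0.03, "hours": 2},
    {"name": "restructure ad groups",       "predicted_uplift": 0.08, "hours": 8},
    {"name": "rewrite underperforming ads", "predicted_uplift": 0.05, "hours": 2},
]

# Simple uplift-per-hour score; a production model would be far richer.
ranked = sorted(tasks,
                key=lambda t: t["predicted_uplift"] / t["hours"],
                reverse=True)

for t in ranked:
    print(t["name"])
```

Here the machine only orders the to-do list; humans still decide whether the list is right and then do the work.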

The biggest real risk with AI is that we might put a machine learning model to work in society having trained it on data that amplifies our human shortcomings. There may be a real danger that the more we use machine learning, the more we abdicate our responsibility to resist those shortcomings and to try to make ourselves better as a society.

Even so, for now, I’m sure we should concentrate on the bigger and unquestionably real threats… like global warming: an actual scorched, and sodden, earth. Just a thought.