Terminator vs. Iron Man: is artificial intelligence going to take over the world?

14 October 2016

Dr Nicola Millard, Head of Customer Insight and Futures, BT

AI is a fascinating, complex and occasionally scary technology, with the power to change the world. Dr Nicola Millard gives her thoughts on its evolution.

Is artificial intelligence going to take over the world? This is the question being debated around the planet at the moment. Talk is rife about mass unemployment, existential threats and whether this technology is safe (especially in the context of self-driving cars and drones).

I’ve been privileged to have been part of a number of these debates, both at conferences and with customers, over the past few weeks. I joined the likes of chess grandmaster Garry Kasparov and ‘Robot Wars’ expert Noel Sharkey at the ‘Social Robotics & AI’ conference in Oxford, and chaired a fascinating panel with machine learning experts Joanna Bryson, Miranda Mowbray and Stephen Roberts at ‘New Scientist Live’ at London’s ExCeL Centre.

Dr Nicola Millard on stage at New Scientist Live

These debates aren’t new — they have been echoing around ever since Alan Turing proposed the Turing Test back in the 1950s. The Turing Test is passed when a human can’t tell whether they are talking to a computer or a person — and it has yet to be convincingly passed. This is because many things that seem simple to a six-year-old child — like having a conversation, spotting interesting facts, instinctively liking people — are actually very difficult problems to encode and compute.

Huge progress has been made.

This subject tends to make me a little nostalgic. I started my career in BT in 1990 (at the age of six, obviously) developing machine learning technologies to support our contact centre advisors. ESCFE (Expert Systems for Customer Facing Environments) was originally trialled in our International Customer Service Centre in Burnham-on-Sea, Somerset. It helped our advisors to navigate the bewildering complexity of global network fault diagnostics by asking the right questions based on the available data residing on multiple back-end systems.

The idea was that they could concentrate on the conversation with the customer rather than navigating the systems. It worked, but the technology was unwieldy in those days. There were no deep learning capabilities or robotic process automation then, just a whole team of highly talented coders and technologies that literally scraped bits of data from a multitude of databases (an approach that fell over if just one field on one system moved).

A lot has changed in 25 years. Machine learning algorithms are (often invisibly) learning about our buying and searching preferences on sites like Amazon and Google, spotting spam email and even beating the world’s leading Go player. The big difference is data. The machine learning engines that succeed are those with access to a lot of data in machine-readable form. And there is A LOT more data now than there was in the 1990s — and there will be more as we enter a world of the Internet of Things, Clouds of Clouds and smart cities.
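To make that point concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn (a hypothetical example, not the technology described in this post): a toy spam filter that can only ever be as good as the labelled examples it is fed.

# A toy illustration only -- a tiny spam filter trained on four labelled
# messages. Real classifiers learn from millions of examples, which is
# why the volume of machine-readable data matters so much.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",            # spam
    "cheap loans approved instantly",  # spam
    "meeting moved to 3pm",            # not spam
    "your fault report is attached",   # not spam
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectoriser = CountVectorizer()                 # turn text into word counts
features = vectoriser.fit_transform(messages)
model = MultinomialNB().fit(features, labels)

# Predict on a new message; with so little data, the model can only
# echo the patterns it has already seen.
print(model.predict(vectoriser.transform(["claim your free prize"])))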

Responsible parenting required.

However, just giving a machine data and telling it to learn won’t necessarily give you good AI — as demonstrated by the initially innocent chatbots that were corrupted after being exposed to social media streams. There is an element of ‘responsible parenting’ required from us if we want to develop machines with something that could be interpreted as ‘ethical behaviour’, or even something that might meet the high sociability standards of our own mothers and fathers.

Perhaps a different approach is required. Garry Kasparov’s experience of being beaten at chess by IBM’s Deep Blue significantly developed his thinking on AI. He decided that if you can’t beat them, join them. From that came ‘advanced chess’ — where man and machine come together to play against other man-machine teams. Machines and humans play chess in very different ways — and an average human chess player can be elevated to world-champion level if their partnership with the machine works.

Making it personal.

This echoes the main lesson we learned in the contact centre trial back in the 1990s. Smart customer-service people armed with smart technologies can deliver better customer experiences. Technology is good at sorting through massive data sets, spotting patterns and doing repetitive jobs, and it can tackle the dull, dirty and dangerous work we simply don’t want to do. We bring a number of skills to the game that machines can’t — innovation, creativity, empathy, caring, negotiation and intuition. Things that make us uniquely human.

Why build a machine that simply does things that we do, when you can build one that enhances what we do? This is the ‘Terminator’ versus ‘Iron Man’ dilemma. I’d rather be a Tony Stark-like super cyborg than an irrelevance where the computer says “no”.

Read our point of view on innovation.