Artificial intelligence has the potential to save lives and give us better, faster, more personalised information. Surely we should welcome its advance with open arms? Or are there more sinister overtones to the march of AI? Should we be worried?
If you have bought anything off Amazon recently you will have experienced an artificial intelligence algorithm at work. This is what you looked at recently, so you are obviously interested in buying it; this is what people with a similar profile to you have bought.
Suddenly you start to feel that Amazon knows more about you than you know yourself. That is understandable: everything you have bought since you opened your account is stored, and so is everything you have looked at. I cannot be the only person who deliberately avoids looking at certain products on Amazon because I do not want them in my browsing history…
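The "people with a similar profile to you have bought" idea can be sketched in a few lines of Python. To be clear, everything here – the shoppers, the purchases and the crude overlap score – is invented for illustration; Amazon's real system is vastly more sophisticated and draws on far richer signals:

```python
# Toy sketch of profile-based recommendation. All names and data are
# invented; real recommenders use far more than purchase overlap.

def recommend(purchases, user):
    """Suggest items bought by users whose purchase history most
    overlaps with `user`'s, excluding items the user already owns."""
    mine = purchases[user]
    scores = {}
    for other, theirs in purchases.items():
        if other == user:
            continue
        overlap = len(mine & theirs)  # crude similarity: shared purchases
        for item in theirs - mine:
            scores[item] = scores.get(item, 0) + overlap
    # highest-scoring unseen items first
    return [item for item, _ in sorted(scores.items(), key=lambda kv: -kv[1])]

purchases = {
    "alice": {"walking book", "boots"},
    "bob": {"walking book", "boots", "tent"},
    "carol": {"novel"},
}
print(recommend(purchases, "alice"))  # → ['tent', 'novel']
```

Bob's history overlaps heavily with Alice's, so his tent outranks Carol's novel – which is exactly why a book about walking leads to camping equipment.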
But artificial intelligence is not going to limit itself solely to recommending camping equipment if you have bought a book about walking. Right now AI is being experimented with in virtually every area you can think of – medical research and diagnosis, driverless vehicles, insurance underwriting and risk assessment and, rather more chillingly, security, surveillance and criminal sentencing.
So is that good news or bad news for us mere mortals? And – rather more worryingly – will artificial intelligence become so intelligent that we no longer understand it?
Good news or bad news?
It is Monday morning, so let us start with the glass half full. A recent report by the consultancy PwC forecast that AI could boost the global economy by $15.7tn by 2030 – equivalent to more than £11tn. How can that not be good news?
Especially when AI will lead to more accurate medical diagnoses, more accurate insurance underwriting, better financial information on our spending and our saving – and even better weather forecasts.
And clearly, AI is going to save lives. Yes, there have been one or two accidents in the early development of driverless cars but – as many people have pointed out – driverless cars do not have to be perfect; they simply need to kill fewer people than we do.
But – and this is becoming an increasingly big but – we appear to be moving towards a situation where AI machines are making complex decisions that affect our lives. The problem is that the scientists and developers behind the machines, well… they do not quite understand how the machines are making those decisions.
The technical part
…Which I have tried to make as simple as possible! David Stern, a quantitative research manager at a firm specialising in machine learning in financial markets, was quoted in a recent BBC article as saying:
“training [AI machines] involves the setting of millions of internal parameters which interact in complex ways and are very difficult to reverse engineer and explain.”
Another trend in AI involves what is known as ‘deep reinforcement learning’, whereby the designer gives the machine a certain goal, but the machine effectively teaches itself by interacting directly with its mathematical, medical or surveillance environment. “This results in a system which is difficult to understand,” says the helpful Mr Stern.
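Deep reinforcement learning proper uses neural networks, but the core idea – give the machine only a goal and a reward, and let it teach itself by trial and error – can be shown with a toy, table-based sketch. The environment here (walk right along a short line to reach a goal) and every number in it are invented purely for illustration:

```python
import random

# Toy sketch of reinforcement learning: the designer specifies only a
# goal (reach position 4) and a reward; the learner discovers how to
# behave by trial and error. Real "deep" RL replaces this table with a
# neural network, which is what makes the learned behaviour so hard to
# inspect afterwards.

random.seed(0)
GOAL, ACTIONS = 4, (-1, +1)           # step left or right on positions 0..4
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}

for _ in range(500):                  # episodes of self-teaching
    s = random.randrange(GOAL)        # start somewhere random each time
    while s != GOAL:
        # mostly act on current knowledge, but explore 10% of the time
        if random.random() < 0.1:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), GOAL)
        reward = 1.0 if s2 == GOAL else 0.0
        # standard Q-learning update: nudge the estimate towards the
        # reward plus the discounted value of the next position
        q[(s, a)] += 0.5 * (reward + 0.9 * max(q[(s2, b)] for b in ACTIONS)
                            - q[(s, a)])
        s = s2

policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)  # → [1, 1, 1, 1]: it has taught itself to always move right
```

Nobody told the machine to move right; it worked that out from the reward alone. Scale the table up to millions of learned parameters and you have Mr Stern's problem: a system whose decisions are difficult to reverse engineer and explain.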
What these two scenarios add up to is worrying: essentially, AI machines are producing results, but the developers and designers of those machines are not sure how the machines arrive at them.
You might reasonably ask: ‘Should we trust the results if we do not understand how they are arrived at?’ The problem is that we are already trusting the results. AI is on the march, and that march appears to be unstoppable.
Does it matter?
On the face of it, that is a valid question. If it is a safer car, or a more accurate medical diagnosis, do we need to understand how the machine makes a decision? After all, if it is saving lives, it can only be a good thing. As Adrian Weller, programme director at the Alan Turing Institute, says, “sometimes these issues might be more important than understanding how the machine works.”
Quite right. If a machine makes a decision that saves my life, I do not care how it made the decision.
But supposing the machine turns me down for life cover? Or decides that on the balance of probabilities my life-saving operation is not cost-effective: that the money is better spent elsewhere? Supposing – and here is a chilling thought for a Monday morning – that an artificial intelligence machine sends me to jail?
Crime and punishment
Many of you will have seen the film Minority Report, a vision of the future in which a special police unit can arrest murderers before they commit their crimes. AI could well be bringing that closer: supposing the police start arresting people on suspicion of planning a crime, simply because the number-crunching algorithm suggests that is what they are likely to do?
And supposing AI is ultimately used in criminal sentencing? Surely then we are entitled to understand how the machine is making its decision?
“If an [AI algorithm] sentenced me to six years,” said Adrian Weller, “I would want an explanation which would enable me to know if it had followed the appropriate process, and allow me to challenge the algorithm if I disagreed.”
He went on to say that we should know when a company is using an algorithm to make a decision. Well, we have all just survived GDPR – maybe we will soon all be asked to consent to the companies we use every day popping our data into an algorithm. But they are already doing that…
Thoughts for the future
The ever-increasing use of AI is going to pose huge moral and social problems in the future. By the time the majority of people wake up to the implications of AI it may well be too late to legislate for its irresistible advance. Yes, of course I want my life to be saved by an accurate diagnosis. Of course I want to drive a safer car. But I have no wish to have the police knocking on my door simply because I fit the profile determined by their algorithm. And I have even less wish to get six years’ porridge without understanding why…