An Overview of the Lectures, “Living with Artificial Intelligence”
The series Living with Artificial Intelligence is divided into four lectures. In the first, Professor Stuart Russell discusses what Artificial Intelligence (AI) is in general, and what it does. The second lecture deals with military matters and warfare; for obvious reasons this is the most alarming. The third deals with economic matters – including what happens if robots take over all our jobs, and what we could do with our lives. The fourth deals with the control problem, and how to stop robots doing undesirable things to the human race.
I was particularly interested in the topic of AI as I come from the software industry. I well remember a student exercise, some 20 years ago, in which we designed a medical diagnosis system and realised that it would probably be more effective than human doctors in most cases. It could relieve them of the mundane work, freeing them to bring greater insight to their more difficult cases.
Many people believe (from science fiction) that AI and robots will become so powerful that they will destroy the human race. That means we have to be careful what objectives we give them when they are designed. The Sorcerer’s Apprentice fetching water, or King Midas turning everything he touched to gold, are famous examples that tell us we should be careful what we wish for.
I didn’t know that the first autonomous car drove up a German autobahn in 1989. I find the German autobahns quite terrifying to drive on! I still believe that autonomous vehicles are a long way off being used on historic roads. They might work in California or Milton Keynes, but not in a European town: so much depends on drivers eyeing each other and giving signs.
AI has proved very good at translating languages and at pattern recognition, ie picking objects out of images. There are therefore many tasks – particularly those that are dangerous for humans, eg firefighting – where AI systems can be really useful.
Autonomous warfare: by this we mean robots waging war and killing people without being told to by humans. Problems such as how robots can avoid killing civilians are discussed; but then, soldiers today don’t ask people whether they are civilians before they shoot.
It is possible that wars fought with robots would be more efficient and cost less. There could be many fewer human deaths, particularly if robots are fighting each other. The problems arise with cyber infiltration and escalation. There are still long-standing treaties in place which limit the size of weapons that can be used; these offer some protection against dangerous AI.
This lecture is about the economy and jobs. It is suggested that there might come a time when all work is done by robots and most humans will not have to work. History tells us otherwise: the more mechanisation there is, the more service jobs are created to meet other, previously unmet needs – even as the old jobs disappear.
Therefore our socio-economic systems need to change to accommodate our new lifestyles. It appears that some people will always strive to work (even though they are retired or do not have a job), while others are happy to spend their time enjoying themselves. But the strivers are often happier and more fulfilled than the leisure-seekers.
Machines can do things very much more efficiently than humans. This raises the problem of controlling robots, which simply follow the objectives that they are given. If we put the wrong objective into the machine, we cannot stop it. We need to think differently, starting from three principles:
- the machine’s only objective is to realise human objectives – ie to be of benefit to humans;
- the machine is uncertain as to our preferences (so it should give us more control, and must ask);
- the machine should learn our preferences by observing normal human behaviour.
Can we put the principles into regulation?
If AI solves this problem, then it can be of benefit to mankind. The robot must allow us to switch it off; fixed objectives are dangerous. Professor Russell gives examples of loyal AI systems that serve their owners, but also a horrifying example of a babysitting robot that kills the cat to feed the children.
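The off-switch point can be made concrete with a toy calculation. The sketch below is not Professor Russell’s formal model – the function names, utilities and probabilities are all invented for illustration – but it shows the intuition: a robot that is uncertain about our preferences gains by deferring to a human, while one with a fixed objective never does.

```python
# Toy sketch of the off-switch idea. All numbers are made up for illustration.

def act_with_fixed_objective(utility_of_acting):
    # A robot certain of its objective never benefits from being switched off:
    # it acts whenever its own objective says acting is worthwhile.
    return "act" if utility_of_acting > 0 else "switch_off"

def act_with_uncertainty(possible_utilities, probabilities):
    # Here the robot is unsure whether its action helps or harms the human.
    # Deferring is worth max(0, u) in each scenario, because the human will
    # only permit actions that actually help them.
    expected_act = sum(p * u for p, u in zip(probabilities, possible_utilities))
    expected_defer = sum(p * max(0.0, u) for p, u in zip(probabilities, possible_utilities))
    return "defer_to_human" if expected_defer >= expected_act else "act"

# The action might help a little (+1) or harm badly (-10); the robot thinks
# harm is unlikely (10%) but not impossible, so it lets the human decide.
print(act_with_uncertainty([1.0, -10.0], [0.9, 0.1]))  # prints "defer_to_human"
```

Even a small chance of a large harm is enough to make the uncertain robot hand control back to the human, which is the safety property the fixed-objective robot lacks.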
Dealing with human preferences is difficult for AI, as inferring preferences from behaviour is not easy. Our preferences often change and social media manipulate them.
There is a danger that AI makes us like infants: helpless, and with no incentive to be educated. What happens when we don’t learn anything because we don’t need to, as the robot will do it all? Should the robots stand back, like parents who make a child tie its own shoelaces?
To summarise, I found the series extremely interesting and I learned plenty of new things, particularly on the moral and ethical side. I thought the Professor was particularly good at giving illustrative examples and was witty at times (although the audience was slow to appreciate it). The long question-and-answer sessions after each lecture involved people who knew their academic fields.
These lectures are still available on BBC Sounds. Kent Bylines will be publishing further articles by another author, Jan Fuscoe, which discuss each lecture in more detail. Juliet’s shorter article above is just to whet your appetite for more on this important topic.