Reith Lecture No 4: Artificial Intelligence and Ethics

Image: artificial intelligence as a robot
https://pixabay.com/hu/users/hungquach679png-23795504/ — free licence from Pixabay

Scientists are guided by ethicists

This fourth Reith lecture is about whether AI is a threat to humans. Could a robot turn against its creator? Russell was surprisingly firm in his answer: no, we need not worry, as computer scientists are already being guided by ethicists.

Some decades ago, the science fiction writer Asimov formulated three principles which have now been taken up by AI scientists:

  • The AI entity must do no harm to humans
  • It must be programmed to understand human preferences
  • Whatever objective it is set to do, it must still comply with these preferences


Do humans know their preferences?

This provokes much thought and discussion about exactly what human preferences are. Sometimes we don’t really know our own preferences. Our preferences today might differ from our preferences tomorrow. People living in the same household or family often have different preferences.

Computer scientists have to think very precisely to programme AI software for the desired objective, without any possibility of deviation or of arriving at the objective in the wrong way. Russell gave an example in a fable about an AI babysitter. Finding that there was no ready food in the house when the children were hungry, the babysitter had the bright idea of killing the pet cat! It thus satisfied the overriding objective of keeping the children safe, but it lethally defied human feelings for pets. It simply did not have enough background knowledge of the humans it was serving.

There was some pessimism about whether AI could ever be given enough of such background information. However, Russell listed the frontiers of computer science that had been reached in recent decades sooner than expected: beating humans at draughts and chess, decoding the human genome, language translation, legged robotics, facial recognition, self-driving cars and so on.

Image: artificial intelligence (Pixabay, free licence)

Three schools of philosophical tradition

The role of ethics in safeguarding humans in the face of new AI functions is increasingly important. According to Russell, many social science lecturers at his home university (Berkeley, California) are moving across to the engineering departments, attracted by the growing importance of AI in human society. He said there are basically three schools of philosophical tradition that can define ethics for computer scientists: “virtue-based”, “rights-based” and utilitarian (i.e. the happiness of the greatest number).

He did not explicitly define “virtue-based” but I assume he means the tradition stemming from Aristotle’s Ethics, which specifies what virtue is and how it is developed. I guess these ideas underlay the part of the lecture which was on education.

A scenario was suggested in which AI could take over from humans: if humans became dependent on AI (since it is all-competent and performs all needful tasks better than humans), they might cease to know how to do these tasks for their own maintenance and so succumb to a futile, inactive existence. On the contrary, Russell thinks that AI is unlikely to take over tasks that require human interaction any time soon.

Although AI can be a patient tutor (in language learning, for example), service jobs that require empathy cannot be replaced by computer-operated alternatives. This is good news for those who became fearful after lecture 3, which was about job losses.

If the guiding ethic for computer programmers is to be the happiness of the greatest number, it is open to the criticism that this is the ethic of greedy capitalism, which has led us to over-consume and wreck the planet. What about the happiness of other species? And what is the standard of the good life? If it is taken to be that of Greenwich Village residents, then there is not enough room on the planet for such spacious living. Already, the majority of city-dwellers have to be stacked up in skyscrapers.

Legislation for AI

In question time, a law professor pointed out that legislation for AI is already beginning. The EU has started this process and has just legislated against impersonation on the internet. Those working on this EU legislation base it on the Charter of Fundamental Rights of the EU, which goes further than the 1948 Universal Declaration of Human Rights.

It is to be expected that our understanding of rights would need upgrading in the more than 70 years since the post-war declaration. For example, of relevance to the AI industry, the EU definition includes rights to personal data. Russell admitted that law-making has fallen a bit behind in regulating the frontiers of the internet: for instance, in the use of biased algorithms, in the kind of data-mining that can skew elections (Cambridge Analytica), and in the need to outlaw hate speech on social media. The law professor was firm that a rights-based approach was not just to guide the ethicists: there was an urgent need to regulate.

The UK is likely to fall behind in this, with a government led by freebooters. Post-Brexit, expect to see some connivance at dropping regulations that accord with the EU Fundamental Rights if it can be boasted that this boosts Great Britain. As the companies at the frontiers of AI (Amazon, Google, Tesla, Facebook) are among the biggest in the world, it is unlikely that the UK alone could tackle the abuses they cause.

The EU, however, does have the power and the will to start the process of regulating AI. This lecture reveals why we should all pay attention to this process.