Preliminary Draft Report of COMEST on Robotics Ethics August 2016

Questions and Answers

1. What is the Report about?

As part of  its  work  programme  for  2016-2017,  a working group of the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), a scientific advisory body at UNESCO, decided  to  address  the  topic  of  robotics  ethics  building  on  COMEST’s previous reflection on ethical issues related to modern robotics, as well as the ethics of nanotechnologies and converging technologies.

The “Preliminary Draft Report of COMEST on Robotics Ethics” examines ethical issues related to the use of autonomous robots and how humans interact with them. According to the report, the rapid development of highly intelligent autonomous robots is likely to challenge our current classification of beings according to their moral status, in the same way as the animal rights movement did, or perhaps an even more profound one.

2. Why are experts looking into the ethics of robotics?

Since the first industrial robots were used in car manufacturing in the United States in the 1950s, they have become a fact of modern life. Popularised by science fiction films and TV programmes, robots are increasingly visible in modern societies.

Robots are used in factories around the world. Drones are used in warfare and robots are used to defuse bombs. Robots are starting to replace workers in service industries from shops to hotels. Robots resembling humans are being used to care for the elderly or in therapy for children with autism.
Robots (especially humanoid ones) owe a great deal of their popularity to literature, television and science fiction films, from Frankenstein’s monster and Star Wars to The Terminator.

The presence of robots in homes and the workplace and more generally in society has an impact on human behaviour. It also represents profound social and cultural changes.

The report on robotics ethics aims to raise awareness about ethical issues related to the use of autonomous robots in society.

3. What is a robot? How do you define one?

The word robot is of Czech origin and was introduced by Karel Čapek in a science fiction play in 1920. The term derives from the word “robota,” which means “work” or “labour” in Czech.

The prevailing view of what a robot is – thanks to science fiction movies, TV series and literature – is that of a machine that looks, thinks and behaves like a human being. But robots do not necessarily take human form; they can simply be smart machines performing routine, repetitive and hazardous mechanical tasks, such as robots working in factories.

The ability to interact with their environment distinguishes robots from computers. Robots also have “bodies.”

Thanks to artificial intelligence (AI), machines can perform tasks that would require intelligence if performed by humans.

4. How have robots developed and how sophisticated might they be in the future?

The Encyclopaedia of Robotics distinguishes between five generations of robots.
The first generation (before 1980) was mechanical, stationary, precise, fast, physically rugged but without external sensors or artificial intelligence.

The second generation (1980-1990), thanks to microcomputer control, could be programmed, involved vision systems, as well as tactile, position and pressure sensors.

The third generation (mid-1990s and after) became mobile and autonomous, able to recognise and synthesize speech, incorporated navigation systems, and was either tele-operated or equipped with artificial intelligence.

The fourth and fifth generations are speculative robots of the future able, for example, to reproduce and to acquire various human characteristics such as a sense of humour.

5. Why is this report looking at the potential “moral” status of machines?

A robot’s behaviour, even if the robot is highly complex, intelligent and autonomous, is essentially determined by humans.

However, assuming future robots are likely to become even more sophisticated (perhaps to the point that they will be able to learn from past experience and programme themselves) the nature of their algorithms – a set of precise instructions on how the robot should operate – will likely become an issue worthy of serious ethical attention and reflection, according to COMEST.

One problem with the development and utilization of robots, as was the case with many other technological innovations, is the unforeseeable and unintended harm they may cause to humans.

The malfunctioning of today’s sophisticated robots could inflict significant harm on a very large number of human beings (e.g. armed military robots or autonomous robotic cars getting out of control).

The question, therefore, is not only whether robotics ought to respect certain ethical norms, but whether certain ethical norms need to be programmed into robots themselves. Such a need becomes apparent if one focuses on personal robots and the possible harm they could inflict on humans (e.g. robots for cooking, driving, fire protection, grocery shopping, bookkeeping, companionship, nursing).

Robots’ autonomy is likely to grow to the point where their ethical regulation becomes necessary: programming them with ethical codes specifically designed to prevent harmful behaviour (e.g. endangering humans and their environment).

An intriguing question regarding robots with enhanced autonomy and capacity for decision-making (possibly even moral decision-making) concerns their moral status.

Would such robots deserve the same moral respect and immunity from harm (having certain moral rights) as is currently the case with humans and some non-human animals?

Depending on future advances in this research area, one should not exclude the possibility of future robots’ sentience, emotions and, accordingly, moral status.
The rapid development of highly intelligent autonomous robots is likely to challenge our current classification of beings according to their moral status, in the same way, or perhaps an even more profound one, as happened with non-human animals through the animal rights movement.

6. How is robotics regulated?

Bearing in mind the complexity of contemporary robots, the question arises as to who should bear responsibility – ethically and legally – in cases where robots malfunction and harm human beings.

Robots, like so many technologies, can be used for both good and bad purposes.
Robotics remains both ethically and legally under-regulated, probably because it is a relatively new and rapidly changing field of research whose impact on the real world is often difficult to anticipate.

There are no specific ethical guidelines as to how robotic research and projects, especially those that have a direct impact on humans, should proceed.

There are no universally accepted codes of conduct for robotics. In terms of legal regulation, however, robots are treated in the same way as any other technological product.
