Ethics and the Responsibility of Science
- Session 11
ICSU's Standing Committee on Responsibility and Ethics in Science
With scientific progress, new and unfamiliar situations continually emerge, creating circumstances in which our traditional concepts (of, for example, truth, reality, space-time, mind, human nature and morality) are called into question. The new descriptions offered may make classical notions seem no longer applicable to reality, and our habitual, accustomed attitudes or ways of life may come to appear threatened. Rapid scientific advance seems to outstrip our moral sensibility and judgement. There is often a dramatic tension between good and bad uses of new scientific concepts, theories and methods, as well as the notoriously tricky problem of deciding who is to determine what is good or bad: scientists? politicians? the general public? This is apparent in the advance of numerous disciplines: biotechnology calls human identity and personhood into question, the brain sciences question the self, and information technology raises the threat of cyber war. The challenges are manifold: to construct a coherent ethical position that covers a wide variety of related issues; to balance emotional reactions against rational arguments; and, not least, to understand properly the scientific facts that underlie the situation.
(a) Biotechnological advances (e.g. recombinant DNA techniques) have provided humans with tools that give rise to many difficult ethical problems. As the genomes of various species (including the human) are gradually decoded, we gain the ability to interfere with and design other living organisms. Consider, for example, the fact that we learn more and more about genetic dispositions that may or may not develop. Should we and/or others have unlimited access to this information? Given that we have the information, to what extent should it lead to action? Who is responsible for making these decisions? Concepts such as human dignity and integrity are essential parts of this debate. The UNESCO Declaration on the Human Genome is an important step towards a unified policy on these ethical issues. Similar problems emerge with non-human uses of biotechnology. The release of genetically modified organisms into nature, for instance, may have a profound impact on the existing gene pool. What risks should we regard as ethically acceptable?
(b) Developments in the brain sciences, psychiatry and the philosophy of mind call into question many traditional views, notably of the self. It is traditionally assumed that for each normal human being, the number of personalities, persons, or selves must be exactly one. Various forms and interpretations of this belief have dominated most of mankind's intellectual history, in philosophy, science, religion, psychiatry, and legal and social theory alike. But allusions to divided, fragmented minds and to multiple, successive selves are nowadays commonplace in both theoretical and empirical studies. These notions challenge the various Western traditions that posit a single subject of experiences in every human being, traditions we no longer accept unquestioningly. Scientific beliefs about the nature of the self have strong ethical relevance. Conceding a person a self is more than a logical conclusion: it is a moral gesture of admission into a socially important group.
(c) The revolution in information technology (IT) carries risks as grand as its potentials. The development of the Internet and the Web has not only brought fruitful advances in IT, but also created dependence on these results. As the deep concern over the switch to a new millennium reveals, many countries are extremely vulnerable to cyberspace breakdowns in their information-dependent systems, such as infrastructure (air traffic, electric power, etc.). Such breakdowns could result from accidents or from intentional interference (by hackers, for example), or the systems could become targets in a cyberspace war. Difficult problems of scientific ethics and international security ensue from this new situation; and, as Molander & Siang (1998) point out, "a comprehensive understanding of the impact of cyberwarfare has eluded the international security community", which is a cause of concern:
In November 1998, a committee of the UN General Assembly addressed this issue in a resolution, calling upon all Member States to "promote the consideration of existing and potential threats in the field of information security" and to help develop "international principles that would enhance information security and combat information terrorism and criminality". It is presumably not unreasonable to demand that the scientific community that developed the Internet and the Web share the responsibility of finding solutions to these rather dramatic problems.