In the industrialized nations of the world, the "information revolution" has already significantly altered many aspects of life -- banking and commerce, work and employment, medical care, national defense, transportation and entertainment. Consequently, information technology has begun to affect (in both good and bad ways) community life, family life, human relationships, education, freedom, and democracy, to name just a few examples. Computer ethics in the broadest sense can be understood as the branch of applied ethics that studies and analyzes such social and ethical impacts of information technology.
In recent years, this robust new field has led to new university courses, conferences, workshops, professional organizations, curriculum materials, books, articles, journals, and research centers. And in the age of the world-wide-web, computer ethics is quickly being transformed into "global information ethics".
It has long been clear to me that the modern ultra-rapid computing machine was in principle an ideal central nervous system to an apparatus for automatic control; and that its input and output need not be in the form of numbers or diagrams. It might very well be, respectively, the readings of artificial sense organs, such as photoelectric cells or thermometers, and the performance of motors or solenoids.... We are already in a position to construct artificial machines of almost any degree of elaborateness of performance. Long before Nagasaki and the public awareness of the atomic bomb, it had occurred to me that we were here in the presence of another social potentiality of unheard-of importance for good and for evil. (pp. 27-28)

In 1950 Wiener published his monumental book, The Human Use of Human Beings. Although Wiener did not use the term "computer ethics" (which came into common use more than two decades later), he laid down a comprehensive foundation which remains today a powerful basis for computer ethics research and analysis.
Wiener's book included (1) an account of the purpose of a human life, (2) four principles of justice, (3) a powerful method for doing applied ethics, (4) discussions of the fundamental questions of computer ethics, and (5) examples of key computer ethics topics. [Wiener 1950/1954, see also Bynum 1999]
Wiener's foundation of computer ethics was far ahead of its time, and it was virtually ignored for decades. On his view, the integration of computer technology into society will eventually constitute the remaking of society -- the "second industrial revolution". It will require a multi-faceted process taking decades of effort, and it will radically change everything. A project so vast will necessarily include a wide diversity of tasks and challenges. Workers must adjust to radical changes in the workplace; governments must establish new laws and regulations; industry and businesses must create new policies and practices; professional organizations must develop new codes of conduct for their members; sociologists and psychologists must study and understand new social and psychological phenomena; and philosophers must rethink and redefine old social and ethical concepts.
In the mid 1970s, Walter Maner (then of Old Dominion University in Virginia; now at Bowling Green State University in Ohio) began to use the term "computer ethics" to refer to that field of inquiry dealing with ethical problems aggravated, transformed or created by computer technology. Maner offered an experimental course on the subject at Old Dominion University. During the late 1970s (and indeed into the mid 1980s), Maner generated much interest in university-level computer ethics courses. He offered a variety of workshops and lectures at computer science conferences and philosophy conferences across America. In 1978 he also self-published and disseminated his Starter Kit in Computer Ethics, which contained curriculum materials and pedagogical advice for university teachers to develop computer ethics courses. The Starter Kit included suggested course descriptions for university catalogs, a rationale for offering such a course in the university curriculum, a list of course objectives, some teaching tips and discussions of topics like privacy and confidentiality, computer crime, computer decisions, technological dependence and professional codes of ethics. Maner's trailblazing course, plus his Starter Kit and the many conference workshops he conducted, had a significant impact upon the teaching of computer ethics across America. Many university courses were put in place because of him, and several important scholars were attracted into the field.
In the mid-80s, James Moor of Dartmouth College published his influential article "What Is Computer Ethics?" (see discussion below) in Computers and Ethics, a special issue of the journal Metaphilosophy [Moor, 1985]. In addition, Deborah Johnson of Rensselaer Polytechnic Institute published Computer Ethics [Johnson, 1985], the first textbook -- and for more than a decade, the defining textbook -- in the field. There were also relevant books published in psychology and sociology: for example, Sherry Turkle of MIT wrote The Second Self [Turkle, 1984], a book on the impact of computing on the human psyche; and Judith Perrolle produced Computers and Social Change: Information, Property and Power [Perrolle, 1987], a sociological approach to computing and human values.
In the early 80s, the present author (Terrell Ward Bynum) assisted Maner in publishing his Starter Kit in Computer Ethics [Maner, 1980] at a time when most philosophers and computer scientists considered the field to be unimportant [See Maner, 1996]. Bynum furthered Maner's mission of developing courses and organizing workshops, and in 1985, edited a special issue of Metaphilosophy devoted to computer ethics [Bynum, 1985]. In 1991 Bynum and Maner convened the first international multidisciplinary conference on computer ethics, which was seen by many as a major milestone of the field. It brought together, for the first time, philosophers, computer professionals, sociologists, psychologists, lawyers, business leaders, news reporters and government officials. It generated a set of monographs, video programs and curriculum materials [see van Speybroeck, July 1994].
These important developments were significantly aided by the pioneering work of Simon Rogerson of De Montfort University (UK), who established the Centre for Computing and Social Responsibility there. In Rogerson's view, there was need in the mid-1990s for a "second generation" of computer ethics developments:
The mid-1990s has heralded the beginning of a second generation of Computer Ethics. The time has come to build upon and elaborate the conceptual foundation whilst, in parallel, developing the frameworks within which practical action can occur, thus reducing the probability of unforeseen effects of information technology application [Rogerson, Spring 1996, 2; Rogerson and Bynum, 1997].
When he decided to use the term "computer ethics" in the mid-70s, Walter Maner defined the field as one which examines "ethical problems aggravated, transformed or created by computer technology". Some old ethical problems, he said, are made worse by computers, while others are wholly new because of information technology. By analogy with the more developed field of medical ethics, Maner focused attention upon applications of traditional ethical theories used by philosophers doing "applied ethics" -- especially analyses using the utilitarian ethics of the English philosophers Jeremy Bentham and John Stuart Mill, or the rationalist ethics of the German philosopher Immanuel Kant.
In her book, Computer Ethics, Deborah Johnson defined the field as one which studies the way in which computers "pose new versions of standard moral problems and moral dilemmas, exacerbating the old problems, and forcing us to apply ordinary moral norms in uncharted realms" [Johnson, page 1]. Like Maner before her, Johnson recommended the "applied ethics" approach of using procedures and concepts from utilitarianism and Kantianism. But, unlike Maner, she did not believe that computers create wholly new moral problems. Rather, she thought that computers gave a "new twist" to old ethical issues which were already well known.
James Moor's definition of computer ethics in his article "What Is Computer Ethics?" [Moor, 1985] was much broader and more wide-ranging than that of Maner or Johnson. It is independent of any specific philosopher's theory; and it is compatible with a wide variety of methodological approaches to ethical problem-solving. Over the past decade, Moor's definition has been the most influential one. He defined computer ethics as a field concerned with "policy vacuums" and "conceptual muddles" regarding the social and ethical use of information technology:
A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, that is, formulate policies to guide our actions.... One difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis that provides a coherent conceptual framework within which to formulate a policy for action [Moor, 1985, 266].

Moor said that computer technology is genuinely revolutionary because it is "logically malleable":
Computers are logically malleable in that they can be shaped and molded to do any activity that can be characterized in terms of inputs, outputs and connecting logical operations.... Because logic applies everywhere, the potential applications of computer technology appear limitless. The computer is the nearest thing we have to a universal tool. Indeed, the limits of computers are largely the limits of our own creativity [Moor, 1985, 269].

According to Moor, the computer revolution is occurring in two stages. The first stage was that of "technological introduction", in which computer technology was developed and refined. This stage already occurred in America during the first forty years after the Second World War. The second stage -- one that the industrialized world has only recently entered -- is that of "technological permeation", in which technology gets integrated into everyday human activities and into social institutions, changing the very meaning of fundamental concepts, such as "money", "education", "work", and "fair elections".
Moor's way of defining the field of computer ethics is very powerful and suggestive. It is broad enough to be compatible with a wide range of philosophical theories and methodologies, and it is rooted in a perceptive understanding of how technological revolutions proceed. Currently it is the best available definition of the field.
Nevertheless, there is yet another way of understanding computer ethics that is also very helpful -- and compatible with a wide variety of theories and approaches. This "other way" was the approach taken by Wiener in 1950 in his book The Human Use of Human Beings, and Moor also discussed it briefly in "What Is Computer Ethics?". According to this alternative account, computer ethics identifies and analyzes the impacts of information technology upon human values like health, wealth, opportunity, freedom, democracy, knowledge, privacy, security, self-fulfillment, and so on. This very broad view of computer ethics embraces applied ethics, sociology of computing, technology assessment, computer law, and related fields; and it employs concepts, theories and methodologies from these and other relevant disciplines [Bynum, 1993]. The fruitfulness of this way of understanding computer ethics is reflected in the fact that it has served as the organizing theme of major conferences like the National Conference on Computing and Values (1991), and it is the basis of recent developments such as Brey's "disclosive computer ethics" methodology [Brey 2000] and the emerging research field of "value-sensitive computer design". (See, for example, [Friedman, 1997], [Friedman and Nissenbaum, 1996], [Introna and Nissenbaum, 2000].)
In the 1990s, Donald Gotterbarn became a strong advocate for a different approach to defining the field of computer ethics. In Gotterbarn's view, computer ethics should be viewed as a branch of professional ethics, which is concerned primarily with standards of practice and codes of conduct of computing professionals:
There is little attention paid to the domain of professional ethics -- the values that guide the day-to-day activities of computing professionals in their role as professionals. By computing professional I mean anyone involved in the design and development of computer artifacts... The ethical decisions made during the development of these artifacts have a direct relationship to many of the issues discussed under the broader concept of computer ethics [Gotterbarn, 1991].

With this professional-ethics definition of computer ethics in mind, Gotterbarn has been involved in a number of related activities, such as co-authoring the third version of the ACM Code of Ethics and Professional Conduct and working to establish licensing standards for software engineers [Gotterbarn, 1992; Anderson, et al., 1993; Gotterbarn, et al., 1997].
The employment outlook, however, is not all bad. Consider, for example, the fact that the computer industry already has generated a wide variety of new jobs: hardware engineers, software engineers, systems analysts, webmasters, information technology teachers, computer sales clerks, and so on. Thus it appears that, in the short run, computer-generated unemployment will be an important social problem; but in the long run, information technology will create many more jobs than it eliminates.
Even when a job is not eliminated by computers, it can be radically altered. For example, airline pilots still sit at the controls of commercial airplanes; but during much of a flight the pilot simply watches as a computer flies the plane. Similarly, those who prepare food in restaurants or make products in factories may still have jobs; but often they simply push buttons and watch as computerized devices actually perform the needed tasks. In this way, it is possible for computers to cause "de-skilling" of workers, turning them into passive observers and button pushers. Again, however, the picture is not all bad, because computers also have generated new jobs which require sophisticated new skills -- for example, "computer-assisted drafting" and "keyhole" surgery.
Another workplace issue concerns health and safety. As Forester and Morrison point out [Forester and Morrison, Chapter 8, pp. 140-72], when information technology is introduced into a workplace, it is important to consider likely impacts upon the health and job satisfaction of the workers who will use it. It is possible, for example, that such workers will feel stressed trying to keep up with high-speed computerized devices -- or they may be injured by repeating the same physical movement over and over -- or their health may be threatened by radiation emanating from computer monitors. These are just a few of the social and ethical issues that arise when information technology is introduced into the workplace.
Computer crimes, such as embezzlement or planting of logic bombs, are normally committed by trusted personnel who have permission to use the computer system. Computer security, therefore, must also be concerned with the actions of trusted computer users.
Another major risk to computer security is the so-called "hacker" who breaks into someone's computer system without permission. Some hackers intentionally steal data or commit vandalism, while others merely "explore" the system to see how it works and what files it contains. These "explorers" often claim to be benevolent defenders of freedom and fighters against rip-offs by major corporations or spying by government agents. These self-appointed vigilantes of cyberspace say they do no harm, and claim to be helpful to society by exposing security risks. However, every act of hacking is harmful, because any known successful penetration of a computer system requires the owner to thoroughly check for damaged or lost data and programs. Even if the hacker did indeed make no changes, the computer's owner must run through a costly and time-consuming investigation of the compromised system [Spafford, 1992].
One of the earliest computer ethics topics to arouse public interest was privacy. For example, in the mid-1960s the American government already had created large databases of information about private citizens (census data, tax records, military service records, welfare records, and so on). In the US Congress, bills were introduced to assign a personal identification number to every citizen and then gather all the government's data about each citizen under the corresponding ID number. A public outcry about "big-brother government" caused Congress to scrap this plan and led the US President to appoint committees to recommend privacy legislation. In the early 1970s, major computer privacy laws were passed in the USA. Ever since then, computer-threatened privacy has remained a topic of public concern. The ease and efficiency with which computers and computer networks can be used to gather, store, search, compare, retrieve and share personal information make computer technology especially threatening to anyone who wishes to keep various kinds of "sensitive" information (e.g., medical records) out of the public domain or out of the hands of those who are perceived as potential threats. During the past decade, commercialization and rapid growth of the internet; the rise of the world-wide-web; increasing "user-friendliness" and processing power of computers; and decreasing costs of computer technology have led to new privacy issues, such as data-mining, data matching, recording of "click trails" on the web, and so on [see Tavani, 1999].
The variety of privacy-related issues generated by computer technology has led philosophers and other thinkers to re-examine the concept of privacy itself. Since the mid-1960s, for example, a number of scholars have elaborated a theory of privacy defined as "control over personal information" (see, for example, [Westin, 1967], [Miller, 1971], [Fried, 1984] and [Elgesem, 1996]). On the other hand, philosophers Moor and Tavani have argued that control of personal information is insufficient to establish or protect privacy, and "the concept of privacy itself is best defined in terms of restricted access, not control" [Tavani and Moor, 2001] (see also [Moor, 1997]). In addition, Nissenbaum has argued that there is even a sense of privacy in public spaces, or circumstances "other than the intimate." An adequate definition of privacy, therefore, must take account of "privacy in public" [Nissenbaum, 1998]. As computer technology rapidly advances -- creating ever new possibilities for compiling, storing, accessing and analyzing information -- philosophical debates about the meaning of "privacy" will likely continue (see also [Introna, 1997]).
Questions of anonymity on the internet are sometimes discussed in the same context with questions of privacy and the internet, because anonymity can provide many of the same benefits as privacy. For example, if someone is using the internet to obtain medical or psychological counseling, or to discuss sensitive topics (for example, AIDS, abortion, gay rights, venereal disease, political dissent), anonymity can afford protection similar to that of privacy. Similarly, both anonymity and privacy on the internet can be helpful in preserving human values such as security, mental health, self-fulfillment and peace of mind. Unfortunately, privacy and anonymity also can be exploited to facilitate unwanted and undesirable computer-aided activities in cyberspace, such as money laundering, drug trading, terrorism, or preying upon the vulnerable (see [Marx, 2001] and [Nissenbaum, 1999]).
Computer professionals typically find themselves in a variety of professional relationships with other people, including:

employer -- employee
client -- professional
professional -- professional
society -- professional

These relationships involve a diversity of interests, and sometimes these interests can come into conflict with each other. Responsible computer professionals, therefore, will be aware of possible conflicts of interest and try to avoid them.
Professional organizations in the USA, like the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), have established codes of ethics, curriculum guidelines and accreditation requirements to help computer professionals understand and manage ethical responsibilities. For example, in 1991 a Joint Curriculum Task Force of the ACM and IEEE adopted a set of guidelines ("Curriculum 1991") for college programs in computer science. The guidelines say that a significant component of computer ethics (in the broad sense) should be included in undergraduate education in computer science [Turner, 1991].
In addition, both the ACM and IEEE have adopted Codes of Ethics for their members. The most recent ACM Code (1992), for example, includes "general moral imperatives", such as "avoid harm to others" and "be honest and trustworthy". And also included are "more specific professional responsibilities" like "acquire and maintain professional competence" and "know and respect existing laws pertaining to professional work." The IEEE Code of Ethics (1990) includes such principles as "avoid real or perceived conflicts of interest whenever possible" and "be honest and realistic in stating claims or estimates based on available data."
The Accreditation Board for Engineering Technologies (ABET) has long required an ethics component in the computer engineering curriculum. And in 1991, the Computer Sciences Accreditation Commission/Computer Sciences Accreditation Board (CSAC/CSAB) also adopted the requirement that a significant component of computer ethics be included in any computer sciences degree granting program that is nationally accredited [Conry, 1992].
It is clear that professional organizations in computer science recognize and insist upon standards of professional responsibility for their members.
In her 1999 ETHICOMP paper [Johnson, 1999], Johnson expressed a view which, at first sight, may seem to be the same as Gorniak's. A closer look at the Johnson hypothesis reveals that it is a different kind of claim from Gorniak's, though not inconsistent with it. Johnson's hypothesis addresses the question of whether or not the name "computer ethics" (or perhaps "information ethics") will continue to be used by ethicists and others to refer to ethical questions and problems generated by information technology. On Johnson's view, as information technology becomes very commonplace -- as it gets integrated and absorbed into our everyday surroundings and is perceived simply as an aspect of ordinary life -- we may no longer notice its presence. At that point, we would no longer need a term like "computer ethics" to single out a subset of ethical issues arising from the use of information technology. Computer technology would be absorbed into the fabric of life, and computer ethics would thus be effectively absorbed into ordinary ethics.
Taken together, the Gorniak and Johnson hypotheses look to a future in which what we call "computer ethics" today is globally important and a vital aspect of everyday life, but the name "computer ethics" may no longer be used.