
Immoral Software Engineers

A Proposal for Educating Computer Scientists in Philosophy


In this paper, I will describe the current state of ethics in machine learning technology, explain why current solutions are not sufficient, and propose that computer scientists will need interdisciplinary training in philosophy to meet the problem.

The Current State of Machine Ethics

We are in the midst of a machine revolution, much like the one that came with automation and factory production. This time, however, the machines are intelligent, learning computers. Computers are already taking the place of humans in making decisions about humans: they grant and deny loans, collect information on a mass scale, and even operate lethal machinery such as automobiles. Microsoft’s Tay AI, a chatbot that within a day of its release learned to direct abusive messages at vulnerable people, is a recent example of a learning machine gone wrong.4

Disasters like Tay come about because the teams developing machine learning systems, teams of computer scientists and engineers, generally have little to no formal training in ethics. Moreover, machine learning systems are complicated and unpredictable. A team may assume that a particular system will never encounter an ethical decision, or that it will behave neutrally if it does. But these systems are not designed with any final states in mind, so it is impossible to predict what kinds of situations they will face or what outcomes they might reach; an assumption of neutrality is not one a team can responsibly make.

Inaction Is Not Viable

Machines are fundamentally tools, and any tool can be used for either good or evil. It should be obvious, then, that privileged computer systems must not be allowed to operate for the latter. Roman Yampolskiy suggests that machines should remain tools, adhering to safety and legal requirements just like any other tool.5 This means abandoning the effort to implement ethics in machines altogether and barring them from deployment in any situation where they would have to act on ethical decisions; at most, they may serve as advisors to human decision makers.

Unfortunately, this is overly idealistic. As discussed, machines are already acting on decisions about human safety and wellbeing. The technical professions are eager to push the limits of what machine learning can achieve, and they are pushed further still by investors as consumers itch for exciting high-tech products. The engine of economic progress will not allow a technological advancement of this magnitude to be stifled. And so, if the continued integration of learning machines into human affairs is inevitable, ethical standards must be imposed.

Old Ethical Solutions Are Failing

Deontology

Anyone who has taken an introductory ethics class might wonder why engineers do not simply program one of the ethical theories that already exist. Deontology, in which ethical decision-making is based on inflexible rules, might seem like a natural choice for computers, and it was indeed one of the first approaches to the problem. The twentieth-century science fiction writer Isaac Asimov famously created the “Three Laws of Robotics” as commandments for machines.1 However, Susan Anderson, a respected researcher in the field of machine ethics, has pointed out that Asimov’s deontology fails as a method of creating safe and reliable learning machines.1
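To see one way such a rule-based approach breaks down in practice, consider what happens when Asimov-style rules are encoded directly as constraints on a machine’s actions. The sketch below is my own minimal illustration, not drawn from Anderson’s paper or any other cited work; the rule set, the action names, and the effect flags are all invented for the example.

```python
# Minimal sketch (invented for illustration): Asimov-style rules encoded as
# hard constraints on candidate actions, checked against each action's
# predicted effects.

RULES = [
    ("do not injure a human",      lambda effects: not effects["injures_human"]),
    ("obey human orders",          lambda effects: effects["follows_order"]),
    ("protect your own existence", lambda effects: not effects["destroys_self"]),
]

def violated_rules(effects):
    """Return the names of every rule these predicted effects would violate."""
    return [name for name, check in RULES if not check(effects)]

# A contrived dilemma in which every available action breaks some rule.
actions = {
    "swerve_left":  {"injures_human": True,  "follows_order": True,  "destroys_self": False},
    "swerve_right": {"injures_human": True,  "follows_order": True,  "destroys_self": False},
    "brake_hard":   {"injures_human": False, "follows_order": False, "destroys_self": True},
}

for action, effects in actions.items():
    print(action, "violates:", violated_rules(effects) or "nothing")
```

Every option in this toy scenario violates at least one rule, so the rules themselves give no answer; whatever tie-breaking the programmer bolts on is where the real ethical decision gets made, outside the deontological framework entirely.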

Consequentialism

Utilitarianism, a consequentialist ethics in which an action is chosen according to its outcomes, is even more problematic. Caspar Oesterheld, a researcher in ethics and the theory of computation, describes a computer that would build models of its environment’s past and possible futures in order to make decisions in a utilitarian manner. He concludes that such a computation is unattainable even for supercomputers. Ultimately, current theories of ethics simply are not the kinds of things that can be reasonably programmed or computed.3 And this is not surprising, for the theories were developed for human beings, not machines.
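The scale of the difficulty is easy to see even in miniature. The sketch below is my own toy illustration, not the formalism in Oesterheld’s paper: it implements the most naive possible utilitarian evaluator, which scores an action by averaging a utility function over every sequence of outcomes the world might follow.

```python
# Minimal sketch (invented for illustration, not Oesterheld's formalism):
# a brute-force utilitarian evaluator that averages utility over every
# possible future the environment could produce.

from itertools import product

def expected_utility(utility, outcomes, horizon):
    """Average utility over all len(outcomes)**horizon possible futures."""
    futures = list(product(outcomes, repeat=horizon))
    return sum(utility(future) for future in futures) / len(futures)

# Toy case: 2 possible outcomes per step over 3 steps is only 8 futures.
print(expected_utility(lambda future: sum(future), outcomes=[0, 1], horizon=3))

# The count explodes: 10 outcomes per step over a 20-step horizon is already
# 10**20 futures to score, which is beyond any supercomputer, and real
# environments offer far more than 10 outcomes at every step.
print(f"futures with 10 outcomes over 20 steps: {10**20:,}")
```

And the enumeration is only the easy part: a genuine utilitarian machine would also need a faithful model of the world and a defensible utility function before it could score anything at all.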

Moral Turing Test

Yampolskiy points out that evaluating the ethical abilities of a computer with a Moral Turing Test is too weak a requirement.5 After all, if passing only demands that a machine be as moral as some human being, then a machine that behaves like Hitler satisfies the test. Even Tay would have satisfied it, for it learned its morals from other human beings.

But if human beings can act this way, what is the problem with machines acting in the same fashion? The problem is that we expect machines to do better than ourselves, and we must hold them to a higher standard. This is especially true as machines begin to hold privileged positions over humans: consider a machine that determines loan approvals, or even Tay, which enjoyed celebrity status thanks to Microsoft’s publicity.

The Need for New Ethics

If these researchers and others are correct, humanity will need to invent a completely new ethics specifically for machines. Fabio Bonsignorio has foreseen this problem, suggesting that philosophy and science are once again close to giving birth to a new field of study, much as they did with physics and with twentieth-century psychology, both once branches of philosophy that became sciences. It will not be the narrow specialist but the open-minded, interdisciplinary figure who sees machine learning and related fields through to their actualization.2 In other words, it is those trained in both philosophy and computer science who stand to solve these problems.

The proposal to include ethicists at the table with engineers is not only theoretical; it is also practical. The engineers currently building machine learning models are not, and should not be, the ones making ethical decisions for computers that affect human wellbeing. Engineers are not required to study formal ethics, either in school or at university, so most do not have the knowledge to implement such systems even if they were told to. It must be ethicists who become intimately familiar with the decision-making processes of these machines.

Moving Forward

I propose a shift in what we expect of engineers and technology companies in order to generate the new kind of ethics needed for safe and reliable learning machines. Educational institutions need to create interdisciplinary tracks for philosopher-engineers: individuals with a formal understanding both of the technical systems being built for machine learning and of ethics and philosophy. Possessing a combination of skills that few others have, they will be able to reimagine ethics for machines, and with such individuals influencing industry, the kinds of ethical oversights that led to disasters like Tay can be avoided.

An intimate cooperation between the disciplines of philosophy and computer science is the only way humanity will arrive at a sufficient model of ethics for machines. If institutions fail to produce such interdisciplinary graduates, the state of the industry will remain the same: at an impasse over developing an ethics compatible with machines. That could spell disaster for human wellbeing as computers rush into society.


References

  1. Anderson, Susan Leigh. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI & Society (2008) 22: 477–493. doi: 10.1007/s00146-007-0094-5.

  2. Bonsignorio, Fabio. “The New Experimental Science of Physical Cognitive Systems: AI, Robotics, Neuroscience and Cognitive Sciences under a New Name with the Old Philosophical Problems?” Philosophy and Theory of Artificial Intelligence, Sapere (2013): 133–150. doi: 10.1007/978-3-642-31674-6_10.

  3. Oesterheld, Caspar. “Formalizing Preference Utilitarianism in Physical World Models.” Synthese (2016) 193: 2747–2759. doi: 10.1007/s11229-015-0883-1.

  4. West, John. “Microsoft’s Disastrous Tay Experiment Shows the Hidden Dangers of AI.” Quartz Media (2016). https://qz.com/.

  5. Yampolskiy, Roman. “Artificial Intelligence Safety Engineering: Why Machine Ethics Is a Wrong Approach.” Philosophy and Theory of Artificial Intelligence, Sapere (2013): 335–347.