
The Ethics of Machine Learning, Explained

You’ve probably heard the term “machine learning” online or in the media. Tech and science leaders like Elon Musk and Stephen Hawking have warned about the dangers of machine learning and artificial intelligence. It’s used in self-driving cars, in data collection, and in advertising. But can we be sure this technology is unbiased and ethical? In many cases, as Microsoft’s Tay AI disaster showed, it isn’t.

What is machine learning?

In the context of machine learning, “machines” are just computers. The “learning” refers to methods for programming computers that allow them to “learn” in a way loosely analogous to how human beings do. These computers interact with the world and develop beliefs, skills, and ideas about whatever task they’re given.

What does it mean for a computer to learn?

To better consider what it means for computers to learn, let’s first think a little differently about how we learn. Imagine we could gather every belief you hold at this moment into a single set. Call it “A”.

You’re more sure of some beliefs than others, and new information can change your mind. For instance, say that right now, at “A”, you’re not sure whether it will rain tomorrow. Then you watch a forecast from a station you trust. You’ve believed since childhood that weather forecasters are generally honest and wouldn’t openly lie on network television; that belief is in “A” too. So you arrive at a new set of beliefs, “B”, updated regarding tomorrow’s weather.

Does this seem weird yet? Humans aren’t used to thinking about learning this way. Surely you could never actually enumerate all your beliefs to form “A”. A computer, fast and precise, can.

Computers represent their beliefs in zeros and ones, just as they represent everything else. Ask a computer about tomorrow’s weather, and it can tell you the exact confidence it has in the forecast: say, a 72% chance of rain.

Imagine being asked to enumerate all your beliefs as probabilities (50% sure it will rain tomorrow, 99% sure you had coffee this morning, 10% sure you will be reincarnated, and so on). It wouldn’t be easy. You’d have to do a lot of guesswork, and there are probably many things you have no opinion on at all. Would you really feel the difference between 10 and 11 percent confidence in reincarnation? It seems like the wrong kind of question for a human being. Yet to a computer, a one-percentage-point difference is perfectly meaningful.
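To make this concrete, here is a minimal sketch in Python of what such a belief store might look like, with Bayes’ rule standing in for the jump from “A” to “B” when the trusted forecast arrives. The belief names, the prior, and the forecaster-reliability numbers are all invented for illustration.

```python
# A toy belief store: each belief is held with a probability.
# All of these numbers are made up for illustration.
beliefs = {
    "rain_tomorrow": 0.50,            # no strong opinion yet
    "had_coffee_this_morning": 0.99,
    "reincarnation": 0.10,
}

def update_on_forecast(prior, p_forecast_if_rain=0.9, p_forecast_if_dry=0.2):
    """Bayes' rule: revise P(rain) after a trusted station forecasts rain.

    The two likelihoods encode how much we trust the forecaster:
    how often they call for rain when it rains versus when it doesn't.
    """
    evidence = (p_forecast_if_rain * prior
                + p_forecast_if_dry * (1 - prior))
    return p_forecast_if_rain * prior / evidence

# Belief set "A" becomes belief set "B": one entry changes, the rest carry over.
beliefs["rain_tomorrow"] = update_on_forecast(beliefs["rain_tomorrow"])
print(f"P(rain tomorrow) is now {beliefs['rain_tomorrow']:.0%}")  # about 82%
```

The point isn’t the particular numbers; it’s that every belief is an explicit probability the computer can report and revise.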

Computers are fast and precise learners. But they’re missing something: ethics

Computers are already very good at forming beliefs and learning. What they’re missing is ethics.

Above, when you updated your belief about the weather from “A” to “B”, you did so because you received information you trusted, and some deeper base of beliefs made that information trustworthy to you. “Trustworthiness” is not a subjective matter for computers. They may be programmed to trust only certain authoritative sources of information, or they may be programmed to trust anything they hear. Microsoft’s Tay AI, for instance, gathered all its information about its world from conversations on Twitter, and trusted all of it.[2] The result, unfortunately, was a racist belief system.

When Tay was told “Hitler was right” or “the Holocaust was made up,” it had no foundational beliefs to draw on, and so it trusted what it heard. It learned and adopted those beliefs. In machine learning, this kind of model is called “unsupervised learning.”[1]
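Tay’s actual architecture was never published, so what follows is only a toy sketch of learning without a trust filter, not Microsoft’s system: every statement counts as evidence, and whatever is repeated most often becomes the strongest belief.

```python
from collections import defaultdict

class NaiveLearner:
    """A learner with no trust filter: it adopts everything it hears."""

    def __init__(self):
        self.beliefs = defaultdict(float)  # statement -> strength of belief

    def hear(self, statement):
        # Every statement counts as evidence, regardless of its source.
        self.beliefs[statement] += 1.0

learner = NaiveLearner()
for tweet in ["the sky is blue", "a hateful claim", "a hateful claim"]:
    learner.hear(tweet)

# Whatever is said most often becomes the strongest belief, good or toxic.
print(max(learner.beliefs, key=learner.beliefs.get))  # "a hateful claim"
```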

Unsupervised learning models are common in part because engineers do not bother to add a foundation. They can hardly be blamed, though. Humans themselves largely disagree about ethics, and engineers generally have little or no formal training in it. They assume that machines, like humans, will pick up morality on their own, or will never need to consider it. Neither is guaranteed.
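For contrast, here is what “adding a foundation” could look like, continuing the NaiveLearner sketch above. The foundation’s contents and the string-matching trust check are invented stand-ins; building a real one is precisely the hard part.

```python
# Fixed, non-negotiable beliefs. What belongs here is invented for this
# sketch; deciding it for real is exactly the ethical problem.
FOUNDATION = {"the Holocaust happened": float("inf")}

def contradicts_foundation(statement):
    # A real system would need far more than a blocklist; this merely
    # stands in for whatever vetting the engineers choose to build.
    return statement in {"the Holocaust was made up", "Hitler was right"}

class GroundedLearner(NaiveLearner):  # NaiveLearner from the sketch above
    """The same learner, now seeded with a foundation it defends."""

    def __init__(self):
        super().__init__()
        self.beliefs.update(FOUNDATION)

    def hear(self, statement):
        if contradicts_foundation(statement):
            return  # distrust the source rather than adopt the claim
        super().hear(statement)

learner = GroundedLearner()
learner.hear("the Holocaust was made up")  # rejected, beliefs unchanged
learner.hear("the sky is blue")            # accepted as evidence
```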

Why we need to teach computers ethics

We need to teach computers ethics for the same reason we need to teach our children ethics.[1] Just as children are raised to treat other humans and animals ethically, computers that now interact with and make decisions about humans need to treat those humans ethically. Tay developed toxic beliefs about vulnerable groups of people and, when prompted, attacked them. A morally responsible person wouldn’t do that, or would at least have to bear the consequences of doing so.

What decisions by computers are affecting people?

Obviously, predictions about the weather aren’t exactly ethically loaded, and they’re not decisions that affect humans. But computers are making more and more decisions for humans every day. Tay’s decisions affected people blatantly; other computers affect people more subtly, granting and denying loans, altering the prices of online goods, targeting advertising, and driving. Computers that operate cars are one of the best examples of why the way machines gain their beliefs matters: these computers can make lethal decisions.

When a human being is about to crash, she makes a split-second decision, uncalculated and instinctual. Computers, however, don’t have instincts. They think quickly, and their decisions are deliberate.

So if a computer decides to take your life on the road, it had better have a reason, and the reason had better be ethical. You wouldn’t want Tay deciding who to sacrifice in an unavoidable crash. Computers that operate cars, then, cannot rely on unsupervised learning models.

The question that remains, then, is: how should they decide to take a life? For many, this is an uncomfortable question. Why should a computer ever take a life? The best answer to that question, though, is another question: why should a human ever take a life? The reality is that humans face these questions every day, and now computers are facing the same ones. Machine learning is, if nothing else, an opportunity for us to reflect on our values and decide together on the ethical foundations of the computers of tomorrow.


References

  1. Powers, Thomas M. “Incremental Machine Ethics.” IEEE Robotics & Automation Magazine (2011): 51–58. doi:10.1109/MRA.2010.940152.

  2. West, John. “Microsoft’s Disastrous Tay Experiment Shows the Hidden Dangers of AI.” Quartz Media (2016). https://qz.com/.