On knowing the ‘why’ and the impossibility of ethical AI

Within normative ethics and philosophy discourse, ‘morality as a system of rules’ has come to feel like a straw man position. Yet it is frequently invoked in many applied ethics contexts, even while its controversial status is acknowledged. Health care ethics often invokes a ‘Four principles’ approach. Media ethics continues to invoke a similar set of principles. Digital ethics sometimes follows suit, and AI ethics seems set to do the same. All of these cases rely on statements of shared or universal values and effectively invoke the idea of morality as a system of rules. Consider some major AI developers’ statements on ethical artificial intelligence:

Microsoft claims that “Designing AI to be trustworthy requires creating solutions that reflect ethical principles that are deeply rooted in important and timeless values”.

IBM claims to understand that ethics must be embedded in the design and development process from the outset, not ‘added in’ after the fact. Their document on AI Ethics explains: “An ethical, human-centric AI must be designed and developed in a manner that is aligned with the values and ethical principles of a society or the community it affects. Ethics is based on well-founded standards of right and wrong that prescribe what humans ought to do, usually in terms of rights, obligations, benefits to society, fairness, or specific virtues.”

Google’s DeepMind has a commitment to “researching the ethical and social questions involving AI” in order to “ensure these topics remain at the heart of everything we do.”

The commitment to an ideal of ethical AI is important, but it might also be a contradiction. My worry is that ‘ethical AI’ might rest on a deep misunderstanding of moral philosophy, and on a mistaken definition of what constitutes the ‘ethical’.

So why have ‘moral theory’ and ‘ethics’ moved away from principle-based approaches even as digital and applied ethics embrace the idea of moral principles? The answer, on both sides, could be one and the same: ethics is too complicated. That ethics is complicated is a reason to want to simplify things; it is also a reason to reject simplification as inherently flawed. AI ethics, along with much of applied ethics, tends to rely on the former, while moral philosophy moves us towards the latter.

Consider Margaret Olivia Little’s explanation of the theory called ‘moral particularism’: “it argues, (in)famously, that the moral import of any consideration is irreducibly context dependent, that exceptions can be found to any proffered principles, and that moral wisdom consists in the ability to discern and interpret the shape of situations one encounters, not the ability to subsume them under codified rules” (32).

Moral particularism argues that understanding how context works is an essential part of understanding rules. On this view, what it is for something to be a moral rule is for it to be context dependent. Understanding moral generalizations requires recognizing paradigmatic contexts and distinguishing them from deviant contexts. Moral generalizations and moral rules, according to particularism, are subject to reversals or ‘valence flipping’: every putative moral rule or moral generalization could become its opposite, given the right context. Lying is wrong, but lying to the Gestapo to protect a friend is the right thing to do. It is not right in spite of the lying; it is right because, in that context, lying is right. Context matters.

Since moral decision-making requires close attention to context, it is not hard to see why tech ethics and AI ethics would be in trouble. AI notoriously has difficulty with context. 

The deeper problem for AI and digital ethics is that, according to moral particularism, paradigmatic contexts cannot be defined through statistical patterns. Statistically speaking, most contexts could be deviant with respect to a particular piece of moral wisdom. Moral understanding requires grasping what makes a context deviant, not merely registering that deviance occurs. Part of learning the rules is learning when to break them.

As Little explains, “When we issue a generalization to the effect that something has a certain feature, sometimes what we really want to say is not that such a connection always, or even usually, holds, but that the conditions in which it does hold are particularly revealing of that item’s nature… [We] are taking as privileged, in one way or another, cases in which the item has the feature specified” (37). 

And yet, statistical pattern-finding is exactly how machine learning works. AI looks at huge swathes of data, and its neural networks ‘learn’ the patterns. Jason Pontin writes: “Deep learning is math: a statistical method where computers learn to classify patterns using neural networks.” AI essentially does pattern recognition extremely well. The problem presented by moral particularism is that ethics is not a pattern, and so ethics cannot be recognized by AI. Representing ethics as a pattern is a reductionist misunderstanding of ethics. Ethics is as much about providing morally appropriate reasons as it is about doing the right thing. On a Kantian moral theory, we are tasked with doing the right thing for the right reason. Little draws on Aristotle to explain that “the person of moral wisdom must know the ‘why’, not just the ‘that’” (32).
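To see what ‘learning ethics as a pattern’ amounts to, consider a minimal sketch. The features, labels, and choice of scikit-learn here are invented purely for illustration; the point is only that a classifier reproduces whatever statistical regularity it is shown, and the context that flips an act’s moral valence never enters the picture unless someone has already encoded it.

```python
# Toy illustration (hypothetical features and labels; scikit-learn assumed):
# the model ingests labelled examples and reproduces the statistical pattern.
from sklearn.linear_model import LogisticRegression

# Features: [involves_lying, causes_direct_harm]; label: 1 = wrong, 0 = permissible
X = [
    [1, 0],  # a lie with no direct harm  -> labelled wrong
    [1, 1],  # a harmful lie              -> labelled wrong
    [0, 0],  # truthful and harmless      -> labelled permissible
    [0, 1],  # truthful but harmful       -> labelled wrong
]
y = [1, 1, 0, 1]

model = LogisticRegression().fit(X, y)

# Lying to the Gestapo to protect a friend is still just [1, 0] to the model,
# so it echoes the dominant pattern: 'lying is wrong'. The feature vector has
# no room for the context that flips the act's moral valence, and the model
# has no reasons to offer either way.
print(model.predict([[1, 0]]))  # -> [1]
```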

We have known for a long time that ethics could not, and should not, be reduced to an algorithm. Even rule-focused moral theories suggest that moral wisdom requires more than blind rule-following. Distinguishing right from wrong requires judgment. Doing the right thing requires still more judgment. Determining which rule applies, or which of two conflicting rules takes precedence, relies on a deeper understanding. These add up to a non-codifiability thesis: it has often been argued that ethics is not, in fact, codifiable. In the context of digital ethics, this has frequently led to the conclusion that ethics cannot be programmed.

But AI is not programmed in the same way. Neural networks function as “black boxes, whose outputs cannot be explained, raising doubts about their reliability and biases.”

AI is different from previous forms of programming. AI does not codify the rules. AI, or machine learning in particular, asks a computer system to look at huge swathes of complex data and ‘learn’ their patterns. It develops the ‘ability’ to follow the pattern, even though it is not asked to explain its predictions. It is not necessarily ‘taught’ about the reasons, so its ability to generate an explanation is limited. Sometimes AI gets it spectacularly wrong. In part, this is because: “their statistical way of learning makes their talents narrow and inflexible. Humans can think about the world using abstract concepts, and can remix those concepts to adapt to new situations. Machine learning can’t.”
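The ‘black box’ worry can be made similarly concrete. In the sketch below (toy data and scikit-learn assumed, again purely for illustration), a small neural network is trained and then asked for its ‘explanation’: all it can surrender is a verdict and matrices of fitted weights, not reasons.

```python
# Toy illustration (hypothetical data; scikit-learn assumed) of the black-box
# worry: a trained network offers a verdict, and the only thing left to
# inspect is its fitted weights.
from sklearn.neural_network import MLPClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]  # arbitrary toy labels

net = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
net.fit(X, y)

print(net.predict([[1, 0]]))  # a verdict, e.g. [1]
for layer in net.coefs_:      # the only 'explanation' on offer: weight matrices
    print(layer)
```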

And while next-generation AI may well aspire to be more context-responsive, the worry about AI doing ethical decision-making is a deep one.

In a recent piece examining the impossibility of ethical AI, Tom Chatfield explains that “there is no such thing as ethical A.I., any more than there’s a single set of instructions spelling how to be good — […] our current fascinated focus on the “inside” of automated processes only takes us further away from the contested human contexts within which values and consequences actually exist.”

I would go further. Taken together, the decontextualized nature of ‘AI ethics’ and the deeply contextual nature of moral reasoning suggest that ethical AI is impossible. This conclusion follows from the very idea of moral wisdom, which requires not only knowing what to do, but why it is the right thing to do. Perhaps the impossibility of ethical artificial intelligence is better highlighted if we shift our goal to ‘ethical artificial wisdom’. Artificial wisdom seems like a contradiction, but it highlights something important that is being missed in the quest for ethical AI: the reasons matter. ‘Knowing the why’ is an essential part of ethics. Ethical AI will, unfortunately, remain an impossible goal as long as AI remains composed of outcome-oriented black boxes.

Works cited:

Little, Margaret Olivia. “On Knowing the ‘Why’: Particularism and Moral Theory.” Hastings Center Report 31, no. 4 (2001): 32–40.
