Fred Guerin

On Equity, Empathy and the 'Moral' Robot



There is a great deal to be said for living one's life according to certain generalized or universal moral principles. Thus, the injunctions not to commit murder, lie, or steal from others, or the principle that we should treat others with respect, appear reasonable and even necessary in most societies. These universal edicts have a normative hold on us precisely because they emerged initially through our discursive and practical thinking-acting experience in a world shared with and among others.


However, it can sometimes happen that following a universal rule or moral precept leads to injustice in the particular. This is amply underscored in law. In other words, it is the very generality of laws and principles that is both a virtue and a source of difficulty in human affairs. It is a virtue to have a certain generality in the form of simple and straightforward rules and laws. But in the context of the human world—of that which can be, and often is, otherwise—this generality can sometimes prevent just or reasonable outcomes.


We all know that in most cases it is reasonable to follow the injunction not to lie. We also know there are cases where telling the truth can be much more damaging. If a neighbour or friend asks to borrow a handgun that we have good reason to believe they will use to harm themselves or someone else, it would be appropriate to lie and say, 'Sorry, I sold it or gave it away'.


Another, more concrete example: the women's rights movement began by calling into question the assumption that a woman's place is 'in the home', under the protection of a man. The subordinate status of women was, in fact, firmly entrenched in many legal systems (both common law and constitutional law). Indeed, women, people of colour and Aboriginal peoples often did not even exist as legal persons. Over time, the injustice of excluding women became apparent, and the need for judicial review of this form of discrimination led to the revision or amendment of existing law. One way of understanding what might compel such a re-interpretation is through the notion of equity.


Equity is sometimes contrasted with equality. In most cases we might think it right to treat all individuals equally. But in some cases this can lead to injustice. For example, if we treated a group or person that had historically been discriminated against as if they stood on an equal footing with those who were not subject to such discrimination, we would be repeating the original wrong, precisely because we took no account of that history of discrimination.


Equity is, therefore, a corrective to the absolute character of laws and principles. It is what you might call a kind of practical reasoning and wisdom, one that involves empathy and presumes that, because we are not omniscient beings, we are unable to say once and for all how things should be. Our lack of omniscience means that from time to time the abstract universal law might have to be slightly revised, amended, broadened or deepened in light of unforeseen or unimagined situations. Over time, in light of new understandings or newly raised consciousness about ourselves and the world, we are pressed to continuously re-examine our legal and moral precepts.


In this way, equity can be understood as a kind of discriminating, sympathetic judgment called for in any situation where a presupposed universal rule, common perspective or piece of conventional thinking is seen to be unjust, or becomes hegemonic and all-encompassing. When this happens, the voice of the particular, of the unique human situation, is silenced.


The capacity to exercise good judgment with respect to a situation we have not encountered before is a matter of understanding that this particular situation cannot be subsumed under some general rule without losing something important. Equity is, therefore, a determination to discover the right or just 'fit' between the general and the particular. The unjust judge is the one who has no discriminating sympathy, and who intentionally ignores the distinctiveness of the particular situation that calls for a correction, broadening or deepening of the general rule. The application of law, owing to the variability of human affairs, is always an unfinished project.


When the jurist or judge restrains the law through equity considerations this does not, in any way, diminish the law, but is, in fact, the means through which we discover a richer or more nuanced understanding of the law. In the spirit of Aristotle, the 20th century German philosopher Hans-Georg Gadamer relates that “…the law is always deficient, not because it is imperfect in itself, but because human reality is necessarily imperfect in comparison to the ordered world of the law, and hence allows of no simple application of the law”.


So, in a crucial way, when empathy and sympathetic judgment are realized through a notion of equity, the jurist is allowed to surrender the 'letter of the law' in order to bring its legal meaning to fulfillment. In more modern language this notion of equity is associated with restorative justice, rehabilitation and reconciliation; it is, to use a modern expression, judgment from the bottom up rather than the top down. Importantly, the reasoning involved in practical judgments based on equity is not straightforwardly 'logical', but rather grounded in the recognition that we humans can think and imagine the pain, adversity, historical or cultural disadvantage, hardship or suffering that others might have gone through.


So, what's the point of all this?


In the last few years there has been a great deal of research into whether artificially intelligent machines can 'learn' how to make moral decisions. Many countries are actively funding programs to develop AI weaponry such as autonomous drones, and are conducting research into the design of Artificial Moral Agents (AMAs).


The notion that machines can become moral agents is to some extent grounded in the behaviourist thesis that human behaviour is not the result of thinking or empathic capacity, but merely a reflexive response to a given stimulus. From a behaviourist perspective, understanding human beings is just a matter of observing how humans behave in different situations given certain stimuli and predicting certain outcomes.[1] From this same behaviourist perspective, discovering a 'moral order' is simply a matter of making an empirical study of individual preferences or determining what sorts of social conventions people reflexively conform to. This tends to reduce moral or ethical thinking to moral automatism, where we don't think about what we do but simply operate reflexively, as pleasure-seeking, pain-avoiding programmed machines. When human moral agency is reduced to 'stimulus-response' it is easy to see how it could be replicated in digitized form, though the 'stimulus' in this case would be a computer program and the reflexive response would be the robotic carrying out of sequences of arithmetic or logical operations.
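
To make this concrete, here is a deliberately crude sketch, in Python, of what moral agency reduced to stimulus-response might look like (the stimuli, responses and rules are hypothetical, invented purely for illustration). The 'agent' consults a fixed table; it can never reflect on, revise or re-interpret its own rules.

# A minimal sketch, assuming morality really were a fixed stimulus-response table.
# Every stimulus is mapped, once and for all, to a pre-ordained response;
# nothing in the program can re-examine or revise the mapping itself.

RESPONSES = {
    "asked_to_lie": "refuse",            # rule: never lie
    "asked_to_return_weapon": "comply",  # rule: return property to its owner
    "witness_theft": "report",           # rule: report wrongdoing
}

def respond(stimulus: str) -> str:
    # Reflexive lookup: no judgment, no equity, no sense of the particular.
    # An unforeseen situation is not an occasion for reflection, only a gap.
    return RESPONSES.get(stimulus, "no_rule_defined")

print(respond("asked_to_return_weapon"))  # 'comply', even if the owner intends harm

Notice how the handgun example above defeats the table: equity would demand the lie, but the program can only ever look its answer up.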


Artificial Moral Agents do not set purposes for themselves, and do not posit new values or re-evaluate traditional values in new situations; they simply carry out a pre-ordained program. Put simply, artificially intelligent machines cannot embody anything like moral experience because they cannot 'experience' themselves as reflecting, interpreting, judging, empathic beings among other like-minded beings.


From this behaviourist perspective, robotic morality would have little room for equity, or for human empathy arising out of a particular situation that does not quite fit the universal moral order of things. Our robot's 'moral' decisions would be wholly rule-oriented, top-down calculations. Its machine intelligence might be quite capable, along utilitarian lines, of calculating odds or logically determining whether one alternative would result in more deaths than another.
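
A minimal sketch of that kind of top-down utilitarian calculation, again in Python, might look like the following (the scenario names and probability figures are invented for illustration). The machine can rank alternatives by expected deaths, but nothing in the computation registers the history, context or particulars that equity responds to.

# A hedged sketch of rule-oriented, top-down 'moral' calculation:
# choose whichever alternative minimizes the expected number of deaths.
# Everything equity would weigh is simply absent from the inputs.

def expected_deaths(outcomes):
    # outcomes: list of (probability, deaths) pairs for one alternative
    return sum(p * d for p, d in outcomes)

def choose(alternatives):
    # Pick the alternative with the lowest expected death toll.
    return min(alternatives, key=lambda name: expected_deaths(alternatives[name]))

alternatives = {
    "strike_target_a": [(0.8, 2), (0.2, 10)],  # expected deaths: 3.6
    "strike_target_b": [(0.5, 3), (0.5, 4)],   # expected deaths: 3.5
}

print(choose(alternatives))  # 'strike_target_b': the smaller number wins, whatever the particulars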


What it would not be able to do is empathize or sympathize with a human being, or judge that the particulars of this or that situation may morally enjoin it to re-interpret its general programming in a new, more encompassing way.


It is precisely this that we humans are able to do: to judge a situation reflectively, not reflexively, through something like the notion of equity, exercising our uniquely human capacity to think and imagine the pain, adversity, disadvantage, hardship or suffering that others might be going through.


[1] The business model behind social media platforms such as Facebook and Twitter is based on predicting and manipulating human behaviour. Algorithms are designed to attract user attention and amplify certain emotions (often negative ones). By clicking on an attractive image, advertisement or news story that affirms a bias or prejudice, users experience an addictive 'dopamine spike'. This keeps them coming back to the platform again and again. This behaviourist account of human response to attractive virtual stimuli can be instructively compared with B.F. Skinner's box. Skinner placed a hungry rat in a box that contained a lever on one side. As the rat moved about the box, it would accidentally knock the lever, and a food pellet would drop into a container next to the lever. After being put in the box a few times, the rat learned to go straight to the lever. Whether we speak here of the pleasure involved in receiving food when a lever is pressed or of the 'dopamine rush' that results from exposure to pleasing images or news stories that affirm existing biases, the presumption is always that human behavioural responses will be reflexive rather than reflective, predictable, and highly susceptible to manipulation. This has worked out well as a business model. Does it mean that behaviourism gives us an accurate description or demonstrably true theory of the way human beings experience or act in the world? As the political philosopher Hannah Arendt relates in her book 'The Human Condition', "The trouble with modern theories of behaviorism is not that they are wrong but that they could become true, that they actually are the best possible conceptualization of certain obvious trends in modern society." One might say, following Arendt, that the modern trend is simply to reduce human beings to something like programmable, pleasure-oriented robots, precisely in order to prevent them from developing into critically thinking and imagining moral beings.
