
Monday 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic, Saint Teresa. In her autobiography, she wrote that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: ‘Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods?’ (Plato, Euthyphro)


Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.

 

Put another way, if what’s morally good or bad is simply whatever the gods arbitrarily decree, a position called divine command theory (or divine fiat) to which Euthyphro subscribed, then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later. 

 

In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels became the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:


‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is, as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article reply to Obj. 2).


In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, asking whether right and wrong can be known only by divine revelation. He suggested, rather, that there ought to be reasons, apart from religious tradition alone, why particular behaviour is moral or immoral:

 

‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686). 

 

Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, Qur’an, and Torah, bearing the moral and legal decrees professed to be handed down by God. But despite the dissimilarity of the situations, the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices), both serve as examples of ‘divine command theory’. That is, what’s deemed pious is presumed to be so precisely because God chooses to love it, in line with the theory. That pious something or other is not independently sitting adrift, noncontingently virtuous in its own right, with nothing transcendentally making it so.

 

This presupposes that God commands only what is good. It also presupposes that, for example, things like the giving of charity, the avoidance of adultery, and refraining from stealing, murdering, and making ‘graven images’ are morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts is aimed at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake, that is, apart from God making it so, clearly denies ‘divine command theory’.

 

For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God, where matters of morality may exist outside of God’s reach, suggesting that God is something other than all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.

 

Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who like St. Thomas Aquinas and Averroës believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This holds true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’. 

 

Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, presumptuously declared that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a theist of faith to spot the shortsighted dismissiveness of his assertion. After all, an atheist or agnostic might recognise the value, even the categorical need, of adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words, which greatly appeals to many people.

 

Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.

  

Monday 18 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch, published an iconic thought experiment illustrating what would forever after be known as ‘the trolley problem’: a class of problems probing our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though the fall will kill him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers, or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie: in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that redirecting the runaway trolley toward the one worker is considered the easier choice (a utilitarian basis), whereas visceral unease about pushing a person off the bridge is overwhelmingly strong (a deontological basis). Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person to push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function and increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially human error (the latter estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
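To see why encoding such priorities is harder than it sounds, consider a deliberately toy sketch of the purely ‘consequentialist’ calculation described above. The names here (`Outcome`, `choose_action`) are hypothetical illustrations, not anything a real automotive system uses; the point is only that a utilitarian rule reduces to minimising expected casualties, with no room for the deontological distinctions that trouble us in the bridge case:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """A candidate maneuver and its predicted human cost."""
    action: str
    expected_casualties: float  # probability-weighted deaths

def choose_action(outcomes):
    """A purely utilitarian rule: pick the maneuver predicted to
    cost the fewest lives, ignoring *how* those deaths come about."""
    return min(outcomes, key=lambda o: o.expected_casualties).action

# The classic trolley problem, recast as two candidate maneuvers:
trolley = [
    Outcome("stay on main track", expected_casualties=5.0),
    Outcome("switch to side track", expected_casualties=1.0),
]
print(choose_action(trolley))  # -> switch to side track
```

A deontological constraint would have to enter as a hard filter on which actions are permissible before any such minimisation runs, which is precisely where the two ethical frameworks pull a car’s ‘brain’ in different directions.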

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.