
Monday, 15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone has ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that moral worth, undermining others’ ability to exercise autonomy and make rational decisions, as we presume they would if they possessed the truth.

 

Kant’s ground-level belief was that we should value others strictly ‘as ends’, and never see people ‘merely as means to ends’. It is a maxim still valued and commonly espoused in human affairs today, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to such an inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding to know the place and time of a meeting to be attended by his intended target. Deontologists like Kant would consider lying to him immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Its reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth/nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, with its emphasis on the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, so long as, on net balance, its outcomes favour those affected.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceited or self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool condemnation of all lies invites examples that challenge its wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than doom her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie produces a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to get right, differing in significant ways from reality or from the greater good as subjectively judged. One may overvalue or undervalue benefits and harms — nontrivial possibilities.

 

But maybe what matters most in gauging consequences is motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, of no real value because it is wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet that rigorous standard. Indeed, a study by the psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and of tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral. Not allowing reasonable exceptions may have been obvious folly. Yet that may be cold comfort for some people, as lapses into excessive risk — weaving ever more tangled webs — court danger for unwary souls.


Meanwhile, those who feel they have been cut more slack than others might be advised to keep Quintilian’s counsel close.




* ‘O what a tangled web we weave / When first we practise to deceive’, Sir Walter Scott, poem, ‘Marmion: A Tale of Flodden Field’.

 

Monday, 14 February 2022

The Ethics of ‘Opt-out, Presumed-Consent’ Organ Donation

By Keith Tidman

According to current data, in the United States alone, some 107,000 people are now awaiting a life-saving organ transplant. Many times that number are of course in similar dire need worldwide, a situation found exasperating by many physicians, organ-donation activists, and patients and their families.


The trouble is that there’s a yawning lag between the number of organs donated in the United States and the number needed. The result is that, by some estimates, 22 Americans die every day, totalling 8,000 a year, while they desperately wait for a transplant that isn’t available in time.

 

It’s both a national and global challenge to balance the parallel exigencies — medical, social, and ethical — of recycling cadaveric kidneys, lungs, livers, pancreases, hearts, and other tissues in order to extend the lives of those with poorly functioning organs of their own or, more calamitously, with end-stage organ failure.

 

The situation is made worse by the following discrepancy: whereas 95% of adult Americans say they support organ donation upon a donor’s brain death, only slightly more than half actually register. Deeds don’t match bold proclamations. The resulting bottom line is that there were only 14,000 donors in 2021, well shy of need. The picture is similar worldwide, and in many cases much worse.

 

Yet, at the same time, there’s an encouraging ratio, which points to the benefits of deceased-donor programs and should spur action: the organs garnered from one donor can, astoundingly, save as many as eight lives.

 

Might the remedy for the gaping lag between need and availability therefore be to switch the model of cadaveric organ donation: from the opt-in, or expressed-consent, program to an opt-out, or presumed-consent, program? There are several ways that America, and other opt-in countries, would benefit from this shift in organ-donation models.

 

One is that among the many nations having experienced an opt-out program — from Spain, Belgium, Japan, and Croatia to Colombia, Norway, Chile, and Singapore, among many others — presumed-consent rates in some cases reach over 90%.

 

Here’s just one instance of such extraordinary success: Whereas Germany, with an opt-in system, hovers around a low 12% consent rate, its neighbour, Austria, with an opt-out system, boasts a 99% presumed-consent rate.

 

An alternative approach that, however, raises new ethical issues might be for more countries to incentivise their citizens to register as organ donors, and to stay on national registers for a minimum number of years. The incentive would be to move them up the queue as organ recipients, should they need a transplant in the future. The number of registered donors might spike, while patients’ needs would stand a better chance of being met.

 

Some ethical, medical, and legal circles acknowledge there’s conceivably a strong version and a weak version of presumed-consent (opt-out) organ recovery. The strong variant excludes the donor’s family from hampering the donation process. The weak variant of presumed consent, meanwhile, requires the go-ahead of the donor’s family, if the family can be found, before organs may be recovered. How well all that works in practice is unclear.

 

Meanwhile, whereas people might believe that donating cadaveric organs to ailing people is an ethically admissible act, indeed of great benefit to communities, they might well draw the ethical line at donation somehow being mandated by society.


Another issue raised by some bioethicists concerns whether the organs of a brain-dead person should be kept artificially functional, to maximise the odds of successful recovery and donation. The question affects both the expressed-consent and presumed-consent models of donation, each of which may at times require keeping organs viable.

 

An ethical benefit of the opt-out model is that it still honours the principles of agency and self-determination, as core values, while protecting the rights of objectors to donation. That is, if some people wish to decline donating their cadaveric organs — perhaps because of religion (albeit many religions approve organ donation), personal philosophy, notions of what makes a ‘whole person’ even in death, or simple qualms — those individuals can freely choose not to donate organs.

 

In line with these principles, it’s imperative that each person be allowed to retain autonomy over his or her organs and body, balancing perceived goals around saving lives and the actions required to reach those goals. Decision-making authority continues to rest primarily in the hands of the individual.

 

From a utilitarian standpoint, an opt-out organ-donation program entailing presumed consent provides society with the greatest good for the greatest number of people — the classic utilitarian formula. Yet, the formula needs to account for the expectation that some people, who never wished for their cadaveric organs to be donated, simply never got around to opting out — which may be the entry point for family intervention in the case of the weak version of presumed consent.

 

From a consequentialist standpoint, there are many patients, with lives hanging by a precariously thinning thread, whose wellbeing is greatly improved (indeed, whose lives are saved) by repurposing valuable, essential organs through cadaveric organ transplantation. This consequentialist calculation points to the care needed to reassure the community that every medical effort is of course still made to save prospective, dying donors.

 

From the standpoint of altruism, the calculus is generally the same whether a person, in an opt-in country, in fact does register to donate their organs; or whether a person, in an opt-out country, chooses to leave intact their status of presumed consent. In either scenario, informed permission — expressed or presumed — to recover organs is granted and many more lives saved.

 

For reasons such as those laid out here, in my assessment the balance of the life-saving medical, pragmatic (supply-side efficiency), and ethical imperatives means that countries like the United States ought to switch from the opt-in, expressed-consent standard of cadaveric organ donation to the opt-out, presumed-consent standard.

 

Monday, 20 January 2020

Environmental Ethics and Climate Change

Posted by Keith Tidman

The signals of a degrading environment are many and on an existential scale, imperilling the world’s ecosystems. Rising surface temperature. Warming oceans. Shrinking Greenland and Antarctic ice sheets. Glacial retreat. Decreased snow cover. Sea-level rise. Declining Arctic sea ice. Increased atmospheric water vapour. Permafrost thawing. Ocean acidification. And not least, supercharged weather events (more frequent, longer lasting, more intense).

Proxy (indirect) measurements — ice cores, tree rings, corals, ocean sediment — show that carbon dioxide, a heat-trapping gas that plays an important role in creating the greenhouse effect on Earth, has spiked dramatically since the beginning of the Industrial Revolution. The measurements underscore that the recent increase far exceeds the natural ups and downs of the previous several hundred thousand years. Human activity — use of fossil fuels to generate energy and run industry, deforestation, cement production, land-use changes, modes of travel, and much more — continues to be the accelerant.

The reports of the United Nations’ Intergovernmental Panel on Climate Change, contributed to by some 1,300 independent scientists and other researchers from more than 190 countries worldwide, conclude that concentrations of carbon dioxide, methane, and nitrous oxide ‘have increased to levels unprecedented in at least 800,000 years’. The level of certainty that human activity is the leading cause, referred to as the anthropogenic cause, has been placed at more than 95 percent.

That probability figure has legs, in terms of scientific method. Early logical positivists like A.J. Ayer had asserted that for validity, a scientific proposition must be capable of proof — that is, ‘verification’. Later, however, Karl Popper, in his The Logic of Scientific Discovery, argued that in the case of verification, no number of observations can be conclusive. As Popper said, no matter how many instances of white swans we may have observed, this does not justify the conclusion that all swans are white. (Lo and behold, a black swan shows up.) Instead, Popper said, the scientific test must be whether in principle the proposition can be disproved — referred to as ‘falsification’. Perhaps, then, the appropriate test is not ability to prove that mankind has affected the Earth’s climate; rather, it’s incumbent upon challengers to disprove (falsify) such claims. Something that  hasn’t happened and likely never will.

As for the ethics of human intervention into the environment, utilitarianism is the usual measure. That is to say, the consequences of human activity upon the environment govern the ethical judgments one makes of behavioural outcomes to nature. However, we must be cautious not to translate consequences solely in terms of benefits or disadvantages to humankind’s welfare; our welfare appropriately matters, of course, but not to the exclusion of all else in our environment. It is a bias to which we have repeatedly succumbed.

The danger of such skewed calculations may be in sliding into what the philosopher Peter Singer calls ‘speciesism’. This is where, hierarchically, we place the worth of humans above all else in nature, as if the latter is solely at our beck and call. This anthropocentric favouring of ourselves is, I suggest, arbitrary and too narrow. The bias is also arguably misguided, especially if it disregards other species — depriving them of autonomy and inherent rights — irrespective of the sophistication of their consciousness. To this point, the 18th/19th-century utilitarian Jeremy Bentham asserted that the question about animals is not whether they can reason or talk, ‘but, Can they suffer?’

Assuredly, human beings are endowed with cognition that’s in many ways vastly more sophisticated than that of other species. Yet, without lapsing into speciesism, there seem to be distinct limits to the comparison, if we are to avoid committing what’s referred to as a ‘category mistake’ — in this instance, assigning qualities to species (from orangutans and porpoises to snails and amoebas) that belong only to humans. In other words, an overwrought egalitarianism. Importantly, however, that’s not the be-all of the issue. Our planet is teeming not just with life, but with other features — from mountains to oceans to rainforests — that are arguably more than mere accoutrements for simply enriching our existence. Such features have ‘intrinsic’ or inherent value — that is, they have independent value, apart from the utilitarian satisfaction of our needs and wants.

For perspective, perhaps it would be better to regard humans as nodes in what we consider a complex ‘bionet’. We are integral to nature; nature is integral to us; in their entirety, the two are indissoluble. Hence, while skirting implications of panpsychism — where everything material is thought to have at least an element of consciousness — there should be prima facie respect for all creation: from animate to inanimate. These elements have more than just the ‘instrumental’ value of satisfying the purposes of humans; all of nature is itself intrinsically the ends, not merely the means. Considerations of aesthetics, culture, and science, though important and necessary, aren’t sufficient.

As such, there is an intrinsic moral imperative not only to preserve Earth, but for it and us jointly to flourish — per Aristotle’s notion of ‘virtue’, with respect and care, including for the natural world. It’s a holistic view that concedes, on both the utilitarian and intrinsic sides of the moral equation, mutually serving roles. This position accordingly pushes back against the hubristic idea that human-centrism makes sense if the rest of nature collectively amounts only to a backstage for our purposes. That is, a backstage that provides us with a handy venue where we act out our roles, whose circumstances we try to manage (sometimes ham-fistedly) for self-satisfying purposes, where we tinker ostensibly to improve, and whose worth (virtue) we believe we’re in a position to judge rationally and bias-free.

It’s worth reflecting on a thought experiment, dubbed ‘the last man’, that the Australian philosopher Richard Routley introduced in the 1970s. He envisioned a single person surviving ‘the collapse of the world system’, choosing to go about eliminating ‘every living thing, animal and plant’, knowing that there’s no other person alive to be affected. Routley concluded that ‘one does not have to be committed to esoteric values to regard Mr. Last Man as behaving badly’. Whether Last Man was, or wasn’t, behaving unethically goes to the heart of intrinsic versus utilitarian values regarding nature — and presumptions about human supremacy in that larger calculus.

Groups like the UN Intergovernmental Panel on Climate Change have laid down markers as to tipping points beyond which extreme weather events might lead to disastrously runaway effects on the environment and humanity. Instincts related to the ‘tragedy of the commons’ — where people rapaciously consume natural resources and pollute, disregarding the good of humanity at large — have not yet been surmounted. That some other person, or other community, or other country will shoulder accountability for turning back the wave of environmental destruction and the upward-spiking curve of climate extremes has hampered the adequacy of attempted progress. Nature has thrown down the gauntlet. Will humanity pick it up in time?

Monday, 9 December 2019

Is Torture Morally Defensible?


Posted by Keith Tidman

Far from being treated as unconscionable, torture has today in effect been universalised: according to Amnesty International, some 140 countries resort to it, whether for use by domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids torture, whether practised domestically or outsourced to countries where it is legal (through so-called renditions).

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
  • An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians.
  • A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom.
  • An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names.
  • Soldiers have a high-ranking captive, who knows details of the enemy’s plans to launch a counteroffensive.
  • A kingpin drug supplier, with his metastasised network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, justify resorting to torture? Are there other cases in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous phrase that ‘it is the greatest happiness of the greatest number that is the measure of right and wrong’ seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greatest number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as discounting outcomes for chosen behaviours.

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in those lights, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat, forfeit those rights.

Utilitarianism has been criticised, however, for its obtuse ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. Similarly, some ‘bills of rights’ have attempted to provide pushback against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved of in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar might in some cases be lowered. As a possible fast track in remedying a threat — maybe an extra-judicial fast track — torture is tempting, especially when used ‘for defence’. However, the uneasiness is in torture turning into an obligation, shrouded in an alleged moral imperative, perhaps to exploit a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, absolutism in the total ban on torture might itself be regarded as immoral, if the result is death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot avoid being involved in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below it: grist for considerable contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.


I would argue one might morally come down on the side of the defensible utility of the practice, albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, the potential aftermath of inertia, and the hard choices societies face.

Monday, 18 November 2019

Getting the Ethics Right: Life and Death Decisions by Self-Driving Cars

Yes, the ethics of driverless cars are complicated.
Image credit: Iyad Rahwan
Posted by Keith Tidman

In 1967, the British philosopher Philippa Foot, daughter of a British Army major and sometime flatmate of the novelist Iris Murdoch,  published an iconic thought experiment illustrating what forever after would be known as ‘the trolley problem’. These are problems that probe our intuitions about whether it is permissible to kill one person to save many.

The issue has intrigued ethicists, sociologists, psychologists, neuroscientists, legal experts, anthropologists, and technologists alike, with recent discussions highlighting its potential relevance to future robots, drones, and self-driving cars, among other ‘smart’, increasingly autonomous technologies.

The classic version of the thought experiment goes along these lines: The driver of a runaway trolley (tram) sees that five people are ahead, working on the main track. He knows that the trolley, if left to continue straight ahead, will kill the five workers. However, the driver spots a side track, where he can choose to redirect the trolley. The catch is that a single worker is toiling on that side track, who will be killed if the driver redirects the trolley. The ethical conundrum is whether the driver should allow the trolley to stay the course and kill the five workers, or alternatively redirect the trolley and kill the single worker.

Many twists on the thought experiment have been explored. One, introduced by the American philosopher Judith Thomson a decade after Foot, involves an observer, aware of the runaway trolley, who sees a person on a bridge above the track. The observer knows that if he pushes the person onto the track, the person’s body will stop the trolley, though killing him. The ethical conundrum is whether the observer should do nothing, allowing the trolley to kill the five workers. Or push the person from the bridge, killing him alone. (Might a person choose, instead, to sacrifice himself for the greater good by leaping from the bridge onto the track?)

The ‘utilitarian’ choice, where consequences matter, is to redirect the trolley and kill the lone worker — or in the second scenario, to push the person from the bridge onto the track. This ‘consequentialist’ calculation, as it’s also known, results in the fewest deaths. On the other hand, the ‘deontological’ choice, where the morality of the act itself matters most, obliges the driver not to redirect the trolley because the act would be immoral — despite the larger number of resulting deaths. The same calculus applies to not pushing the person from the bridge — again, despite the resulting multiple deaths. Where, then, does one’s higher moral obligation lie; is it in acting, or in not acting?

The ‘doctrine of double effect’ might prove germane here. The principle, introduced by Thomas Aquinas in the thirteenth century, says that an act that causes harm, such as injuring or killing someone as a side effect (‘double effect’), may still be moral as long as it promotes some good end (as, let’s say, saving five lives rather than just the one).

Empirical research has shown that redirecting the runaway trolley toward the one worker is considered the easier choice (the utilitarian basis), whereas visceral unease at pushing a person off the bridge is overwhelmingly strong (the deontological basis). Although both acts involve intentionality — resulting in killing one rather than five — it’s seemingly less morally offensive to impersonally pull a lever to redirect the trolley than to place hands on a person to push him off the bridge, sacrificing him for the good of the many.

In similar practical spirit, neuroscience has interestingly connected these reactions to regions of the brain, to show neuronal bases, by viewing subjects in a functional magnetic resonance imaging (fMRI) machine as they thought about trolley-type scenarios. Choosing, through deliberation, to steer the trolley onto the side track, reducing loss of life, resulted in more activity in the prefrontal cortex. Thinking about pushing the person from the bridge onto the track, with the attendant imagery and emotions, resulted in the amygdala showing greater activity. Follow-on studies have shown similar responses.

So, let’s now fast forward to the 21st century, to look at just one way this thought experiment might, intriguingly, become pertinent to modern technology: self-driving cars. The aim is to marry function and increasingly smart, deep-learning technology. The longer-range goal is for driverless cars to consistently outperform humans along various critical dimensions, especially in avoiding human error (estimated to account for some ninety percent of accidents) — while nontrivially easing congestion, improving fuel mileage, and polluting less.

As developers step toward what’s called ‘strong’ artificial intelligence — where AI (machine learning and big data) becomes increasingly capable of human-like functionality — automakers might find it prudent to fold ethics into their thinking. That is, to consider the risks on the road posed to self, passengers, drivers of other vehicles, pedestrians, and property. With the trolley problem in mind, ought, for example, the car’s ‘brain’ favour saving the driver over a pedestrian? A pedestrian over the driver? The young over the old? Women over men? Children over adults? Groups over an individual? And so forth — teasing apart the myriad conceivable circumstances. Societies, drawing from their own cultural norms, might call upon the ethicists and other experts mentioned in the opening paragraph to help get these moral choices ‘right’, in collaboration with policymakers, regulators, and manufacturers.
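To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a configurable ‘ethics policy’ might rank candidate manoeuvres by weighing predicted harms. The Outcome fields, the POLICY_WEIGHTS values, and the choose_manoeuvre function are hypothetical inventions for this example, not any manufacturer’s actual system; real vehicles face far richer inputs and uncertainties.

```python
from dataclasses import dataclass

# Hypothetical predicted outcome of one candidate manoeuvre.
@dataclass
class Outcome:
    manoeuvre: str          # e.g. 'brake_hard', 'swerve_left'
    passenger_risk: float   # estimated probability of serious harm to passengers
    pedestrian_risk: float  # estimated probability of serious harm to pedestrians
    property_damage: float  # normalised 0..1 estimate of property damage

# Hypothetical, society- or regulator-supplied weights expressing moral priorities.
# Different jurisdictions or cultures might configure these differently.
POLICY_WEIGHTS = {
    'passenger_risk': 1.0,
    'pedestrian_risk': 1.0,   # equal weight: no favouring of driver over pedestrian
    'property_damage': 0.05,  # property counts far less than people
}

def harm_score(o: Outcome, weights: dict) -> float:
    """Aggregate weighted harm; lower is better (a crude consequentialist tally)."""
    return (weights['passenger_risk'] * o.passenger_risk
            + weights['pedestrian_risk'] * o.pedestrian_risk
            + weights['property_damage'] * o.property_damage)

def choose_manoeuvre(options: list, weights: dict = POLICY_WEIGHTS) -> Outcome:
    """Pick the candidate manoeuvre with the lowest weighted expected harm."""
    return min(options, key=lambda o: harm_score(o, weights))

if __name__ == '__main__':
    candidates = [
        Outcome('brake_hard', passenger_risk=0.10, pedestrian_risk=0.30, property_damage=0.2),
        Outcome('swerve_left', passenger_risk=0.25, pedestrian_risk=0.05, property_damage=0.6),
    ]
    best = choose_manoeuvre(candidates)
    print(f'Chosen manoeuvre: {best.manoeuvre}')
```

The point of the sketch is only that the weights are a policy choice rather than an engineering constant: which configuration is ‘right’ is precisely the question ethicists, regulators, and the public would have to settle.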

Thought experiments like this have gained new traction in our techno-centric world, including the forward-leaning development of ‘strong’ AI, big data, and powerful machine-learning algorithms for driverless cars: vital tools needed to address conflicting moral priorities as we venture into the longer-range future.

Monday, 16 April 2018

'Evil': A Brief Search for Understanding

In medieval times, evil was often personified in not-quite-human forms

Posted by Keith Tidman

Plato may have been right in asserting that ‘there must always be something antagonistic to good’. Yet pause a moment and wonder exactly why. And what is it about ‘evil’ that means it can be understood and defined equally from both religious and secularist viewpoints? I would argue that fundamental to an exploration of both these questions is the notion that for something to be evil, there must be an essential component: moral agency. And as to this critical point, it might help to begin with a case where moral agency and evil arguably have converged.

The case in question is repeated uses of chemical weapons in Syria, made all too real recently. Graphic images of gassed children, women, and men, gasping for air and writhing in pain, have circulated globally and shocked people’s sense of humanity. The efficacy of chemical weapons against populations lies not only in the weapons’ lethality but — just as distressingly and perhaps more to the weapons’ purpose — in the resulting terror, shock, and panic, among civilians and combatants alike. Such use of chemical weapons does not take place, however, without someone, indeed many people, making a deliberate, freely made decision to engage in the practice. Here is the intentionality of deed that infuses human moral agency and, in turn, gives rise to a shared perception that such behaviour aligns with ‘evil’.

One wonders what the calculus was among the instigators (who they are need not concern us, much as it matters from the political standpoint) to begin and sustain the indiscriminate use of chemical weapons. And what were the considerations as to whom to 'sacrifice' (the question of presumed human dispensability) in the name of an ideology or a quest for simple self-survival? Were the choices viewed and the decisions made on ‘utilitarian’ grounds? That is, was the intent to maim and kill in such shocking ways to demoralise and dissuade the insurgency’s continuation (short-term consequences), perhaps in the expectation that the conflict would end sooner (longer-term consequences)? Was it part of some larger geopolitical messaging between Russia and the United States? (Some even claim the attacks were orchestrated by the latter to discredit the former...)

Whatever the political scenario, it seems that the ‘deontological’ judgement of the act — the use of chemical weapons — has been lost. This, after all, can only make the use utterly immoral irrespective of consequences. Meanwhile, world hesitancy or confusion fails to stop another atrocity against humanity, and the hesitancy itself has its own pernicious effects. The 19th-century British philosopher John Stuart Mill underscored this point, observing that:
“A person may cause evil to others not only by his actions but by his inaction, and in either case he is justly accountable to them for the injury.”
Keeping the preceding scenario in Syria in mind, let’s further explore the dimensions of rational moral agency and evil. Although the label ‘evil’ is most familiar when used to qualify the affairs of human beings, it can be used more widely, for example in relation to natural phenomena. Yet I focus here on people because, although predatory animals can and do cause serious harm, even death, I would argue that the behaviour of animals more fittingly falls under the rubric of ‘natural phenomena’ and that only humans are truly capable of evil.

As one distinction, people can readily anticipate — project and understand — the potential for harm, on an existential level; other species probably cannot (with research continuing). As for differentiating between, say, wrongdoing and full-on evil, context is critical. Another instantiation of evil is history’s many impositions of colonial rule, as practised in all parts of the world. Colonial rule not uncommonly oppressed its victims, in all manner of scarring ways: sowing fear and injustice, stripping away human rights, inflicting physical and emotional pain, and destroying indigenous traditions.

This tipping point from wrongdoing, say someone under-reporting taxable income or skipping out on a restaurant bill, into full-on evil is made evident in further examples. These are deeds that run the gamut: serial murder that preys on communities, terrorist attacks on subway trains, genocide aimed at helpless minority groups, massacres, enslavement of people, torture, abuses of civilians during conflicts, summary executions, and mutilation, as well as child abuse, rape, racism, and environmental destruction. Such atrocities happen because people arrive at freely made choices: deliberateness, leading to causation.

These incidents, and their perpetrators (society condemns both doer and deed), aren’t just ‘wrong’, or ‘bad’, or even ‘contemptible’; they’re evil. Even though context matters and can add valuable explanation — circumstances that mitigate or aggravate deeds, including instigators’ motives — rendering judgements about evil is still possible, even if occasionally tenuously. So, for example, mitigation might include being unaware of the harmful consequences of one's actions, well-meaning intent that unpredictably goes awry, the pernicious effects of a corrupting childhood, or the lack of empathy of a psychopath. Under these conditions, blame and culpability hardly seem appropriate. Aggravation, on the other hand, might lie in the deliberate, cruel infliction of pain and the pleasure derived from it, such as might occur during the venal kidnapping of a woman or child.

As for a religious dimension to moral agency, such agency might be viewed as applying to a god, in the capacity as creator of the universe. In this model of creation, such a god is seen as serving as the moral agent behind what I referred to above as ‘natural evil’ — from hurricanes, earthquakes, volcano eruptions, tsunamis, and droughts to illnesses, famine, pain, and grief. They of course often have destructive, even deadly, consequences. Importantly, that such evil occurs in the realm of nature doesn’t award it exceptional status. This, despite occasional claims to the contrary, such as the overly reductionist, but commonplace, assertion of the ancient Roman emperor-philosopher Marcus Aurelius:
 “Nothing is evil which is according to nature.”
In the case of natural events, evil may be seen as stemming not from intentions but only from the consequences of such phenomena — starvation, precarious subsistence, homelessness, broken-up families, desolation, widespread chronic diseases, rampant infant mortality, breakdown of social systems, malaise, mass exoduses of desperate migrants escaping violence, and gnawing hopelessness.

Such things have prompted faith-based debates over evil in the world. Specifically, if, as commonly assumed by religious adherents, there is a god that’s all-powerful, all-knowing, and all-benevolent, then why is there evil, including our examples above of natural evil? In one familiar take on theodicy, the 4th/5th-century philosopher Saint Augustine offered a partial explanation, averring that:
 “God judged it better to bring good out of evil than to suffer no evil to exist.” 
Other philosophers have asserted that the absence of evil, where people could only act for the good (along with a god’s supposed foreknowledge of people’s choices), would a priori render free will unnecessary and, of note, leave choices predetermined.

Yet, the Gordian knot remains untied: our preceding definition of a god that is all-powerful and all-benevolent would rationally include being able to, as well as wanting to, eliminate evil and the suffering stemming from it. Especially, and surely, in the framework of that god’s own moral agency and unfettered free will. Since, however, evil and suffering are present — ubiquitously and incessantly — a reasonable inquiry is whether a god therefore exists. If one were to conclude that a god does exist, then recurring natural evil might suggest that the god did not create the universe expressly, or at least not entirely, for the benefit of humankind. That is, that humankind isn’t, perhaps, central or exceptional, but rather is incidental, to the universe’s existence. Accordingly, one might presuppose an ontological demotion.

Human moral agency remains core even when it is institutions — for example, governments and organisations of various kinds — that formalise actions. Here, again, the pitiless use of chemical weapons in Syria presents us with a case in point to better understand institutional behaviour. Importantly, however, even at the institutional level, human beings inescapably remain fundamental and essential to decisions and deeds, while institutions serve as tools to leverage those decisions and deeds. National governments around the world routinely suppress and brutalise minority populations, often with little or no provocation. Put another way, it is the people, as they course through the corridors of institutions, who serve as the central actors. They make, and bear responsibility for, policies.

It is through institutions that people’s decisions and deeds become externalised — ideas instantiated in the form of policies, plans, regulations, acts, and programs. In this model of individual and collective human behaviour, institutions have the capacity for evil, even in cases when bad outcomes are unintended. This affirms, one might note in addressing institutional behaviour, that the 20th-century French novelist and philosopher Albert Camus was perhaps right in observing:
“Good intentions may do as much harm as malevolence if they lack understanding.”
So, to the point: an institution’s ostensibly well-intended policy — for example, freeing up corporate enterprise to create jobs and boost national productivity — may nonetheless unintentionally cause suffering — for example, increased toxins in the soil, water, and air, affecting the health of communities. Here, again, is a way in which effects, and not only intentions, produce bad outcomes.

But at other times, the moral agency behind decisions and deeds perpetrated by institutions’ human occupants may intentionally aim toward evil. Cases span the breadth of actions: launching wars overtly with plunder or hegemonism in mind; instigating pogroms or killing fields; materially disadvantaging people based on identities like race, ethnicity, religion, or national origin (harsh treatment of migrants being a recent example); ignoring the dehumanising and stunting effects of child labour; showing policy disregard as society’s poorest elderly live in squalor; allowing industries to seep toxins into the environment for monetary gain — there are myriad examples. Institutions aren’t, therefore, simply bricks and mortar. They have a pulse, comprising the vision, philosophy, and mission of the people who design and implement their policies, benign or malign.

Evil, then, involves more than what Saint Augustine saw as the ‘privation’ of good — privation of virtuousness, equality, empathy, responsible social stewardship, health, compassion, peace, and so forth. In reality, evil is far less passive than Saint Augustine’s vision. Rather, evil arises from the deliberate, free making of life’s decisions and one’s choice to act on them, in clear contravention of humanity’s well-being. Evil is distinguished from the mere absence of good, and is much more than Plato’s insight that there must always be something ‘antagonistic’ to good. In many instances, evil is flagrant, as in our example of the use of chemical weapons in Syria; in other instances, it is more insidious and sometimes veiled, as in the corruption of government plutocrats invidiously dipping into national coffers at the expense of the populace’s quality of life. In either case, it is evident that evil, whether in its manmade or in its natural variant, exists in its own right and thus can be parsed and understood from both the religious and the secular vantage point.