
Monday, 26 June 2023

Ideas Animate Democracy


Keith Tidman
 

The philosopher Soren Kierkegaard once advised, ‘Life can only be understood backwards … but it must be lived forward’ — that is, life understood with one eye turned to history, and presaged with the other eye turned to competing future prospects. An observation about understanding and living life that applies across the board, to individuals, communities, and nations. Another way of putting it is that ideas are the grist for thinking not only about ideals but about the richness of learnable history and the alternative futures from which society asserts agency in freely choosing its way ahead. 


Of late, though, we seem to have lost sight of the fact that one way for democracy to wilt is to shunt aside ideas that might otherwise inspire minds to think, imagine, solve, create, discover, and innovate — the source of democracy’s intellectual muscularity. For reflexively rebuffing ideas and their sources is really about constraining inquiry and debate in the public square. Instead, there has been much chatter about democracies facing existential grudge matches against exploitative autocratic regimes that trumpet triumphalist narratives and view democracy as weak-kneed.  


In mirroring the decrees of the Ministry of Truth in the dystopian world of George Orwell’s novel Nineteen Eighty-Four — where two plus two equals five, war is peace, freedom is slavery, and ignorance is strength — unbridled censorship and historical revisionism begin and end with the fear of ideas. Ideas snubbed by authoritarians’ heavy hand. The short of it is that prohibitions on ideas end up a jumbled net, a capricious exercise in power and control. Accordingly, much exertion is put into shaping society’s sanctioned norms, where dissent isn’t brooked. A point about which the philosopher Hannah Arendt cautioned, ‘Totalitarianism has discovered a means of dominating and terrorising human beings from within’. There, trodden-upon voting and the ardent circulation of propagandistic themes, both of which torque reality, hamper free expression.

 

This tale about prospective prohibitions on ideas is about a choice between the resulting richness of thought and the poverty of thought — a choice we must get right, and can do so only by making it possible for new intellectual shoots to sprout from the raked seedbed. The optimistic expectation from this is that we get to understand and act on firmer notions of what’s real and true. But which reality? One reality is that each idea that’s arbitrarily embargoed opens yet another chink in democracy’s armour; a very different reality is that each idea, however provocative, allows democracy to flourish.

 

Only a small part of the grappling over ideas is for dominion over which ideas will reasonably prevail long term. The larger motive is to honour the openness of ideas’ free flow, to be celebrated. This exercise brims with questions about knowledge. Like these: What do we know, how do we know it, with what certainty or uncertainty do we know it, how do we confirm or refute it, how do we use it for constructive purposes, and how do we allow for change? Such fundamental questions crisscross all fields of study. New knowledge ferments to improve insight into what’s true. Emboldened by this essential exercise, an informed democracy is steadfastly enabled to resist the siren songs of autocracy.

 

Ideas are accelerants in the public forum. Ideas are what undergird democracy’s resilience and rootedness, on which standards and norms are founded. Democracy at its best allows for the unobstructed flow of different social and political thought, side by side. As Benjamin Franklin, polymath and statesman, prophetically said: ‘Freedom of speech is a principal pillar of a free government’. A lead worth following. In this churn, ideas soar or flop by virtue of the quality of their content and the strength of their persuasion. Democracy allows its citizens to pick which ideas normalise standards — through debate and subjecting ideas to scrutiny, leading to their acceptance or refutation. Acid tests, in other words, of the cohesion and sustainability of ideas. At its best, debate arouses actionable policy and meaningful change.

 

Despite society being buffeted daily by roiling politics and social unrest, democracy’s institutions are resilient. Our institutions might flex under stress, but they are capable of enduring the broadsides of ideological competitiveness as society makes policy. The democratic republic is not existentially imperiled. It’s not fragilely brittle. America’s Founding Fathers set in place hardy institutions, which, despite public handwringing, have endured challenges over the last two-and-a-half centuries. Historical tests of our institutions’ mettle have inflicted only superficial scratches — well within institutions’ ability to rebound again and again, eventually as robust as ever.

 

Yet, as Aristotle importantly pointed out by way of a caveat to democracy’s sovereignty and survivability, 


‘If liberty and equality . . . are chiefly to be found in democracy, they will be attained when all persons share in the government to the utmost.’


A tall order, as many have found, but one that’s worthy and essential, teed up for democracies to assiduously pursue. Democracy might seem scruffy at times. But at its best, democracy ought not fear ideas. Fear that commonly bubbles up from overwrought narrative and unreasoned parochialism, in the form of ham-handed constraints on thought and expression.

 

The fear of ideas is often more injurious than the content of ideas, especially in the shadows of disagreeableness intended to cause fissures in society. Ideas are thus to be hallowed, not hollowed. To countenance contesting ideas — majority and minority opinions alike, forged on the anvil of rationalism, pluralism, and critical thinking — is essential to the origination of constructive policies and, ultimately, how democracy is constitutionally braced.

 

 

Sunday, 26 February 2023

Universal Human Rights for Everyone, Everywhere

Jean-Jacques Rousseau

By Keith Tidman


Human rights exist only if people believe that they do and act accordingly. To that extent, we are, collectively, architects of our destiny — taking part in an exercise in the powers of human dignity and sovereignty. Might we, therefore, justly consider human rights as universal?

To presume that there are such rights, governments must be fashioned according to the people’s freely subscribed blueprints, in such ways that policymaking and consignment of authority in society represent citizens’ choices and that power is willingly shared. Such individual autonomy is itself a fundamental human right: a norm to be exercised by all, in all corners. Despite scattered conspicuous headwinds. Respect for and attachment to human rights in relations with others is binding, prevailing over the mercurial whimsy of institutional dictates.

For clarity, universal human rights are inalienable norms that apply to everyone, everywhere. No nation ought to self-immunise as an exception. These human rights are not mere privileges. By definition they represent the natural order of things; that is, these rights are naturally, not institutionally, endowed. There’s no place for governmental, legal, or social neglect or misapplication of those norms, heretically violating human dignity. This point about dignity is redolent of Jean-Jacques Rousseau’s notions of civil society, explained in his Social Contract (1762), which provocatively opens with the famous ‘Man was born free, and he is everywhere in chains’. By which Rousseau was referring to the tradeoff between people’s deference to government authority over moral behaviour in exchange for whatever freedoms civilisation might grant as part of the social contract. The contrary notion, however, asserts that human rights are natural, protected from government caprice in their unassailability — claims secured by the humanitarianism of citizens in all countries, regardless of cultural differences.

The idea that everyone has a claim to immutable rights has the appeal of providing a platform for calling out wrongful behaviour and a moral voice for preventing or remedying harms, in compliance with universal standards. The standards act as moral guarantees and assurance of oversight. The differences among cultures should not translate to the warped misplacement of relativism in calculating otherwise clear-cut universal rights aimed to protect.

International nongovernmental organisations (such as Human Rights Watch) have laboured to protect fundamental liberties around the world, investigating abuses. Other bodies, notably the United Nations, have sought to codify people's rights, like those spelled out in the Universal Declaration of Human Rights. The many universal human rights listed by the declaration include these:
‘All human beings are born free; everyone has the right to life, liberty, and security; no one shall be subjected to torture; everyone has the right to freedom of thought, conscience, and religion; everyone has the right to education; no one shall be held in slavery; all are equal before the law’. 
(The full UN declaration conveys the breadth of these rights.) 

These aims have been ‘hallowed’ by the several documents spelling out moral canon, in aggregate amounting to an international bill of rights to which countries are to commit and by which they are to abide. This has been done without regard to appeals to national sovereignty or cultural differences, which might otherwise prejudice the process, skew policy, undermine moral universalism, lay claim to government dominion, or cater to geopolitical bickering — such things always threatening to pull the legs out from under citizens’ human rights.

These kinds of organisations have set the philosophical framework for determining, spelling out, justifying, and promoting the implementation of human rights on as wide a global scale as possible. Aristotle, in Nicomachean Ethics, wrote to this core point, saying: 
‘A rule of justice is natural that has the same validity everywhere, and does not depend on our accepting it’.
That is, natural justice foreruns social, historical, and political institutions shaped to bring about conformance to their arbitrary, self-serving systems of fairness and justice. Aristotle goes on:
‘Some people think that all rules of justice are merely conventional, because whereas a law of nature is immutable and has the same validity everywhere, as fire burns both here and in Persia, rules of justice are seen to vary. That rules of justice vary is not absolutely true, but only with qualifications. Among the gods indeed it is perhaps not true at all; but in our world, although there is such a thing as Natural Justice, all rules of justice are variable. But nevertheless there is such a thing as Natural Justice as well as justice not ordained by nature’.
Natural justice accordingly applies to everyone, everywhere, where moral beliefs are objectively corroborated as universal truths and certified as profound human goods. In this model, it is the individual who shoulders the task of appraising the moral content of institutional decision-making.

Likewise, it was John Locke, the 17th-century English philosopher, who argued, in his Two Treatises of Government, that individuals enjoy natural rights, entirely independent of the nation-state. And that whatever authority the state might lay claim to rested in guarding, promoting, and serving the natural rights of citizens. The natural rights to life, liberty, and property set clear limits to the power of the state. There was no mystery as to Locke’s position: states existed singularly to serve the natural rights of the people.

A century later, Immanuel Kant was in the vanguard in similarly taking a strong moral position on validating the importance of human rights, chiefly the entangled ideals of equality and the moral autonomy and self-determination of rational people.

The combination of the universality and moral heft of human rights clearly imparts greater potency to people’s rights, untethered from legal or institutional acknowledgment. As such, human rights are enjoyed equally, by everyone, all the time. It makes sense to conclude that everyone is therefore responsible for guarding the rights of fellow citizens, not just their own. Yet, in practice it is the political regime and perhaps international organisations that bear that load.

And within the ranks of philosophers, human-rights universalists have sometimes clashed with relativists, who reject universal (objective) moral canon. They paint human rights as contingently shaped by social, historical, and cultural factors. The belief is that rights are apropos only in those countries whose cultures allow for them. Yet, surely, relativism still permits the universality of numerous rights. We instinctively know that not all rights are relative. At the least, societies must parse which rights endure as universal and which as relative, and hope the former are favoured.

That optimism notwithstanding, many national governments around the world choose not to uphold, either in part or in whole, fundamental rights in their countries. Perhaps the most transfixing case for universal human rights, as entitlements, is the inhumanity that haunts swaths of the world today, instigated for the most trifling of reasons.

Monday, 13 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are rare but nowhere near as valuable. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher, Adam Smith, to the development of modern economics concerned the so-called ‘paradox of value’.

That is, the question of why one of the most-critical items in people’s lives, water, is typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life? As Smith couched the issue in his magnum opus, titled An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour. Where access to an additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. A scenario that captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those sacks is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
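
The priority logic of the farmer illustration can be made explicit in a few lines of code. The following is only a minimal illustrative sketch in Python (the five uses and their ranking are simply transcribed from Böhm-Bawerk’s example above): losing a sack drops the lowest-valued remaining use, never a proportional slice of every use.

# Bohm-Bawerk's farmer: uses of grain, ranked from most to least valuable.
uses = [
    "bread for survival",
    "extra bread for strength to work",
    "feed for the farm animals",
    "grain for distilling alcohol",
    "grain for feeding the birds",
]

def remaining_uses(sacks: int) -> list[str]:
    # With n sacks, only the n highest-priority uses are kept;
    # each lost sack removes the least-valued remaining use.
    return uses[:sacks]

for sacks in range(5, 0, -1):
    print(f"{sacks} sack(s): {remaining_uses(sacks)}")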

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so, even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Apart from their original purpose, these commodities no longer have use-value. 
Yet, ‘a very great quantity of other goods may frequently be had in exchange for them’, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value, when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby ‘knowledge’, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally ‘in its use’ and ‘in its exchange’. In the former, that is in its use, knowledge is applied to acquire one’s own needs and wants; in the latter, that is in its exchange, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

Monday, 13 September 2021

The Play of Old and New

by Andrew Porter
In trying to figure out what's valuable in the old and the new, what should we keep or discard? Should change be invited or checked?
We know there is a relationship between the old and the new. It's both complex and fascinating. What is established may stay in place, or it may be replaced and perish.

If we want to help change society, or government, or ourselves for the better, how much of the old should we keep, and how much discard? Is modest reform in order, or a revolution? Should the depletion of, say, rain forests be allowed or prevented?

Aristotle delineated 'potential' as material, and 'actual' as form. We gather, therefore, that what exists is often on its way to completion, whereas the goal is the actual. This contrasts with the view that what exists is the 'actual', while future possibilities are 'potential'. Added to this is the fact that the old was once new, and the new will become old.

It might help us clarify the relationship if we can articulate the flow of old to new in real time.

Should we see it as a flow, or as a fixed contrast? What does a dynamic tension mean in this case? Is the new a rejection of the ossification of the old, or is it in harmony with it? How do old and new relate to the metaphysical principles of Order and Freedom? Are the old and the new in a dance with each other, the new emerging from the potentiality which already exists? Does novelty merely help advance and develop what has been?

Something that goes on throughout nature may sort much of this out. We regenerate skin and bone and muscle tissue, while certain sets of brain cells endure past these changes. It is all us. We are old and new.

Take a reformer, in politics or elsewhere, who wants to enact significant change. They have to deal both with the old and with the new. Existing patterns to overcome, new ideas to consider and implement. How will society change? How much hold ought it to have? The old is a mixed bag. How justified is the new? Potential beckons, but is it in that which exists, or in the ends at which a process aims?

Old and new act as permeable membranes to each other, each in flux in relation to the other. Novelty is in the potential of current things. A reformer usually tries to jettison a large chunk of the old, but, like their own body, must keep a substantial part of it. Imagine if both current existents and new emergences followed a reason. Would it be different in nature than in human life?

I'll skip away now with the questions. I blithely leave the answers to you.


Photo credit GharPedia

Monday, 19 July 2021

The ‘Common Good’ and Equality of Opportunity

Adam Smith, the 18th-century Scottish philosopher, warned against both
monopoly interests and government intervention in private economic arrangements.

Posted by Keith Tidman
 

Every nation grapples with balancing things that benefit the community as a whole — the common good — and those that benefit individuals — the private good. Untangling which things fall under each of the two rubrics is just one of the challenges. Decisions hinge on a nation’s history, political philosophy, approach to governance, and the general will of its citizenry.

 

At the core is recognition that community, civic relationships, and interdependencies matter in building a just society, as what is ‘just’ is a shared enterprise based on liberal Enlightenment principles around rights and ethics. Acting on this recognition drives whether a nation’s social system allows for every individual to benefit impartially from its bounty.

 

Although capitalism has proven to be the most-dynamic engine of nations’ wealth in terms of gross domestic product, it also commonly fosters gaping inequality between the multibillionaires and the many tens of millions of people left destitute. There are those left without homes, without food, without medical care — and without hope. As philosopher and political economist Adam Smith observed: 


‘Wherever there is great property there is great inequality. For one very rich man there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many’.


Today, this gap between the two extreme poles in wealth inequality is widening and becoming uglier in both material and moral terms. Among the worst injustices, however, is inequality not only of income or of wealth — the two traditional standards of inequality — but (underlying them both) inequality of opportunity. Opportunity as in access to education or training, meaningful work, a home in which to raise a family, leisure activity, the chance to excel unhampered by caste or discrimination. Such benefits ultimately stem from opportunity, without which there is little by way of quality of life.

 

I would argue that the presence or absence of opportunity in life is the root of whether society is fair and just and moral. The notion of the common good, as a civically moral imperative, reaches back to the ancient world, adjusting in accordance with the passage and rhythm of history and the gyrations of social composition. Aristotle stated in the Politics that ‘governments, which have a regard to the common interest, are constituted in accordance with strict principles of justice’.

 

The cornerstone of the common good is shared conditions, facilities, and establishments that redound to every citizen’s benefit. A foundation where freedom, autonomy, agency, and self-governance are realised through collective participation. Not as atomised citizens, with narrow self-interests. And not where society myopically hails populist individual rights and liberties. But rather through communal action in the spirit of liberalised markets and liberalised constitutional government institutions.

 

Common examples include law courts and an impartial system of justice, accessible public healthcare, civic-minded policing and order, affordable and sufficient food, thriving economic system, national defense to safeguard peace, well-maintained infrastructure, responsive system of governance, accessible public education, libraries and museums, protection of the environment, and public transportation.

 

The cornerstone of the private good is individual rights, with which the common good must be seeded and counterweighted. These rights, or civic liberties, commonly include those of free speech, conscience, public assembly, and religion. As well as rights to life, personal property, petition of the government, privacy, fair trial (due process), movement, and safety. That is, natural, inalienable human rights that governments ought not attempt to take away but rather ought always to protect.

 

One challenge is how to manage the potential pluralism of a society, where there are dissimilar interest groups (constituencies) whose objectives might conflict. In modern societies, these dissimilar groups are many, divided along lines of race, ethnicity, gender, country of origin, religion, and socioeconomic rank. Establishing a common good from such a mix is something society may find difficult.

 

A second challenge is how to settle the predictable differences of opinion over the relative worth of those values that align with the common good and the private good. When it comes to ‘best’ government and social policy, there must be caution not to allow the shrillest voices, whether among the majority or minority of society, to crowd out others’ opinions. The risk is in opportunity undeservedly accruing to one group in society.

 

Just as the common good requires that everyone has access to it, it requires that all of us help to sustain it. The common good commands effort, including a sharing of burdens and occasional sacrifice. When people benefit from the common good but choose not to help sustain it (perhaps like a manufacturer’s operators ignoring their civic obligation and polluting air and water, even as they expect access themselves to clean resources), they freeload.

 

Merit will always matter, of course, but as only one variable in the calculus of opportunity. And so, to mitigate inequality of opportunity, the common good may call for a ‘distributive’ element. Distributive justice emphasises the allocation of shared outcomes and benefits. To uplift the least-advantaged members of society, based on access, participation, proportionality, need, and impartiality.

 

Government policy and social conscience are both pivotal in ensuring that merit doesn’t recklessly eclipse or cancel equality of opportunity. Solutions for access to improved education, work, healthcare, legal justice, and myriad other necessities to establish a floor to quality of life are as much political as social. It is through such measures that we see how sincere society’s concerns really are — for the common good.

Monday, 8 February 2021

Will Democracy Survive?

Image via https://www.ancient-origins.net/history-famous-people/cleisthenes-father-democracy-invented-form-government-has-endured-over-021247

Cleisthenes, the Father of Democracy, Invented a Form of Government That Has Endured for 2,500 Years


Posted by Keith Tidman

How well is democracy faring? Will democracy emerge from despots’ modern-day assaults unscathed?

Some 2,500 years ago there was a bold experiment: Democracy was born in Athens. The name of this daring form of governance sprang from two Greek words (demos and kratos), meaning ‘rule by the people’. Democracy offered the public a voice. The political reformer Cleisthenes is the acknowledged ‘father of democracy’, setting up one of ancient Greece’s most-lasting contributions to the modern world.

 

In Athens, the brand was direct democracy, where citizens composed an assembly as the governing body, writing laws on which citizens had the right to vote. The assembly also decided matters of war and foreign policy. A council of representatives, chosen by lot from the ten Athenian tribes, was responsible for everyday governance. And the courts, in which citizens brought cases before jurors selected from the populace by lottery, were the third branch. Aristotle believed the courts ‘contributed most to the strength of democracy’.

 

As the ancient Greek historian, Herodotus, put it, in this democratic experiment ‘there is, first, that most splendid of virtues, equality before the law’. Yet, there was a major proviso to this ‘equality’: only ‘citizens’ were qualified to take part, and citizenship was limited to free males — less than half of Athens’s population — excluding women, immigrants, and slaves.

 

Nor did every Greek philosopher or historian in the ancient world share Herodotus’s enthusiasm for democracy’s ‘splendid virtues’. Some found various ways to express the idea that one unsavoury product of democracy was mob rule. Socrates, as Plato recalls in the Republic, referred unsparingly to the ‘foolish leaders of democracy . . . full of disorder, and dispensing a sort of equality to equals and unequals alike’.

 

Others, like the historian Thucydides, Aristotle, the playwright Aristophanes, the historian and philosopher Xenophon, and the anonymous writer dubbed the Old Oligarch, expanded on this thinking. They critiqued democracy for dragging with it the citizens’ perceived faults, including ignorance, lack of virtue, corruptibility, shortsightedness, tyranny of the collective, selfishness, and deceptive sway by the specious rhetoric of orators. No matter, Athens’s democracy endured 200 years, before ceding ground to aristocratic-styled rule: what Herodotus labeled ‘the one man, the best’.

 

Many of the deprecations that ancient Greece’s philosophers heaped upon democratic governance and the ‘masses’ are redolent of the problems that democracy, in its representative form, would face again.


Such internal contradictions recently resulted in the United States, the longest-standing democratic republic in the modern world, having its Congress assailed by a mob, in an abortive attempt to stymie the legislators’ certification of the results of the presidential election. However, order was restored that same day (and congressional certification of the democratic will completed). The inauguration of the new president took place without incident, on the date constitutionally laid out. Democracy working.

 

Yet, around the world, in increasing numbers of countries, people doubt democracy’s ability to advance citizens’ interests. Disillusion and cynicism have settled in. Autocrats and firebrands have gladly filled that vacuum of faith. They scoff at democracy. The rule of law has declined, as reported by the World Justice Project. Its index has documented sharp falloffs in the robustness of proscriptions on government abuse and extravagant power. Freedom House has similarly reported on the tenuousness of government accountability, human rights, and civil liberties. ‘Rulers for life’ dot the global landscape.

 

That democracy and freedoms have absorbed body blows around the world has been underscored by attacks from populist leaders who rebuff pluralism and hijack power to nurture their own ambitions and those of closely orbiting supporters. A triumphalism achieved at the public’s expense. In parts of Eastern Europe, Asia Pacific, sub-Saharan Africa, the Middle East and North Africa, South and Central America, and elsewhere. The result has been to weaken free speech and press, free religious expression, free assembly, independence of judiciaries, petition of the government, checks on corruption, and other rights, norms, and expectations in more and more countries.


Examples of national leaders turning back democracy in favour of authoritarian rule stretch worldwide. Central Europe's populist overreach, of concern to the European Union, has been displayed in abruptly curtailing freedoms, abolishing democratic checks and balances, self-servingly politicising systems of justice, and brazen leaders acquiring unlimited power indefinitely.


Some Latin American countries, too, have experienced waning democracy, accompanied by turns to populist governments and illiberal policies. Destabilised counterbalances to government authority, acute socioeconomic inequalities, attacks on human rights and civic engagement, emphasis on law and order, leanings toward surveillance states, and power-ravenous leaders have symbolised the backsliding.

 

Such cases notwithstanding, people do have agency to dissent and intervene in their destiny, which is, after all, the crux of democracy. Citizens are not confined to abetting or turning a blind eye toward strongmen’s grab for control of the levers of power or ultranationalistic penchants. In particular, there might be reforms, inspired by ancient Athens’s novel experiment, to bolster democracy’s appeal, shifting power from the acquisitive hands of elites and restoring citizens’ faith. 

 

One systemic course correction might be to return to the variant of direct democracy of Aristotle’s Athens, or at least a hybrid of it, where policymaking becomes a far more populous activity. Decisions and policy are molded by what the citizens decide and decree. A counterweight to wholly representative democracy: the latter emboldening politicians, encouraging the conceit of self-styled philosopher-kings who mistakenly presume their judgment surpasses that of citizens. 

 

It might behoove democracies to have fewer of these professional politicians, serving as ‘administrators’ clearing roadblocks to the will of the people, while crafting the legal wording of legislation embodying majority public pronouncements on policy. The nomenclature of such a body — assembly, council, congress, parliament, or other — matters little, of course, compared with function: party-less technocrats in direct support of the citizenry.

 

The greatest foe to democracies’ longevity, purity, and salience is often the heavy-handed overreach of elected executives, not insurrectionist armies from within the city gates. Reforms might therefore bear on severe restriction or even elimination of an executive-level figurehead, who otherwise might find the giddy allure of trying to accrete more power irresistible and unquenchable. Other reforms might include:

 

• A return to popular votes and referenda to agree on or reject national and local policies; 

• Normalising of constitutional amendments, to ensure congruence with major social change;

• Fewer terms served in office, to avoid ‘professionalising’ political positions; 

• Limits on campaign length, to motivate focused appeals to electors and voter attentiveness.


Still other reforms might be the public funding of campaigns, to constrain expenditures and, especially, avoid bought candidates. Curtailing of special-interest supplicants, who serve deep-pocketed elites. Ethical and financial reviews to safeguard against corruption, with express accountability. Mandatory voting, on specially designated paid holidays, to solicit all voices for inclusivity. Civic service, based on communal convictions and norms-based standards. And reinvention of public institutions, to amplify pertinence, efficacy, and efficiency.

 

Many more ways to refit democracy’s architecture exist, of course. The starting point, however, is that people must believe democracy works and are prepared to foster it. In the arc of history, democracy is most vulnerable if resignedly allowed to be.

 

Testaments to democracy should be ideas, not majestic buildings or monuments. Despots will not cheerfully yield ground; the swag is too great. Yet ideas, which flourish in liberal democracy, are greater.

 

Above all, an alert, restive citizenry is democracy’s best sentinel: determined to triumph rather than capitulate, despite democracy’s turbulence two and a half millennia after ancient Athens’s audacious experiment. 

Monday, 14 December 2020

Persuasion v. Manipulation in the Pandemic


Posted by Keith Tidman

Persuasion and manipulation to steer public behaviour are more than just special cases of each other. Manipulation, in particular, risks short-circuiting rational deliberation and free agency. So, where is the line drawn between these two ways of appealing to the public to act in a certain way, to ‘adopt the right behaviour’, especially during the current coronavirus pandemic? And where does the ‘common good’ fit into choices?

 

Consider two related aspects of the current pandemic: mask-wearing and being vaccinated. Based on research, such as that reported on in Nature (‘Face masks: what the data say’, Oct. 2020), mask-wearing is shown to diminish the spread of virus-loaded airborne particles to others, as well as to diminish one’s own exposure to others’ exhaled viruses. 


Many governments, scientists, medical professionals, and public-policy specialists argue that people therefore ought to wear masks, to help mitigate the contagion. A manifestly utilitarian policy position, but one rooted in controversy nonetheless. In the following, I explain why.

 

In some locales, mask-wearing is mandated and backed by sanctions; in other cases, officials seek willing compliance, in the spirit of communitarianism. Implicit in all this is the ethics-based notion of the ‘common good’. That we owe fellow citizens something, in a sense of community-mindedness. And of course, many philosophers have discussed this ‘common good’; indeed, the subject has proven a major thread through Western political and ethical philosophy, dating to ancient thinkers like Plato and Aristotle.


In The Republic, Plato records Socrates as saying that the greatest social good is the ‘cohesion and unity’ that stems from shared feelings of pleasure and pain that result when all members of a society are glad or sorry for the same successes and failures. And Aristotle argues in The Politics, for example, that the concept of community represented by the city-state of his time was ‘established for the sake of some good’, which overarches all other goods.


Two thousand years later, Jean-Jacques Rousseau asserted that citizens’ voluntary, collective commitment — that is, the ‘general will’ or common good of the community — was superior to each person’s ‘private will’. And prominent among recent thinkers to have explored the ‘common good’ is the political philosopher John Rawls, who defined the common good as ‘certain general conditions that are . . . equally to everyone’s advantage’ (A Theory of Justice, 1971).

 

In line with seeking the ‘common good’, many people conclude that being urged to wear a mask falls under the heading of civic-minded persuasion that’s commonsensical. Other people see an overly heavy hand in such measures, which they argue deprives individuals of the right — constitutional, civil, or otherwise — to freely make decisions and take action, or choose not to act. Free agency itself also being a common good, an intrinsic good. For some concerned citizens, compelled mask-wearing smacks of a dictate, falling under the heading of manipulation. Seen, by them, as the loss of agency and autonomous choice.

 

The readying of coronavirus vaccines, including early rollout, has led to its own controversies around choice. Health officials advising the public to roll up their sleeves for the vaccine has run into its own buzzsaw from some quarters. Pragmatic concerns persist: how fast the vaccines were developed and tested, their longer-term efficacy and safety, prioritisation of recipients, assessment of risk across diverse demographics and communities, cloudy public-messaging narratives, cracks in the supply chain, and the perceived politicising of regulatory oversight.


As a result of these concerns, nontrivial numbers of people remain leery, distrusting authority and harbouring qualms. As recent Pew, Gallup, and other polling on these matters unsurprisingly shows, some people might assiduously refuse ever to be vaccinated, or at least resist until greater clarity is shed on what they view as confusing noise or until early results roll in that might reassure. The trend lines will be watched.

 

All the while, officials point to vaccines as key to reaching a high enough level of population immunity to reduce the virus’s threat. Resulting in less contagion and fewer deaths, while allowing besieged economies to reopen with the business, social, and health benefits that entails. For all sorts of reasons — cultural, political, personal — some citizens see officials’ urgings regarding vaccinations as benign, well-intentioned persuasion, while others see it as guileful manipulation. One might consider where the Rawlsian common good fits in, and how the concept sways local, national, and international policy decision-making bearing on vaccine uptake.

 

People are surely entitled to persuade, even intensely. Perhaps on the basis of ethics or social norms or simple honesty: matters of integrity. But they may not be entitled to resort to deception or coercion, even to correct purportedly ‘wrongful’ decisions and behaviours. The worry being that whereas persuasion innocuously induces human behaviour broadly for the common good, coercive manipulation invalidates consent, corrupting the baseline morality of the very process itself. To that point, corrupt means taint ends.

 

Influence and persuasion do not themselves rise to the moral censure of coercive or deceptive manipulation. The word ‘manipulation’, which took on pejorative baggage in the eighteen hundreds, has special usages. Often unscrupulous in purpose, such as to gain unjust advantage. Meantime, persuasion may allow for abridged assumptions, facts, and intentions, to align with community expectations and with hoped-for behavioural outcomes to uphold the common good. A calculation that considers the veracity, sufficiency, and integrity of narratives designed to influence public choices, informed by the behavioural science behind effective public health communications. A subtler way, perhaps, to look at the two-dimensional axes of persuasion versus manipulation.

 

The seedbed of these issues is that people live in social relationships, not as fragmented, isolated, socially disinterested individuals. They live in the completeness of what it means to be citizens. They live within relationships that define the Rawlsian common good. A concept that helps us parse persuasion and manipulation in the framework of inducing societal behaviour: like the real-world cases of mask-wearing and vaccinations, as the global community counterattacks this lethal pandemic.

 

Monday, 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle uses the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation. 

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis or theory or proposition must be falsifiable, that is, capable in principle of being shown to be wrong. Or, in other words, it must be testable through evidence. For Popper, a claim that is untestable is not scientific. 

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist, Peter Higgs, mathematically predicted the existence of a specific subatomic particle. The last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose associated field is credited with giving elementary particles their mass. (Earning Higgs, along with François Englert, the Nobel Prize in Physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only about 1 chance in 3.5 million that random fluctuations alone, rather than the then-named Higgs boson, would have produced so strong a signal. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
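
As a quick check of the arithmetic behind that ‘five-sigma’ figure, the one-sided tail probability of a five-standard-deviation fluctuation under a normal model can be computed directly. This is only an illustrative sketch (assuming Python with SciPy available), not a rendering of CERN’s actual statistical analysis.

from scipy.stats import norm

# One-sided tail probability of a 5-sigma excess under a standard normal model:
# the chance that noise alone produces a signal at least this strong.
p_value = norm.sf(5.0)   # survival function, P(Z >= 5), about 2.87e-07
odds = 1.0 / p_value     # about 3.5 million

print(f"p-value: {p_value:.2e}")
print(f"roughly 1 chance in {odds:,.0f}")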

 

Of course, not in many instances in everyday life do we have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood of what we choose to act on as being right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain evasively slippery. Yet, we’re aware we know some things with greater certainty than other things. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday, 20 January 2020

Environmental Ethics and Climate Change

Posted by Keith Tidman

The signals of a degrading environment are many and on an existential scale, imperilling the world’s ecosystems. Rising surface temperature. Warming oceans. Sinking Greenland and Antarctic ice sheets. Glacial retreat. Decreased snow cover. Sea-level rise. Declining Arctic sea ice. Increased atmospheric water vapour. Permafrost thawing. Ocean acidification. And not least, supercharged weather events (more often, longer lasting, more intense).

Proxy (indirect) measurements — ice cores, tree rings, corals, ocean sediment — of carbon dioxide, a heat-trapping gas that plays an important role in creating the greenhouse effect on Earth, have spiked dramatically since the beginning of the Industrial Revolution. The measurements underscore that the recent increase far exceeds the natural ups and downs of the previous several hundred thousand years. Human activity — use of fossil fuels to generate energy and run industry, deforestation, cement production, land use changes, modes of travel, and much more — continues to be the accelerant.

The reports of the United Nations’ Intergovernmental Panel on Climate Change, contributed to by some 1,300 independent scientists and other researchers from more than 190 countries worldwide, conclude that concentrations of carbon dioxide, methane, and nitrous oxides ‘have increased to levels unprecedented in at least 800,000 years’. The level of certainty that human activity is the leading cause, referred to as anthropogenic cause, has been placed at more than 95 percent.

That probability figure has legs, in terms of scientific method. Early logical positivists like A.J. Ayer had asserted that for validity, a scientific proposition must be capable of proof — that is, ‘verification’. Later, however, Karl Popper, in his The Logic of Scientific Discovery, argued that in the case of verification, no number of observations can be conclusive. As Popper said, no matter how many instances of white swans we may have observed, this does not justify the conclusion that all swans are white. (Lo and behold, a black swan shows up.) Instead, Popper said, the scientific test must be whether in principle the proposition can be disproved — referred to as ‘falsification’. Perhaps, then, the appropriate test is not ability to prove that mankind has affected the Earth’s climate; rather, it’s incumbent upon challengers to disprove (falsify) such claims. Something that  hasn’t happened and likely never will.

As for the ethics of human intervention into the environment, utilitarianism is the usual measure. That is to say, the consequences of human activity upon the environment govern the ethical judgments one makes of behavioural outcomes to nature. However, we must be cautious not to translate consequences solely in terms of benefits or disadvantages to humankind’s welfare; our welfare appropriately matters, of course, but not to the exclusion of all else in our environment. A bias to which we have repeatedly succumbed.

The danger of such skewed calculations may be in sliding into what the philosopher Peter Singer coined ‘speciesism’. This is where, hierarchically, we place the worth of humans above all else in nature, as if the latter is solely at our beck and call. This anthropocentric favouring of ourselves is, I suggest, arbitrary and too narrow. The bias is also arguably misguided, especially if it disregards other species — depriving them of autonomy and inherent rights — irrespective of the sophistication of their consciousness. To this point, the 18th/19th-century utilitarian Jeremy Bentham asserted, ‘Can [animals] feel? If they can, then they deserve moral consideration’.

Assuredly, human beings are endowed with cognition that’s in many ways vastly more sophisticated than that of other species. Yet, without lapsing into speciesism, there seem to be distinct limits to the comparison, to avoid committing what’s referred to as a ‘category mistake’ — in this instance, assigning qualities to species (from orangutans and porpoises to snails and amoebas) that belong only to humans. In other words, an overwrought egalitarianism. Importantly, however, that’s not the be-all of the issue. Our planet is teeming not just with life, but with other features — from mountains to oceans to rainforest — that are arguably more than mere accouterments for simply enriching our existence. Such features have ‘intrinsic’ or inherent value — that is, they have independent value, apart from the utilitarianism of satisfying our needs and wants.

For perspective, perhaps it would be better to regard humans as nodes in what we consider a complex ‘bionet’. We are integral to nature; nature is integral to us; in their entirety, the two are indissoluble. Hence, while skirting implications of panpsychism — where everything material is thought to have at least an element of consciousness — there should be prima facie respect for all creation: from animate to inanimate. These elements have more than just the ‘instrumental’ value of satisfying the purposes of humans; all of nature is itself intrinsically the ends, not merely the means. Considerations of aesthetics, culture, and science, though important and necessary, aren’t sufficient.

As such, there is an intrinsic moral imperative not only to preserve Earth, but for it and us jointly to flourish — per Aristotle’s notion of ‘virtue’ — with respect and care for the natural world. It’s a holistic view, one that concedes mutually serving roles to both the utilitarian and intrinsic sides of the moral equation. This position accordingly pushes back against the hubristic idea that human-centrism makes sense, as if the rest of nature collectively amounted only to a backstage for our purposes: a handy venue where we act out our roles, whose circumstances we try to manage (sometimes ham-fistedly) for self-satisfying purposes, where we tinker ostensibly to improve, and whose worth (virtue) we believe we’re in a position to judge rationally and bias-free.

It’s worth reflecting on a thought experiment, dubbed ‘the last man’, that the Australian philosopher Richard Routley introduced in the 1970s. He envisioned a single person surviving ‘the collapse of the world system’, choosing to go about eliminating ‘every living thing, animal and plant’, knowing that there’s no other person alive to be affected. Routley concluded that ‘one does not have to be committed to esoteric values to regard Mr. Last Man as behaving badly’. Whether Last Man was, or wasn’t, behaving unethically goes to the heart of intrinsic versus utilitarian values regarding nature — and presumptions about human supremacy in that larger calculus.

Groups like the UN Intergovernmental Panel on Climate Change have laid down markers as to tipping points beyond which extreme weather events might lead to disastrously runaway effects on the environment and humanity. Instincts related to the ‘tragedy of the commons’ — where people rapaciously consume natural resources and pollute, disregarding the good of humanity at large — have not yet been surmounted. The assumption that some other person, community, or country will shoulder accountability for turning back the wave of environmental destruction and the upward-spiking curve of climate extremes has hampered progress. Nature has thrown down the gauntlet. Will humanity pick it up in time?

Monday, 9 December 2019

Is Torture Morally Defensible?


Posted by Keith Tidman

Far from being treated as unconscionable, torture has been all but universalised: one metric, according to Amnesty International, is that some 140 countries resort to it — whether through domestic police, intelligence agencies, military forces, or other institutions. Incongruously, many of these countries are signatories to the United Nations Convention Against Torture, which forbids the practice, whether carried out domestically or outsourced (by so-called renditions) to countries where torture is legal.

Philosophers too are ambivalent, conjuring up difficult scenarios in which torture seems somehow the only reasonable response:
  • An anarchist knows the whereabouts of a powerful bomb set to kill scores of civilians.
  • A kidnapper has hidden a four-year-old in a makeshift underground box, holding out for a ransom.
  • An authoritarian government, feeling threatened, has identified the ringleader of swelling political street opposition, and wants to know his accomplices’ names.
  • Soldiers have a high-ranking captive, who knows details of the enemy’s plans to launch a counteroffensive.
  • A kingpin drug supplier, with his metastasised network of street traffickers, routinely distributes highly contaminated drugs, resulting in a rash of deaths...

Do any of these hypothetical and real-world events, where information needs to be extracted for urgent purposes, justify resorting to torture? Are there other circumstances in which society ought morally to consent to torture? If so, for what purposes? Or is torture never morally justified?

One common opinion is that if the outcome of torture is information that saves innocent lives, the practice is morally justified. I would argue that there are at least three aspects to this claim:
  • the multiple lives that will be saved (traded off against the fewer), sometimes referred to as ‘instrumental harm’; 
  • the collective innocence, in contrast to any aspect of culpability, of those people saved from harm; and
  • the overall benefit to society, as best can credibly be predicted with information at hand.
The 18th-century philosopher Jeremy Bentham’s famous formulation — that it is the greatest good for the greatest number which is the measure of right and wrong — seems to apply here. Historically, many people have found, rightly or not, that this principle of the ‘greatest good for the greatest number’ rises to the level of common sense, as well as proving simpler to apply in establishing one’s own life doctrine than competing standards — such as discounting outcomes for chosen behaviours.

Other thinkers, such as Joseph Priestley (18th century) and John Stuart Mill (19th century), expressed similar utilitarian arguments, though using the word ‘happiness’ rather than ‘benefit’. (Both terms might, however, strike one as equally cryptic.) Here, the standard of morality is not a rulebook rooted in solemnised creed, but a standard based in everyday principles of usefulness to the many. Torture, too, may be looked at in that light, speaking to factors like human rights and dignity — or whether individuals, by virtue of the perceived threat, forfeit those rights.

Utilitarianism has been criticised, however, for its obtuse ‘the ends justify the means’ mentality — an approach complicated by the difficulty of predicting consequences. Similarly, some ‘bills of rights’ have attempted to provide pushback against the simple calculus of benefiting the greatest number. Instead, they advance legal positions aimed at protecting the welfare of the few (the minority) against the possible tyranny of the many (the majority). ‘Natural rights’ — the right to life and liberty — inform these protective constitutional provisions.

If torture is approved of in some situations — ‘extreme cases’ or ‘emergencies’, as society might tell itself — the bar might then be lowered. As a possible fast track in remedying a threat — maybe an extra-judicial fast track — torture is tempting, especially when used ‘for defence’. The uneasiness, however, lies in torture turning into an obligation — shrouded in an alleged moral imperative, perhaps exploiting a permissive legal system. This dynamic may prove alluring if society finds it expeditious to shoehorn more cases into the hard-to-parse category of ‘existential risk’.

What remains key is whether society can be trusted to make such grim moral choices — such as those requiring the resort to torture. This blurriness has propelled some toward an ‘absolutist’ stance, censuring torture in all circumstances. The French poet Charles Baudelaire felt that ‘Torture, as the art of discovering truth, is barbaric nonsense’. Paradoxically, however, absolutism in the form of a total ban on torture might itself be regarded as immoral, if the result is the death of a kidnapped child or of scores of civilians. That said, there’s no escaping the reality that torture inflicts pain (physical and/or mental), shreds human dignity, and curbs personal sovereignty. To some, many even, it thus must be viewed as reprehensible and irredeemable — decoupled from outcomes.

This is especially apparent if torture is administered to inflict pain, terrorise, humiliate, or dehumanise for purposes of deterrence or punishment. But even if torture is used to extract information — information perhaps vital, as per the scenarios listed at the beginning — there is a problem: the information acquired is suspect, tales invented just to stop the pain. Long ago, Aristotle stressed this point, saying plainly: ‘Evidence from torture may be considered utterly untrustworthy’. Even absolutists, however, cannot skip being involved in defining what rises to the threshold of clearer-cut torture and what perhaps falls just below — grist for considerable contentious debate.

The question remains: can torture ever be justified? And, linked to this, which moral principles might society want to normalise? Is it true, as the French philosopher Jean-Paul Sartre noted, that ‘Torture is senseless violence, born in fear’? As societies grapple with these questions, they reduce the alternatives to two: blanket condemnation of torture (and acceptance of possible dire, even existential consequences of inaction); or instead acceptance of the utility of torture in certain situations, coupled with controversial claims about the correct definitions of the practice.


I would argue that one might morally come down on the side of the defensible utility of the practice — albeit in agreed-upon circumstances (like some of those listed above), where human rights are robustly aired side by side with the exigent dangers, the potential aftermaths of inertia, and the hard choices societies face.