
Monday, 23 May 2022

Are There Limits to Human Knowledge?


By Keith Tidman

‘Any research that cannot be reduced to actual visual observation is excluded where the stars are concerned…. It is inconceivable that we should ever be able to study, by any means whatsoever, their chemical or mineralogical structure’.
A premature declaration of the end of knowledge, made by the French philosopher, Auguste Comte, in 1835.
People often take delight in saying dolphins are smart. Yet, does even the smartest dolphin in the ocean understand quantum theory? No. Will it ever understand the theory, no matter how hard it tries? Of course not. We have no difficulty accepting that dolphins have cognitive limitations, fixed by their brains’ biology. We do not anticipate dolphins even asking the right questions, let alone answering them.

Some people then conclude that for the same reason — built-in biological boundaries of our species’ brains — humans likewise have hard limits to knowledge. And that, therefore, although we acquired an understanding of quantum theory, which has eluded dolphins, we may not arrive at solutions to other riddles. Like the unification of quantum mechanics and the theory of relativity, each effective in its own domain. Or a definitive understanding of how and from where within the brain consciousness arises, and what a complete description of consciousness might look like.

The thinking isn’t that such unification of branches of physics is impossible or that consciousness doesn’t exist, but that supposedly we’ll never be able to fully explain either one, for want of natural cognitive capacity. It’s argued that, because of our allegedly ill-equipped brains, some things will forever remain a mystery to us. Just as dolphins will never understand calculus or infinity or the dolphin genome, so human brains, the argument goes, are closed off from whole categories of intractable concepts.

Or at least, so it has been said.

Some who hold this view have adopted the self-describing moniker ‘mysterians’. They assert that, as members of the animal kingdom, Homo sapiens are subject to the same kinds of insuperable cognitive walls, and that it is hubris, self-deception, and pretension to proclaim otherwise. Theirs is a needless resignation.

After all, the fact that early hominids did not yet understand the natural order of the universe does not mean that they were ill-equipped to eventually acquire such understanding, or that they were suffering from so-called ‘cognitive closure’. Early humans were not fixed solely on survival, subsistence, and reproduction, their existence defined by a daily grind over the millennia in a struggle to hold onto the status quo.

Instead, we were endowed from the start with a remarkable evolutionary path that got us to where we are today, and to where we will be in the future. With dexterously intelligent minds that enable us to wonder, discover, model, and refine our understanding of the world around us. To ponder our species’ position within the cosmic order. To contemplate our meaning, purpose, and destiny. And to continue this evolutionary path for however long our biological selves ensure our survival as opposed to extinction at our own hand or by external factors.

How is it, then, that we even come to know things? There are sundry methods, including (but not limited to) these: Logical, which entails the laws (rules) of formal logic, as exemplified by the iconic syllogism in which a conclusion follows from premises (all men are mortal; Socrates is a man; therefore Socrates is mortal). Semantic, which entails the denotative and connotative definitions and context-based meanings of words. Systemic, which entails the use of symbols, words, and operations and functions related to the universally agreed-upon rules of mathematics. And empirical, which entails evidence, information, and observation that come to us through our senses, aided by such tools as those described below, to confirm, fine-tune, or discard hypotheses.

Sometimes the resulting understanding is truly paradigm-shifting; other times it’s progressive, incremental, and cumulative — contributed to by multiple people assembling elements from previous theories, not infrequently stretching over generations. Either way, belief follows — that is, until the cycle of reflection and reinvention begins again. Even as one theory is substituted for another, we remain buoyed by belief in the commonsensical fundamentals of attempting to understand the natural order of things. Theories and methodologies might both change; nonetheless, we stay faithful to the task, embracing the search for knowledge. Knowledge acquisition is thus fluid, persistently fed by new and better ideas that inform our models of reality.

We are aided in this intellectual quest by five baskets of ‘implements’: Physical devices like quantum computers, space-based telescopes, DNA sequencers, and particle accelerators. Tools for smart simulation, like artificial intelligence, augmented reality, big data, and machine learning. Symbolic representations, like natural languages (spoken and written), imagery, and mathematical modeling. The multiplicative collaboration of human minds, functioning like a hive of powerful biological parallel processors. And, lastly, the nexus among these implements.

This nexus among implements continually expands, at a quickening pace; we are, after all, consummate crafters of tools and collaborators. We might fairly presume that the nexus will indeed lead to an understanding of the ‘brass ring’ of knowledge, human consciousness. The cause-and-effect dynamic is cyclic: theoretical knowledge driving empirical knowledge driving theoretical knowledge — and so on indefinitely, part of the conjectural froth in which we ask and answer the tough questions. Such explanations of reality must take account, in balance, of both the natural world and metaphysical world, in their respective multiplicity of forms.

My conclusion is that, uniquely, the human species has boundless cognitive access rather than bounded cognitive closure. Such that even the long-sought ‘theory of everything’ will actually be just another mile marker on our intellectual journey to the next theory of everything, and the next one — all transient placeholders, extending ad infinitum.

There will be no end to curiosity, questions, and reflection; there will be no end to the paradigm-shifting effects of imagination, creativity, rationalism, and what-ifs; and there will be no end to answers, as human knowledge incessantly accrues.

Monday, 15 November 2021

The Limits of the ‘Unknowable’

In this image, the indeterminacy principle concerns the initial state of a particle. The colour (white, blue, green) indicates the phase, that is, the position and direction of motion, of the particle. The position is initially determined with high precision, but the momentum is not.

By Keith Tidman

 

We’re used to talking about the known and unknown. But rarely do we talk about the unknowable, which is a very different thing. The unknowable can make us uncomfortable; yet the shadow of unknowability stretches across all disciplines, from the natural sciences to history and philosophy, as people encounter the limits of their individual fields in the course of research. For this reason, unknowability invites a closer look.

 

Over the years there has been a noteworthy shift. What I mean is this: human intellectual endeavour has been steadily turning academic disciplines from the islands they had increasingly become over the centuries back into continents of shared interests, where specialised knowledge flows across disciplinary boundaries in recognition of the interconnectedness of ideas and of our understanding of reality.

 

The result is fewer margins and gaps separating the assorted sciences and humanities. Interdependence has been regaining respectability. What we know benefits from these commonalities and this collaboration, allowing knowledge to profit: to expand and evolve across disciplines’ dimensions. And yet, despite this growing matrix of knowledge, unknowables still persist.

 

Consider some examples.

 

Forecasts of future outcomes characteristically fall into the unknowable, with outcomes often different from predictions. Such forecasts range widely, from the weather to political contests, economic conditions, vagaries of language, technology inventions, stock prices, occurrence of accidents, human behaviour, moment of death, demographics, wars and revolutions, roulette wheels, human development, and artificial intelligence, among many others. The longer the reach of a forecast, often the more unknowable the outcome. The ‘now’ and the short term come with improved certainty, but still not absolute. Reasons for many predictions’ dubiousness may include the following.

 

First, the initial conditions may be too many and too indeterminate to allow a coherent, comprehensive picture of starting points.


Second, the untold, opaquely diverging and converging paths along which initial conditions travel may overwhelm: there are simply too many to trace.


Third, the ways in which forces jostle those pathways, both subtly and grossly, may be impossible to model and account for with precision and confidence.


Fourth, chaos and complexity — along with volatility, temperamentality, and imperceptibly tiny fluctuations — may make deep understanding impossible to attain.

 

Ethics is another domain where unknowability persists. The subjectivity of societies’ norms, values, standards, and belief systems — derived from a society’s history, culture, language, traditions, lore, and religions, where change provides a backdraft to ‘moral truths’ — leaves objective ethics outside the realm of what is knowable. Contingencies and indefiniteness can interfere with moral decision-making. Accordingly, no matter how rational and informed individuals might be, there will remain unsettled moral disagreements.


On the level of being, why there is something rather than nothing is similarly unknowable. In principle, ‘nothingness’ is just as possible as ‘something’; yet, for some unknown reason, and despite the unlikelihood of spontaneous manifestation, ‘something’ demonstrably prevailed over its absence. Conspicuously, ‘nothingness’ would preclude the very initial conditions required for ‘something’ to emerge from it. However, we and the universe of course exist; in its fine-tuned balance, the model of being is not just thinkable, it discernibly works. Yet the reason why ‘something’ won out over ‘nothingness’ is not just unknown, it’s unknowable.

 

Anthropology arguably offers a narrower instance of unknowability, concerning our understanding of early hominids. The inevitable skimpiness of evidence and of fine-grained confirmatory records, compounded by uncertain interpretations stemming from the paucity of physical remains and from their unvalidated connections and meaning in prehistorical context, suggests that the big picture of our more-distant predecessors will remain incomplete. A case of epistemic limits.


Another important instance of unknowability comes out of physics. The Heisenberg uncertainty principle, at the foundation of quantum mechanics, famously tells us that the more precisely we know about a subatomic particle’s position, the less we know about its momentum, and vice versa. There is a fundamental limit, therefore, to what one can know about a quantum system.
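In symbols, the principle puts a fixed floor under the product of the two uncertainties. A standard statement of the bound (the Kennard form):

\[ \Delta x \, \Delta p \geq \frac{\hbar}{2} \]

where \( \Delta x \) is the uncertainty in position, \( \Delta p \) the uncertainty in momentum, and \( \hbar \) the reduced Planck constant. No improvement in instruments can push the product below this floor; the limit lies in nature, not in our tools.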

 

To be clear, though, seemingly intractable intellectual problems may not ultimately be insoluble, that is, they need not join the ranks of the unknowable. There’s an important distinction. Let me briefly suggest three examples.

 

The first is ‘dark energy and dark matter’, which together compose 95% of the universe. Remarkably, the tiny 5% left over constitutes the entire visible contents of the universe! Science is attempting to learn what dark energy and dark matter are, despite their prevalence compared with observable matter. The direct effects of dark energy and dark matter, such as on the universe’s known accelerating expansion, offer a glimpse. Someday, investigators will understand them; they are not unknowable.

 

Second is Fermat’s ‘last theorem’, the one that he teed up in the seventeenth century as a note in the margin of his copy of an ancient Greek text. He explained, to the dismay of generations of mathematicians, that the page’s margin was ‘too small to contain’ the proof. Fermat did suggest, however, that the proof was short and elegant. More than three centuries passed before the twentieth-century British mathematician Andrew Wiles finally proved the theorem, in the 1990s. The proof, which turned out to be anything but short, was not unknowable as some had speculated, just terribly difficult.
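The theorem itself, for reference, is disarmingly simple to state: the equation

\[ x^n + y^n = z^n \]

has no solutions in positive integers \( x \), \( y \), \( z \) for any integer exponent \( n \) greater than 2. Its statement fits in a margin; its proof did not.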

 

A last instance that I’ll offer involves our understanding of consciousness. For millennia, we’ve been spellbound by the attributes that define our experience as persons, holding that ‘consciousness’ is the vital glue of mind and identity. Yet a decisive explanation of consciousness, despite earnest attempts, has continued to elude us through the ages. Inventive hypotheses have abounded, though they have remained unsettled. Maybe that’s not surprising, in light of the human brain’s physiological and functional complexity.

 

But as the investigative tools that neuroscientists and philosophers of mind wield in the course of collaboration become more powerful in dissecting the layers of the brain and mind, consciousness will probably yield its secrets: such as why and how, through the physical processes of the brain, we have very personalised experiences. It’s likely that one day we will get a sounder handle on what makes us, us. Difficult, yes; unknowable, no.

 

Even as we might take some satisfaction in what we know and anticipate knowing, we are at the same time humbled by two epistemic factors. First, much of what we presume to know will turn out to be wrong or at most partially right, subject to revised models of reality. The second humbling factor is a paradox: that the full extent of what is unknowable is itself unknowable.

 

Monday, 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning is someone concluding that, because they have only ever observed white swans, all swans must be white — shifting from the specific to the general. Indeed, Aristotle used the white swan as an example of a logically necessary relationship. Yet someone spotting just one black swan disproves the generalisation.

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that, in science, a hypothesis or theory or proposition must be falsifiable, that is, capable of being shown wrong; in other words, testable through evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist Peter Higgs mathematically predicted the existence of a specific subatomic particle: the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (Earning Higgs, and the Belgian physicist François Englert, the Nobel Prize in Physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only 1 chance in 3.5 million that what was sighted was a fluke, or something other than the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
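As a side note, the ‘five-sigma’ arithmetic is easy to reproduce: it is simply the tail probability of a standard normal distribution beyond five standard deviations. A minimal sketch in Python, assuming the SciPy library is available:

    # Tail probability of a standard normal beyond five standard deviations
    from scipy.stats import norm

    p = norm.sf(5.0)       # survival function: P(Z > 5)
    print(p)               # ~2.87e-07
    print(round(1 / p))    # ~3.5 million: the '1 chance in 3.5 million' above

Strictly, this is the probability of seeing so extreme a signal if there were no Higgs boson; as Cranmer notes, the conditional cannot be removed.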

 

Of course, in everyday life we rarely have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood that what we choose to act on is right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain evasively slippery. Yet, we’re aware we know some things with greater certainty than other things. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday, 13 January 2020

A Modest Proposal for Science

Posted by Andrew Porter

For several centuries, modern science has banked on, and prided itself on, ‘the scientific method’. This scheme of hypothesis and experiment has been useful and effective in countering superstition. Discoveries of all sorts have been made and verified, from the circumference of orbits to the range of elements to the function of organelles and proteins in a cell. Confirmation from experiment seems like a clear way to separate fact from fiction. But it is crucial to note that the scientific method also fails.

Recent conundrums of physicality, consciousness, entanglement, dark matter, and the nature of natural laws have spurred many to rethink assumptions and even findings. Our search for what is real and natural needs a new method, one that is in keeping with the natural facts themselves – natural facts not as reduced or squeezed or contorted by the scientific method, but as their own holistic selves. The method of approach and apprehending that seems to offer the most promising advance is that which consists of a whole person in a whole natural environment.

Why do I emphasise wholeness? Because facts shrink away at the first sign of partiality or limited agenda. Truth, conversely, tends to open itself to an apt seeker, to a method that goes whole at a host of levels. Nature tends to recognise her own, it seems.

Kristin Coyne, in an article called ‘Science on the Edge’ in the February 17, 2017 issue of the magazine, Fields: Science, Discovery & Magnetism, writes:
‘At the dividing line between two things, there’s often no hard line at all. Rather, there’s a system, phenomenon or region rich in diversity or novel behavior – something entirely different from the two things that created it.’
She offers various examples of the same: fringe physics, borderline biology, and crossover chemistry. Such ‘science on the edge’ is one aspect of the changes typical science is undergoing. Other researchers in areas such as telepathy and theoretical physics are pushing the bounds of science while arguing that it certainly is science, just a deeper form.

This suggested new method, which would largely overturn contemporary science, would measure, as it were, by nature’s own measurements: it is anti-reductionist; it is synthetic more than analytic. As we are learning, it may not be too much to say that one has to be the facts to know the facts, to be a synergy of ‘observer’ and ‘observed’ at all levels. The knowledge gleaned from wholeness is like a star’s heat and light understood, not just the hydrogen and helium involved.

This idea of the ‘scientist’ in tune with nature in a thorough way would be the human equivalent of a goshawk whose instincts are a portion of Earth-wide wildness. No disjunct with results that turn self-referential and untrue. If one is studying an ecosystem, for instance, he or she, or his or her team, must, by the requirements of nature, be of the same stuff and of the same conceptions as the individualities, relations, and wholes of that ecosystem. So much more of the actuality reveals itself to the sympathetic, of-a-piece ‘observer’. If we ignore or shunt aside the question of what is a whole person, how can we ever expect to discern the deeper reality of nature?

It seems to hold true that the more receptive the subject is to the essence and character of the object, the better it is understood. Who knows one’s dog better: a sympathetic owner or an objective voice? If the dog is sick, perhaps the latter, but all the time the dog is exuberantly healthy, the former is the one who comprehends.

The goal, of course, is to elucidate facts, to unite in some meaningful way with reality. Delusion is all too easy, and partial truths sustain centuries of institutions, positions, governments, and cultures. Modern science started out as reactionary in the sense of being hostile to things like superstition or intuition or revelation. It substituted experiment and observation, keeping the studied apart from those who studied. This is fine for shallow comprehension, but it only gets you so far. It obscures another possibility, that is somewhat similar to the communion and connection between the quantum realm and the macro world.

I suggest that deep facts only reveal themselves to a person metamorphosed, as it were, into ways of being in keeping with the parts or portions of nature studied. All nature may be of this type, open to human comprehension only as that comprehension is within a whole person. What a complete person is and what a fullness of nature is might not only be a philosopher’s job, but the focus of science itself, re-trained to benefit from its transformed method.

The hint in current puzzlements is that science in the 21st century and beyond may benefit significantly by re-crafting itself. A transformed method might yield deeper, or actual, knowledge. That is, knowing, as opposed to seeming to know, may require a new approach.

Jacob Needleman and David Applebaum wrote, ‘Unless scientific progress is balanced by another kind of enquiry, it will inevitably become an instrument of self-destruction.’

The ‘objective’ revolution need not be the last. In today’s world, we have the ball-and-chain of modern scientific ways, and even scientism, weighting our thinking; it would be good to free ourselves from this. But we are confused about what in objectivity is liberating or limiting, and what in subjectivity is useful or obfuscatory.

Monday, 8 October 2018

[Abandoned Draft] Philosophy and Infinity

Number 1, by Jackson Pollock, 1949
Posted by Thomas Scarborough
The new view of our world is one of infinite relations.  In the words of philosopher Mel Thompson, we see ‘a seamless web of causality that goes forwards and backward in time and outwards in space’.
This was not the case in previous ages.  While philosophers of the past indeed had some sense of ‘everything’, they typically interested themselves in more immediate things: for example, the nature of matter, the requirements of the ‘good life’, or the principles of politics.

It stands to reason then that, if we live in a world of infinite relations, the concept of infinity becomes central to (post) modern philosophy.  With this in mind, this essay briefly explores the nature of this infinity, and what it may mean to us.

Typically, we see infinity as something which is pinned somewhere.  We know the expression, ‘From here to infinity,’ which, importantly, implies not only that infinity exists, but that there is a ‘here’ to it.  It is proverbial.  That is, while infinity is open on one side, it is bounded on the other.

‘Countable’ infinities are well known to mathematicians, if not the rest of us.  These are related to the countable natural numbers: say, 1, 2, 3, 4 ... and so on.  Perhaps surprisingly, the rational numbers (1/1, 1/2, 1/3, 1/4 ... and so on) are countable too.  ‘Continuum’ infinities are larger: these are related to the real numbers, which include the irrationals, and which cannot be put into any such list.  All these infinities are bounded on one side, but open on the other.
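In Cantor’s notation, the countable infinities all share the cardinality \( \aleph_0 \) of the natural numbers, while the continuum of the real numbers has the strictly larger cardinality

\[ 2^{\aleph_0} > \aleph_0 \]

which is to say that no list indexed by the natural numbers can ever exhaust the reals.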

An infinity of relations, however, differs from the infinity of mathematics and science.  It is unbounded on both sides—or rather, on all sides, everywhere.  This is the reality in which we live—at least, when we try to imagine it from a detached point of view, and not the point of view of my own fixed self or any other fixed origin.

Our view of infinity powerfully shapes our thinking.  If, in anything, one assumes a single bounded side to infinity, this immediately legitimises that single bound or fixed point: everything must be referenced to that point, and without it the entire system collapses.  Thus infinity’s bound is anchored—yet anchored, as it were, in a bottomless and endless sea.

I have a fixed point in a formula: 0 (nought).  I have a proven fact: ‘Hydrogen and oxygen make water.’  I have a principle: ‘Do no harm.’  Or I have a philosophy: existentialism, for example, or rationalism or idealism.  But because all of these are deposited in the midst of an infinity without bounds, they exist in isolation, and can ground nothing more than themselves.

This is most palpable in the area of ethics.  If we approach ethics from the point of view of ‘is’—which is fact—we cannot reach the point of view of ‘ought’—which is value.  The problem lies not only with simple, descriptive facts.  It lies with purported facts of any kind: moral facts, social facts, political facts, religious facts.

Not one of them, without exception, can lead us to value.  They cannot, because they exist at an arbitrary and very specific place in an unbounded infinity, without being referenced to any fixed point except their own—while the question of ethics is what there is beyond limited scenarios.  This is what makes it such a big question in philosophy.

There is only one viable option open to us which may ‘fix’ something entirely.  That is to reference our thinking to infinity itself.  One might wonder how such a thought could be of any use to us.  An analogy might help.

If I reference the position of a buoy to the seaweed I see underneath it, or the birds which circle overhead, I have an unstable reference.  If I reference it to the sandy shore in the distance, this would seem more stable, though not completely so.  Or I may reference it to the stars—but even the stars will move.  Ideally, it would be referenced to everything.

It all suggests, in a sense, a philosophical theory of relativity, in which a static universe of thought is discarded.  Space does not allow the further development of this thought in this post, but possible applications to ethics appear in my Pi article, How Shall We Re-Establish Ethics in Our Time?

Monday, 26 June 2017

The Death Penalty: An Argument for Global Abolition


Posted by Keith Tidman

In 1957, Albert Camus wrote an essay called Reflections on the Guillotine. As well as arguing against it on grounds of principle, he also speaks of the ineffectiveness of the punishment:
‘According to one magistrate, the overwhelming majority of the murderers he had tried did not know, when they shaved themselves that morning, that they were going to kill someone that night. In short, capital punishment cannot intimidate the man who throws himself upon crime as one throws oneself into misery.’
For myself, too, the death penalty is an archaic practice, a vestige with no place in a 21st-century world. In the arena of constitutional law, the death penalty amounts to ‘cruel and unusual’ (inhumane) punishment. In the arena of ethics, the death penalty is an immoral assault on human rights, dignity, and life’s preeminence.

Through the millennia, social norms habitually tethered criminal punishment to ‘retribution’ — which minus the rhetorical dressing distils to ‘revenge’. ‘Due process of law’ and ‘equal protection under the law’ were random, rare, and capricious. In exercising retribution, societies shunted aside the rule of authentic proportionality, with execution the go-to punishment for a far-ranging set of offences, both big and small — murder only one among them. In some societies, matters like corruption, treason, terrorism, antigovernment agitation, and even select ‘antisocial’ behaviours likewise qualified for execution — and other extreme recourses — shades of which linger today.

Resort through the ages to state-sanctioned, ceremonial killing (and other severe corporal punishment) reflected the prevailing norms of societies, with little stock placed on the deep-rooted, inviolable value of human life. The aim was variously to control, coerce, impose suffering, and ultimately dehumanise — very much as enemies in war find it easier to kill if they create ‘subhuman’ caricatures of the enemy. Despite the death penalty’s barbarity, some present-day societies retain this remnant from humanity’s darker past: according to Amnesty International, twenty-three countries — scattered among the Asia-Pacific, Africa, the United States in the Americas, and Belarus in Europe — carried out executions in 2016, while fifty-five countries sentenced people to death that year.

But condemnation of the death penalty does not, of course, preclude imposing harsh punishment for criminal activity. Even the most progressive, liberally democratic countries, abiding by enlightened notions of justice, appropriately accommodate strict punishment — though well short of society’s premeditatedly killing its citizens through application of the death penalty. The aims of severe punishment may be several and, for sure, reasonable: to preserve social orderliness, disincentivise criminal behaviour, mollify victims, reinforce legal canon, express moral indignation, cement a vision of fairness, and reprimand those found culpable. Largely fair objectives, if exercised dispassionately through due process of law. These principles are fundamental and immutable to civil, working — and rules-based — societies. Nowhere, however, does the death penalty fit in there; and nowhere is it obvious that death is a proportionate (and just) response to murder.
________________________________________

‘One ought not return injustice
for injustice’ — Socrates
________________________________________

Let’s take a moment, then, to look at punishment. Sentencing may be couched as ‘consequentialist’, in which case punishment’s purpose is utilitarian and forward looking. That is, punishment for wrongdoing anticipates future outcomes for society, such as eliminating (or more realistically, curtailing) criminal behaviour. The general interest and welfare of society — decidedly abstract notions, subject to various definitions — serve as the desired and sufficient end state.

Alternatively, punishment may be couched as ‘deontological’. In that event, the deed of punishment is itself considered a moral good, apart from consequences. Deontology entails rules-based ethics — living under the rule of law, as a norm within either liberal or conservative societies and systems of governance — while still attaining retributive objectives. Or, commonly, punishment may be understood as an alliance of both consequentialism and deontology. Regardless of choice — whether emphasis is on consequentialism or deontology or a hybrid of the two — the risk of punishing the innocent, especially given the irreversibility of the death penalty in the case of discovered mistakes, looms large. As such, the choice among consequentialism, deontology, or a hybrid matters little to any attempt to support a case for capital punishment.

Furthermore, the meting out of justice works only if knowledge is reliable and certain. That is, knowledge of individuals’ culpability, the competence of defence and prosecutorial lawyers, unbiased evidence (both exculpatory and inculpatory), the randomness of convictions across demographics, the sense of just deserts, the fairness of particular punishments (proportionality), and the prospective benefits to society of specific punitive measures. Broadly speaking: what do we know, how do we know it, and what is the weight of what counts — epistemological issues that are bound up with the ethical issues. In many instances, racial, ethnic, gender, educational, or socioeconomic prejudices (toward defendants and victims alike) skew considerations of guilt and, in particular, the discretionary imposition of the death penalty. In some countries, politics and ideology — even what’s perceived to threaten a regime’s legitimacy — may damn the accused. To those sociological extents, ‘equal protection of the law’ becomes largely moot.

Yet at the core, neither consequentialism — purported gains to society from punishment’s outcomes — nor deontology — purported intrinsic, self-evident morality of particular sentences — rises to the level of sufficiently undergirding the ethical case for resorting to the death penalty. Nor does retribution (revenge) or proportionality (‘eye for an eye, tooth for a tooth’). After all, whether death is the proportionate response to murder remains highly suspect. Indeed, no qualitative or quantitative logic, no matter how elegantly crafted, successfully supports society’s recourse to premeditatedly and ceremoniously executing citizens as part of its penal code.
_____________________________________________

‘Capital punishment is the most
premeditated of murders’ — Albert Camus
_____________________________________________

There is no public-safety angle, furthermore, that could not be served equally well by lifetime incarceration — without, if so adjudged, consideration of rehabilitation and redemption, and thus without the possibility of parole. Indeed, evidence does not point to the death penalty improving public safety. For example, the death penalty has no deterrent value — that is, perpetrators don’t first contemplate the possibility of execution in calculating whether or not to commit murder or other violent crime. The starting position therefore ought to be that human life is sacrosanct — life’s natural origins, its natural course, and its natural end. Society ought not deviate from that principle in normalising particular punishments for criminal — even heinously criminal — behaviour. The guiding moral principle is singular: that it’s ethically unprincipled for a government to premeditatedly take its citizens’ lives in order to punish, a measure that morally sullies the society condoning it.

Society’s applying the death penalty as an institutional sentence for a crime is a cruel vestige of a time when life was less sacred and society (the elite, that is) was less inclined to censure its own behaviour: intentionally executing in order, with glaring irony, to model how killing is wrong. Society cannot compartmentalise this lethal deed, purporting that the sanctioned death penalty is the exception to the ethical rule not to kill premeditatedly. Indeed, as Salil Shetty, secretary-general of Amnesty International, laconically observed, ‘the death penalty is a symptom of a culture of violence, not a solution to it’.

Although individuals, like victim family members, may instinctively and viscerally want society to lash out in revenge on their behalf — with which many people may equally instinctively and understandably sympathise — it’s incumbent upon society to administer justice rationally, impartially, and, yes, even dispassionately. With no carveout for excepted crimes, no matter how odious, the death penalty is a corrosive practice that flagrantly mocks the basis of humanity and civilisation — that is, it scorns the very notion of a ‘civil’ society.

The death penalty is a historical legacy that should thus be consigned to the dustbin. States, across the globe, have no higher, sober moral stake than to strike the death penalty from their legal code and practices. With enough time, it will happen; the future augurs a world absent state-sanctioned execution as a misdirected exercise in the absolute power of government.