
Monday 16 September 2024

Plato’s Allegory of the Cave: And the Deception of Perception



By Keith Tidman

 

It is a tribute to the timelessness of Plato’s ideas that his philosophical stories still echo powerfully in the contemporary world. People still do live in the flickering shadows of Plato’s cave, mistaking myths for reality and guesswork for evidence. We are metaphorically bound, for example, to watch and assent to the shadows cast by social media, which influence our notions of reality. An increasingly subjective and debatable reality, shaped by gossamer shadows flung onto the wall (today the computer screen) by puppeteers. Today, there’s clearly a risk of deception by partial perception, of information exploited for political ends.


It was in his most-read work, The Republic, written about 380 BCE, that Plato recounted an exchange between Glaucon and Socrates, sometimes called the Allegory of the Cave. Socrates describes how in this cave, seated in a line, are prisoners who have been there since birth, entirely cut off from the outside world. They are so tightly restrained by chains that they cannot move, and their lived experience is limited to staring at the cave wall in front of them.

 

What they cannot know is that just behind where they sit are a parapet and a fire, in front of which other people carry variously shaped objects, and it is these that cast the strange shadows. The shadows on the wall, and not the fire or the objects themselves, are the prisoners’ only visible reality — the only world they can know. Of the causes of the moving shadows, of the distinction between the abstract and the real, they can know nothing.

 

Plato asks us to consider what might happen if one of the prisoners is then unchained and forced reluctantly to leave the cave, into the glaring light of the sun. At first, he says, the brightness would obscure the freed prisoner’s vision, so that he could see only shadows and reflections, much as in the cave. After a while, however, his eyes would grow accustomed to the light, and eventually he would be able to see other people and objects themselves, not just their shadows. As the former prisoner adjusts, he begins to construe the outside world as offering a very different, even better reality than the shadows in the dusky cave.

 

But now suppose, Plato asks, that this prisoner decides to return to the cave to share his experience — to try to convince the prisoners to follow his lead to the sunlight and the ‘forms’ of the outside world. Would they willingly seize the chance? No, quite the contrary, Plato warns. Far from welcoming the opportunity to see more clearly, he thinks the other prisoners would defiantly resist, believing the outside world to be dangerous and not wanting to leave the security of their cave and the shadows they have become so familiar with, even so expert at interpreting.

 

The allegory of the cave is part of Plato’s larger theory of knowledge — of ideals and forms. The cave and shadows represent how people usually live, often ensconced within the one reality they’re comfortable with and assume to be of greatest good. All the while, they are confronted by having to interpret, adjust to, and live in a wholly dissimilar world. The so-called truth that people meet up with is shaped by the contextual circumstances they happen to have been exposed to (their upbringing, education, and experiences, for example), in turn swaying their interpretations, judgments, beliefs, and norms. All often cherished. Change requires overcoming inertia and myopia, which proves arduous, given prevailing human nature.

 

People may wonder which is in fact the more authentic reality. And they may wonder how they might ultimately overcome trepidation, choosing whether or not to turn their backs on their former reality and to understand and embrace the alternative truth. A process that perhaps happens again and again. The undertaking, or journey, from one state of consciousness to another entails conflict: it requires parsing the differences between one truth and another, being edified about the supposedly higher levels of reality, and overcoming what one might call the deception of perception, the unreal world of blurry appearances.

 

Some two and a half millennia after Plato crafted his allegory of the cave, popular culture has borrowed the core storyline, in literature as well as film. For example, the plots of both Fahrenheit 451, by Ray Bradbury, and The Country of the Blind, by H.G. Wells, concern eventual enlightened awareness, where key characters come to grips with the shallowness of the world they are familiar with every day.


Similarly, in the movie The Matrix, the lead character, Neo, is asked to make a difficult choice: to either take a blue pill and continue living his current existence of comfort but obscurity and ignorance, or take a red pill and learn the hard truth. He opts for the red pill, and in doing so becomes aware that the world he has been living in is merely a contrivance, a computer-generated simulation of reality intended to pacify people.

 

Or take the movie The Truman Show. In this, the lead character, Truman Burbank, lives a suburban family life as an insurance agent for some thirty years, before the illusion starts to crumble and he suspects his family is made up of actors and everything else is counterfeit. It even turns out that he is living on a set rigged with several thousand hidden cameras, producing a TV show for the entertainment of spectators worldwide. It is all a duplicitous manipulation of reality — a deception of perception, again — creating a struggle for freedom. And in this movie, after increasingly questioning the unfathomable goings-on around him, Truman (like the prisoner who leaves Plato’s cave) manages to escape the TV set and enter the real world.

 

Perhaps, then, what is most remarkable about the Allegory of the Cave is that nothing about it anchors it exclusively to the ancient world in which it was first imagined. Instead, Plato’s cave is, if anything, even more pertinent in the technological world of today, split as it is between spectral appearances and physical reality. Being surrounded today by the illusory shadows of digital technology, with our attention guided by algorithm-steering, belief-reinforcing social media, strikes a warning note: today, more than ever, it is our responsibility to continually question our assumptions.

 

Monday 12 August 2024

The Distressed Spider and Intervention: A Thought Experiment


By Keith Tidman

To intervene, or not to intervene?

 

Philosopher Thomas Nagel set the stage for a curious thought experiment. Nagel described how, while a university professor, he noticed what he considered a puzzling scene play out. It was a spider trapped in … let us say, a sink ... in the building housing the philosophy department. The spider, despite defensively scurrying around its tightly limited terrain, seemed condemned throughout the day to being doused with water, incapable of altering its fate — if altering its fate was what it even wanted to do. Weeks passed.

 

As Nagel portrayed the scene, the spider’s “life seemed miserable and exhausting,” which led him to conclude he should “liberate” it, in a dash to freedom and a better life. Seemingly the morally right thing to do, despite the relative insignificance of a single spider. Nagel finally justified intervention on the presumption that the spider could readily find its way back to its spot in the sink if it “didn’t like it on the outside.”

 

That is, could Nagel’s well-intentioned rescue afford the spider a more meaningful, happier life — assuming, for the sake of argument, the spider could think in such abstract terms? Or was such interventionism haughty and presumptuous? Nagel, pondering higher-level causes and effects, humbly confessed that his emancipation of the spider was therefore done with “much uncertainty and hesitation.”

 

Regardless, Nagel went ahead and reached out with a paper towel in the spider’s direction, which the spider, intentionally or instinctively, grabbed on to with its gangly legs, to be hoisted onto the floor. Thus carefully deposited, however, the spider remained still, even while prodded gently with the paper towel. “Playing dead,” perhaps — and afraid of carelessly being stomped on by people walking around? The next day, Nagel “found it in the same place, his legs shriveled in that way characteristic of dead spiders.”

 

Nagel’s experience, and the thought experiment derived from it, tee up at least two inferences regarding the ground rules governing intervention in others’ lives. On the one hand, no matter how benevolently intended our deeds, intervention might exact unanticipated costs. Some ugly. On the other hand, indecisiveness and inaction might likewise result in harm — as the renowned “trolley problem” demonstrates, in which choices, including the option not to redirect the trolley, still lead to some loss of life. In short, indecision is a decision — with repercussions.

 

We therefore have to parse the circumstances and priorities as best we can, deciding whether to intercede or to stay removed from the scene. Either choice is swayed by our own conspicuous biases about what makes a life meaningful, judgments that are innately subjective. And both choices — intervene or leave alone — are entangled in the unavoidable moral morass and practical implications of their respective consequences.

 

Nagel’s spider incident was, of course, also a metaphor for the lives of people — and for whether we should judge the merits or demerits of someone’s stage-managed life circumstances, going so far as to urge change. We might perceive such advice as prudent and empowering, even morally right; but maybe in reality the advice is none of those things, and instead is tantamount to the wrong-headed extraction of the “ailing” spider. The paragraphs that follow offer examples of everyday, real-world circumstances that might spur intervention. That is, let’s ask this: in these and countless other real-world cases, does the proverbial spider warrant extrication?

 

For instance, do we regard someone’s work life as mundane, a dead-end, as beneath the person’s talents? Do we regard someone’s choices regarding nutrition and exercise and other behavioral habits as impairing the person’s health? Or what if we see someone’s level of education as too scant and misfocused relative to modern society’s fast-paced, high-tech needs? Do we fault-findingly regard someone’s choice of a partner to be unfavorable and not life enhancing? Do we consider someone’s activities as embodying calculable risks, to be evaded? Do we deem someone’s financial decisions to be imprudently impulsive?

 

Maybe those “someones,” in being judged, begrudge what they view as the superciliousness of such intercession. Who has the right (the moral authority) to arbitrate, after all, people’s definition of happiness and the meaningfulness of life, and thus choices to make, where there may be few universal truths? Where do resolute biases contaminate decision-making? One possible answer is that we ought to leave the proverbial spider to its fate — to its natural course.

 

But let’s also look at possible, real-world interventionism on a more expansive scale. Do we properly consider both the pragmatic and moral consequences of interceding in matters of the environment, biodiversity, and ecosystems, where life in general has inherent value and decisions are morally freighted? How about, in international relations, the promotion of humanitarian standards, the maintenance of security, and cultural, civilisational affairs? And what about other nations’ domestic and foreign policy decision-making, which bears ubiquitously across the interconnected, globalised planet?

 

Even the sunniest of intentions, instilled with empathy and wistful introspection, may turn out ill-informed — absent a full understanding of someone else’s situation, where the setting is key to the person’s happiness and sense of meaningfulness. Perhaps that particular someone did not need to be removed from the fabled appliance, so to speak, in order that he might scurry off toward safety.

 

Nagel assumed the spider might feel forlorn; but perhaps it didn’t. Maybe it was a case of infelicitous projection or a desire simply to assuage raw instincts. Let’s not forget, the spider died — and did so as the consequence of intervention. The lessons apply to all frames of reference, from the globe to the community to the individual who, we might assume, needs rescuing.

 

The thought experiment prods us to go beyond shallow, short-term consequentialism — beyond what happens right off the bat as the result of intervention — instead to dig into primary principles guiding the verdicts we render. Foundational moral values, personal and societal — even universal — matter greatly in these presumptive decisions.

 

Monday 15 July 2024

Are We Alone in the Universe, or Not? And Does It Matter?

Peering through Saturn’s rings, the Cassini probe caught a glimpse of a faraway planet and its moon. At a distance of just under 900 million miles, Earth shines bright among the many stars in the sky, distinguished by its bluish tint.

By Keith Tidman

The writer and futurist Arthur C. Clarke once wrote: “Two possibilities exist: Either we are alone in the universe, or we are not. Both are equally terrifying.” 


But are the two alternatives really terrifying? And even if they were, then what might be the upshot?

 

In exploring the possible consequences of Clarke’s thought experiment, I’ll avoid enmeshing us in a discussion of whether extraterrestrials have already visited Earth, or whether we will get to visit their planets in the near term. For the foreseeable future, the distances are too large for that to happen, with suspected extraterrestrial civilisations thousands, millions, or billions of light-years away. Those distances hamper the signal searches conducted by the Search for Extraterrestrial Intelligence (SETI) Institute, which metaphorically dips only an infinitesimally small scoop into the vast cosmic ocean. And such distances hamper interstellar travel.

 

Accordingly, we are currently in no position to respond definitively to the challenge Enrico Fermi, also known as “the architect of the nuclear age,” raised with his lunchtime colleagues at the Los Alamos National Laboratory in New Mexico in 1950, referring to extraterrestrials: “Where is everybody?”

 

One piece of crucial context for our conversation here is that of scale: the known universe is currently thought to be some 93 billion light-years in diameter. Recall that a light-year is a measurement of distance, not time, so that in Earthly ‘miles,’ the cosmic diameter is an easy, but boggling, calculation: 93 billion multiplied by roughly 5.9 trillion miles, the length of a light-year. Add that, in the case of travel or electromagnetic communications (beamed signals) between us and extraterrestrials, the velocity of light is the fixed upper limit — as far as current science is concerned, anyway. All of which is problematic for detecting aliens and their biomarkers or technomarkers, quite apart from anyone engaging in neighbourly interstellar space visitation.
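For the record, the back-of-envelope arithmetic (taking a light-year to be about 5.9 trillion miles) works out as follows:

\[
93 \times 10^{9}\ \text{light-years} \;\times\; 5.9 \times 10^{12}\ \text{miles per light-year} \;\approx\; 5.5 \times 10^{23}\ \text{miles.}
\]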

 

Yet, in a universe kickstarted some 13.8 billion years ago — with hundreds of billions of galaxies, and trillions of stars and planets (many of those exoplanets conceivably habitable, even if not twins of our world) — it’s surely arguable that extraterrestrial civilisations, carbon-based or differently constituted physically, are out there, similarly staring toward the skies, quizzically pondering. Alien cosmologists asking, “Where is everybody?,” making great strides developing their own technology, and calculating probabilities for sundry constants and variables assumed necessary for technologically advanced life to prosper elsewhere.

 

There are two key assumptions in asking whether we are alone in the universe or are instead among teeming alien life strewn throughout it. The first assumption, of a general nature, is to define ourselves as a conscious, intelligent, sophisticated species; the second is to assume the extraterrestrials we envision in our discussion are likewise conscious, intelligent, and sophisticated — at least equally or maybe considerably more so, options we’ll explore.

 

A third assumption is an evolutionary process, transitioning from physics to chemistry to biology to consciousness. Higher-order consciousness is presumed to be the evolutionary apex both for our species — what it is like to be us — and for extraterrestrials — what it is like to be them. Consciousness may end up the evolutionary apex for our and their machine technology, too. Given that higher-order consciousness is central, we need a baseline for what we mean by the term. Taking a physicalist or materialist point of view, the mind and consciousness are rooted in the neurophysiological activity of the brain, reducible to one and the same. This, rather than existing dualistically in some ethereal, transcendental state separate from the brain, as has sometimes been mythologized.

 

As a placeholder here, consciousness is assumed to be fundamentally similar in its range of domains both for our species and for extraterrestrials, comprising variations of these features: experience, awareness, perception, identity, sentience, thought experimentation, emotion, imagination, innovation, curiosity, memory, chronicled past, projected future, executive function, curation, normative idealism, knowledge, understanding, cognition, metacognition — among others. On these important fronts, the features’ levels of development between us and extraterrestrials may well differ in form and magnitude.

 

As for one of Arthur C. Clarke’s alternative scenarios — that our species is alone in the universe — I can’t help but wonder why, then, the universe is so old, so big, and still rapidly growing, if the cosmic carnival is experienced by us alone. We might scratch our heads over the seeming lack of sense in that, even as the imposing panorama captured by space-based telescopes dwarfs us. We might, therefore, construe that particular scenario as favouring an exceptional place for our species in the otherwise unoccupied cosmic wonderment, or, in a different (and more terrifying?) vein, as suggesting our presence is inconsequential.

 

That is, neither aloneness nor uniqueness necessarily equates to the specialness of a species; to the contrary, it may amount to a trifling one-off situation. We would then have to come to grips with the indeterminacy of why there is this majestic display at all: light-years-sized star nurseries, galaxies rushing toward or away from one another, the insatiability of hungry supermassive black holes, supernovas sending ripples through the faraway reaches of spacetime, and so much more.

 

As for the possibility of sophisticated other life in the universe, we might turn to the so-called anthropic principle for the possible how and why of such occurrences. The principle posits that many constants of the Earth, of the solar system, of the Milky Way, and of the universe are so extraordinarily fine-tuned that only under those conditions could conscious, intelligent, advanced life like ours ever have evolved into being.

 

The universe would be unstable, as the anthropic principle says, if any of those parameters were to shift even a minuscule amount, the cosmos being like a pencil balanced precariously on its pointed tip. It’s likely, therefore, that our species is not floating alone in an unimaginably vast, roiling but barren cosmic sea; on a more expansive reading of the anthropic principle, the same fine-tuning makes the creation and sustenance of extraterrestrial life possible, too, as fellow players in the cosmic froth. Fine-tuned, after all, doesn't necessarily equate to rare.

 

We might thus wonder about the consequences for our self-identity and image if some among these teeming numbers of higher-order intelligent extraterrestrials inhabiting the universe got a developmental jumpstart on our species’ civilisation of a million or more years. It’s reasonable to assume that those species would have experienced many-orders-of-magnitude advances biologically, scientifically, technologically, culturally, and institutionally, fundamentally skewing how humanity perceives itself.

 

The impact of these realities on human self-perception might lead some to worry over the glaring inequality and possibly perceived menace, resulting in dents in the armour of our persistent self-exceptionalism and raising larger questions about our purpose. These are profoundly philosophical considerations. We might thereby opt to capitulate, grasping at straws of self-indulgent excuses. Yet, extraterrestrials capable of interstellar travel might well conclude — whether for benign purposes (e.g., development, enlightenment, resource sharing), or for malign ones (e.g., hegemonism, hubris, manifest destiny, self-exceptionalism, colonisation), or for a hybrid of reasons — that interventionism, with its mix of calculated and unpremeditated consequences, is the natural course.

 

Our reactions to gargantuan inter-species differences might range from giddy exceptionalism at one end to dimmed significance at the other. On a religious front, a crisis might ensue in the presence of remarkably advanced extraterrestrials, influencing factors surrounding faith, creeds, dicta, values, patriarchy. Some of our religious constructs — scriptures, symbology, philosophies — might collapse as shallow affectations. For example, in light of hyper-advanced extraterrestrials, our history of expressing religious imagery in anthropomorphic terms (our species described doctrinally as being “in God’s image,” for example) may no longer make sense, fundamentally altering belief systems.

 

We would have to revisit the principles of ethics, including the degree to which ethics are culturally and societally contingent. Or the impact might lead to our being elated that life has advanced to such a remarkable degree, eager for the benefits this might hold for our species — to model what seems to have worked magnificently for a cutting-edge alien civilisation. The potential for learning vastly advanced natural science, technology, and societal paradigms would be immense, where, for instance, extraterrestrials might be hybrids of the best of biology and the best of machines.

 

As potentially confounding as either of Clarke’s scenarios might prove, neither need be terrifying; instead, both have the potential to be exhilarating. But let me toss one last unavoidable constant into the cosmic cauldron. And this is the concept of entropy — the irreversibly increasing (net) disorder within a closed, isolated system like the universe, with its expanding galactic and stellar separation accelerating toward a thermodynamic demise. Entropy is a fact of life of the universe: it provides an expiry date, eventually rendering everything extinct. The end of history, the end of physics — and the end of metaphysics.

 

Monday 20 May 2024

America’s Polarised Public Square and the Case of the 2024 Presidential Campaign

Plato’s tale of shadows being misinterpreted in the cave
can be taken as a warning about the dangers of propaganda and misinformation


By Keith Tidman 

There’s a thinking error, sometimes called the Dunning-Kruger effect, whereby cognitive biases lead people to overvalue their own knowledge and understanding, a tendency amplified by tilted campaign narratives that confound voters. Sometimes voters fail to recognize their patchy ability to referee the truth of what they see and hear from the presidential campaigns and various other sources, including both social media and mainstream media. The effect skews public debate, as the electorate cloisters around hardened policy positions affecting America’s future. It is a tendency that has prompted many thinkers, from the ancient Athenians to some of America’s founders, to be wary of democracy.


So, perhaps today more than ever, the manner of political discourse profoundly matters. Disinformation from dubious sources and the razor-edged negative branding of the other candidate’s political positions abound, leading to distrust, rifts, confusion, and polarised partisanship within society. The bursts of incivility and brickbats are infectious, sapping many among the electorate. Witness today’s presidential campaign in the United States.

 

Even before the conventions of this summer, the Democratic and Republican presidential candidates are a lock; yet, any expectations of orderliness are an illusion. President Joe Biden and former president Donald Trump, with candid campaign devotees deployed alongside, are immersed in spirited political tussles. The limited-government mindset of Enlightenment philosopher John Locke might well stoke the hurrahs of libertarians, but not of the mainstream political parties thriving on the nectar of activism and adversarial politics.

 

We’re left asking, then, what facts can the electorate trust as they make political choices? With what degree of certainty should the public approach the information they’re served by the campaigns and legions of doctrinaire pundits talking at cross purposes? And is it possible to cut through the diffusion of doctrine and immoderate conviction? 

 

Facts are indispensable for describing what’s happening inside the political arena, as well as for arbitrating policy changes. Despite the sometimes-uncertain provenance and pertinence of facts, they serve as tinder to fuel policy choices. The cautious expectation is that verifiable facts can translate to a meeting of minds, as the public stitches together an understanding of the larger tapestry from its many fragments and from the web of relationships that gives rise to ideas. The idealised objective is a Rousseau-like social contract, where the public and elected representatives intend to collaborate in pursuit of the common good — a squishy concept, at best.

 

Today, anyway, the reality is very different: discourse in the public square often gets trampled, as camps stake out ownership of the politically littered battleground. The combustibility of political back-and-forth makes the exchanges harder, as prickly disputants amplify their differences rather than constructively bridge divides. In the process, facts get shaded by politically motivated groups metaphorically wielding high-decibel bullhorns, reflecting one set or another of political, societal, and cultural norms. Hyperpartisanship displaces bipartisanship. 

 

Consider the case of refugees and migrants arriving cross-border in the United States. The political atmosphere has been heavy with opposing points of view. One camp, described by some as nativist, contends that porous borders threaten the fabric of the nation. They fear marginalisation, believing “fortress America” is the solution. Another, progressive camp contends that the migrants add to the nation’s economy, enrich our already-dynamic multiculturalism, and on humanitarian grounds merit assistance. Yet, the cantankerous rhetorical parrying between the camps continues to enlarge, not narrow, the political gap.

 

Disputes over book bans, racial discrimination, reproductive rights, tax policy, inequality, the role of religion, public demonstrations, gun safety, the rules of democracy, and other normative and transactional wedge issues are equally fraught among intransigent politicians of diametrically contrasting views who are immune to persuasion. Such flashpoints are made worse by intra-party, not just cross-party, factions at boisterous variance with one another — leaving one wondering how best to arrive at a collective of settled norms.

 

Instead of being the anchors of social discourse, real or disputed facts may be used to propagate discord or to disadvantage the “other.” Facts fuel jaundiced competition over political power and control: and as historian and politician Lord Acton said, such “power tends to corrupt and absolute power corrupts absolutely.” Many people complain that this othering is rooted in systemic bias and cuts across race, ethnicity, gender, national origin, language, religion, education, familial pedigree, and socioeconomics. The view is that marginalisation and disenfranchisement result from the polemical fray, which may have been the underlying aim all along.

 

Unfortunately, while the world democratises access to information through the ubiquity of technology, individuals with manipulative purposes may take advantage of those consumers of information who are disinclined or unprepared to thoughtfully question the messaging. That is, what do political narratives really say, who’s formulating the narratives, what are their benign or malign purposes, and who’s entrusted with curating and vetting? Both leftwing and rightwing populism roam freely. It recalls Thomas Paine’s advice in The Rights of Man that “moderation in temper is always a virtue; but moderation in principle is always a vice.” Shrewd advice too often left unheeded in the presidential campaign, where, in the churn of events, immoderation has itself become tinder for the distrust and rifts mentioned above.

 

Today, dubious facts are scattered across the communications landscape, steering beliefs, driving confirmation bias, stoking messianic zeal, stirring identity warfare, and fueling ill-informed voting. The resulting uncertainty short-circuits the capacity of ordinary people to subscribe to Thomas Jefferson’s observation “That government is the strongest of which every [citizen] feels himself a part.” A notion foundational to democracy, one might say. Accordingly, the public has to grapple with discerning which politicians are honest brokers, or which might beguile. Nor can the public readily know the workings of social media’s opaque algorithms, which compete for the inside track on the content of candidates’ messaging. Communication skirmishes are underway for political leverage between the Biden and Trump campaigns.

 

Jettisoning political stridency and hardened positions proves difficult, of course, especially among political evangelists at loggerheads. But it’s doable: The aim of sincere conciliation is to moderate the rancorous political discourse, while not fearing but rather accommodating the unbridled sharing of diverse ideas, which is foundational for democracy operating at its best.  

Monday 6 May 2024

On the Trail of Human Consciousness


By Keith Tidman
 

Daniel Dennett once called consciousness the “last surviving mystery” humankind faces. That may be premature and even a bit hyperbolic, but not by much. At the very least, consciousness ranks among the biggest of the remaining mysteries. Two questions central to this are: Does the source of conscious experience rest solely in the neurophysiology of the brain, reducible to myriad sets of mechanical functions that necessarily conform to physical laws? Or, as some have contended, is consciousness somehow airily, dualistically separate from the brain, existing in some sort of undefinably ethereal dimension? 

Consciousness is an empirical, bridge-like connection to things, events, and conditions, boiling down to external stimuli that require vetting within the brain. Conscious states entail a wide range of human experiences, such as awareness, identity, cognition, wakefulness, sentience, imagination, presence in time and space, perception, enthrallment, emotion, visions of alternative futures, anchors to history, ideation, attention, volition, sense of agency, thought experimentation, self-optimisation, memories, opinions — and much more. Not to forget higher-order states of reality, able to include the social, political, legal, familial, educational, environmental, scientific, and ethical norms of the community. The process includes the brain's ability to orchestrate how the states of consciousness play their roles in harmony. As philosopher Thomas Nagel therefore concluded, “there is something it is like to be [us]” — that something being our sense of identity, acquired through individual awareness, perception, and experience.


The conscious mind empirically, subjectively edits objective reality. In the phrase of David Chalmers, philosopher of mind and cognitive scientist, “there is a whir of information processing” as all that complexly happens. The presence of such states makes it hard, if not impossible, to dismiss our own existence as just an illusion, even if we have hesitancy about the accuracy of our perception of the presumed objective reality encircling us. Thought, introspection, sensing, knowing, belief, the arrow of perpetual change — as well as the spatial and temporal discernments of the world — contribute to confirming what we are about. It’s us, in an inexorable abundance of curiosity, wondering as we gaze upon everything from the micro to the macro dimensions of the universe.

 

None of these states, however, requires the presence of mysterious goings-on — an “ethereal mind,” operating on a level separate from the neuronal, synaptic activity of the brain. Accordingly, “consciousness is real and irreducible,” as Dennett’s fellow philosopher, John Searle, observed while pointing out that the seat of consciousness is the brain; “you can’t get rid of it.” True enough. The centuries-old Cartesian mind-body distinction, with its suspicious otherworldly spiritual, even religious, underpinnings and motive, has long been displaced by today’s neuroscience, physics, and biology. Today, philosophers of mind cheerfully weigh in on the what-if modeling aspects of human consciousness. But it must be said that the baton for elucidating consciousness, two and a half millennia after the ancient world’s musings on the subject, has been handed off to the natural sciences. And there is every reason to trust the latter will eventually triumph, filling the current explanatory gap — whether the path to ultimate understanding follows a straight line or, perhaps more likely, zigs and zags. A mix of dusky and well-lit alleys.

 

Sensations, like the taste of silky chocolate, the sight of northern lights, the sound of a violin concerto, the smell of a petunia, hunger before an aromatic meal, pleasure from being touched, pain from an accident, fear of dark spaces, roughness of volcanic rock, or happiness while watching children play on the beach, are sometimes called qualia. These are the subjective, qualitative properties of experience, which purportedly differ from one person to another. Each person interpreting, or editing, reality differently, whether only marginally so or perhaps to significant extents — all the while getting close enough to external reality for us to get on with everyday life in workably practical ways. 


So, for example, my experience of an icy breeze might be different from yours because of differences — even microscopic ones — between our respective neurobiological reactions. This being the subjective nature of experience of the same thing, at the same time and in the same place. And yet, qualia might well be, in the words of Chalmers, the “hard problem” in understanding consciousness; but they aren’t an insoluble problem. The individualisation of these experiences, or something that seems like them, will likely prove traceable to brain circuitry and activity, requiring us to penetrate the fine granularity of the bustling mind. Consciousness can thus be defined as a blend of what our senses absorb and process, as well as how we resultantly act. Put another way, decisions and behaviours matter.

 

The point is, all this neurophysiological activity doesn’t merely represent the surfacing or emergence or groundswell of consciousness, it is consciousness — both necessary and sufficient. That is, mind and consciousness don’t hover separate from the brain, in oddly spectral form. This steadfastly remains a fundamentally materialist framework, containing the very nucleus of human nature. The promise is that in the process of developing an increasingly better understanding of the complexity — of the nuance and richness — of consciousness, humanity will be provided with a vital key for unlocking what makes us, us.

 

Monday 17 July 2023

When Is a Heap Not a Heap? The Sorites Paradox and ‘Fuzzy Logic’


By Keith Tidman
 

Imagine you are looking at a ‘heap’ of wheat comprising several million grains, and just one grain is removed. Surely you would agree with everyone that afterward you are still staring at a heap. And that the onlookers would be right to keep concluding it remains a heap if another grain were to be removed — and then another and another. But as the pile shrinks, the situation eventually gets trickier.

 

If grains continue to be removed one at a time, in incremental fashion, when does the heap no longer qualify, in the minds of the onlookers, as a heap? Which numbered grain makes the difference between a heap of wheat and not a heap of wheat? 

 

Arguably we face the same conundrum if we were to reverse the situation: starting with zero grains of wheat, then incrementally adding one grain at a time, one after the other (n + 1, n + 2 ...). In that case, which numbered grain causes the accumulating grains of wheat to transition into a heap? Put another way, what are the borderlines between true and not true as to pronouncing there’s a heap?

 

What we’re describing here is called the Sorites paradox, invented by the fourth-century BC philosopher Eubulides of Miletus, a member of the Megarian school, named after Euclides of Megara, one of the pupils of Socrates. The school, or group, is famous for paradoxes like this one. ‘Sorites’, by the way, derives not from a particular person, but from the Greek word soros, meaning ‘heap’ or ‘pile’. The focus here is on the boundary of ‘being a heap’ or ‘not being a heap’, which is indistinct when single grains are either added or removed. The paradox is deceptive in appearing simple, even simplistic; yet any number of critically important real-world applications attest to its decided significance.

 

A particularly provocative case in point, exemplifying the central incrementalism of the Sorites paradox, concerns deciding when a fetus transitions into a person. Across the milestones of conception, birth, and infancy, the fetus-cum-person acquires increasing physical and cognitive complexity and sophistication, occurring in successively tiny changes. Involving not just the number of features, but of course also the particular type of features (that is, qualitative factors). Leading us to ask, what are the borderlines between true and not true as to pronouncing there’s a person? As we know, this example of gradualism has led to highly consequential medical, legal, constitutional, and ethical implications being heatedly and tirelessly debated in public forums.

 

Likewise, with regard to this same Sorites-like incrementalism, we might assess which ‘grain-by-grain’ change marks the threshold of being a ‘human being’ close to the end of a life — when, let’s say, deep dementia increasingly ravages aspects of a person’s consciousness, identity, and rationality, greatly impacting awareness. Or, say, when some other devastating health event results in gradually nearing brain death, and alternative decisions hover perilously over how much to intervene medically, given best-in-practice efforts at a prognosis and taking into account the patient’s and family’s humanity, dignity, and socially recognised rights.

 

Or take the stepwise development of ‘megacomplex artificial intelligence’. Again, this involves consideration of not just ‘how many features’ (n + 1 or n - 1), but also ‘which features’, the latter entailing qualitative judgments. The discussion has stirred intense debate over the race for intellectual competitiveness, prompting hyperbolic public alarms about ‘existential risks’ to humanity and civilisation. Machine equivalents of human neurophysiology are speculated to transition, over years of gradual optimisation (and down the road, even self-optimisation), into human-like consciousness, awareness, and cognition. Leading us to ask, where are the borderlines between true and not true as to pronouncing it has consciousness and greater-than-human intelligence?

 

In the three examples of Sorites ‘grain-by-grain’ incrementalism above — start of life, end of life, and artificial general intelligence — words like ‘human’, ‘consciousness’, ‘perception’, ‘sentience’, and ‘person’ provide grist for neuroscientists, philosophers of mind, ethicists, and AI technologists to work with, until the desired threshold is reached. The limitations of natural language, even in circumstances mainly governed by the prescribed rules of logic and mathematics, might not make it any easier to concretely describe these crystallising concepts.

 

Given the nebulousness of terms like personhood and consciousness, which tend to bob up and down in natural languages like English, bivalent logic — where a statement is either true or false, but not both or in-between — may be insufficient. The Achilles’ heel is that the meaning of these kinds of terms may obscure truth as we struggle to define them. Whereas classical logic says there either is or is not a heap, with no shades in the middle, there’s something called fuzzy logic that scraps bivalence.

 

Fuzzy logic recognises there are both large and subtle gradations between categorically true and categorically false. There’s a continuum, where statements can be partially true and partially false, while also shifting in their truth value. A state of becoming, one might say. A line may thus be drawn between concepts that lie on such continuums. Accordingly, as individual grains of wheat are removed, the heap becomes, in tiny increments, less and less a heap — arriving at a threshold where people may reasonably concur it’s no longer a heap.
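To make the idea concrete, here is a minimal sketch in Python — not drawn from Eubulides, Plato, or any particular fuzzy-logic text, just an illustration. A simple membership function assigns every grain count a degree of ‘heapness’ between 0.0 and 1.0; the 10,000-grain cut-off and the linear ramp are arbitrary assumptions chosen only to show the gradation.

```python
def heapness(grains: int, full_heap: int = 10_000) -> float:
    """Degree of membership in the fuzzy set 'heap': 0.0 means clearly not a
    heap, 1.0 means clearly a heap, and values in between are partial truths.
    Both the 10,000-grain cut-off and the linear ramp are arbitrary choices,
    made purely for illustration."""
    return min(max(grains / full_heap, 0.0), 1.0)

# Removing one grain never flips the verdict outright; it only nudges the
# degree of 'heapness' slightly downward, until some chosen threshold is crossed.
for grains in (1_000_000, 10_000, 5_000, 500, 1, 0):
    print(f"{grains:>9} grains -> heapness {heapness(grains):.4f}")
```

In classical, bivalent logic the same function could only ever return 0 or 1; the fuzzy version lets ‘heapness’ wane grain by grain, which is precisely the gradation described above.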

 

That tipping point is key, for vagueness isn’t just a matter of logic, it’s also a matter of knowledge and understanding (a matter of epistemology). In particular, what do we know, with what degree of certainty and uncertainty do we know it, when do we know it, and when does what we know really matter? Also, how well can natural language capture all that we want it to express? Despite the gradations of true and false that we just talked about in confirming or refuting a heap, realistically the addition or removal of just one grain does in fact tip whether it’s a heap, even if we’re not aware which grain it was. Just one grain, that is, ought to be enough in measuring ‘heapness’, even if it’s hard to recognise where that threshold is.

 

Another situation involves the moral incrementalism of decisions and actions: what are the borderlines between true and not true as to pronouncing that a decision or action is moral? An important case is when we regard or disregard the moral effects of our actions. Such as, environmentally, on the welfare of other species sharing this planet, or concerning the effects on the larger ecosystem in ways that exacerbate the extreme outcomes of climate change.

 

Judgments as to the merits of actions are not ethically bivalent, either — by which I mean they do not tidily split between being decidedly good or decidedly bad, leaving out any middle ground. Rather, according to fuzzy logic, judgments allow for ethical incrementalism between what’s unconditionally good at one extreme and what’s unconditionally bad at the other extreme. Life doesn’t work quite so cleanly, of course. As we discussed earlier, the process entails switching out from standard logic to allow for imprecise concepts, and to accommodate the ground between two distant outliers.

 

Oblique concepts such as ‘good versus bad’, ‘being human’, ‘consciousness’, ‘moral’, ‘standards’ — and, yes, ‘heap’ — have very little basis from which to derive exact meanings. A classic example of such irreducible imprecision is voiced by science’s uncertainty principle: we cannot know both the position and the momentum of a particle with arbitrary accuracy at the same time. As our knowledge of one factor increases in precision, knowledge of the other decreases in precision.
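In Heisenberg’s standard formulation, the constraint is on the product of the two uncertainties, which can never fall below a fixed quantum limit:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]

where \(\Delta x\) is the uncertainty in position, \(\Delta p\) the uncertainty in momentum, and \(\hbar\) the reduced Planck constant.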

 

The assertion that ‘there is a heap’ becomes less true the more we take grains away from a heap, and becomes increasingly true the more we add grains. Finding the borderlines between true and not true in the sorts of consequential pronouncements above is key. And so, regardless of the paradox’s ancient provenance, the gradualism of the Sorites metaphor underscores its value in making everyday determinations between truth and falsity.


Monday 26 June 2023

Ideas Animate Democracy


By Keith Tidman
 

The philosopher Soren Kierkegaard once advised, ‘Life can only be understood backwards … but it must be lived forward’ — that is, life understood with one eye turned to history, and presaged with the other eye turned to competing future prospects. An observation about understanding and living life that applies across the board, to individuals, communities, and nations. Another way of putting it is that ideas are the grist for thinking not only about ideals but about the richness of learnable history and the alternative futures from which society asserts agency in freely choosing its way ahead. 


Of late, though, we seem to have lost sight of the fact that one way for democracy to wilt is to shunt aside ideas that might otherwise inspire minds to think, imagine, solve, create, discover and innovate — the source of democracy’s intellectual muscularity. For reflexively rebuffing ideas and their sources is really about constraining inquiry and debate in the public square. Instead, there has been much chatter about democracies facing existential grudge matches against exploitative autocratic regimes that push their triumphalist narratives and view democracy as weak-kneed.


In mirroring the decrees of the Ministry of Truth in the dystopian world of George Orwell’s book Nineteen Eighty-Four — where two plus two equals five, war is peace, freedom is slavery, and ignorance is strength — unbridled censorship and historical revisionism begin and end with the fear of ideas. Ideas snubbed by authoritarians’ heavy hand. The short of it is that prohibitions on ideas end up a jumbled net, a capricious exercise in power and control. Accordingly, much exertion is put into shaping society’s sanctioned norms, where dissent isn’t brooked. To this point, the philosopher Hannah Arendt cautioned, ‘Totalitarianism has discovered a means of dominating and terrorising human beings from within’. Trodden-upon voting and the ardent circulation of propagandistic themes, both of which torque reality, hamper free expression.

 

This tale about prospective prohibitions on ideas is about a choice between the resulting richness of thought and the poverty of thought — a choice we must get right, and can do so only by making it possible for new intellectual shoots to sprout from the raked seedbed. The optimistic expectation from this is that we get to understand and act on firmer notions of what’s real and true. But which reality? One reality is that each idea that’s arbitrarily embargoed opens yet another chink in democracy’s armour; a very different reality is that each idea, however provocative, allows democracy to flourish.

 

Only a small part of the grappling over ideas is for dominion over which ideas will reasonably prevail long term. The larger motive is to honour the openness of ideas’ free flow, to be celebrated. This exercise brims with questions about knowledge. Like these: What do we know, how do we know it, with what certainty or uncertainty do we know it, how do we confirm or refute it, how do we use it for constructive purposes, and how do we allow for change? Such fundamental questions crisscross all fields of study. New knowledge ferments to improve insight into what’s true. Emboldened by this essential exercise, an informed democracy is steadfastly enabled to resist the siren songs of autocracy.

 

Ideas are accelerants in the public forum. Ideas are what undergird democracy’s resilience and rootedness, on which standards and norms are founded. Democracy at its best allows for the unobstructed flow of different social and political thought, side by side. As Benjamin Franklin, polymath and statesman, prophetically said: ‘Freedom of speech is a principal pillar of a free government’. A lead worth following. In this churn, ideas soar or flop by virtue of the quality of their content and the strength of their persuasion. Democracy allows its citizens to pick which ideas normalise standards — through debate and subjecting ideas to scrutiny, leading to their acceptance or refutation. Acid tests, in other words, of the cohesion and sustainability of ideas. At its best, debate arouses actionable policy and meaningful change.

 

Despite society being buffeted daily by roiling politics and social unrest, democracy’s institutions are resilient. Our institutions might flex under stress, but they are capable of enduring the broadsides of ideological competitiveness as society makes policy. The democratic republic is not existentially imperiled. It’s not brittle. America’s Founding Fathers set in place hardy institutions, which, despite public handwringing, have endured challenges over the last two-and-a-half centuries. Historical tests of our institutions’ mettle have inflicted only superficial scratches — well within the institutions’ ability to rebound again and again, eventually as robust as ever.

 

Yet, as Aristotle importantly pointed out by way of a caveat to democracy’s sovereignty and survivability, 


‘If liberty and equality . . . are chiefly to be found in democracy, they will be attained when all persons share in the government to the utmost.’


A tall order, as many have found, but one that’s worthy and essential, teed up for democracies to assiduously pursue. Democracy might seem scruffy at times. But at its best, democracy ought not fear ideas. Fear that commonly bubbles up from overwrought narrative and unreasoned parochialism, in the form of ham-handed constraints on thought and expression.

 

The fear of ideas is often more injurious than the content of ideas, especially in the shadows of disagreeableness intended to cause fissures in society. Ideas are thus to be hallowed, not hollowed. To countenance contesting ideas — majority and minority opinions alike, forged on the anvil of rationalism, pluralism, and critical thinking — is essential to the origination of constructive policies and, ultimately, how democracy is constitutionally braced.

 

 

Monday 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic, Saint Teresa. In her autobiography, she wrote that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods? (Plato, Euthyphro)


Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.

 

Put another way, if what’s morally good or bad is only what the gods arbitrarily make something, called the divine command theory (or divine fiat) — which Euthyphro subscribed to — then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later. 

 

In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over concerns about power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels comprised the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:


‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is, as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article reply to Obj. 2).


In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, asking whether right and wrong can be known only by divine revelation. He suggested, rather, that there ought to be reasons, apart from religious tradition alone, why particular behaviour is moral or immoral:

 

‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686).

 

Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, Qur’an, and Torah, bearing the moral and legal decrees professed to be handed down by God. But despite the dissimilarity of the situations — the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices) — both serve as examples of the ‘divine command theory’. That is, what’s deemed pious is presumed to be the case precisely because God chooses to love it, in line with the theory. That pious something or other is not independently sitting adrift, noncontingently virtuous in its own right, with nothing transcendentally making it so.

 

This presupposes that God commands only what is good. It also presupposes that, for example, things like the giving of charity, the avoidance of adultery, and refraining from stealing, murdering, and ‘graven images’ count as morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts is aimed at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake — that is, apart from God making it so — clearly denies ‘divine command theory’.

 

For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God. Where matters of morality may exist outside of God’s reach, suggesting something other than God being all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.

 

Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who like St. Thomas Aquinas and Averroës believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This being true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’. 

 

Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, has a character presumptuously declare that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a person of faith to spot the shortsighted dismissiveness of that assertion. After all, an atheist or agnostic might recognise the benevolence, even the categorical need, for adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words, which greatly appeals to many people.

 

Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.