
Monday 20 May 2024

America’s Polarised Public Square and the Case of the 2024 Presidential Campaign

Plato’s tale of shadows being misinterpreted in the cave
can be taken as a warning about the dangers of propaganda and misinformation


By Keith Tidman 

There’s a thinking error, sometimes called the Dunning-Kruger effect, which warns us that cognitive biases can lead people to overvalue their own knowledge and understanding, an effect amplified by tilted campaign narratives that confound voters. Sometimes voters fail to recognize their patchy ability to referee the truth of what they see and hear from the presidential campaigns and various other sources, including both social media and mainstream media. The effect skews public debate, as the electorate cloisters around hardened policy positions affecting America’s future. It is a tendency that has prompted many thinkers, from the ancient Athenians to some of America’s founders, to be wary of democracy.


So, perhaps today more than ever, the manner of political discourse profoundly matters. Disinformation from dubious sources and the razor-edged negative branding of the other candidate’s political positions abound, leading to distrust, rifts, confusion, and polarised partisanship within society. The bursts of incivility and brickbats are infectious, sapping many among the electorate. Witness today’s presidential campaign in the United States.

 

Even before the conventions of this summer, the Democratic and Republican presidential candidates are a lock; yet, any expectations of orderliness are an illusion. President Joe Biden and former president Donald Trump, with candid campaign devotees deployed alongside, are immersed in spirited political tussles. The limited-government mindset of Enlightenment philosopher John Locke might well stoke the hurrahs of libertarians, but not of the mainstream political parties thriving on the nectar of activism and adversarial politics.

 

We’re left asking, then, what facts can the electorate trust as they make political choices? With what degree of certainty should the public approach the information they’re served by the campaigns and legions of doctrinaire pundits talking at cross purposes? And is it possible to cut through the diffusion of doctrine and immoderate conviction? 

 

Facts are indispensable to describing what’s happening inside the political arena, as well as to arbitrating policy changes. Despite the sometimes-uncertain provenance and pertinence of facts, they serve as tinder to fuel policy choices. The cautious expectation is that verifiable facts can translate into a meeting of minds, as the public stitches an understanding together from many fragments and from the web of relationships among ideas. The idealised objective is a Rousseau-like social contract, where the public and elected representatives intend to collaborate in pursuit of the common good — a squishy concept, at best.

 

Today, anyway, the reality is very different: discourse in the public square often gets trampled, as camps stake out ownership of the politically littered battleground. The combustibility of political back-and-forth makes the exchanges harder, as prickly disputants amplify their differences rather than constructively bridge divides. In the process, facts get shaded by politically motivated groups metaphorically wielding high-decibel bullhorns, reflecting one set or another of political, societal, and cultural norms. Hyperpartisanship displaces bipartisanship. 

 

Consider the case of refugees and migrants arriving cross-border in the United States. The political atmosphere has been heavy with opposing points of view. One camp, described by some as nativist, contends that porous borders threaten the fabric of the nation. They fear marginalisation, believing “fortress America” is the solution. Another, progressive camp contends that the migrants add to the nation’s economy, enrich our already-dynamic multiculturalism, and on humanitarian grounds merit assistance. Yet, the cantankerous rhetorical parrying between the camps continues to enlarge, not narrow, the political gap.

 

Disputes over book bans, racial discrimination, reproductive rights, tax policy, inequality, the role of religion, public demonstrations, gun safety, the rules of democracy, and other normative and transactional wedge issues are equally fraught among intransigent politicians who hold diametrically contrasting views and are immune to persuasion. Such flashpoints are made worse by intra-party, not just cross-party, hubs at boisterous variance with one another — leaving one wondering how best to arrive at a collective of settled norms.

 

Instead of being the anchors of social discourse, real or disputed facts may be used to propagate discord or to disadvantage the “other.” Facts fuel jaundiced competition over political power and control; as the historian and politician Lord Acton said, such “power tends to corrupt and absolute power corrupts absolutely.” Many people complain that this othering is rooted in systemic bias and ranges across race, ethnicity, gender, national origin, language, religion, education, familial pedigree, and socioeconomics. The view is that marginalisation and disenfranchisement result from the polemical fray, which may have been the underlying aim all along.

 

Unfortunately, while the world democratises access to information through the ubiquity of technology, individuals with manipulative purposes may take advantage of those consumers of information who are disinclined or unprepared to thoughtfully question the messaging. That is, what do political narratives really say, who’s formulating the narratives, what are their benign or malign purposes, and who’s entrusted with curating and vetting? Both leftwing and rightwing populism roam freely. It recalls Thomas Paine’s advice in The Rights of Man that “moderation in temper is always a virtue; but moderation in principle is always a vice.” Shrewd advice too often left unheeded in the presidential campaign, advice that in the churn of events has itself become the tinder of the dissent mentioned above.

 

Today, dubious facts are scattered across the communications landscape, steering beliefs, driving confirmation bias, stoking messianic zeal, stirring identity warfare, and fueling ill-informed voting. As Thomas Jefferson observed, the resulting uncertainty short-circuits the capacity of ordinary people to subscribe to the notion “That government is the strongest of which every [citizen] feels himself a part.” A notion foundational to democracy, one might say. Accordingly, the public has to grapple with discerning which politicians are honest brokers, or which might beguile. Nor can the public readily know the workings of social media’s opaque algorithms, which compete for the inside track on the content of candidates’ messaging. Communication skirmishes are underway for political leverage between the Biden and Trump campaigns. 

 

Jettisoning political stridency and hardened positions proves difficult, of course, especially among political evangelists at loggerheads. But it’s doable: The aim of sincere conciliation is to moderate the rancorous political discourse, while not fearing but rather accommodating the unbridled sharing of diverse ideas, which is foundational for democracy operating at its best.  

Monday 6 May 2024

On the Trail of Human Consciousness


By Keith Tidman
 

Daniel Dennett once called consciousness the “last surviving mystery” humankind faces. That may be premature and even a bit hyperbolic, but not by much. At the very least, consciousness ranks among the biggest of the remaining mysteries. Two questions central to this are: Does the source of conscious experience rest solely in the neurophysiology of the brain, reducible to myriad sets of mechanical functions that necessarily conform to physical laws? Or, as some have contended, is consciousness somehow airily, dualistically separate from the brain, existing in some sort of undefinably ethereal dimension? 

Consciousness is an empirical, bridge-like connection to things, events, and conditions, boiling down to external stimuli that require vetting within the brain. Conscious states entail a wide range of human experiences, such as awareness, identity, cognition, wakefulness, sentience, imagination, presence in time and space, perception, enthrallment, emotion, visions of alternative futures, anchors to history, ideation, attention, volition, sense of agency, thought experimentation, self-optimisation, memories, opinions — and much more. Not to forget higher-order states of reality, able to include the social, political, legal, familial, educational, environmental, scientific, and ethical norms of the community. The process includes the brain's ability to orchestrate how the states of consciousness play their roles in harmony. As philosopher Thomas Nagel therefore concluded, “there is something it is like to be [us]” — that something being our sense of identity, acquired through individual awareness, perception, and experience.


The conscious mind empirically, subjectively edits objective reality. In the phrase of David Chalmers, philosopher of mind and cognitive scientist, “there is a whir of information processing” as all that complexly happens. The presence of such states makes it hard, if not impossible, to disbelieve our own existence as just an illusion, even if we have hesitancy about the accuracy of our perception of the presumed objective reality encircling us. Thought, introspection, sensing, knowing, belief, the arrow of perpetual change — as well as the spatial and temporal discernments of the world — contribute to confirming what we are about. It’s us, in an inexorable abundance of curiosity, wondering as we gaze upon the micro to the macro dimensions of the universe.

 

None of these states, however, requires the presence of mysterious goings-on — an “ethereal mind,” operating on a level separate from the neuronal, synaptic activity of the brain. Accordingly, “consciousness is real and irreducible,” as Dennett’s fellow philosopher, John Searle, observed while pointing out that the seat of consciousness is the brain; “you can’t get rid of it.” True enough. The centuries-old Cartesian mind-body distinction, with its suspicious otherworldly spiritual, even religious, underpinnings and motive, has long been displaced by today’s neuroscience, physics, and biology. Today, philosophers of mind cheerfully weigh in on the what-if modeling aspects of human consciousness. But it must be said that the baton for elucidating consciousness, two and a half millennia after the ancient world’s musings on the subject, has been handed off to the natural sciences. And there is every reason to trust the latter will eventually triumph, filling the current explanatory gap — whether the path to ultimate understanding follows a straight line or, perhaps more likely, zigs and zags. A mix of dusky and well-lit alleys.

 

Sensations, like the taste of silky chocolate, the sight of northern lights, the sound of a violin concerto, the smell of a petunia, hunger before an aromatic meal, pleasure from being touched, pain from an accident, fear of dark spaces, roughness of volcanic rock, or happiness while watching children play on the beach, are sometimes called qualia. These are the subjective, qualitative properties of experience, which purportedly differ from one person to another. Each person interpreting, or editing, reality differently, whether only marginally so or perhaps to significant extents — all the while getting close enough to external reality for us to get on with everyday life in workably practical ways. 


So, for example, my experience of an icy breeze might be different from yours because of differences — even microscopic ones — between our respective neurobiological reactions. Such is the subjective nature of experiencing the same thing, at the same time and in the same place. And yet, while qualia might well be, in the words of Chalmers, the “hard problem” in understanding consciousness, they aren’t an insoluble problem. The individualisation of these experiences, or something that seems like them, will likely prove traceable to brain circuitry and activity, requiring us to penetrate the fine granularity of the bustling mind. Consciousness can thus be defined as a blend of what our senses absorb and process, as well as how we resultantly act. Put another way, decisions and behaviours matter.

 

The point is, all this neurophysiological activity doesn’t merely represent the surfacing or emergence or groundswell of consciousness, it is consciousness — both necessary and sufficient. That is, mind and consciousness don’t hover separate from the brain, in oddly spectral form. This steadfastly remains a fundamentally materialist framework, containing the very nucleus of human nature. The promise is that in the process of developing an increasingly better understanding of the complexity — of the nuance and richness — of consciousness, humanity will be provided with a vital key for unlocking what makes us, us.

 

Monday 17 July 2023

When Is a Heap Not a Heap? The Sorites Paradox and ‘Fuzzy Logic’


By Keith Tidman
 

Imagine you are looking at a ‘heap’ of wheat comprising some several million grains and just one grain is removed. Surely you would agree with everyone that afterward you are still staring at a heap. And that the onlookers were right to continue concluding ‘the heap’ remains reality if another grain were to be removed — and then another and another. But as the pile shrinks, the situation eventually gets trickier.

 

If grains continue to be removed one at a time, in incremental fashion, when does the heap no longer qualify, in the minds of the onlookers, as a heap? Which numbered grain makes the difference between a heap of wheat and not a heap of wheat? 

 

Arguably we face the same conundrum if we were to reverse the situation: starting with zero grains of wheat, then incrementally adding one grain at a time, one after the other (n + 1, n + 2 ...). In that case, which numbered grain causes the accumulating grains of wheat to transition into a heap? Put another way, what are the borderlines between true and not true as to pronouncing there’s a heap?

 

What we’re describing here is called the Sorites paradox, invented by the fourth-century BC philosopher Eubulides of Miletus, a member of the Megarian school, named after Euclides of Megara, one of the pupils of Socrates. The school, or group, is famous for paradoxes like this one. ‘Sorites’, by the way, derives not from a particular person, but from the Greek word soros, meaning ‘heap’ or ‘pile’. The focus here is on the boundary between ‘being a heap’ and ‘not being a heap’, which is indistinct when single grains are either added or removed. The paradox is deceptive in appearing simple, even simplistic; yet any number of critically important real-world applications attest to its decided significance.

 

A particularly provocative case in point, exemplifying the central incrementalism of the Sorites paradox, concerns deciding when a fetus transitions into a person. Across the milestones of conception, birth, and infancy, the fetus-cum-person acquires increasing physical and cognitive complexity and sophistication, occurring in successively tiny changes. This involves not just the number of features, but of course also the particular type of features (that is, qualitative factors). Leading us to ask: what are the borderlines between true and not true as to pronouncing there’s a person? As we know, this example of gradualism has led to highly consequential medical, legal, constitutional, and ethical implications being heatedly and tirelessly debated in public forums.

 

Likewise, with regard to this same Sorites-like incrementalism, we might assess which ‘grain-by-grain’ change rises to the level of a ‘human being’ close to the end of a life — when, let’s say, deep dementia increasingly ravages aspects of a person’s consciousness, identity, and rationality, greatly impacting awareness. Or, say, when some other devastating health event results in gradually nearing brain death, and alternative decisions hover perilously over how much to intervene medically, given best-in-practice efforts at a prognosis and taking into account the patient’s and family’s humanity, dignity, and socially recognised rights.

 

Or take the stepwise development of ‘megacomplex artificial intelligence’. Again, this involves consideration of not just ‘how many features’ (n + 1 or n - 1), but also ‘which features’, the latter entailing qualitative factors. The discussion has stirred intense debate over the race for intellectual competitiveness, prompting hyperbolic public alarms about ‘existential risks’ to humanity and civilisation. The machine equivalent of human neurophysiology is speculated to transition, over years of gradual optimisation (and, down the road, even self-optimisation), into human-like consciousness, awareness, and cognition. Leading us to ask: where are the borderlines between true and not true as to pronouncing it has consciousness and greater-than-human intelligence?

 

In the three examples of Sorites ‘grain-by-grain’ incrementalism above — start of life, end of life, and artificial general intelligence — words like ‘human’, ‘consciousness’, ‘perception’, ‘sentience’, and ‘person’ provide grist for neuroscientists, philosophers of mind, ethicists, and AI technologists to work with, until the desired threshold is reached. The limitations of natural language, even in circumstances mainly governed by the prescribed rules of logic and mathematics, might not make it any easier to concretely describe these crystallising concepts.

 

Given the nebulousness of terms like personhood and consciousness, which tend to bob up and down in natural languages like English, bivalent logic — where a statement is either true or false, but not both or in-between — may be insufficient. The Achilles’ heel is that the meaning of these kinds of terms may obscure truth as we struggle to define them. Whereas classical logic says there either is or is not a heap, with no shades in the middle, there’s something called fuzzy logic that scraps bivalence.

 

Fuzzy logic recognises there are both large and subtle gradations between categorically true and categorically false. There’s a continuum, where statements can be partially true and partially false, while also shifting in their truth value. A state of becoming, one might say. A line may thus be drawn between concepts that lie on such continuums. Accordingly, as individual grains of wheat are removed, the heap becomes, in tiny increments, less and less a heap — arriving at a threshold where people may reasonably concur it’s no longer a heap.
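To make the fuzzy-logic idea concrete, here is a minimal sketch in Python of a graded ‘heapness’ membership function. The thresholds LOWER and UPPER are purely illustrative assumptions, not anything the paradox itself fixes; the point is only that the truth value of ‘this is a heap’ can rise smoothly from 0 to 1 rather than flipping at a single grain.

```python
# A minimal sketch of fuzzy 'heapness', under assumed thresholds:
# at or below LOWER grains the pile is definitely not a heap (membership 0.0),
# at or above UPPER grains it definitely is (membership 1.0), and in between
# the truth value rises linearly rather than flipping at one particular grain.

LOWER = 10      # hypothetical: clearly not a heap at or below this count
UPPER = 10_000  # hypothetical: clearly a heap at or above this count

def heapness(grains: int) -> float:
    """Return a degree of truth in [0, 1] for the statement 'this is a heap'."""
    if grains <= LOWER:
        return 0.0
    if grains >= UPPER:
        return 1.0
    return (grains - LOWER) / (UPPER - LOWER)

if __name__ == "__main__":
    for n in (5, 100, 5_000, 20_000):
        print(n, "grains ->", round(heapness(n), 3))
```

Real fuzzy systems use a variety of membership curves (S-shaped, trapezoidal, and so on); the linear ramp here is simply the least cluttered way to show a continuum of partial truth.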

 

That tipping point is key, for vagueness isn’t just a matter of logic, it’s also a matter of knowledge and understanding (a matter of epistemology). In particular, what do we know, with what degree of certainty and uncertainty do we know it, when do we know it, and when does what we know really matter? Also, how do we use natural language to capture all the functionality of that language? Despite the gradations of true and false that we just talked about in confirming or refuting a heap, realistically the addition or removal of just one grain does in fact tip whether it’s a heap, even if we’re not aware which grain it was. Just one grain, that is, ought to be enough in measuring ‘heapness’, even if it’s hard to recognise where that threshold is.

 

Another situation involves the moral incrementalism of decisions and actions: what are the borderlines between true and not true as to pronouncing that a decision or action is moral? An important case is when we regard or disregard the moral effects of our actions. Such as, environmentally, on the welfare of other species sharing this planet, or concerning the effects on the larger ecosystem in ways that exacerbate the extreme outcomes of climate change.

 

Judgments as to the merits of actions are not ethically bivalent, either — by which I mean they do not tidily split between being decidedly good or decidedly bad, leaving out any middle ground. Rather, according to fuzzy logic, judgments allow for ethical incrementalism between what’s unconditionally good at one extreme and what’s unconditionally bad at the other extreme. Life doesn’t work quite so cleanly, of course. As we discussed earlier, the process entails switching out from standard logic to allow for imprecise concepts, and to accommodate the ground between two distant outliers.

 

Oblique concepts such as ‘good versus bad’, ‘being human’, ‘consciousness’, ‘moral’, ‘standards’ — and, yes, ‘heap’ — have very little basis from which to derive exact meanings. A classic example of such imprecision is voiced by physics’ uncertainty principle: we cannot know both the position and the momentum of a particle with arbitrary precision at the same time. As our knowledge of one factor increases in precision, knowledge of the other decreases in precision.

 

The assertion that ‘there is a heap’ becomes less true the more we take grains away from a heap, and becomes increasingly true the more we add grains. Finding the borderlines between true and not true in the sorts of consequential pronouncements above is key. And so, regardless of the paradox’s ancient provenance, the gradualism of the Sorites metaphor underscores its value in making everyday determinations between truth and falsity.


Monday 26 June 2023

Ideas Animate Democracy


By Keith Tidman
 

The philosopher Soren Kierkegaard once advised, ‘Life can only be understood backwards … but it must be lived forward’ — that is, life understood with one eye turned to history, and presaged with the other eye turned to competing future prospects. An observation about understanding and living life that applies across the board, to individuals, communities, and nations. Another way of putting it is that ideas are the grist for thinking not only about ideals but about the richness of learnable history and the alternative futures from which society asserts agency in freely choosing its way ahead. 


Of late, though, we seem to have lost sight of the fact that one way for democracy to wilt is to shunt aside ideas that might otherwise inspire minds to think, imagine, solve, create, discover and innovate — the source of democracy’s intellectual muscularity. For reflexively rebuffing ideas and their sources is really about constraining inquiry and debate in the public square. Instead, there has been much chatter about democracies facing existential grudge matches against exploitative autocratic regimes that issue their triumphalist narratives and view democracy as weak-kneed.


In mirroring the decrees of the Ministry of Truth in the dystopian world of George Orwell’s book Nineteen Eighty-Four — where two plus two equals five, war is peace, freedom is slavery, and ignorance is strength — unbridled censorship and historical revisionism begin and end with the fear of ideas. Ideas snubbed by authoritarians’ heavy hand. The short of it is that prohibitions on ideas end up a jumbled net, a capricious exercise in power and control. Accordingly, much exertion is put into shaping society’s sanctioned norms, where dissent isn’t brooked. On this point, the philosopher Hannah Arendt cautioned, ‘Totalitarianism has discovered a means of dominating and terrorising human beings from within’, while trodden-upon voting and the ardent circulation of propagandistic themes, both of which torque reality, hamper free expression.

 

This tale about prospective prohibitions on ideas is about the choice between a resulting richness of thought and a poverty of thought — a choice we must get right, and can do so only by making it possible for new intellectual shoots to sprout from the raked seedbed. The optimistic expectation from this is that we get to understand and act on firmer notions of what’s real and true. But which reality? One reality is that each idea that’s arbitrarily embargoed delivers yet another chink in democracy’s armour; a very different reality is that each idea, however provocative, allows democracy to flourish.

 

Only a small part of the grappling over ideas is for dominion over which ideas will reasonably prevail long term. The larger motive is to honour the openness of ideas’ free flow, to be celebrated. This exercise brims with questions about knowledge. Like these: What do we know, how do we know it, with what certainty or uncertainty do we know it, how do we confirm or refute it, how do we use it for constructive purposes, and how do we allow for change? Such fundamental questions crisscross all fields of study. New knowledge ferments to improve insight into what’s true. Emboldened by this essential exercise, an informed democracy is steadfastly enabled to resist the siren songs of autocracy.

 

Ideas are accelerants in the public forum. Ideas are what undergird democracy’s resilience and rootedness, on which standards and norms are founded. Democracy at its best allows for the unobstructed flow of different social and political thought, side by side. As Benjamin Franklin, polymath and statesman, prophetically said: ‘Freedom of speech is a principal pillar of a free government’. A lead worth following. In this churn, ideas soar or flop by virtue of the quality of their content and the strength of their persuasion. Democracy allows its citizens to pick which ideas normalise standards — through debate and subjecting ideas to scrutiny, leading to their acceptance or refutation. Acid tests, in other words, of the cohesion and sustainability of ideas. At its best, debate arouses actionable policy and meaningful change.

 

Despite society being buffeted daily by roiling politics and social unrest, democracy’s institutions are resilient. Our institutions might flex under stress, but they are capable of enduring the broadsides of ideological competitiveness as society makes policy. The democratic republic is not existentially imperiled. It’s not fragilely brittle. America’s Founding Fathers set in place hardy institutions, which, despite public handwringing, have endured challenges over the last two-and-a-half centuries. Historical tests of our institutions’ mettle have inflicted only superficial scratches — well within institutions’ ability to rebound again and again, eventually as robust as ever.

 

Yet, as Aristotle importantly pointed out by way of a caveat to democracy’s sovereignty and survivability, 


‘If liberty and equality . . . are chiefly to be found in democracy, they will be attained when all persons share in the government to the utmost.’


A tall order, as many have found, but one that’s worthy and essential, teed up for democracies to assiduously pursue. Democracy might seem scruffy at times. But at its best, democracy ought not fear ideas. Fear that commonly bubbles up from overwrought narrative and unreasoned parochialism, in the form of ham-handed constraints on thought and expression.

 

The fear of ideas is often more injurious than the content of ideas, especially in the shadows of disagreeableness intended to cause fissures in society. Ideas are thus to be hallowed, not hollowed. To countenance contesting ideas — majority and minority opinions alike, forged on the anvil of rationalism, pluralism, and critical thinking — is essential to the origination of constructive policies and, ultimately, how democracy is constitutionally braced.

 

 

Monday 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic, Saint Teresa. In her autobiography, she wrote that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods?  Plato, Euthyphro


Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.

 

Put another way, if what’s morally good or bad is only what the gods arbitrarily make something, called the divine command theory (or divine fiat) — which Euthyphro subscribed to — then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later. 

 

In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over concerns about power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels comprised the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:


‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article reply to Obj. 2).


In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, in asking the following question about whether right and wrong can be known only by divine revelation. He suggested, rather, there ought to be reasons, apart from religious tradition only, why particular behaviour is moral or immoral:

 

‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686). 

 

Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, Qur’an, and Torah, bearing the moral and legal decrees professed to be handed down by God. But even in the situations’ dissimilarity — the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices) — both serve as examples of the ‘divine command theory’. That is, what’s deemed pious is presumed to be the case precisely because God chooses to love it, in line with the theory. That pious something or other is not independently sitting adrift, noncontingently virtuous in its own right, with nothing transcendentally making it so.

 

This presupposes that God commands only what is good. It also presupposes that, for example, things like the giving of charity, the avoidance of adultery, and the refrain from stealing, murdering, and ‘graven images’ have their truth value from being morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts being aimed at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake — that is, apart from God making it so — clearly denies ‘divine command theory’.

 

For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God. Where matters of morality may exist outside of God’s reach, suggesting something other than God being all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.

 

Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who like St. Thomas Aquinas and Averroës believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This being true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’. 

 

Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, presumptuously declared that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a theist of faith to spot the shortsighted dismissiveness of that assertion. After all, an atheist or agnostic might recognise the benevolence, even the categorical need, for adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words, which greatly appeals to many people.

 

Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.

  

Monday 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortes began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, which meant that he and his men would either conquer the Aztec Empire or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers. 



By Keith Tidman

 

The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other, the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (Laches and Symposium), Plato had Socrates, who actually fought in the war, apocryphally recall the battle, bearing on combatants’ strategic choices.

 

One episode recalls a soldier on the front line, awaiting the enemy to attack, pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed to be capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks pointlessly being killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed, anyway, no matter what it does.

 

The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle. His ‘dominant strategy’ being to stay alive and unharmed. However, based on the same line of reasoning, all the soldier’s fellow men-in-arms should decide to flee also, to avoid the inevitability of being cut down, rather than to stand their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.

 

This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of game theory applied to the real world, too. In 1519, the Spanish conqueror Cortéz landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who happened also to greatly outnumber his own force.

 

Instead of counting on the motivation of individual soldiers’ courage or even group esprit de corps, Cortéz scuttled his fleet. His strategy was to remove the risk of the ships tempting his men to retreat rather than fight — and thus, with no other option, to pursue the Aztecs in a fight-or-die (rather than a fight-or-flee) scenario. The calculus for each of Cortéz’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, in brazenly scuttling his ships in the manner of a metaphorical weapon, Cortéz wanted to demonstrate dramatically to the enemy that, for reasons they couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident about the upcoming battle.

 

It’s a striking historical example of one way in which game theory provides means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at best strategies in the framework of their own interests — business, economic, political, etc. — while factoring in what they believe to be the thinking (strategising) of opposite players whose interests may align or differ or even be a blend of both.

 

The term, and the philosophy of game theory, is much more recent, of course, developed in the early twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of the field of economics. Some ten years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes. 

 

Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate. Such as ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute. Or ‘constant-sum’ games, like poker, characterised as pure competition, entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, involve successive moves, and have at least one player with more information to inform and shape his own strategic choices than the information his competitors hold in hand.

 

The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, social sciences, and war. Some features of the competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and trying to discriminate behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.

 

Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to get to a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make it harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, saber-rattling countries to head off runaway arms spending or outright conflict.

 

Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and optional plays, and in which progress is achieved via a string of alternating single moves. The contestant who presages a competitor’s optimal answer to their own move will experience more favourable outcomes than if they try to deduce that their opponent will make a particular move associated with a particular probability ranking.

 

Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each, however, is that the features of the strategising are actively managed by the players rather than left to mere chance, which is why game theory goes several steps farther than probability theory alone.

 

The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.

 

Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 

 

The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and stay mum, with each serving one year. But if they choose to act selfishly in expectation of outmaneuvering the unsuspecting (presumed gullible) partner — which is to say, both prisoners picture themselves going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.
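As a minimal sketch, the sentencing structure just described can be written out as a small payoff table, with a helper that picks each prisoner’s best response. The numbers are the prison terms given in the example above; the code simply makes explicit why confessing is the dominant strategy for a self-interested prisoner, even though mutual silence would leave both better off.

```python
# A minimal sketch of the Prisoner's Dilemma payoffs described above,
# expressed as years in prison (lower is better for the prisoner).

# PAYOFFS[(my_choice, other_choice)] = (my_years, other_years)
PAYOFFS = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"):  (0, 15),
    ("silent",  "confess"): (15, 0),
    ("silent",  "silent"):  (1, 1),
}

def best_response(other_choice: str) -> str:
    """Return the choice that minimises my own sentence, given the other's choice."""
    return min(("confess", "silent"),
               key=lambda mine: PAYOFFS[(mine, other_choice)][0])

if __name__ == "__main__":
    for other in ("confess", "silent"):
        print(f"If the other prisoner chooses to {other}, "
              f"my best response is to {best_response(other)}.")
    # Both lines print 'confess': confessing dominates, yet mutual
    # confession (5, 5) is worse for both than mutual silence (1, 1).
```

The same calculation, applied symmetrically, is what drives both prisoners toward the selfish outcome the paragraph above describes.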


Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they will come to realise there are occasions when their interests are better served by cooperating, the aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude that their interests will be served best by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the occasion arises for such help to be reciprocated.

 

Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplicity of teamwork is broken. Society is worse off — where, as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people therefore need a central government — operating with significant authority — holding people accountable and punishing accordingly, intended to keep citizens and their transactions on the up and up.

 

What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 

 

In dicing real-world ‘games’, people have rationally intuited workable strategies, with those solutions sufficing in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise the outcomes of select intuitions where outcomes matter more. All the while taking into account the opponent and his anticipated strategy, and extracting the highest benefits from choices based on one’s principles and preferences.

 

Monday 1 May 2023

Problems with the Problem of Evil


By Keith Tidman

  

Do we really reside in what the German polymath Gottfried Wilhelm Leibniz referred to as ‘the best of all possible worlds’, picked by God from among an infinite variety of world orders at God’s disposal, based on the greatest number of supposed perfections? (A claim that the French Enlightenment writer Voltaire satirised in his novella Candide.)

 

How do we safely arrive at Leibniz’s sweeping assessment of ‘best’ here, given the world’s harrowing circumstances, from widespread violence to epidemics to famine, of which we’re reminded every day? After all, the Augustinian faith-based explanation for the presence of evil has been punishment for Adam and Eve’s original sin and expulsion from the Garden of Eden. From this emerged Leibniz’s term ‘theodicy’, created from two Greek words for the expression ‘justifying God’ (Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, 1710).


No, there’s a problem … the ‘problem of evil’. If God is all-powerful (omnipotent), all-knowing (omniscient), present in all places (omnipresent), all good and loving (omnibenevolent), and all wise, then why is there evil in the very world that God is said to have designed and created? Why has God not averted or fixed the problem, instead giving evil unrestrained rein and abiding by noninterventionism? There is not just one form of evil, but at least two: moral evil (volitionally wrongful human conduct) and natural evil (ranging from illnesses and other human suffering, to natural law causing ruinous and lethal calamities).

 

There are competitor explanations for evil, of course, like that developed by the second-century Greek bishop Saint Irenaeus, whose rationalisation was that evil presented the population with incentives and opportunities to learn, develop, and evolve toward ever-greater perfection. The shortcoming with this Irenaean description, however, is that it fails to account for the ubiquity and diversity of natural disasters, like tsunamis, volcanoes, earthquakes, wildfires, hurricanes, and many other manifestations of natural law taking its toll around the globe.

 

Yet, it has been argued that even harmful natural hazards like avalanches and lightning, not just moral indiscretions, are part of the plan for people’s moral, epistemic growth, spurring virtues like courage, charity, gratitude, patience, and compassion. It seems that both the Augustinian and Irenaean models of the universe adhere to the anthropic principle that cosmic constants are imperatively fine grained enough (balanced on a sharp edge) to allow for human life to exist at this location, at this point in time.

 

Meanwhile, although some people might conceivably respond to natural hazards and pressing moral hardships by honing their awareness, which some claim, other people are overcome by the devastating effects of the hazards. These outcomes point to another in the battery of explanations for evil, in the reassuring form of a spiritual life after death. Some people assert that such rewards may be expected to tower over mundane earthly challenges and suffering, and that the suffering that moral and natural evil evokes conditions people for the enlightenment of an afterlife. 

 

At this stage, the worldly reasons for natural hazards and moral torment (purportedly the intentions behind a god’s strategy) become apparent. Meanwhile, others argue that the searing realities of, say, the Holocaust or any other genocidal atrocities or savagery or warring in this world are not even remotely mitigated, let alone vindicated, by the anticipated jubilation of life after death, no matter the form that the latter might take.

 

Still another contending explanation is that what we label evil in terms of human conduct is not a separate ‘thing’ that happens to be negative, but rather is the absence of a particular good, such as the absence of hope, integrity, forbearance, friendship, altruism, prudence, principle, and generosity, among other virtues. In short, evil isn’t the opposite of good, but is the nonattendance of good. Not so simple to resolve in this model, however, is the following: Would not a god, as original cause, have had to create the conditions for that absence of good to come to be?

 

Others have asserted that God’s design and the presence of evil are in fact compatible, not a contradiction or intrinsic failing, and not preparation either for development in the here and now or for post-death enlightenment. American philosopher Alvin Plantinga has supported this denial of a contradiction between the existence of an all-capable and all-benevolent (almighty) god and the existence of evil:

 

‘There are people who display a sort of creative moral heroism in the face of suffering and adversity — a heroism that inspires others and creates a good situation out of a bad one. In a situation like this the evil, of course, remains evil; but the total state of affairs — someone’s bearing pain magnificently, for example — may be good. If it is, then the good present must outweigh the evil; otherwise, the total situation would not be good’ (God, Freedom, and Evil, 1977).

 

Or then, as British philosopher John Hick imagines, perhaps evil exists only as a corruption of goodness. Here is Hick’s version of the common premises stated and conclusion drawn: ‘If God is omnipotent, God can prevent evil. If God is perfectly good, God must want to prevent all evil. Evil exists. Thus, God is either not omnipotent or not perfectly good, or both’. It does appear that many arguments cycle back to those similarly couched observations about incidents of seeming discrepancy.

 

Yet others have taken an opposite view, seeing incompatibilities between a world designed by a god figure and the commonness of evil. Here, the word ‘design’ conveys similarities between the evidence of complex (intelligent) design behind the cosmos’s existence and complex (intelligent) design behind many things made by humans, from particle accelerators, quantum computers, and space-based telescopes, to cuneiform clay tablets and the carved code of Hammurabi law.


Unknowability matters, however, to this aspect of design and evil. For the presence, even prevalence, of evil does not necessarily contradict the logical or metaphysical possibility of a transcendental being as designer of our world. That being said, some people postulate that the very existence, as well as the categorical abstractness of qualities and intentions, of any such overarching designer are likely to remain incurably unknowable, beyond confirmation or falsifiability.

 

Although the argument by design has circulated for millennia, it was popularised by the English theologian William Paley early in the nineteenth century. Before him, the Scottish philosopher David Hume shaped his criticism of the design argument by paraphrasing Epicurus: ‘Is God willing to prevent evil, but not able? Then he is impotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Whence then is evil? Is he neither able nor willing? Then why call him God?’ (Dialogues Concerning Natural Religion, 1779).

 

Another in the catalog of explanations of moral evil is itself associated with a provocative claim, which is that we have free will. That is, we are presented with the possibility, not the inevitability, of moral evil. Left to their own unconstrained devices, people are empowered either to freely reject or freely choose immoral decisions or actions, from among a large constellation that includes venality, malice, and injustice. As such, free will is essential to human agency and, by extension, to moral evil (for obvious reasons, leaving natural evil out). Plantinga is among those who promote this free-will defense of the existence of moral evil. 

 

Leibniz was wrong about ours being ‘the best of all possible worlds’. Better worlds are indeed imaginable, where plausibly evil in its sundry guises pales in comparison. The gauntlet as to what those better worlds resemble, among myriad possibilities, idles provocatively on the ground. For us to dare to pick up, perhaps. However, reconciling evil, in the presence of theistic paradoxes like professed omnipotence, omniscience, and omnibenevolence, remains problematic. As Candide asked, ‘If this is the best ... what are the others?’

 

Monday 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He continued to point out, by way of further explanation, that the latest technology metaphor for purportedly representing and trying to understand the brain has consistently shifted over the centuries: for example, from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. With none, frankly, yet serving as anything like good analogs of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.
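
To make the mechanics concrete, here is a minimal Python sketch of the sort of blind lookup the room’s computer performs. The tiny corpus and the function name are invented for illustration only; the point is that a match-and-return procedure yields a fluent reply without a flicker of understanding.

```python
# A toy, purely illustrative version of the Chinese room's blind lookup.
# The corpus pairs the question and answer from the example above; nothing
# here understands Mandarin, it only matches symbols and returns what follows.

corpus = "什么是智慧 了解知识的界限"   # 'What is wisdom?' followed by its stock reply

def room_reply(slip: str, text: str = corpus) -> str:
    """Find the characters on the slip and hand back whatever follows them."""
    position = text.find(slip)
    if position == -1:
        return ""                                 # no match: the room stays silent
    tail = text[position + len(slip):].lstrip()
    return tail.split(" ")[0] if tail else ""

print(room_reply("什么是智慧"))                    # prints 了解知识的界限, with zero comprehension
```

Scale that corpus up to every imaginable string of written Chinese and the replies become arbitrarily convincing, yet the procedure remains syntax all the way down, which is exactly Searle’s point.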

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test, designed by the computer scientist and mathematician Alan Turing in 1950, who himself called it ‘the imitation game’. The controversial challenge Turing posed with the test was whether a computer could exhibit intelligent behaviour indistinguishable from that of a human being, and whether anyone could tell the difference.


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game is for the third party (B) to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that, as Turing framed the inquiry at the time, the question is whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key: it allows the hypothetical computer in Searle’s Chinese room experiment to pass the test, albeit without proving that computers think semantically, a whole other capacity not yet achieved even by today’s strongest AI.
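
For readers who like to see the structure laid bare, here is a rough, schematic Python sketch of the game as Turing describes it. The respondent functions and their canned answers are invented stand-ins; what matters is that the interrogator judges nothing but typed text from behind the wall.

```python
# A schematic rendering of Turing's imitation game: the interrogator sees only
# typed answers from two hidden respondents, labelled X and Y, and must decide
# which is the machine. The respondents and their answers are invented for this sketch.
import random

def human_respondent(question: str) -> str:
    # A real person types an answer at the keyboard.
    return input(f"(hidden person) {question}\n> ")

def machine_respondent(question: str) -> str:
    # A crude stand-in for the machine taking the part of A.
    if "hair" in question.lower():
        return "My hair is shingled, and the longest strands are about nine inches long."
    return "I would rather not say."              # evasive fallback for unscripted questions

def imitation_game(questions) -> None:
    labels = {"X": human_respondent, "Y": machine_respondent}
    if random.random() < 0.5:                     # conceal which label hides the machine
        labels = {"X": machine_respondent, "Y": human_respondent}
    for question in questions:
        for label, respond in labels.items():
            print(f"{label}: {respond(question)}")
    guess = input("Which respondent is the machine, X or Y? ").strip().upper()
    truth = "X" if labels["X"] is machine_respondent else "Y"
    print("Correct." if guess == truth else "Fooled: the machine passed.")

imitation_game(["Please tell me the length of your hair."])
```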

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking, in anticipation of what is to come. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result is either back-and-forth conversation with the chatbots, or the use of carefully crafted natural-language prompts to have the bots write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. These end products rest on the bots having been ‘trained’ on a massive body of text from the internet, and the output sometimes gets reformulated by the bot in response to the user’s revised prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage massive mounds of information from the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier. The distinction carries long-term implications for advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, generative AI will extend AI’s reach across the varied domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, errors, biases, and nonsense in algorithmic decision-making, when they occur, are more problematic in some domains (such as medicine, the military, and lending) than in others. It is worth remembering, though, that gaffes of any magnitude, type, or regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underlying engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day, and even then errors and eyebrow-raising incoherence occur, including inexplicably incomplete and spurious references.

 

Revealingly, the chatbots compose their output by matching the words of a prompt against strings of words found online, then predicting which words most probably follow, building their answers through a form of pattern recognition. This still reflects a computational, rather than a thinking, theory of mind. Sure, what the bots produce would pass the Turing test, but today that is surely a pretty low bar. 
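
To give a flavour of that probabilistic pattern matching, here is a minimal Python sketch of next-word prediction built on nothing more than word-pair counts. Real chatbots rest on vastly larger models and neural networks, but the family resemblance, predicting the next word from what has come before, is the point.

```python
# A toy next-word predictor: count which word tends to follow which in a
# training text, then generate by repeatedly sampling a likely continuation.
# Nothing here understands the sentences it produces.
import random
from collections import Counter, defaultdict

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1                     # tally what follows what
    return followers

def generate(followers: dict, seed: str, length: int = 10) -> str:
    out = [seed]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:
            break                                        # dead end: no known continuation
        choices, weights = zip(*options.items())
        out.append(random.choices(choices, weights=weights)[0])  # weighted pick
    return " ".join(out)

corpus = "wisdom is understanding the limits of knowledge and the limits of certainty"
model = train_bigrams(corpus)
print(generate(model, "the"))   # fluent-looking strings, produced with zero comprehension
```

The output can look plausible precisely because it recycles the statistical shape of its training text, which is the same trick, at toy scale, that makes a chatbot’s answers feel intelligent.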

 

Meantime, people have argued that AI writing reveals telltale markers: it often lacks the varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the careful curation of content, that human beings display when they write. At the moment, anyway, the products of chatbots tend to have a formulaic feel, a shortcoming AI’s algorithms have yet to remediate.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer, and in the program it runs to search for patterns matching the Chinese words, is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. For despite the large differences in the programs’ sophistication that come with the passage of time, what is really happening in both cases is little more than a brute-force search of massive amounts of information in order to predict what the next words likely should be, often getting it right, but sometimes getting it wrong, with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence, particularly what is called ‘artificial general intelligence’ (AGI), will eventually get us there: an analog of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like novelties, if not entirely obedient in their functional execution, and the ‘neural networks’ of feasibly self-optimising artificial general intelligence will match, or elastically stretch beyond, human cognition, forcing a rethinking of the hotbed issue of what consciousness is.