
Monday, 12 August 2024

The Distressed Spider and Intervention: A Thought Experiment


By Keith Tidman

To intervene, or not to intervene?

 

Philosopher Thomas Nagel set the stage for a curious thought experiment. Nagel described how, while a university professor, he noticed what he considered a puzzling scene play out. It was a spider trapped in … let us say, a sink … in the building housing the philosophy department. The spider, despite defensively scurrying around its tightly limited terrain, seemed condemned throughout the day to being doused with water, incapable of altering its fate — if altering its fate was what it even wanted to do. Weeks passed.

 

As Nagel portrayed the scene, the spider’s “life seemed miserable and exhausting,” which led him to conclude he should “liberate” it, in a dash to freedom and a better life. Seemingly the morally right thing to do, despite the relative insignificance of a single spider. Nagel finally justified intervention on the presumption that the spider could readily find its way back to its spot in the sink if it “didn’t like it on the outside.”

 

But could Nagel’s well-intentioned rescue afford the spider a more meaningful, happier life — assuming, for the sake of argument, the spider could think in such abstract terms? Or was such interventionism haughty and presumptuous? Nagel, pondering higher-level causes and effects, humbly confessed that his emancipation of the spider was done with “much uncertainty and hesitation.”

 

Regardless, Nagel went ahead and reached out with a paper towel in the spider’s direction, which the spider, intentionally or instinctively, grabbed on to with its gangly legs, to be hoisted onto the floor. Thus carefully deposited, however, the spider remained still, even while prodded gently with the paper towel. “Playing dead,” perhaps — and afraid of carelessly being stomped on by people walking around? The next day, Nagel “found it in the same place, his legs shriveled in that way characteristic of dead spiders.”

 

Nagel’s experience, and the thought experiment derived from it, tees up at least two inferences regarding the ground rules governing intervention in others’ lives. On the one hand, no matter how benevolently intended our deeds, intervention might produce unanticipated outcomes. Some ugly. On the other hand, indecisiveness and inaction might likewise result in harm — as the renowned “trolley problem” demonstrates, in which choices, including the option not to redirect the trolley, still lead to some loss of life. In short, indecision is a decision — with repercussions.

 

We therefore have to parse the circumstances and priorities as best we can, deciding either to intercede or to stay removed from the scene. Both choices are swayed by our conspicuous, innately subjective biases as to what makes a life meaningful. And both — intervene or leave alone — are entrapped in the unavoidable moral morass and practical implications of their respective consequences.

 

Nagel’s spider incident was, of course, also a metaphor for the lives of people — and for whether we should judge the merits or demerits of someone’s stage-managed life circumstances, going so far as to urge change. We might perceive such advice as prudent and empowering, even morally right; but maybe in reality the advice is none of those things, and is instead tantamount to the wrong-headed extraction of the “ailing” spider. The next two paragraphs offer everyday, real-world circumstances that might spur intervention. In these and countless other real-world cases, does the proverbial spider warrant extrication?

 

For instance, do we regard someone’s work life as mundane, a dead-end, as beneath the person’s talents? Do we regard someone’s choices regarding nutrition and exercise and other behavioral habits as impairing the person’s health? Or what if we see someone’s level of education as too scant and misfocused relative to modern society’s fast-paced, high-tech needs? Do we fault-findingly regard someone’s choice of a partner to be unfavorable and not life enhancing? Do we consider someone’s activities as embodying calculable risks, to be evaded? Do we deem someone’s financial decisions to be imprudently impulsive?

 

Maybe those “someones,” in being judged, begrudge what they view as the superciliousness of such intercession. Who has the right (the moral authority) to arbitrate, after all, people’s definition of happiness and the meaningfulness of life, and thus choices to make, where there may be few universal truths? Where do resolute biases contaminate decision-making? One possible answer is that we ought to leave the proverbial spider to its fate — to its natural course.

 

But let’s also look at possible, real-world interventionism on a more expansive scale. Do we properly consider both the pragmatic and moral consequences of interceding in matters of the environment, biodiversity, and ecosystems, where life in general has inherent value and decisions are morally freighted? How about, in international relations, the promotion of humanitarian standards, the maintenance of security, and cultural, civilizational affairs? And what about other nations’ domestic and foreign policy decision-making that bears ubiquitously across the interconnected, globalised planet?

 

Even the sunniest of intentions, instilled with empathy and wistful introspection, may turn out ill-informed — absent a full understanding of someone else’s situation, where the setting is key to the person’s happiness and sense of meaningfulness. Perhaps that particular someone did not need to be removed from the fabled appliance, so to speak, in order to scurry off toward safety.

 

Nagel assumed the spider might feel forlorn; but perhaps it didn’t. Maybe it was a case of infelicitous projection or a desire simply to assuage raw instincts. Let’s not forget, the spider died — and did so as the consequence of intervention. The lessons apply to all frames of reference, from the globe to the community to the individual who, we might assume, needs rescuing.

 

The thought experiment prods us to go beyond shallow, short-term consequentialism — beyond what happens right off the bat as the result of intervention — instead to dig into primary principles guiding the verdicts we render. Foundational moral values, personal and societal — even universal — matter greatly in these presumptive decisions.

 

Monday, 15 July 2024

Are We Alone in the Universe, or Not? And Does It Matter?

Peering through Saturn’s rings, the Cassini probe caught a glimpse of a faraway planet and its moon. At a distance of just under 900 million miles, Earth shines bright among the many stars in the sky, distinguished by its bluish tint.

By Keith Tidman

The writer and futurist Arthur C. Clarke once wrote: “Two possibilities exist: Either we are alone in the universe, or we are not. Both are equally terrifying.” 


But are the two alternatives really terrifying? And even if they were, then what might be the upshot?

 

In exploring the possible consequences of Clarke’s thought experiment, I’ll avoid enmeshing us in a discussion of whether extraterrestrials have already visited Earth, or whether we will get to visit their planets in the near term. For the foreseeable future, the distances are too large for that to happen, where suspected extraterrestrial civilisations are thousands, millions, or billions of light-years away. Those distances hamper hunts for signals engaged in by the Search for Extraterrestrial Intelligence (SETI) Institute, which metaphorically dips only an infinitesimally small scoop into the vast cosmic ocean. And such distances hamper interstellar travel.

 

Accordingly, we are currently in no position to respond definitively to the challenge Enrico Fermi, also known as “the architect of the nuclear age,” raised with his lunchtime colleagues at the Los Alamos National Laboratory in New Mexico in 1950, referring to extraterrestrials: “Where is everybody?”

 

One piece of crucial context for our conversation here is that of scale: the known universe is currently thought to be some 93 billion light-years in diameter. Recall that a light-year is a measurement of distance, not time, so that in Earthly ‘miles,’ the cosmic diameter is an easy, but boggling, calculation: 93 billion multiplied by roughly 5.9 trillion miles. Add to this that, in the case of travel or electromagnetic communications (beamed signals) between us and extraterrestrials, the velocity of light is the fixed upper limit — as far as current science is concerned, anyway. All of which is problematic for detecting aliens and their biomarkers or technomarkers, quite apart from anyone engaging in neighbourly interstellar space visitation.
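
For anyone inclined to write that boggling number out, here is a minimal back-of-the-envelope sketch (assuming roughly 5.88 trillion miles per light-year and the 93-billion-light-year figure above; the variable names are purely illustrative):

```python
# Rough back-of-the-envelope arithmetic for the scale described above.
# Assumes ~5.88 trillion miles per light-year and a 93-billion-light-year diameter.
MILES_PER_LIGHT_YEAR = 5.88e12   # approximate miles in one light-year
DIAMETER_LIGHT_YEARS = 93e9      # estimated diameter of the observable universe

diameter_miles = DIAMETER_LIGHT_YEARS * MILES_PER_LIGHT_YEAR
print(f"Diameter: {diameter_miles:.2e} miles")  # about 5.5e+23 miles
```

That comes to roughly 5.5 × 10²³ miles, a number with no everyday referent.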

 

Yet, in a universe kickstarted some 13.8 billion years ago — with hundreds of billions of galaxies, and trillions of stars and planets (many of those exoplanets conceivably habitable, even if not twins of our world) — it’s surely arguable that extraterrestrial civilisations, carbon-based or differently constituted physically, are out there, similarly staring toward the skies, quizzically pondering. Alien cosmologists asking, “Where is everybody?,” making great strides developing their own technology, and calculating probabilities for sundry constants and variables assumed necessary for technologically advanced life to prosper elsewhere.

 

There are two key assumptions in asking whether we are alone in the universe or are instead among teeming alien life strewn throughout it. The first assumption, of a general nature, is to define ourselves as a conscious, intelligent, sophisticated species; the second is to assume the extraterrestrials we envision in our discussion are likewise conscious and intelligent and sophisticated — at least equally or maybe considerably more so, options we’ll explore.

 

A third assumption is an evolutionary process, transitioning from physics to chemistry to biology to consciousness. Higher-order consciousness is presumed to be the evolutionary apex both for our species — what it is like to be us — and for extraterrestrials — what it is like to be them. Consciousness may end up the evolutionary apex for our and their machine technology, too. Given that higher-order consciousness is central, we need a baseline for what we mean by the term. Taking a physicalist or materialist point of view, the mind and consciousness are rooted in the neurophysiological activity of the brain, reducible to one and the same. This, rather than existing dualistically in some ethereal, transcendental state separate from the brain, as has sometimes been mythologized.

 

As a placeholder here, consciousness is assumed to be fundamentally similar in its range of domains both for our species and for extraterrestrials, comprising variations of these features: experience, awareness, perception, identity, sentience, thought experimentation, emotion, imagination, innovation, curiosity, memory, chronicled past, projected future, executive function, curation, normative idealism, knowledge, understanding, cognition, metacognition — among others. On these important fronts, the features’ levels of development between us and extraterrestrials may well differ in form and magnitude.

 

As for one of Arthur C. Clarke’s alternative scenarios — that our species is alone in the universe — I can’t help but wonder why, then, the universe is so old, big, and still rapidly growing, if the cosmic carnival is experienced by us alone. We might scratch our heads over the seeming lack of sense in that, whereby the imposing panorama captured by space-based telescopes dwarfs us. We might, therefore, construe that particular scenario as favouring an exceptional place for our species in the otherwise unoccupied cosmic wonderment, or in a different (and more terrifying?) vein suggesting our presence is inconsequential.

 

That is, neither aloneness nor uniqueness necessarily equates to the specialness of a species; to the contrary, it may amount to a trifling one-off situation, one in which we have to come to grips with the indeterminacy of why there exists this majestic display of light-years-sized star nurseries, galaxies rushing toward or away from one another, the insatiability of hungry supermassive black holes, supernovas sending ripples through the faraway reaches of spacetime, and so much more.

 

As for the possibility of sophisticated other life in the universe, we might turn to the so-called anthropic principle for the possible how and why of such occurrences. The principle posits that many constants of the Earth, of the solar system, of the Milky Way, and of the universe are so extraordinarily fine-tuned that only under those conditions might conscious, intelligent, advanced life like ours ever have evolutionarily come into being.

 

The universe would be unstable, as the anthropic principle says, if any of those parameters were to shift even a minuscule amount, the cosmos being like a pencil balanced precariously on its pointed tip. It’s likely, therefore, that our species is not floating alone in an unimaginably vast, roiling but barren cosmic sea; on a more expansive reading of the anthropic principle, the same fine-tuning makes the creation and sustenance of extraterrestrial life possible, too, as fellow players in the cosmic froth. Fine-tuned, after all, doesn’t necessarily equate to rare.

 

We might thus wonder about the consequences for our self-identity and image if some among these teeming numbers of higher-order intelligent extraterrestrials inhabiting the universe got a developmental jumpstart on our species’ civilisation of a million or more years. It’s reasonable to assume that those species would have experienced many-orders-of-magnitude advances biologically, scientifically, technologically, culturally, and institutionally, fundamentally skewing how humanity perceives itself.

 

The impact of these realities on human self-perception might lead some to worry over the glaring inequality and possibly perceived menace, resulting in dents in the armour of our persistent self-exceptionalism and raising larger questions about our purpose. These are profoundly philosophical considerations. We might thereby opt to capitulate, grasping at straws of self-indulgent excuses. Yet extraterrestrials capable of interstellar travel might conclude — whether for benign purposes (e.g., development, enlightenment, resource sharing), or for malign ones (e.g., hegemonism, hubris, manifest destiny, self-exceptionalism, colonisation), or for a hybrid of reasons — that interventionism, with its mix of calculated and unpremeditated consequences, is the natural course.

 

Our reactions to gargantuan inter-species differences might range from giddy exceptionalism at one end to dimmed significance at the other. On a religious front, a crisis might ensue in the presence of remarkably advanced extraterrestrials, influencing factors surrounding faith, creeds, dicta, values, patriarchy. Some of our religious constructs — scriptures, symbology, philosophies — might collapse as shallow affectations. For example, in light of hyper-advanced extraterrestrials, our history of expressing religious imagery in anthropomorphic terms (our species described doctrinally as being “in God’s image,” for example) may no longer make sense, fundamentally altering belief systems.

 

We would have to revisit the principles of ethics, including the degree to which ethics are culturally and societally contingent. Or the impact might lead to our being elated that life has advanced to such a remarkable degree, eager for the benefits it might mean for our species — to model what seems to have worked magnificently for a cutting-edge alien civilisation. The potential for learning vastly advanced natural science, technology, and societal paradigms would be immense, where, for instance, extraterrestrials might be hybrids of the best of biology and the best of machines.

 

As potentially confounding as either of Clarke’s scenarios might prove, neither need be terrifying; instead, both have the potential to be exhilarating. But let me toss one last unavoidable constant into the cosmic cauldron. And this is the concept of entropy — the irreversibly increasing (net) disorder within a closed, isolated system like the universe, with its expanding galactic and stellar separation accelerating toward a thermodynamic demise. Entropy is a fact of life of the universe: providing an expiry date, and eventually rendering everything extinct. The end of history, the end of physics — and the end of metaphysics.

 

Monday, 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He continued to point out, by way of further explanation, that the latest technology metaphor for purportedly representing and trying to understand the brain has consistently shifted over the centuries: for example, from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. With none, frankly, yet serving as anything like good analogs of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.
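
To make vivid how purely syntactic that matching is, here is a minimal sketch, in Python, of a toy ‘room’ that simply looks a query up in a corpus and hands back whatever text happens to follow. It is an illustration only, not Searle’s own formulation: the corpus contents and the function name chinese_room_reply are hypothetical placeholders.

```python
# A toy sketch of the purely syntactic matching described above.
# The "corpus" stands in for the massive database of written Chinese;
# the room returns whatever text follows a matched string, with no
# understanding of what any of it means.
corpus = "什么是智慧 了解知识的界限"   # illustrative placeholder text

def chinese_room_reply(query: str, records: str) -> str:
    """Return the words that immediately follow a match for the query, if any."""
    position = records.find(query)            # brute-force pattern match, no semantics
    if position == -1:
        return ""                             # no match found in the records
    start = position + len(query)
    return records[start:start + 20].strip()  # hand back the following string of words

print(chinese_room_reply("什么是智慧", corpus))  # prints the text that follows the query
```

A person outside the room, seeing only the well-formed reply, might reasonably conclude the room understands Chinese; nothing in the lookup above warrants that conclusion.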

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test (which Turing himself called ‘the imitation game’), designed by the computer scientist and mathematician Alan Turing in 1950. The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell the difference?


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game is for the third party (B) to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking in anticipation of what’s to come. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result being either back-and-forth conversations with the chatbots, or the use of carefully crafted natural-language input to prompt the bots to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. End products are based on the bots having been ‘trained’ on the massive body of text on the internet, with output sometimes reformulated by the bot based on the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier. With long-term implications for developmental advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, the range of generative AI will expand AI’s reach across the many domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, possible errors and biases and nonsensicalness in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. Importantly remembering, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underpinnings of developmental engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including unexplainably incomplete and spurious references.

 

The chatbots revealingly write output by muscularly matching words provided by the prompts with strings of words located online, including words shown probabilistically to follow, predictively building their answers through a form of pattern recognition. This still mimics computational, rather than thinking, theories of mind. Sure, what the bots produce would pass the Turing test, but today that’s a pretty low bar.
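
As a minimal sketch of that kind of prediction-by-pattern, consider the toy next-word predictor below. It is an illustration only: the tiny training text and the function name predict_next are hypothetical stand-ins for the internet-scale corpora and far more elaborate models that real chatbots rely on.

```python
# A toy next-word predictor, in the spirit of the pattern matching described above.
# It counts which word has tended to follow which (a bigram model) and always
# proposes the most frequent follower, with no grasp of meaning.
from collections import Counter, defaultdict

training_text = "the spider in the sink and the spider on the floor"

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1    # tally what follows each word

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word`, or '' if the word is unseen."""
    if word not in follows:
        return ""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'spider', the most common follower in this tiny text
```

Scaled up by many orders of magnitude, this is still prediction from patterns of co-occurrence rather than comprehension, which is the nub of the comparison with Searle’s room.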

 

Meantime, people have argued that the AI’s writing reveals markers, such as lacking the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of appropriate content, that human beings often display when they write. At the moment, anyway, the resulting products from chatbots tend to present a formulaic feel, posing challenges to AI’s algorithms for remediation.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program his computer runs in order to search for patterns that match the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. Because in both cases, what’s really happening, despite the large differences in the programs’ developmental sophistication arising from the passage of time, is little more than brute-force searches of massive amounts of information in order to predict what the next words likely should be. Often getting it right, but sometimes getting it wrong — with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will get us there: an analog of the human brain, with an understanding of semantic content. At that point today’s chatbots will look like mere novelties, however obedient their functional execution, and ‘neural networks’ of feasibly self-optimising artificial general intelligence will match up against, or elastically stretch beyond, human cognition, forcing the hotbed issues of what consciousness is to be rethought.


Monday, 31 October 2022

Beetle in a Box: A Thought Experiment


By Keith Tidman


Let’s hypothesise that everyone in a community has a box containing a ‘beetle’. Each person can peer into only his or her box, and never into anyone else’s. Each person insists, upon looking into their own box, that they know what a ‘beetle’ is.

But there’s a catch: Each box might contain something different from some or all the others; each box might contain something that continually changes; or each box might actually contain nothing at all. Yet upon being asked, each person resolutely continues to use the word ‘beetle’ to describe what’s in their box. Refusing, even if probed, to describe more fully what they see, and never showing it. The word ‘beetle’ thus simply means ‘that thing inside a person’s box’.

So, what does the thought experiment, set out by the influential twentieth-century philosopher Ludwig Wittgenstein in his book Philosophical Investigations, tell us about language, mind, and reality?

As part of this experiment, Wittgenstein introduced the concept of a ‘private language’. That is, a language with a vocabulary and structure that only its originator and sole user understands, all the while untranslatable and obscure to everyone else. The original notion of a private (personal) language was that it would be analogous to what a person might use in attempting to convey his or her unique experiences, perceptions, and senses — the person’s individualised mental state. However, one criticism of such a personal language, by reason of its being mostly unfathomable to others, is that it fails the definitional purpose of a working language as we commonly know it: to communicate with others, using mutually agreed-upon and comprehended guidelines.

Notably, however, the idea of a ‘private language’ has been subject to different interpretations over the years — besides in expressing to others one’s own mental state — on account of what some people have held are its inherent ambiguities. Even on its surface, such a private language does seem handicapped, inadequate for faithfully representing external reality among multiple users. A language unable to tie external reality to ‘internal’ reality — to a person’s ‘immediate private sensations’, as Wittgenstein put it, such as pain the individual feels. That is, to the user’s subjective, qualitative state of mind. Yet, the idea that people’s frames of mind, subjective experiences, and sense of awareness are unknowable by others, or at least uncertainly known, seems to come to us quite naturally.

Conventionally speaking, we become familiar with what something is because of its intrinsic physical characteristics. That ‘something’ has an external, material reality, comfortably and knowingly acknowledged by others in keeping with the norms of the community. The something holds to the familiar terms of the ‘public language’ we use to describe it. It conveys knowledge. It denotes the world as we know it, precipitated by the habitual awareness of things and events. There’s a reassuringly objective concreteness to it.

So, if you were to describe to someone else some of the conventional features of, say, a sheet of paper or of an airplane or of a dog, we would imagine that other people could fathom, with minimal cognitive effort and without bewilderment, what the item you were describing was. A ‘private language’ can’t do any of that, denying us a universally agreed-upon understanding of what Wittgenstein’s beetle-in-the-box might actually be. To the point about effectiveness, a ‘private language’ — where definitions of terms may be arbitrary, unorthodox, imprecise, and unfamiliar — differs greatly from a ‘public language’ — where definitions of terms and syntactical form stick to conventional doctrine.

Meanwhile, such a realisation about the shortcomings of a ‘private language’ points to an analogy applicable to a ‘shared’ (or public) language: What happens in the case of expressing one’s personal, private experiences? Is it even possible to do so in an intelligible fashion? The discussion now pivots to the realm of the mind, interrogating aspects such as perception, appearance, attention, awareness, understanding, belief, and knowledge.

For example, if someone is in pain, or feeling joy, fear, or boredom, what’s actually conveyed and understood in trying to project their situation to other people? It’s likely that only they can understand their own mental state: their pain, joy, fear, or boredom. And any person with whom they are speaking, while perhaps genuinely empathetic and commiserative, in reality can only infer the other individual’s pain while understanding only their own.

Put another way, neither person can look into the other’s ‘box’; neither can reach into the other’s mind and hope to know. There are epistemic (knowledge-related) limits to how familiar we can be with another person’s subjective experience, even to the extent of the experience’s validation. Pain, joy, fear, and boredom are inexpressible and incomprehensible, beyond rough generalizations and approximations, whether resorting to either a ‘private’ or public language.

What’s important is that subjective feelings obscurely lack form — like the mysterious ‘beetle’. They lack the concrete, external reality mentioned previously. The reason being that your feelings and those of the other person are individualised, qualitative, and subjective. They are what philosophy of mind calls qualia. Such that your worry, pleasure, pride, and anxiety likely don’t squarely align with mine or the next person’s. Defaulting, as Wittgenstein put it, to a ‘language game’ with consequences, with its own puzzling syntactical rules and lexicon. And, as such, the game’s challenge is to translate reality into precise, logical, decipherable meaning.

All of which echoes Wittgenstein’s counsel against the inchoate, rudimentary notion of a ‘private language’, precisely because of its lacking necessary social, cultural, historical, and semiotic context. A social backdrop whereby a language must be predictably translatable into coherent concepts (with the notable exception of qualia). Such as giving things identifiable, inherent form readily perceived by others, according to the norms of social engagement and shared discourse among people within a community.

Shape-shifting ‘beetles’ are a convenient analogue of shape-shifting mental states. They reflect the altering ways our qualitative, subjective states of mind influence our choices and behaviours, through which other people develop some sense of our states of mind and of how they may define us — a process that, because of its mercurial nature, is seldom reliable. The limitations discussed here of Wittgenstein’s ‘private language’ arguably render such a medium of communication unhelpful to this process.

We make assumptions, based on looking in the box at our metaphorical beetle (the thing or idea or sensation inside), that will uncover a link: a connection between internal, subjective reality — like the pain that Wittgenstein’s theorising demonstrably focused on, but also happiness, surprise, sadness, enthrallment, envy, boredom — and external, objective reality. However, the dynamics of linguistically expressing qualitative, individualised mental states like pain need to be better understood.

So, what truths about others’ states of mind are closed off from us, because we’re restricted to looking at only our own ‘beetle’ (experience, perception, sensation)? And because we have to reconcile ourselves to trying to bridge gaps in our knowledge by imperfectly divining, based on externalities like behaviour and language, what’s inside the ‘boxes’ (minds) of everyone else?

Monday, 21 March 2022

Would You Plug Into Nozick’s ‘Experience Machine’?

Clockwork Eyes by Michael Ryan

By Keith Tidman

 

Life may have emotionally whipsawed you. Maybe to the extent that you begin to imagine how life’s experiences might somehow be ‘better’. And then you hear about a machine that ensures you experience only pleasure, and no pain. What’s not to like!


It was the American philosopher Robert Nozick who, in 1974, hypothesised a way to fill in the blanks of our imaginings of a happier, more fulfilled life by creating his classic Experience Machine thought experiment.

 

According to this, we can choose to be hooked up to such a machine that ensures we experience only pleasure, and eliminates pain. Over the intervening years, Nozick offered different versions of the scenario, as did other writers, but here’s one that will serve our purposes:

 

‘Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine [floating in a tank, with electrodes attached to your brain], you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things. . . . While in the tank you won’t know that you’re there; you’ll think it’s all actually happening’.

 

At which point, Nozick went on to ask the key question. If given such a choice, would you plug into the machine for the rest of your life?

 

Maybe if we assume that our view of the greatest intrinsic good is a state of general wellbeing, referred to as welfarism, then on utilitarian grounds it might make sense to plug into the machine. But this theory might itself be a naïve, incomplete summary of what we value — what deeply matters to us in living out our lives — and the totality of the upside and downside consequences of our desires, choices, and actions.

 

Our pursuit of wellbeing notwithstanding, Nozick expects most of us would rebuff his invitation and by extension rebuff ethical hedonism, with its origins reaching back millennia. Our opting instead to live a life ‘in contact with reality’, as Nozick put it. That is, to take part in experiences authentically of the world — reflecting a reality of greater consequence than a manufactured illusion. A choice that originates, at least in part, from a bias toward the status quo. This so-called status quo bias leads some people — if told to imagine their lives to date having been produced by an ‘experience machine’ — to choose not to detach from the machine.

 

However, researchers have found many people are reluctant to plug into the machine. This seems to be due to several factors. Factors beyond individuals finding the thought of plugging in too ‘scary, icky, or alien’, as philosopher Ben Bramble interestingly characterised the prospect. And beyond such prosaic grounds as apprehension of something askew happening. For example, either the complex technology could malfunction, or the technicians overseeing the process might be sloppy one day, or there might be malign human intrusion (along the lines of the ‘fundamentalist zealots’ that Bramble invented) — any of which might cause a person’s experience in the machine to go terribly awry.

 

A philosophical reason to refuse being plugged in is that we prefer to do things, not just experience things, the former bringing deeper meaning to life than simply figuring out how to maximise pleasure and minimise pain. So, for example, it’s more rewarding to objectively (actually) write great plays, visit a foreign land, win chess championships, make new friends, compose orchestral music, terraform Mars, love one’s children, have a conversation with Plato, or invent new thought experiments than only subjectively think we did. An intuitive preference we have for tangible achievements and experiences over machine-made, simulated sensations.

 

Another factor in choosing not to plug into the machine may be that we’re apprehensive about the resulting loss of autonomy and free will in sorting choices, making decisions, taking action, and being accountable for consequences. People don’t want to be deprived of the perceived dignity that comes from self-regulation and intentional behaviour. That is, we wouldn’t want to defer to the Experience Machine to make determinations about life on our behalf, such as how to excel at or enjoy activities, without giving us the opportunity to intervene, to veto, to remold as we see fit. An autonomy or agency we prefer, even if all that might cause far more aggrievement than the supposed bliss provided by Nozick’s thought experiment.

 

Further in that vein, sensations are often understood, appreciated, and made real by their opposites. That is to say, in order for us to feel pleasure, arguably we must also experience its contrast: some manner of disappointment, obstacles, sorrow, and pain. So, to feel the pride of hearing our original orchestral composition played to an audience’s adulation, our journey getting there might have been dotted by occasional stumbles, even occasionally critical reviews. Besides, it’s conceivable that a menu only of successes and pleasure might grow tedious, and less and less satisfying with time, in face of its interminable predictability.

 

Human connections deeply matter, too, of course, all part of a life that conforms with Nozick’s notion of maintaining ‘contact with reality’. Yes, as long as we’re plugged in we’d be unaware of the inauthenticity of relationships with the family members and friends simulated by the machine. But the nontrivial fact is that family and friends in the real world — outside the machine — would remain unreachable.

 

Because we’d be blithely unaware of the sadness of not being reachable by family and friends for as long as we’re hooked up to the electrodes, we would have no reason to be concerned once embedded in the experience machine. Yet real family and friends, in the outside world, whom we care about may indeed grieve. The anticipation of such grief by loved ones in the real world may well lead most of us to reject lowering ourselves into the machine for a life of counterfeit relationships.

 

In light of these sundry factors, especially the loss of relationships outside of the device, Nozick concludes that the pursuit of hedonic pleasure in the form of simulations — the constructs of the mind that the Experience Machine would provide in place of objective reality — makes plugging into the machine a lot less attractive. Indeed, he says, it begins to look more like ‘a kind of suicide’.