Showing posts with label thought experiment. Show all posts

Monday 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He went on to point out that the favoured technological metaphor for representing and trying to understand the brain has shifted repeatedly over the centuries: from Leibniz, who compared the brain to a mill, to Freud, who compared it to ‘hydraulic and electromagnetic systems’, to the present-day computer. None, frankly, has yet served as anything like a good analogue of the human brain, given what we know today of its neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It begins with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.
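The room’s mechanics can be sketched as a toy program, a deliberate oversimplification for illustration only, in which Searle’s ‘book of instructions’ becomes a lookup table, seeded here with the example exchange above:

```python
# Toy model of Searle's Chinese room: the 'book of instructions' is just a
# lookup table mapping input symbol strings to output symbol strings.
# Nothing in this program understands Chinese; it only matches patterns.
RULE_BOOK = {
    # 'What is wisdom?' -> 'Understanding the limits of knowledge'
    "什么是智慧": "了解知识的界限",
}

def room_occupant(slip: str) -> str:
    """Return the scripted reply for a slip of paper, or nothing on no match."""
    return RULE_BOOK.get(slip, "")

print(room_occupant("什么是智慧"))  # prints: 了解知识的界限
```

From outside the room, the reply is indistinguishable from a fluent speaker’s; inside, there is only symbol matching: Searle’s syntax-without-semantics point in miniature.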

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test, designed by the computer scientist and mathematician Alan Turing in 1950, who himself called it ‘the imitation game’. The controversial challenge the test posed was whether a computer could exhibit intelligent behaviour indistinguishable from that of a human being, and whether anyone could tell the difference.


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game is for the third party (B) to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result is either a back-and-forth conversation with a chatbot, or the use of carefully crafted natural-language prompts to get the bot to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. These end products rest on the bots having been ‘trained’ on a massive body of text from the internet, and output sometimes gets reformulated by the bot in response to the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is closely analogous to what the computer was doing in Searle’s Chinese room forty years earlier. That parallel carries long-term implications for neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, generative AI will expand AI’s reach across the many domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, errors, biases, and nonsense in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. It’s worth remembering, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underlying engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day; and even then, errors and eyebrow-raising incoherence occur, including inexplicably incomplete and spurious references.

 

The chatbots revealingly produce output by matching words from the prompt with strings of words found online, then predicting which words probabilistically follow, building their answers through a form of pattern recognition. What results still mimics a computational, rather than a thinking, theory of mind. Sure, what the bots produce would pass the Turing test, but today that’s surely a pretty low bar.
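That predictive pattern-matching can be illustrated with a toy bigram model, a drastic simplification built on a made-up miniature corpus; real chatbots use vastly larger models, but the principle of emitting statistically probable next words is the same:

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny corpus, then 'write' by
# emitting the most frequent follower. No meaning is involved anywhere,
# only frequencies.
corpus = "the cat sat on the mat and the cat slept near the cat".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word, with no grasp of what it means."""
    counts = followers[word]
    return counts.most_common(1)[0][0] if counts else ""

print(predict_next("the"))  # prints: cat ('cat' follows 'the' 3 times, 'mat' once)
```

The program fluently continues text it has ‘seen’, yet, like the occupant of Searle’s room, it manipulates symbols without understanding a single one of them.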

 

Meanwhile, people have argued that AI’s writing reveals telltale markers: it lacks the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of appropriate content, that human beings often display when they write. For the moment, anyway, the resulting products from chatbots tend to have a formulaic feel, a shortcoming AI’s algorithms have yet to remedy.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer, and in the program it runs to search for patterns matching the Chinese words, is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts, because what’s actually happening, despite the large gap in the programs’ sophistication across the intervening decades, is little more than a brute-force search of massive amounts of information in order to predict what the next words likely should be. Often it gets things right; sometimes it gets them wrong, with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence, particularly what is called ‘artificial general intelligence’ (AGI), will get us there: an analogue of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like novelties, however obedient their functional execution, and the ‘neural networks’ of feasibly self-optimising artificial general intelligence will match or elastically stretch beyond human cognition, forcing the hotbed issues of what consciousness is to be rethought.


Monday 31 October 2022

Beetle in a Box: A Thought Experiment


By Keith Tidman


Let’s hypothesise that everyone in a community has a box containing a ‘beetle’. Each person can peer into only his or her box, and never into anyone else’s. Each person insists, upon looking into their own box, that they know what a ‘beetle’ is.

But there’s a catch: each box might contain something different from some or all of the others; each box might contain something that continually changes; or each box might actually contain nothing at all. Yet upon being asked, each person resolutely continues to use the word ‘beetle’ to describe what’s in their box, refusing, even if probed, to describe more fully what they see, and never showing it. The word ‘beetle’ thus simply means ‘that thing inside a person’s box’.

So, what does the thought experiment, set out by the influential twentieth-century philosopher Ludwig Wittgenstein in his book Philosophical Investigations, tell us about language, mind, and reality?

As part of this experiment, Wittgenstein introduced the concept of a ‘private language’: a language with a vocabulary and structure that only its originator and sole user understands, untranslatable and obscure to everyone else. The original notion of a private (personal) language was as an analogue of what a person might use in attempting to convey his or her unique experiences, perceptions, and senses: the person’s individualised mental state. One criticism of such a personal language, being mostly unfathomable to others, is that it fails the definitional purpose of a working language as we commonly know it: to communicate with others, using mutually agreed-upon and comprehended guidelines.

Notably, however, the idea of a ‘private language’ has been subject to different interpretations over the years, beyond expressing one’s own mental state to others, on account of what some have held are its inherent ambiguities. Even on its surface, such a private language does seem handicapped, inadequate for faithfully representing external reality among multiple users: a language unable to tie external reality to ‘internal’ reality, to a person’s ‘immediate private sensations’, as Wittgenstein put it, such as pain the individual feels, that is, to the user’s subjective, qualitative state of mind. Yet the idea that people’s frames of mind, subjective experiences, and sense of awareness are unknowable by others, or at least uncertainly known, seems to come to us quite naturally.

Conventionally speaking, we become familiar with what something is because of its intrinsic physical characteristics. That ‘something’ has an external, material reality, comfortably and knowingly acknowledged by others in accordance with norms within the community. The something holds to the familiar terms of the ‘public language’ we use to describe it. It conveys knowledge. It denotes the world as we know it, precipitated by the habitual awareness of things and events. There’s a reassuringly objective concreteness to it.

So, if you were to describe to someone else some of the conventional features of, say, a sheet of paper, an airplane, or a dog, we would imagine that other people could fathom, with minimal cognitive effort and without bewilderment, what you were describing. A ‘private language’ can’t do any of that, denying us a universally agreed-upon understanding of what Wittgenstein’s beetle-in-the-box might actually be. To the point about effectiveness, a ‘private language’, where definitions of terms may be arbitrary, unorthodox, imprecise, and unfamiliar, differs greatly from a ‘public language’, where definitions of terms and syntactical form stick to convention.

Meanwhile, such a realisation about the shortcomings of a ‘private language’ points to an analogy applicable to a ‘shared’ (or public) language: What happens in the case of expressing one’s personal, private experiences? Is it even possible to do so in an intelligible fashion? The discussion now pivots to the realm of the mind, interrogating aspects such as perception, appearance, attention, awareness, understanding, belief, and knowledge.

For example, if someone is in pain, or feeling joy, fear, or boredom, what’s actually conveyed and understood in trying to project their situation to other people? It’s likely that only they can understand their own mental state: their pain, joy, fear, or boredom. And any person with whom they are speaking, while perhaps genuinely empathetic and commiserative, in reality can only infer the other individual’s pain while understanding only their own.

Put another way, neither person can look into the other’s ‘box’; neither can reach into the other’s mind and hope to know. There are epistemic (knowledge-related) limits to how familiar we can be with another person’s subjective experience, even to the extent of the experience’s validation. Pain, joy, fear, and boredom are inexpressible and incomprehensible, beyond rough generalisations and approximations, whether one resorts to a ‘private’ or a public language.

What’s important is that subjective feelings obscurely lack form, like the mysterious ‘beetle’. They lack the concrete, external reality mentioned previously, because your feelings and those of the other person are individualised, qualitative, and subjective. They are what philosophy of mind calls qualia, such that your worry, pleasure, pride, and anxiety likely don’t squarely align with mine or the next person’s. We default, as Wittgenstein put it, to a ‘language game’ with consequences, with its own puzzling syntactical rules and lexicon, and with it the game’s challenge of translating reality into precise, logical, decipherable meaning.

All of which echoes Wittgenstein’s counsel against the inchoate, rudimentary notion of a ‘private language’, precisely because it lacks the necessary social, cultural, historical, and semiotic context: a social backdrop whereby a language must be predictably translatable into coherent concepts (with the notable exception of qualia), giving things identifiable, inherent form readily perceived by others, according to the norms of social engagement and shared discourse within a community.

Shape-shifting ‘beetles’ are a convenient analogue of shape-shifting mental states, reflecting the altering ways our qualitative, subjective states of mind influence our choices and behaviours, through which other people develop some sense of our states of mind and how they may define us, a process that, because of its mercurial nature, is seldom reliable. The limitations of Wittgenstein’s ‘private language’ discussed here arguably render such a medium of communication unhelpful to this process.

We make assumptions, based on looking in the box at our metaphorical beetle (the thing or idea or sensation inside), that will uncover a link: a connection between internal, subjective reality — like the pain that Wittgenstein’s theorising demonstrably focused on, but also happiness, surprise, sadness, enthrallment, envy, boredom — and external, objective reality. However, the dynamics of linguistically expressing qualitative, individualised mental states like pain need to be better understood.

So, what truths about others’ states of mind are closed off from us, because we’re restricted to looking at only our own ‘beetle’ (experience, perception, sensation)? And because we have to reconcile ourselves to bridging gaps in our knowledge by imperfectly divining, based on externalities like behaviour and language, what’s inside the ‘boxes’ (minds) of everyone else?

Monday 21 March 2022

Would You Plug Into Nozick’s ‘Experience Machine’?

Clockwork Eyes by Michael Ryan

By Keith Tidman

 

Life may have emotionally whipsawed you. Maybe to the extent that you begin to imagine how life’s experiences might somehow be ‘better’. And then you hear about a machine that ensures you experience only pleasure, and no pain. What’s not to like!


It was the American philosopher Robert Nozick who, in 1974, hypothesised a way to fill in the blanks of our imaginings of a happier, more fulfilled life by creating his classic Experience Machine thought experiment.

 

According to this, we can choose to be hooked up to such a machine that ensures we experience only pleasure, and eliminates pain. Over the intervening years, Nozick offered different versions of the scenario, as did other writers, but here’s one that will serve our purposes:

 

‘Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine [floating in a tank, with electrodes attached to your brain], you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things. . . . While in the tank you won’t know that you’re there; you’ll think it’s all actually happening’.

 

At which point, Nozick went on to ask the key question. If given such a choice, would you plug into the machine for the rest of your life?

 

Maybe if we assume that our view of the greatest intrinsic good is a state of general wellbeing, referred to as welfarism, then on utilitarian grounds it might make sense to plug into the machine. But this theory might itself be a naïve, incomplete summary of what we value — what deeply matters to us in living out our lives — and the totality of the upside and downside consequences of our desires, choices, and actions.

 

Our pursuit of wellbeing notwithstanding, Nozick expects most of us would rebuff his invitation, and by extension rebuff ethical hedonism, with its origins reaching back millennia. We would opt instead to live a life ‘in contact with reality’, as Nozick put it: to take part in experiences authentically of the world, reflecting a reality of greater consequence than a manufactured illusion. That choice originates, at least in part, from a bias toward the status quo. This so-called status quo bias leads some people, if told to imagine their lives to date having been produced by an ‘experience machine’, to choose not to detach from the machine.

 

However, researchers have found many people are reluctant to plug into the machine, and this seems to be due to several factors. Factors beyond individuals finding the thought of plugging in ‘too scary, icky, or alien’, as the philosopher Ben Bramble interestingly characterised the prospect. And beyond such prosaic grounds as apprehension of something going askew: the complex technology could malfunction, the technicians overseeing the process might be sloppy one day, or there might be malign human intrusion (along the lines of the ‘fundamentalist zealots’ Bramble invented), any of which might cause a person’s experience in the machine to go terribly awry.

 

A philosophical reason to refuse being plugged in is that we prefer to do things, not just experience things, the former bringing deeper meaning to life than simply maximising pleasure and minimising pain. So, for example, it’s more rewarding to actually write great plays, visit a foreign land, win chess championships, make new friends, compose orchestral music, terraform Mars, love one’s children, have a conversation with Plato, or invent new thought experiments than only subjectively to think we did: an intuitive preference for tangible achievements and experiences over machine-made, simulated sensations.

 

Another factor in choosing not to plug into the machine may be that we’re apprehensive about the resulting loss of autonomy and free will in sorting choices, making decisions, taking action, and being accountable for consequences. People don’t want to be deprived of the perceived dignity that comes from self-regulation and intentional behaviour. That is, we wouldn’t want to defer to the Experience Machine to make determinations about life on our behalf, such as how to excel at or enjoy activities, without giving us the opportunity to intervene, to veto, to remold as we see fit. An autonomy or agency we prefer, even if all that might cause far more aggrievement than the supposed bliss provided by Nozick’s thought experiment.

 

Further in that vein, sensations are often understood, appreciated, and made real by their opposites. That is to say, in order for us to feel pleasure, arguably we must also experience its contrast: some manner of disappointment, obstacles, sorrow, and pain. So, to feel the pride of hearing our original orchestral composition played to an audience’s adulation, our journey getting there might have been dotted by occasional stumbles, even occasionally critical reviews. Besides, it’s conceivable that a menu only of successes and pleasure might grow tedious, and less and less satisfying with time, in face of its interminable predictability.

 

Human connections deeply matter, too, of course, all part of a life that conforms with Nozick’s notion of maintaining ‘contact with reality’. Yes, as long as we’re plugged in we’d be unaware of the inauthenticity of relationships with the family members and friends simulated by the machine. But the nontrivial fact is that family and friends in the real world — outside the machine — would remain unreachable.

 

Because we’d be blithely unaware of the sadness of not being reachable by family and friends for as long as we’re hooked up to the electrodes, we would have no reason to be concerned once embedded in the experience machine. Yet real family and friends, in the outside world, whom we care about may indeed grieve. The anticipation of such grief by loved ones in the real world may well lead most of us to reject lowering ourselves into the machine for a life of counterfeit relationships.

 

In light of these sundry factors, especially the loss of relationships outside of the device, Nozick concludes that the pursuit of hedonic pleasure in the form of simulations (the constructs of the mind that the Experience Machine would provide in place of objective reality) makes plugging into the machine a lot less attractive. Indeed, he says, it begins to look more like ‘a kind of suicide’.