
Monday 6 May 2024

On the Trail of Human Consciousness


By Keith Tidman
 

Daniel Dennett once called consciousness the “last surviving mystery” humankind faces. That may be premature and even a bit hyperbolic, but not by much. At the very least, consciousness ranks among the biggest of the remaining mysteries. Two questions central to this are: Does the source of conscious experience rest solely in the neurophysiology of the brain, reducible to myriad sets of mechanical functions that necessarily conform to physical laws? Or, as some have contended, is consciousness somehow airily, dualistically separate from the brain, existing in some sort of undefinably ethereal dimension? 

Consciousness is an empirical, bridge-like connection to things, events, and conditions, boiling down to external stimuli that require vetting within the brain. Conscious states entail a wide range of human experiences, such as awareness, identity, cognition, wakefulness, sentience, imagination, presence in time and space, perception, enthrallment, emotion, visions of alternative futures, anchors to history, ideation, attention, volition, sense of agency, thought experimentation, self-optimisation, memories, opinions — and much more. Not to forget higher-order states of reality, able to include the social, political, legal, familial, educational, environmental, scientific, and ethical norms of the community. The process includes the brain's ability to orchestrate how the states of consciousness play their roles in harmony. As philosopher Thomas Nagel therefore concluded, “there is something it is like to be [us]” — that something being our sense of identity, acquired through individual awareness, perception, and experience.


The conscious mind empirically, subjectively edits objective reality. In the phrase of David Chalmers, philosopher of mind and cognitive scientist, “there is a whir of information processing” as all that complexly happens. The presence of such states makes it hard, if not impossible, to dismiss our own existence as just an illusion, even if we hesitate over the accuracy of our perception of the presumed objective reality encircling us. Thought, introspection, sensing, knowing, belief, the arrow of perpetual change — as well as the spatial and temporal discernments of the world — contribute to confirming what we are about. It’s us, in an inexorable abundance of curiosity, wondering as we gaze from the micro to the macro dimensions of the universe.

 

None of these states, however, requires the presence of mysterious goings-on — an “ethereal mind,” operating on a level separate from the neuronal, synaptic activity of the brain. Accordingly, “consciousness is real and irreducible,” as Dennett’s fellow philosopher, John Searle, observed while pointing out that the seat of consciousness is the brain; “you can’t get rid of it.” True enough. The centuries-old Cartesian mind-body distinction, with its suspect otherworldly spiritual, even religious, underpinnings and motives, has long been displaced by today’s neuroscience, physics, and biology. Today, philosophers of mind cheerfully weigh in on the what-if modeling aspects of human consciousness. But it must be said that the baton for elucidating consciousness, two and a half millennia after the ancient world’s musings on the subject, has been handed off to the natural sciences. And there is every reason to trust the latter will eventually triumph, filling the current explanatory gap — whether the path to ultimate understanding follows a straight line or, perhaps more likely, zigs and zags. A mix of dusky and well-lit alleys.

 

Sensations, like the taste of silky chocolate, the sight of northern lights, the sound of a violin concerto, the smell of a petunia, hunger before an aromatic meal, pleasure from being touched, pain from an accident, fear of dark spaces, roughness of volcanic rock, or happiness while watching children play on the beach, are sometimes called qualia. These are the subjective, qualitative properties of experience, which purportedly differ from one person to another. Each person interpreting, or editing, reality differently, whether only marginally so or perhaps to significant extents — all the while getting close enough to external reality for us to get on with everyday life in workably practical ways. 


So, for example, my experience of an icy breeze might be different from yours because of differences — even microscopic ones — between our respective neurobiological reactions. Such is the subjective nature of experience of the same thing, at the same time and in the same place. And yet, qualia might well be, in the words of Chalmers, the “hard problem” in understanding consciousness; but they aren’t an insoluble problem. The individualisation of these experiences, or something that seems like them, will likely prove traceable to brain circuitry and activity, requiring us to penetrate the finer granularity of the bustling mind. Consciousness can thus be defined as a blend of what our senses absorb and process, as well as how we resultantly act. Put another way, decisions and behaviours matter.

 

The point is, all this neurophysiological activity doesn’t merely represent the surfacing or emergence or groundswell of consciousness, it is consciousness — both necessary and sufficient. That is, mind and consciousness don’t hover separate from the brain, in oddly spectral form. This steadfastly remains a fundamentally materialist framework, containing the very nucleus of human nature. The promise is that in the process of developing an ever better understanding of the complexity — of the nuance and richness — of consciousness, humanity will be provided with a vital key for unlocking what makes us, us.

 

Monday 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He went on to point out that the favoured technological metaphor for representing and trying to understand the brain has repeatedly shifted over the centuries: for example, from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. None, frankly, has yet served as anything like a good analog of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.
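To make the purely syntactic character of this exchange concrete, here is a minimal sketch in Python. It is my own illustration, not anything Searle specified: the rule book is a hypothetical lookup table holding one query and its stored continuation, and the function returns that continuation for a recognised string of characters without parsing any meaning at all.

```python
# A toy rendering of the room's procedure: match an incoming string of symbols
# against stored records and hand back whatever string follows. No step below
# involves understanding Chinese; it is string comparison all the way down.

# Hypothetical 'box of symbols': stored query/continuation pairs.
RULE_BOOK = {
    "什么是智慧": "了解知识的界限",  # 'What is wisdom?' -> 'Understanding the limits of knowledge'
}

def chinese_room(slip: str) -> str:
    """Return the canned continuation for a recognised pattern of symbols."""
    # The lookup never parses meaning; it only compares character strings.
    return RULE_BOOK.get(slip, "")  # unrecognised squiggles produce no reply

if __name__ == "__main__":
    print(chinese_room("什么是智慧"))  # prints: 了解知识的界限
```

Anyone running the lookup, like the person in the room, could pass back a fluent-looking answer while understanding none of it.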

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test (Turing himself called it ‘the imitation game’), designed by the computer scientist and mathematician Alan Turing in 1950. The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell the difference?


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.
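As a further illustration of the game’s structure, here is a toy, text-only sketch in Python. The canned answers and the naive judge are my own illustrative assumptions rather than anything in Turing’s paper; the point is simply that the interrogator sees typewritten answers alone, and that Turing’s criterion is whether the judge does any better than chance at spotting the machine.

```python
import random

QUESTION = "Will X please tell me the length of his or her hair?"

def machine_player(question: str) -> str:
    # Player A, here a machine, trying to cause a wrong identification.
    return "My hair is shingled, and the longest strands are about nine inches long."

def human_player(question: str) -> str:
    # Player B, the human helper, answering truthfully.
    return "I am the woman, don't listen to him!"

def naive_interrogator(transcript: dict) -> str:
    # Both answers read as plausibly human, so this judge can do no better
    # than guess which label conceals the machine.
    return random.choice(list(transcript))

def play_round() -> bool:
    """Return True if the interrogator correctly spots the machine."""
    machine_is_x = random.choice([True, False])
    transcript = {
        "X": (machine_player if machine_is_x else human_player)(QUESTION),
        "Y": (human_player if machine_is_x else machine_player)(QUESTION),
    }
    guess = naive_interrogator(transcript)  # label guessed to be the machine
    return (guess == "X") == machine_is_x

if __name__ == "__main__":
    rounds = 10_000
    hit_rate = sum(play_round() for _ in range(rounds)) / rounds
    print(hit_rate)  # roughly 0.5: no better than chance, Turing's benchmark
```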

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking in anticipation of what’s to come. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result is either back-and-forth conversation with the chatbots, or the use of carefully crafted natural-language prompts to get the bots to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. These end products rest on the bots having been ‘trained’ on the massive body of text on the internet, with output sometimes reformulated by the bot in response to the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier. There are long-term implications here for developmental advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, the range of generative AI will expand AI’s reach across the many domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, possible errors, biases, and nonsensical output in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. It is important to remember, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underpinnings of developmental engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including inexplicably incomplete and spurious references.

 

The chatbots revealingly write output by muscularly matching words provided by the prompts with strings of words located online, including words then shown to follow probabilistically, predictively building their answers through a form of pattern recognition. This still mimics computational, rather than thinking, theories of mind. What the bots produce would pass the Turing test, sure, but today that is a pretty low bar.
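A toy sketch in Python may make that pattern-matching move concrete. The tiny corpus and the table of following words below are invented stand-ins for the web-scale text and learned statistical weights of a real chatbot; nothing here describes how production systems are engineered, only the basic step of predicting a next word from words previously seen to follow.

```python
import random
from collections import defaultdict

# Invented toy corpus standing in for the internet-scale training text.
CORPUS = (
    "what is wisdom wisdom is understanding the limits of knowledge "
    "knowledge is justified true belief"
).split()

# Record which word has been seen to follow which: the reusable 'pattern'.
FOLLOWERS: dict[str, list[str]] = defaultdict(list)
for current_word, next_word in zip(CORPUS, CORPUS[1:]):
    FOLLOWERS[current_word].append(next_word)

def generate(prompt_word: str, length: int = 8) -> str:
    """Extend a prompt by repeatedly picking a word seen to follow the last one."""
    words = [prompt_word]
    for _ in range(length):
        candidates = FOLLOWERS.get(words[-1])
        if not candidates:  # no recorded continuation: stop
            break
        words.append(random.choice(candidates))  # probabilistic, not understood
    return " ".join(words)

if __name__ == "__main__":
    print(generate("wisdom"))  # e.g. 'wisdom is understanding the limits of knowledge'
```

At no point does the generator grasp what wisdom or knowledge means; it only continues strings it has seen continued before.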

 

Meantime, people have argued that the AI’s writing reveals telltale markers: it lacks the nuance of varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality that human beings often display when they write, as well as their curation of appropriate content. At the moment, anyway, the resulting products from chatbots tend to have a formulaic feel, posing a remediation challenge for AI’s algorithms.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program his computer runs in order to search for patterns that match the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. Because in both cases, what’s really happening, despite the large differences in the programs’ developmental sophistication arising from the passage of time, is little more than brute-force searches of massive amounts of information in order to predict what the next words likely should be. Often getting it right, but sometimes getting it wrong — with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will get us there: an analog of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like novelties, however obedient their functional execution; the ‘neural networks’ of feasibly self-optimising artificial general intelligence will match up against, or elastically stretch beyond, human cognition; and the hotbed issue of what consciousness is will have to be rethought.


Monday 30 March 2020

Making the Case for Multiculturalism



Posted by Keith Tidman

Multiculturalism and ‘identity politics’ have both overlapping and discrete characteristics. Identity politics, for example, widens out to race, ethnicity, gender, age, sexual orientation, national origin, language, religion, disability, and so forth. Humanity’s mosaic. It’s where, in a shift toward pluralism, barriers dissolve — where sidelined minority groups become increasingly mainstreamed, self-determination acquires steam, and both individual and group rights equally pertain to the ideal.

This situation is historically marked by differences between those who, on the one hand, emphasise individual rights, goods, intrinsic value, liberties, and well-being, where each person’s independence stands highest and apart from cultural belonging, and, on the other hand, the communitarians, who emphasise a group perspective. Communitarians regard the individual as ‘irreducibly social’, to borrow Canadian philosopher Charles Taylor’s shorthand.

On the communitarian view, the individual depends, subordinately, on society. The group perspective needs affirmation, addressing status inequality, with remedies concentrated in political change, redistributive economics, the valuing of cultural self-worth, and other factors. Communitarians assign primacy to collective rights, socialising goods, intrinsic value, liberties, and well-being. In other words, civic virtue — with individuals freely opting in and opting out of the group. Communitarians and individualists thus offer opposed views of how our identities are formed.

But the presumed distinctions between the individual and community may go too far. Rather, reality arguably comprises a coexistent folding together of both liberal individualism and communitarianism in terms of multiculturalism and identity. To this point, people are capable of learning from each other’s ideas, customs, and social behaviour, moving toward an increasingly hybrid, cosmopolitan philosophy based on a new communal lexicon, fostering human advancement.

The English writer (and enthusiastic contributor to Pi’s sister publication, The Philosopher) G. K. Chesterton always emphasised the integrity of this learning process, cautioning:

‘We have never even begun to understand a people until we have found something that we do not understand. So long as we find the character easy to read, we are reading into it our own character’.

Other thinkers point out that cultures have rarely been easily cordoned off or culturally pristine. They contend that groups have always been influenced by others through diverse means, both malign and benign: invasion, colonialism, slavery, commerce, migration, flow of ideas, ideologies, religions, popular culture, and other factors. The cross-pollination has often been reciprocal — affecting the cultural flashpoints, social norms, and future trajectories of both groups.

Globalisation only continues to hasten this process. As the New Zealand philosopher of law Jeremy Waldron puts it, commenting on the phenomenon of cultural overlap:

‘We live in a world formed by technology and trade; by economic, religious, and political imperialism and their offspring; by mass migration and the dispersion of cultural influences’.

How groups reckon with these historical influences, as groups become more pluralistic, deserves attention, so that change can happen more by design than chance.

After all, it’s a high bar to surmount the historic balkanisation of minority cultures and to push back against the negativism of those who trumpet (far too prematurely) multiculturalism’s failure. The political reality is that societies continue to reveal dynamically moving parts. Real-world multiculturalism is, all the time, coalescing into new shapes and continuing to enrich societies.

Multiculturalism in political philosophy involves acknowledging and understanding the fact of diverse cultural moorings in society and the challenges they pose in terms of status, equality, and power — along with remedies. Yet, in this context, the question recurs time and again: has the case really been made for multiculturalism?

The American philosopher John Searle, in the context of education, questions the importance of ‘Western rationalistic tradition’ — where what we know is ‘a mind-independent reality . . . subject to constraints of rationality and logic’. Adding: ‘You do not understand your own tradition if you do not see it in relation to others’.

Charles Taylor, however, sees multiculturalism differently, as an offshoot of liberal political theory, unhampered by heavily forward-leaning ideology. This aligns with postmodernist thinking, distrusting rationalism as to truth and reality. The merits of scepticism, criticism, subjectivism, contextualism, and relativism are endorsed, along with the distinctiveness of individuals and minority groups within society.

Advocates of multiculturalism warn against attempts to shoehorn minority groups into the prevailing culture, or worse. Where today we see rampant nationalism in many corners of the world — suppressing, tyrannizing, and even attempting to stamp out minority communities — eighty years ago Mahatma Gandhi warned of such attempts:

‘No culture can live if it attempts to be exclusive’.

Monday 9 January 2017

Is Consciousness Bound Inextricably by the Brain?

From Qualia to Comprehension

Posted by Keith Tidman
According to the contemporary American philosopher, Daniel Dennett, consciousness is the ‘last surviving mystery’ humankind faces.
Well, that may be overstating human achievements, but at the very least, consciousness ranks among the most consequential mysteries. With its importance acknowledged, does the genesis of conscious experience rest solely in the brain? That is, should investigations of consciousness adhere to the simplest, most direct explanation, where neurophysiological activity accounts for this core feature of our being?

Consciousness is a fundamental property of life—an empirical connection to the phenomenal. Conscious states entail a wide range of (mechanistic) experiences, such as wakefulness, cognition, awareness of self and others, sentience, imagination, presence in time and space, perception, emotions, focused attention, information processing, vision of what can be, self-optimisation, memories, opinions—and much more. An element of consciousness is its ability to orchestrate how these intrinsic states of consciousness express themselves.

None of these states, however, requires the presence of a mysterious dynamic—a ‘mind’ operating dualistically separate from the neuronal, synaptic activity of the brain. In that vein, ‘Consciousness is real and irreducible’, as Dennett’s contemporary, John Searle, observed while pointing out that the seat of consciousness is the brain; ‘you can’t get rid of it’. Accordingly, Cartesian dualism—the mind-body distinction—has long since been displaced by today’s neuroscience, physics, mathematical descriptions, and philosophy.

Of significance, here, is that the list of conscious experiences in the neurophysiology of the brain includes colour awareness (‘blueness’ of eyes), pain from illness, happiness in children’s company, sight of northern lights, pleasure in another’s touch, hunger before a meal, smell of a petunia, sound of a violin concerto, taste of a macaroon, and myriad others. These sensations fall into a category dubbed qualia, their being the subjective, qualitative, ‘introspective’ properties of experience.

Qualia might well constitute, in the words of the Australian cognitive scientist, David Chalmers, the ‘hard problem’ in understanding consciousness; but, I would suggest, they’re not in any manner the ‘insoluble problem’. Qualia indeed pose an enigma for consciousness, but a tractable one. The reality of these experiences—what’s going on, where and how—has not yet yielded to research; however, it’s early. Qualia are likely—with time, new technologies, fresh methodologies, innovative paradigms—to also be traced back to brain activity.

In other words, these experiences are not just correlated to the neurophysiology of the brain serving as a substrate for conscious processes, they are inextricably linked to and caused by brain activity. Or, put another way, neurophysiological activity doesn’t merely represent consciousness, it is consciousness—both necessary and sufficient.

Consciousness is not unique to humans, of course. There’s a hierarchy to consciousness, tagged approximately to the biological sophistication of a species. Depending on how aware, sentient, deliberative, coherent, and complexly arranged any one species might be, consciousness varies in degree, down to the simplest organisms. The cutoff point of consciousness, if any, is debatable. Also, if aliens of radically different intelligences and physiologies, including different brain substrates, are going about their lives in solar systems scattered throughout the universe, they likewise share properties of consciousness.

This universal presence of consciousness is different from the ‘strong’ version of panpsychism, which assigns consciousness (‘mind’) to everything—from stars to rocks to atoms. Although some philosophers through history have subscribed to this notion, there is nothing empirical (measurable) to support it—future investigation notwithstanding, of course. A takeaway from the broader discussion is that the distributed presence of conscious experience precludes any one species, human or alien, from staking its claim to ‘exceptionalism’.

Consciousness, while universal, isn’t unbounded. That said, consciousness might prove roughly analogous to physics’ dark matter, dark energy, force fields, and fundamental particles. It’s possible that the consciousness of intelligent species (with higher-order cognition) is ‘entangled’—that is, one person’s consciousness instantaneously influences that of others across space without regard to distance and time. In that sense, one person’s conscious state may not end where someone else’s begins; instead, consciousness is an integrated, universal grid.

All that said, the universe doesn’t seem to pulse as a single conscious entity or ‘living organism’. At least, it doesn't to modern physicists. On a fundamental and necessary level, however, the presence of consciousness gives the universe meaning—it provides reasons for an extraordinarily complex universe like ours to exist, allowing for what ‘awareness’ brings to the presence of intelligent, sentient, reflective species... like humans.

Yet might not hyper-capable machines too eventually attain consciousness? Powerful artificial intelligence might endow machines with the analog of ‘whole-brain’ capabilities, and thus consciousness. With time and breakthroughs, such machines might enter reality—though not posing the ‘existential threat’ some philosophers and scientists have publicly articulated. Such machines might well achieve supreme complexity—in awareness, cognition, ideation, sentience, imagination, critical thinking, volition, self-optimisation, for example—translatable to proximate ‘personhood’, exhibiting proximate consciousness.

Among what remains of the deep mysteries is the task of achieving a better grasp of the relationship between brain properties and phenomenal properties. The promise is that, in the process of developing a better understanding of consciousness, humanity will be provided with a vital key for unlocking what makes us us.