
Monday 15 July 2024

Are We Alone in the Universe, or Not? And Does It Matter?

Peering through Saturn’s rings, the Cassini probe caught a glimpse of a faraway planet and its moon. At a distance of just under 900 million miles, Earth shines bright among the many stars in the sky, distinguished by its bluish tint.

By Keith Tidman

The writer and futurist Arthur C. Clarke once wrote: “Two possibilities exist: Either we are alone in the universe, or we are not. Both are equally terrifying.” 


But are the two alternatives really terrifying? And even if they were, then what might be the upshot?

 

In exploring the possible consequences of Clarke’s thought experiment, I’ll avoid enmeshing us in a discussion of whether extraterrestrials have already visited Earth, or whether we will get to visit their planets in the near term. For the foreseeable future, the distances are simply too great: suspected extraterrestrial civilisations would be thousands, millions, or billions of light-years away. Those distances hamper the signal searches conducted by the Search for Extraterrestrial Intelligence (SETI) Institute, which metaphorically dips only an infinitesimally small scoop into the vast cosmic ocean. And such distances hamper interstellar travel.

 

Accordingly, we are currently in no position to respond definitively to the challenge Enrico Fermi, also known as “the architect of the nuclear age,” raised with his lunchtime colleagues at the Los Alamos National Laboratory in New Mexico in 1950, referring to extraterrestrials: “Where is everybody?”

 

One piece of crucial context for our conversation here is that of scale: the known universe is currently thought to be some 93 billion light-years in diameter. Recall that a light-year is a measurement of distance, not time, so that in Earthly ‘miles,’ the cosmic diameter is an easy, but boggling, calculation: 93 billion multiplied by roughly 5.9 trillion miles. Add that, in the case of travel or electromagnetic communications (beamed signals) between us and extraterrestrials, the velocity of light is the fixed upper limit — as far as current science is concerned, anyway. All of which is problematic for detecting aliens and their biosignatures or technosignatures, quite apart from anyone engaging in neighbourly interstellar space visitation.
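
Worked out as a rough, order-of-magnitude estimate (taking one light-year as about 5.9 trillion miles), the figure comes to approximately:

\[ 93 \times 10^{9} \ \text{light-years} \times 5.9 \times 10^{12} \ \tfrac{\text{miles}}{\text{light-year}} \approx 5.5 \times 10^{23} \ \text{miles} \]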

 

Yet, in a universe kickstarted some 13.8 billion years ago — with hundreds of billions of galaxies, and trillions of stars and planets (many of those exoplanets conceivably habitable, even if not twins of our world) — it’s surely arguable that extraterrestrial civilisations, carbon-based or differently constituted physically, are out there, similarly staring toward the skies and quizzically pondering: alien cosmologists asking, “Where is everybody?,” making great strides developing their own technology, and calculating probabilities for the sundry constants and variables assumed necessary for technologically advanced life to prosper elsewhere.

 

There are two key assumptions in asking whether we are alone in the universe or among teeming alien life strewn throughout it. The first assumption, of a general nature, is to define ourselves as a conscious, intelligent, sophisticated species; the second is to assume the extraterrestrials we envision in our discussion are likewise conscious, intelligent, and sophisticated — at least equally so, or maybe considerably more, options we’ll explore.

 

A third assumption is an evolutionary process, transitioning from physics to chemistry to biology to consciousness. Higher-order consciousness is presumed to be the evolutionary apex both for our species — what it is like to be us — and for extraterrestrials — what it is like to be them. Consciousness may end up the evolutionary apex for our and their machine technology, too. Given that higher-order consciousness is central, we need a baseline for what we mean by the term. Taking a physicalist or materialist point of view, the mind and consciousness are rooted in the neurophysiological activity of the brain, reducible to one and the same. This, rather than existing dualistically in some ethereal, transcendental state separate from the brain, as has sometimes been mythologized.

 

As a placeholder here, consciousness is assumed to be fundamentally similar in its range of domains both for our species and for extraterrestrials, comprising variations of these features: experience, awareness, perception, identity, sentience, thought experimentation, emotion, imagination, innovation, curiosity, memory, chronicled past, projected future, executive function, curation, normative idealism, knowledge, understanding, cognition, metacognition — among others. On these important fronts, the features’ levels of development between us and extraterrestrials may well differ in form and magnitude.

 

As for one of Arthur C. Clarke’s alternative scenarios — that our species is alone in the universe — I can’t help but wonder why, then, the universe is so old, so big, and still rapidly expanding, if the cosmic carnival is experienced by us alone. We might scratch our heads over the seeming lack of sense in that, as the imposing panorama captured by space-based telescopes dwarfs us. We might, therefore, construe that particular scenario as favouring an exceptional place for our species in the otherwise unoccupied cosmic wonderment, or in a different (and more terrifying?) vein as suggesting our presence is inconsequential.

 

That is, neither aloneness nor uniqueness necessarily equates to the specialness of a species; to the contrary, it may amount to a trifling one-off situation. We would then have to come to grips with never knowing why there is this majestic display of light-years-wide stellar nurseries, galaxies rushing toward or away from one another, insatiably hungry supermassive black holes, supernovas sending ripples through the faraway reaches of spacetime, and so much more.

 

As for the possibility of sophisticated other life in the universe, we might turn to the so-called anthropic principle for the possible how and why of such occurrences. The principle posits that many constants of the Earth, of the solar system, of the Milky Way, and of the universe are so extraordinarily fine-tuned that only under those precise conditions might conscious, intelligent, advanced life like ours ever have come into being through evolution.

 

The universe would be unstable, as the anthropic principle says, if any of those parameters were to shift even a minuscule amount, the cosmos being like a pencil balanced precariously on its pointed tip. It’s likely, therefore, that our species is not floating alone in an unimaginably vast, roiling but barren cosmic sea; a more expansive view of the anthropic principle, assuming it holds, suggests that the same fine-tuning makes the creation and sustenance of extraterrestrial life possible, too, as fellow players in the cosmic froth. Fine-tuned, after all, doesn’t necessarily equate to rare.

 

We might thus wonder about the consequences for our self-identity and image if some among these teeming numbers of higher-order intelligent extraterrestrials inhabiting the universe got a developmental jumpstart on our species’ civilisation of a million or more years. It’s reasonable to assume that those species would have experienced many-orders-of-magnitude advances biologically, scientifically, technologically, culturally, and institutionally, fundamentally skewing how humanity perceives itself.

 

The impact of these realities on human self-perception might lead some to worry over the glaring inequality, and a possibly perceived menace, denting the armour of our persistent self-exceptionalism and raising larger questions about our purpose. These are profoundly philosophical considerations. We might thereby opt to capitulate, grasping at straws of self-indulgent excuses. Yet, extraterrestrials capable of interstellar travel might conclude — whether for benign purposes (e.g., development, enlightenment, resource sharing), for malign ones (e.g., hegemonism, hubris, manifest destiny, self-exceptionalism, colonisation), or for a hybrid of reasons — that interventionism, with its mix of calculated and unpremeditated consequences, is the natural course.

 

Our reactions to gargantuan inter-species differences might range from giddy exceptionalism at one end to dimmed significance at the other. On a religious front, a crisis might ensue in the presence of remarkably advanced extraterrestrials, influencing factors surrounding faith, creeds, dicta, values, patriarchy. Some of our religious constructs — scriptures, symbology, philosophies — might collapse as shallow affectations. For example, in light of hyper-advanced extraterrestrials, our history of expressing religious imagery in anthropomorphic terms (our species described doctrinally as being “in God’s image,” for example) may no longer make sense, fundamentally altering belief systems.

 

We would have to revisit the principles of ethics, including the degree to which ethics are culturally and societally contingent. Or the impact might instead leave us elated that life has advanced to such a remarkable degree, covetous of the benefits it might hold for our species — eager to model what seems to have worked magnificently for a cutting-edge alien civilisation. The potential for learning vastly advanced natural science, technology, and societal paradigms would be immense, where, for instance, extraterrestrials might be hybrids of the best of biology and the best of machines.

 

As potentially confounding as either of Clarke’s scenarios might prove, neither need be terrifying; instead, both have the potential to be exhilarating. But let me toss one last unavoidable constant into the cosmic cauldron: the concept of entropy — the irreversibly increasing (net) disorder within a closed, isolated system like the universe, whose expanding galactic and stellar separation is accelerating toward a thermodynamic demise. Entropy is a fact of life for the universe, providing an expiry date and eventually rendering everything extinct. The end of history, the end of physics — and the end of metaphysics.

 

Monday 6 May 2024

On the Trail of Human Consciousness


By Keith Tidman
 

Daniel Dennett once called consciousness the “last surviving mystery” humankind faces. That may be premature and even a bit hyperbolic, but not by much. At the very least, consciousness ranks among the biggest of the remaining mysteries. Two questions central to this are: Does the source of conscious experience rest solely in the neurophysiology of the brain, reducible to myriad sets of mechanical functions that necessarily conform to physical laws? Or, as some have contended, is consciousness somehow airily, dualistically separate from the brain, existing in some sort of undefinably ethereal dimension? 

Consciousness is an empirical, bridge-like connection to things, events, and conditions, boiling down to external stimuli that require vetting within the brain. Conscious states entail a wide range of human experiences, such as awareness, identity, cognition, wakefulness, sentience, imagination, presence in time and space, perception, enthrallment, emotion, visions of alternative futures, anchors to history, ideation, attention, volition, sense of agency, thought experimentation, self-optimisation, memories, opinions — and much more. Not to forget higher-order states of reality, able to include the social, political, legal, familial, educational, environmental, scientific, and ethical norms of the community. The process includes the brain's ability to orchestrate how the states of consciousness play their roles in harmony. As philosopher Thomas Nagel therefore concluded, “there is something it is like to be [us]” — that something being our sense of identity, acquired through individual awareness, perception, and experience.


The conscious mind empirically, subjectively edits objective reality. In the phrase of David Chalmers, philosopher of mind and cognitive scientist, “there is a whir of information processing” as all that complexly happens. The presence of such states makes it hard, if not impossible, to dismiss our own existence as just an illusion, even if we have hesitancy about the accuracy of our perception of the presumed objective reality encircling us. Thought, introspection, sensing, knowing, belief, the arrow of perpetual change — as well as the spatial and temporal discernments of the world — contribute to confirming what we are about. It’s us, in an inexorable abundance of curiosity, wondering as we gaze upon everything from the micro to the macro dimensions of the universe.

 

None of these states, however, requires the presence of mysterious goings-on — an “ethereal mind,” operating on a level separate from the neuronal, synaptic activity of the brain. Accordingly, “consciousness is real and irreducible,” as Dennett’s fellow philosopher, John Searle, observed while pointing out that the seat of consciousness is the brain; “you can’t get rid of it.” True enough. The centuries-old Cartesian mind-body distinction, with its suspicious otherworldly spiritual, even religious, underpinnings and motive, has long been displaced by today’s neuroscience, physics, and biology. Today, philosophers of mind cheerfully weigh in on the what-if modeling aspects of human consciousness. But it must be said that the baton for elucidating consciousness, two and a half millennia after the ancient world’s musings on the subject, has been handed off to the natural sciences. And there is every reason to trust the latter will eventually triumph, filling the current explanatory gap — whether the path to ultimate understanding follows a straight line or, perhaps more likely, zigs and zags. A mix of dusky and well-lit alleys.

 

Sensations, like the taste of silky chocolate, the sight of northern lights, the sound of a violin concerto, the smell of a petunia, hunger before an aromatic meal, pleasure from being touched, pain from an accident, fear of dark spaces, roughness of volcanic rock, or happiness while watching children play on the beach, are sometimes called qualia. These are the subjective, qualitative properties of experience, which purportedly differ from one person to another. Each person interpreting, or editing, reality differently, whether only marginally so or perhaps to significant extents — all the while getting close enough to external reality for us to get on with everyday life in workably practical ways. 


So, for example, my experience of an icy breeze might be different from yours because of differences — even microscopic ones — between our respective neurobiological reactions. This is the subjective nature of experiencing the same thing at the same time and in the same place. And yet, qualia might well be, in the words of Chalmers, the “hard problem” in understanding consciousness; but they aren’t an insoluble problem. The individualisation of these experiences, or something that seems like them, will likely prove traceable to brain circuitry and activity, requiring us to probe the ever-finer granularity of the bustling mind. Consciousness can thus be defined as a blend of what our senses absorb and process, as well as how we resultantly act. Put another way, decisions and behaviours matter.

 

The point is, all this neurophysiological activity doesn’t merely represent the surfacing or emergence or groundswell of consciousness, it is consciousness — both necessary and sufficient. That is, mind and consciousness don’t hover separate from the brain, in oddly spectral form. This steadfastly remains a fundamentally materialist framework, containing the very nucleus of human nature. The promise is that in developing an increasingly better understanding of the complexity — of the nuance and richness — of consciousness, humanity will be provided with a vital key for unlocking what makes us, us.

 

Monday 3 April 2023

The Chinese Room Experiment ... and Today’s AI Chatbots


By Keith Tidman

 

It was back in 1980 that the American philosopher John Searle formulated the so-called ‘Chinese room thought experiment’ in an article, his aim being to emphasise the bounds of machine cognition and to push back against what he viewed, even back then, as hyperbolic claims surrounding artificial intelligence (AI). His purpose was to make the case that computers don’t ‘think’, but rather merely manipulate symbols in the absence of understanding.

 

Searle subsequently went on to explain his rationale this way: 


‘The reason that no computer can ever be a mind is simply that a computer is only syntactical [concerned with the formal structure of language, such as the arrangement of words and phrases], and minds are more than syntactical. Minds are semantical, in the sense that they have … content [substance, meaning, and understanding]’.

 

He continued to point out, by way of further explanation, that the latest technology metaphor for purportedly representing and trying to understand the brain has consistently shifted over the centuries: for example, from Leibniz, who compared the brain to a mill, to Freud comparing it to ‘hydraulic and electromagnetic systems’, to the present-day computer. With none, frankly, yet serving as anything like good analogs of the human brain, given what we know today of the neurophysiology, experiential pathways, functionality, expression of consciousness, and emergence of mind associated with the brain.

 

In a moment, I want to segue to today’s debate over AI chatbots, but first, let’s recall Searle’s Chinese room argument in a bit more detail. It began with a person in a room, who accepts pieces of paper slipped under the door and into the room. The paper bears Chinese characters, which, unbeknownst to the people outside, the monolingual person in the room has absolutely no ability to translate. The characters unsurprisingly look like unintelligible patterns of squiggles and strokes. The person in the room then feeds those characters into a digital computer, whose program (metaphorically represented in the original description of the experiment by a ‘book of instructions’) searches a massive database of written Chinese (originally represented by a ‘box of symbols’).

 

The powerful computer program can hypothetically find every possible combination of Chinese words in its records. When the computer spots a match with what’s on the paper, it makes a note of the string of words that immediately follow, printing those out so the person can slip the piece of paper back out of the room. Because of the perfect Chinese response to the query sent into the room, the people outside, unaware of the computer’s and program’s presence inside, mistakenly but reasonably conclude that the person in the room has to be a native speaker of Chinese.

 

Here, as an example, is what might have been slipped under the door, into the room: 


什么是智慧 


Which is the Mandarin translation of the age-old question ‘What is wisdom?’ And here’s what might have been passed back out, the result of the computer’s search: 


了解知识的界限


Which is the Mandarin translation of ‘Understanding the boundary/limits of knowledge’, an answer (among many) convincing the people gathered in anticipation outside the room that a fluent speaker of Mandarin was within, answering their questions in informed, insightful fashion.
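
To make the purely syntactic character of that exchange concrete, here is a minimal toy sketch in Python: a hypothetical lookup table standing in for the ‘book of instructions’ and ‘box of symbols’, not anything Searle himself specified.

# A toy ‘Chinese room’: pure symbol matching, with no understanding anywhere.
# The rule book below is a hypothetical stand-in for the ‘book of instructions’
# and ‘box of symbols’, reusing the essay’s own example exchange.
RULE_BOOK = {
    "什么是智慧": "了解知识的界限",  # question slip -> canned reply
}

def room_reply(slip: str) -> str:
    # Return whatever string the rules pair with the input; nothing here knows
    # what the characters mean, it only matches their shapes.
    return RULE_BOOK.get(slip, "")

print(room_reply("什么是智慧"))  # prints the ‘fluent’ reply

The people outside see a fluent reply; inside, there is only string lookup.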

 

The outcome of Searle’s thought experiment seemed to satisfy the criteria of the famous Turing test (which Turing himself called ‘the imitation game’), designed by the computer scientist and mathematician Alan Turing in 1950. The controversial challenge he posed with the test was whether a computer could think like — that is, exhibit intelligent behaviour indistinguishable from — a human being. And who could tell the difference?


It was in an article for the journal Mind, called ‘Computing Machinery and Intelligence’, that Turing himself set out the ‘Turing test’, which inspired Searle’s later thought experiment. After first expressing concern with the ambiguity of the words machine and think in a closed question like ‘Can machines think?’, Turing went on to describe his test as follows:

The [challenge] can be described in terms of a game, which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The aim of the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either ‘X is A and Y is B’ or ‘X is B and Y is A’. The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?


Now suppose X is actually A, then A must answer. It is A’s object in the game to try and cause C to make the wrong identification. His answer might therefore be: ‘My hair is shingled, and the longest strands are about nine inches long’.


In order that tone of voice may not help the interrogator, the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively, the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as ‘I am the woman, don’t listen to him!’ to her answers, but it will avail nothing as the man makes similar remarks.


We now ask the question, ‘What will happen when a machine takes the part of A in this game?’ Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, ‘Can machines think?’  

Note that as Turing framed the inquiry at the time, the question arises of whether a computer can ‘be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a [person]?’ The word ‘imitation’ here is key, allowing for the hypothetical computer in Searle’s Chinese room experiment to pass the test — albeit importantly not proving that computers think semantically, which is a whole other capacity not yet achieved even by today’s strongest AI.

 

Let’s fast-forward a few decades and examine the generative AI chatbots whose development much of the world has been enthusiastically tracking in anticipation of what’s to come. When someone engages with the AI algorithms powering the bots, the AI seems to respond intelligently. The result is either back-and-forth conversations with the chatbots, or the use of carefully crafted natural-language input to prompt the bots to write speeches, correspondence, school papers, corporate reports, summaries, emails, computer code, or any number of other written products. End products are based on the bots having been ‘trained’ on the massive body of text on the internet, and output sometimes gets reformulated by the bot in response to the user’s rejiggered prompts.

 

It’s as if the chatbots think. But they don’t. Rather, the chatbots’ capacity to leverage the massive mounds of information on the internet to produce predictive responses is remarkably analogous to what the computer was doing in Searle’s Chinese room forty years earlier. The implications are long-term, touching developmental advances in neuroscience, artificial intelligence and computer science, philosophy of language and mind, epistemology, and models of consciousness, awareness, and perception.

 

In the midst of this evolution, generative AI will expand AI’s reach across the varied domains of modern society: education, business, medicine, finance, science, governance, law, and entertainment, among them. So far, so good. Meanwhile, despite machine learning, possible errors, biases, and nonsensicalness in algorithmic decision-making, should they occur, are more problematic in some domains (like medicine, the military, and lending) than in others. It is important to remember, though, that gaffes of any magnitude, type, and regularity can quickly erode trust, no matter the field.

 

Sure, current algorithms, natural-language processing, and the underpinnings of developmental engineering are more complex than when Searle first presented the Chinese room argument. But chatbots still don’t understand the meaning of content. They don’t have knowledge as such. Nor do they venture much by way of beliefs, opinions, predictions, or convictions, leaving swaths of important topics off the table. Reassembly of facts scraped from myriad sources is more the recipe of the day — and even then, errors and eyebrow-raising incoherence occur, including unexplainably incomplete and spurious references.

 

The chatbots revealingly write output by matching the words in a prompt against strings of words found online, predicting which words probabilistically follow, and building their answers through a form of pattern recognition. This still mimics a computational, rather than a thinking, theory of mind. Sure, what the bots produce would pass the Turing test, but today that is surely a pretty low bar.
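
As a loose illustration of next-word prediction by pattern counting, here is a toy sketch with a made-up miniature corpus (real chatbots rely on vast neural networks rather than literal lookup, so this is only a caricature of the idea):

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then ‘answer’ by emitting the most frequently observed successor.
corpus = "what is wisdom wisdom is understanding the limits of knowledge".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    # No meaning is consulted; the choice rests purely on observed frequency.
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else ""

print(predict_next("is"))  # e.g. ‘wisdom’, chosen by count, not comprehension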

 

Meantime, people have argued that AI writing reveals telltale markers: it lacks the nuanced, varied cadence, phraseology, word choice, modulation, creativity, originality, and individuality, as well as the curation of apt content, that human beings often display when they write. At the moment, anyway, the resulting products from chatbots tend to have a formulaic feel, posing remediation challenges for AI’s algorithms.

 

Three decades after first unspooling his ingenious Chinese room argument, Searle wrote, ‘I demonstrated years ago … that the implementation of the computer program is not itself sufficient for consciousness or intentionality [mental states representing things]’. Both then and now, that’s true enough. We’re barely closing in on completing the first lap. It’s all still computation, not thinking or understanding.


Accordingly, the ‘intelligence’ one might perceive in Searle’s computer and the program his computer runs in order to search for patterns that match the Chinese words is very much like the ‘intelligence’ one might misperceive in a chatbot’s answers to natural-language prompts. In both cases, what we may misinterpret as intelligence is really a deception of sorts. Because in both cases, what’s really happening, despite the large differences in the programs’ developmental sophistication arising from the passage of time, is little more than brute-force searches of massive amounts of information in order to predict what the next words likely should be. Often getting it right, but sometimes getting it wrong — with good, bad, or trifling consequences.

 

I propose, however, that the development of artificial intelligence — particularly what is called ‘artificial general intelligence’ (AGI) — will get us there: an analog of the human brain, with an understanding of semantic content. At that point, today’s chatbots will look like novelties, however obedient their functional execution, while the ‘neural networks’ of feasibly self-optimising artificial general intelligence match or elastically stretch beyond human cognition, and the hotbed issues of what consciousness is get rethought.


Tuesday 24 January 2023

‘Brain in a Vat’: A Thought Experiment


By Keith Tidman

Let’s hypothesise that someone’s brain has been removed from the body and immersed in a vat of fluids essential for keeping the brain not only alive and healthy but functioning normally — as if it is still in a human skull sustained by other bodily organs.

A version of this thought experiment was laid out by René Descartes in 1641 in the Meditations on First Philosophy, as part of inquiring whether sensory impressions are delusions. An investigation that ultimately led to his celebrated conclusion, ‘Cogito, ergo sum’ (‘I think, therefore I am’). Fast-forward to American philosopher Gilbert Harman, who modernised the what-if experiment in 1973. Harman’s update included introducing the idea of a vat (in place of the allegorical device of information being fed to someone by an ‘evil demon’, originally conceived by Descartes) in order to acknowledge the contemporary influences of neuroscience in understanding the brain and mind.

In this thought experiment, a brain separated from its body and sustained in a vat of chemicals is assumed to possess consciousness — that is, the neuronal correlates of perception, experience, awareness, wonderment, cognition, abstraction, and higher-order thought — with its nerve endings attached by wires to a quantum computer and a sophisticated program. Scientists feed the disembodied brain with electrical signals, identical to those that people are familiar with receiving during the process of interacting through the senses with a notional external world. Hooked up in this manner, the brain (mind) in the vat therefore does not physically interact with what we otherwise perceive as a material world. Conceptualizations of a physical world — fed to the brain via computer prompts and mimicking such encounters — suffice for the awareness of experience.

The aim of this what-if experiment is to test questions not about science or even ‘Matrix’-like science fiction, but about epistemology — queries such as what do we know, how do we know it, with what certainty do we know it, and why does what we know matter? Specifically, issues to do with scepticism, truth, mind, interpretation, belief, and reality-versus-illusion — influenced by the lack of irrefutable evidence that we are not, in fact, brains in vats. We might regard these notions as solipsistic, where the mind believes nothing (no mental state) exists beyond what it alone experiences and thinks it knows.

In the brain-in-a-vat scenario, the mind cannot differentiate between experiences of things and events in the physical, external world and those virtual experiences electrically prompted by the scientists who programmed the computer. Yet, since the brain is in all ways experiencing a reality, whether or not illusory, even in the absence of a body the mind bears the complement of higher-order qualities required to be a person, invested with full-on human-level consciousness. To the brain suspended in a vat and to the brain housed in a skull sitting atop a body, the mental life experienced is presumed to be the same.

But my question, then, is this: Is either reality — that for which the computer provides evidence and that for which external things and events provide evidence — more convincing (more real, that is) than the other? After all, are not both experiences of, say, a blue sky with puffy clouds qualitatively and notionally the same: whereby both realities are the product of impulses, even if the sources and paths of the impulses differ?

If the experiences are qualitatively the same, the philosophical sceptic might maintain that much about the external world that we surmise is true, like the briskness of a winter morning or the aroma of fresh-baked bread, is in fact hard to nail down. The reason is that, for a brain in a vat, the evidence of a reality provided by the scientists is assumed to resemble that provided by a material external world, yet it results in a different interpretation of someone’s experiences. We might wonder how many descriptions there are of how the conceptualized world corresponds to what we ambitiously call ultimate reality.

So, for example, the sceptical hypothesis asserts that if we are unsure about not being a brain in a vat, then we cannot disregard the possibility that all our propositions (alleged knowledge) about the outside physical world would not hold up to scrutiny. This argument can be expressed by the following syllogism:

1. If I know any proposition of external things and events, then I know that I am not a brain in a vat;

2. I do not know that I am not a brain in a vat;

3. Therefore, I do not know any proposition about external things and events.
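
Slightly more formally, writing K for ‘I know that’, p for any proposition about the external world, and v for ‘I am a brain in a vat’, the argument is a straightforward modus tollens:

\[ Kp \rightarrow K\lnot v, \qquad \lnot K\lnot v, \qquad \therefore\ \lnot Kp \]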


Further, given that a brain in a vat and a brain in a skull would receive identical stimuli — and that such stimuli are the only means by which either brain can relate to its surroundings — neither brain can determine whether it is the one bathed in a vat or the one embodied in a skull. Neither mind can be sure of the soundness of what it thinks it knows, even knowledge of a world of supposed mind-independent things and events. This is the case even though computer-generated impulses merely substitute for direct bodily interaction with a material external world. So, for instance, when a brain in a vat believes that ‘wind is blowing’, there is no wind — no rushing movement of air molecules — but rather the computer-coded, mental simulation of wind. That is, a replication of the qualitative state of physical reality.

I would argue that the world experienced by the brain in a vat is not fictitious or unauthentic, but rather is as real to the disembodied brain and mind as the external, physical world is to the embodied brain. Both brains fashion valid representations of truth. I therefore propose that each brain is ‘sufficient’ to qualify as a person: where, notably, the brains’ housing (vat or skull) and signal pathways (digital or sensory) do not matter.

Monday 25 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. Which means that if, for example, a zombie were to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, by reacting very differently from normal. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain.

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’, which is whether or not each human has individually unique feelings while experiencing things – whereby each person produces his or her own reactions to stimuli, unlike everyone else’s. Such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory also known as dualism. And that by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, it is argued by some that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though natural law strongly tells us such brain function is the root of everyone's subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument from philosophical zombies, whose clever posing would make us assume there are no differences between them and us, seems problematic. Surely, there is insufficient evidence that the brain does not give rise to consciousness and individual experience. Yet, many people who argue against a material basis of experience, residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But is being imaginable actually enough to conclude something’s real-world existence? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable, in the same sense that they’re conceivable, yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, on the same supposedly logical framework, it is logically imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore cannot happen if the complex functionality of the brain were not to exist. A blob that only looks like a brain (as in the case for philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


Contrary to theories like dualism, which assert the separation of mind from body, it is those brain functions that make consciousness and individualised sentience possible. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how all that happens.


The idea of philosophical zombies existing among us thus collapses. Yet, very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but instead, as new issues arising longer term as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.



 

Monday 29 November 2021

Whose Reality Is It Anyway?

Thomas Nagel wondered if the world a bat perceives is fundamentally different to our own

By Keith Tidman

Do we experience the world as it objectively is, or only as an approximation shaped by the effects of information passing through our mind’s interpretative sieve? Does our individual reality align with anyone else’s, or is it exclusively ours, dwelling like a single point amid other people’s experienced realities?

 

We are swayed by our senses, whether through the direct sensory observation of the world around us, or indirectly as we use apparatuses to observe, record, measure, and decipher. Either way, our minds filter the information absorbed, funnelling and fashioning it into experiences that become a reality, which in turn is affected by sundry factors. These influences include our life experiences and interpretations, our mental models of the world, how we sort and assimilate ideas, our unconscious predilections, our imaginings and intuitions untethered to particular facts, and our expectations of outcomes drawn from encounters with the world.

 

We believe that what serves as the lifeline in this modeling of personal reality is the presence of agency and ‘free will’. The tendency is to regard free will as orthodoxy. We assume we can freely reconsider and alter that reality, to account for new experiences and information that we mold through reason. To a point, that’s right; but to one degree or another we grapple with biases, some of which are hard-wired or at least deeply entrenched, that predispose us to particular choices and behaviours. So, how freely we can actually surmount those preconceptions and predispositions is problematic, in turn bearing on the limits of how we perceive the world.


The situation is complicated further by the vigorous debate over free will versus how much of what happens does so deterministically, where life’s course is set by forces beyond our control. Altering the models of reality to which we cling is hard; resistance to change is tempting. We shun hints of doubt in upholding our individual (subjective) representations of reality. The obscurity and inaccessibility of any single, universally accepted objective world exacerbates the circumstances. We realise, though, that subjective reality is not an illusion to be casually dismissed to our suiting, but is lastingly tangible.


In 1974, the American philosopher Thomas Nagel developed a classic metaphor to address these issues of conscious experience. He proposed that some knowledge is limited to what we acquire through our subjective experiences, differentiating those from underlying objective facts. To show how, Nagel turned to bats’ conscious use of echoed sounds as the equivalent of our vision in perceiving their surroundings for navigation. He argued that although we might be able to imagine some aspects of what it’s like to be a bat, like hanging upside down or flying, we cannot truly know what a bat experiences as physical reality. The bat’s experiences are its alone, and for the same reasons of filtering and interpretation, are likewise distinguishable from objective reality.

 

Sensory experience, however, does more than just filter objective reality. The very act of human observation (in particular, measurement) can also create reality. What do I mean? Repeated studies have shown that a potential object remains in what’s called ‘superposition’, or a state of suspension. What stays in superposition is an abstract mathematical description, called a ‘wavefunction’, of all the possible ways an object can become real. On this view, there is no distinction between the wavefunction and the physical thing it describes.


While in superposition, the object can be in any number of places until measurement causes the wavefunction to ‘collapse’, resulting in the object being in a single location. Observation thus has implications for the nature of reality and the role of consciousness in bringing that about. According to the quantum physicist John Wheeler, ‘No ... property is a property until it is observed’, a notion presaged by the philosopher George Berkeley three centuries earlier, who declared ‘Esse est percipi’ – to be is to be perceived.
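
In standard textbook notation, a schematic two-location example (not tied to any particular experiment) reads:

\[ |\psi\rangle \;=\; \alpha\,|\text{here}\rangle + \beta\,|\text{there}\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1 \]

Before measurement, the state is the weighted blend itself; a measurement yields one location or the other, with the squared amplitudes giving the respective probabilities, after which the superposition is gone.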


Evidence, furthermore, that experienced reality results from a subjective filtering of objective reality comes from how our minds react to externalities. For example, two friends are out for a stroll and look up at the summer sky. Do their individual perceptions of the sky’s ‘blueness’ precisely match each other’s or anyone else’s, or do they experience blueness differently? If those companions then wade into a lake, do their perceptions of ‘chilliness’ exactly match? How about their experiences of ‘roughness’ upon rubbing their hand on the craggy bark of a tree? These are interpretations of objective reality by the senses and the mind.


Despite the physiology of the friends’ brains and physical senses being alike, their filtered experiences nonetheless differ in both small and big ways. All this, even though the objective physical attributes of the sky, the lake, and the tree bark, independent of the mind, are the same for both companions. (Such as the wavelength of visible light that accounts for the blueness, which is interpretatively, subjectively perceived by the senses and mind.) Notwithstanding the deceptive simplicity of these examples, they are telling of how our minds are attuned to processing sensory input, thereby creating subjective realities that might resemble yet not match other people’s, and importantly don’t directly merge with underlying objective reality.

  

In this paradigm of experience, there are untold parsed and sieved realities: our own and everyone else’s. That’s not to say objective reality, independent of our mental parsing, is myth. It exists, at least as backdrop. That is, both objective and subjective reality are credible in their respective ways, as sides of the whole. It’s just that our minds’ unavoidable filtering leaves us with an altered rendering of objective reality. Objective reality thus stays out of reach. The result is our being left with the personal reality our minds are capable of, a reality nonetheless easily but mistakenly conflated with objective reality.

 

That’s why our models of the underlying objective reality remain approximations, in states of flux. Because when it comes to understanding the holy grail of objective reality, our search is inspired by the belief that close is never close enough. We want more. Humankind’s curiosity strives to inch closer and closer to objective reality, however unending that tireless pursuit will likely prove.

 

Monday 21 September 2020

‘What Are We?’ Self-reflective Consciousness, Cooperation, and the Agents of Our Future Evolution

Cueva de las Manos, Río Pinturas

Posted by John Hands 

‘What are we?’ This is arguably the fundamental philosophical question. Indeed, ‘What are we?’ along with ‘Where do we come from?’ and ‘Why do we exist?’ are questions that humans have been asking for at least 25,000 years. During all of this time we have sought answers from the supernatural. About 3,000 years ago, however, we began to seek answers through philosophical reasoning and insight. Then, around 150 years ago, we began to seek answers through science: through systematic, preferably measurable, observation or experiment. 

As a science graduate and former tutor in physics for Britain's ‘Open University*’, I wanted to find out what answers science currently gives. But I couldn’t find any book that did so. There are two reasons for this.

  • First, the exponential increase in empirical data generated by rapid developments in technology had resulted in the branching of science into increasingly narrow, specialized fields. I wanted to step back from the focus of one leaf on one branch and see what the whole evolutionary tree shows us. 
  • Second, most science books advocate a particular theory, and often present it as fact. But scientific explanations change as new data is obtained and new thinking develops. 

And so I decided to write ‘the book that hadn’t been written’: an impartial evaluation of the current theories that explain how we evolved, not just from the first life on Earth, but where that came from, right back to the primordial matter and energy at the beginning of the universe of which we ultimately consist. I called it COSMOSAPIENS Human Evolution from the Origin of the Universe* and in the event it took more than 10 years to research and write. What’s more, the conclusions I reached surprised me. I had assumed that the Big Bang was well-established science. But the more I investigated the more I discovered that the Big Bang Theory had been contradicted by observational evidence stretching back 60 years. Cosmologists had continually changed this theory as more sophisticated observations and experiments produced ever more contradictions with the theory.

The latest theory is called the Concordance Model. It might more accurately be described as ‘The Inflationary-before-or-after-the-Hot Big Bang-unknown-27% Dark Matter-unknown-68% Dark Energy model’. Its central axiom, that the universe inflated at a trillion trillion trillion times the speed of light in a trillion trillion trillionth of a second, is untestable. Hence it is not scientific.

The problem arises because these cosmological theories are mathematical models. They are simplified solutions of Einstein’s field equations of general relativity applied to the universe. They are based on assumptions that the latest observations show to be invalid. That’s one surprising conclusion I found. 

Another surprise came when I examined the theory of how and why life on Earth evolved into so many different species that has been orthodox in the UK and the USA for the last 65 years. It is known as NeoDarwinism, and was popularised by Richard Dawkins in his bestselling book, The Selfish Gene, which says that biological evolution is caused by genes selfishly competing with each other to survive and replicate.

NeoDarwinism is based on the fallacy of ascribing intention to an acid, deoxyribonucleic acid, of which genes are composed. Dawkins admits that this language is sloppy and says he could express it in scientific terms. But I’ve read the book twice and he never does manage to do this. Moreover, the theory is contradicted by substantial behavioural, genetic, and genomic evidence. When confronted with such evidence, instead of modifying the theory to take account of it, as a scientist should do, Dawkins lamely says “genes must have misfired”.

The fact is, he couldn’t modify the theory because the evidence shows that Darwinian competition causes not the evolution of species but the destruction of species. It is cooperation, not competition, that has caused the evolution of successively more complex species.

Today, most biologists assert that we differ only in degree from other animals. I think that this too is wrong. What marked our emergence as a distinct species some 25,000 years ago wasn’t the size or shape of our skulls, or that we walked upright, or that we lacked bodily hair, or the genes we possess. These are differences in degree from other animals. What made us unique was reflective consciousness.

Consciousness is a characteristic of a living thing as distinct from an inanimate thing like a rock. It is possessed in rudimentary form by the simplest species like bacteria. In the evolutionary lineage leading to humans, consciousness increased with increasing neural complexity and centration in the brain until, with humans, it became conscious of itself. We are the only species that not only knows but also knows that it knows. We reflect on ourselves and our place in the cosmos. We ask questions like: What are we? Where did we come from? Why do we exist? 

This self-reflective consciousness has transformed existing abilities and generated new ones. It has transformed comprehension, learning, invention, and communication, which all other animals have in varying degrees. It has generated new abilities, like imagination, insight, abstraction, written language, belief, and morality that no other animal has. Its possession marks a difference in kind, not merely degree, from other animals, just as there is a difference in kind between inanimate matter, like a rock, and living things, like bacteria and animals. 

Moreover, Homo sapiens is the only known species that is still evolving. Our evolution is not morphological—physical characteristics—or genetic, but noetic, meaning ‘relating to mental activity’. It is an evolution of the mind, and has been occurring in three overlapping phases: primeval, philosophical, and scientific. 

Primeval thinking was dominated by the foreknowledge of death and the need to survive. Accordingly, imagination gave rise to superstition, which is a belief that usually arises from a lack of understanding of natural phenomena or fear of the unknown. 

It is evidenced by legends and myths, ranging from the animism, totemism, and ancestor worship of hunter-gatherers, to polytheism in city-states in which the pantheon of gods reflected the social hierarchy of their societies, and finally to a monotheism in which other gods were demoted to angels or subsumed into one God, reflecting the absolute power of king or emperor.

The instinct for competition and aggression, which had been ingrained over millions of years of prehuman ancestry, remained a powerful characteristic of humans, interacting with, and dominating, reflective consciousness. 

The second phase of reflective consciousness, philosophical thinking, emerged roughly 1500 to 500 BCE. It was characterised by humans going beyond superstition to use reasoning and insight, often after disciplined meditation, to answer questions. In all cultures it produced the ethical view that we should treat all others, including our enemies, as ourselves. This ran counter to the predominant instinct of aggression and competition. 

The third phase, scientific thinking, gradually emerged from natural philosophy around 1600 CE. It branched into the physical sciences, the life sciences, and medical sciences. 

Physics, the fundamental science, then started to converge, rapidly so over the last 65 years, towards a single theory that describes all the interactions between all forms of matter. According to this view, all physical phenomena are lower energy manifestations of a single energy at the beginning of the universe. This is similar in very many respects to the insight of philosophers of all cultures that there is an underlying energy in the cosmos that gives rise to all matter and energy. 

During this period, reflective consciousness has produced an increasing convergence of humankind. The development of technology has led to globalisation, both physically and electronically, in trade, science, education, politics (United Nations), and altruistic activities such as UNICEF and Médecins Sans Frontières. It has also produced a ‘complexification’ of human societies, a reduction in aggression, an increase in cooperation, and the ability to determine humankind’s future. 

This whole process of human evolution has been accelerating. Primeval thinking emerged roughly 25,000 years ago, philosophical thinking about 3,000 years ago, and scientific thinking some 400 years ago, while convergent thinking began barely 65 years ago.

I think that when we examine the evidence of our evolution from primordial matter and energy at the beginning of the universe, we see a consistent pattern. This shows that we humans are the unfinished product of an accelerating cosmic evolutionary process characterised by cooperation, increasing complexity and convergence, and that – uniquely as far we know – we are the self-reflective agents of our future evolution. 


 

*For further details and reviews of John’s new book, see https://johnhands.com 

Editor's note. The UK’s ‘Open University’ differs from other universities through its policy of open admissions and its emphasis on distance and online learning programs.

Monday 29 June 2020

The Afterlife: What Do We Imagine?

Posted by Keith Tidman


‘The real question of life after death isn’t whether 
or not it exists, but even if it does, what 
problem this really solves’

— Wittgenstein, Tractatus Logico-Philosophicus, 1921

Our mortality, and how we might transcend it, has been one of humanity’s central preoccupations since prehistory. One much-pondered possibility is that of an afterlife. This would potentially serve a variety of purposes: to buttress fraught quests for life’s meaning and purpose; to dull unpleasant visions of what happens to us physically upon death; to switch out fear of the void of nothingness with hope and expectation; and, to the point here, to claim continuity of existence through a mysterious hereafter thought to defy and supplant corporeal mortality.

And so, the afterlife, in one form or another, has continued to garner considerable support to the present. An Ipsos/Reuters poll in 2011 of the populations of twenty-three countries found that a little over half believe in an afterlife, with a wide range of outcomes correlated with how faith-based or secular a country is considered. The Pew Research Center’s Religious Landscape Study found, in 2014, that almost three-fourths of people seem to believe in heaven and more than half said that they believed in hell. The findings cut across most religions. Separately, research has found that some one-third of atheists and agnostics believe in an afterlife — one imagined to include ‘some sort of conscious existence’, as the survey put it. (This was the Austin Institute for the Study of Family and Culture, 2014.)

Other research has corroborated these survey results. Researchers based at Britain’s Oxford University in 2011 examined forty related studies conducted over the course of three years by a range of social-science and other specialists (including anthropologists, psychologists, philosophers, and theologians) in twenty countries and different cultures. The studies revealed an instinctive predisposition among people to an afterlife — whether of a soul or a spirit or just an aspect of the mind that continues after bodily death.

My aim here is not to exhaustively review all possible variants of an afterlife subscribed to around the world, like reincarnation — an impracticality for the essay. However, many beliefs in a spiritual afterlife, or continuation of consciousness, point to the concept of dualism, entailing a separation of mind and body. As René Descartes explained back in the 17th century:
‘There is a great difference between the mind and the body, inasmuch as the body is by its very nature always divisible, whereas the mind is clearly indivisible. For when I consider the mind, or myself insofar as I am only a thinking thing, I cannot distinguish any parts within myself. . . . By contrast, there is no corporeal or extended thing that I can think of which in my thought I cannot easily divide into parts. . . . This one argument would be enough to show me that the mind is completely different than the body’ (Sixth Meditation, 1641).
However, in the context of modern research, I believe that one may reasonably ask the following: Are the mind and body really two completely different things? Or are the mind and the body indistinct — the mind reducible to the brain, where the brain and mind are integral, inseparable, and necessitating each other? Mounting evidence points to consciousness and the mind as the product of neurophysiological activity. As to what’s going on when people think and experience, many neuroscientists favour the notion that the mind — consciousness and thought — is entirely reducible to brain activity, a concept sometimes variously referred to as physicalism, materialism, or monism. But the idea is that, in short, for every ‘mind state’ there is a corresponding ‘brain state’, a theory for which evidence is growing apace.

The mind and brain are today often considered, therefore, not separate substances. They are viewed as functionally indistinguishable parts of the whole. There seems, consequently, not to be broad conviction in mind-body dualism. Contrary to Cartesian dualism, the brain, from which thought comes, is physically divisible according to hemispheres, regions, and lobes — the brain’s architecture; by extension, the mind is likewise divisible — the mind’s architecture. What happens to the brain physically (from medical or other tangible influences) affects the mind. Consciousness arises from the entirety of the brain. A brain — a consciousness — that remarkably is conscious of itself, demonstrably curious and driven to contemplate its origins, its future, its purpose, and its place in the universe.

The contemporary American neuroscientist, Michael Gazzaniga, has described the dynamics of such consciousness in this manner:
‘It is as if our mind is a bubbling pot of water. . . . The top bubble ultimately bursts into an idea, only to be replaced by more bubbles. The surface is forever energized with activity, endless activity, until the bubbles go to sleep. The arrow of time stitches it all together as each bubble comes up for its moment. Consider that maybe consciousness can be understood only as the brain’s bubbles, each with its own hardware to close the gap, getting its moment’. (The Consciousness Instinct, 2018)
Moreover, an immaterial mind and a material world (such as the brain in the body), as dualism typically frames reality, would be incapable of acting upon each other: what’s been dubbed the ‘interaction problem’. Therefore the physicalist model — strengthened by research in fields like neurophysiology, which continue to deepen our understanding — has, arguably, superseded the dualist model.

People’s understanding that, of course, they will die one day has spurred the search for a spiritual continuation of earthbound life. Apprehension motivates. The yearning for purpose motivates. People have thus sought evidence, empirical or faith-based or other, to underpin their hope for otherworldly survival. However, the modern picture of the material, naturalistic basis of the mind may prove an injurious blow to notions of an out-of-body afterlife. After all, if we are our bodies and our bodies are us, death must end hope for survival of the mind. As David Hume graphically described our circumstances in Of the Immortality of the Soul (1755), our ‘common dissolution in death’. That some people are nonetheless prone to invoke dualistic spectral spirits — stretching from disembodied consciousness to immortal souls — as a pretext for desirously thwarting the interruption of life doesn’t change the finality of existence.

And so, my conclusion is that perhaps we’d be better served to find ingredients for an ‘afterlife’ in what we leave by way of influences, however ordinary and humble, upon others’ welfare. That is, a legacy recollected by those who live on beyond us, in its ideal a benevolent stamp upon the present and the future. This earthbound, palpable notion of what survives us goes to answer Wittgenstein’s challenge we started with, regarding ‘what problem’ an afterlife ‘solves’, for in this sense it solves the riddle of what, realistically, anyone might hope for.