
Monday, 16 September 2024

Plato’s Allegory of the Cave: And the Deception of Perception



By Keith Tidman

 

It is a tribute to the timelessness of Plato’s ideas that his philosophical stories still echo powerfully in the contemporary world. People still live in the flickering shadows of Plato’s cave, mistaking myths for reality and surmise for evidence. We are metaphorically bound, for example, to watch and assent to the shadows cast by social media, which shape our notions of reality: an increasingly subjective and contested reality, formed by gossamer shadows flung onto the wall (today, the computer screen) by puppeteers. Today there is a clear risk of deception by partial perception, of information exploited for political ends.


It was in his most-read work, The Republic, written about 380 BCE, that Plato recounted an exchange between Glaucon and Socrates, sometimes called the Allegory of the Cave. Socrates describes how in this cave, seated in a line, are prisoners who have been there since birth, entirely cut off from the outside world. Tightly restrained by chains such that they cannot move, their lived experience is limited to staring at the cave wall in front of them. 

 

What they cannot know is that just behind where they sit is a parapet and fire, in front of which other people carry variously shaped objects, and it is these that cast the strange shadows. The shadows on the wall, and not the fire or the objects themselves, are the prisoners’ only visible reality — the only world they can know. Of the causes of the moving shadows, of the distinction between the abstract and the real, they can know nothing. 

 

Plato asks us to consider what might happen if one of the prisoners is then unchained and forced reluctantly to leave the cave, into the glaring light of the sun. At first, he says, the brightness must obscure the freed prisoner’s vision, so that he can see only shadows and reflections, much as in the cave. However, after a while his eyes would grow accustomed to the light, and eventually he would be able to see other people and the objects themselves, not just their shadows. As the former prisoner adjusts, he comes to believe that the outside world offers a very different, even better reality than the shadows in the dusky cave.

 

But now suppose, Plato asks, that this prisoner decides to return to the cave to share his experience — to try to convince the prisoners to follow his lead to the sunlight and the ‘forms’ of the outside world. Would they willingly seize the chance? Quite the contrary, Plato warns. Far from grasping the opportunity to see more clearly, he thinks the other prisoners would defiantly resist, believing the outside world to be harmful and dangerous, and not wanting to leave the security of their cave and the shadows they have become so familiar with, even so expert at interpreting.

 

The allegory of the cave is part of Plato’s larger theory of knowledge — of ideals and forms. The cave and shadows represent how people usually live, often ensconced within the one reality they’re comfortable with and assume to be of greatest good, all the while confronted by having to interpret, adjust to, and live in a wholly dissimilar world. The so-called truth that people meet is shaped by the contextual circumstances they happen to have been exposed to (their upbringing, education, and experiences, for example), which in turn sway their interpretations, judgments, beliefs, and norms, all of them often cherished. Change requires overcoming inertia and myopia, which proves arduous, given prevailing human nature.

 

People may wonder which is in fact the more authentic reality. And they may wonder how they might ultimately overcome trepidation, choosing whether or not to turn their backs on their former reality, and whether to understand and embrace the alternative truth, a process that perhaps happens again and again. The undertaking, or journey, from one state of consciousness to another entails conflict; it requires parsing the differences between one truth and another, becoming edified about the supposed higher levels of reality, and overcoming what one might call the deception of perception: the unreal world of blurry appearances.

 

Some two and a half millennia after Plato crafted his allegory of the cave, popular culture has borrowed the core storyline, in literature as well as in film. For example, the plots of both Fahrenheit 451, by Ray Bradbury, and The Country of the Blind, by H.G. Wells, concern eventual enlightened awareness, where key characters come to grips with the shallowness of the everyday world with which they’re familiar.


Similarly, in the movie The Matrix, the lead character, Neo, is asked to make a difficult choice: to either take a blue pill and continue living his current existence of comfort but obscurity and ignorance, or take a red pill and learn the hard truth. He opts for the red pill, and in doing so becomes aware that the world he has been living in is merely a contrivance, a computer-generated simulation of reality intended to pacify people.

 

Or take the movie The Truman Show. In this, the lead character, Truman Burbank, lives a suburban family life as an insurance agent for some thirty years, before the illusion starts to crumble and he suspects that his family is made up of actors and that everything else is counterfeit. It even turns out that he is living on a set comprising several thousand hidden cameras, producing a TV show for the entertainment of spectators worldwide. It is all a duplicitous manipulation of reality — a deception of perception, again — creating a struggle for freedom. And in this movie, after increasingly questioning the unfathomable goings-on around him, Truman (like the prisoner who leaves Plato’s cave) manages to escape the TV set and enter the real world.

 

Perhaps, then, what is most remarkable about the Allegory of the Cave is that there is nothing about it that anchors it exclusively to the ancient world in which it was first imagined. Instead, Plato’s cave is, if anything, even more pertinent in the technological world of today, split as it is between spectral appearances and physical reality. Being surrounded today by the illusory shadows of digital technology, with our attention guided by algorithm-steering, belief-reinforcing social media, strikes a warning note: today, more than ever, it is our responsibility continually to question our assumptions.

 

Tuesday, 24 January 2023

‘Brain in a Vat’: A Thought Experiment


By Keith Tidman

Let’s hypothesise that someone’s brain has been removed from the body and immersed in a vat of fluids essential for keeping the brain not only alive and healthy but functioning normally — as if it is still in a human skull sustained by other bodily organs.

A version of this thought experiment was laid out by René Descartes in 1641 in the Meditations on First Philosophy, as part of his inquiry into whether sensory impressions are delusions, an investigation that ultimately led to his celebrated conclusion, ‘Cogito, ergo sum’ (‘I think, therefore I am’). Fast-forward to the American philosopher Gilbert Harman, who modernised the what-if experiment in 1973. Harman’s update introduced the idea of a vat (in place of Descartes’s original allegorical device of information being fed to someone by an ‘evil demon’) in order to acknowledge the contemporary influence of neuroscience on our understanding of the brain and mind.

In this thought experiment, a brain separated from its body and sustained in a vat of chemicals is assumed to possess consciousness — that is, the neuronal correlates of perception, experience, awareness, wonderment, cognition, abstraction, and higher-order thought — with its nerve endings attached by wires to a quantum computer and a sophisticated program. Scientists feed the disembodied brain with electrical signals, identical to those that people are familiar with receiving during the process of interacting through the senses with a notional external world. Hooked up in this manner, the brain (mind) in the vat therefore does not physically interact with what we otherwise perceive as a material world. Conceptualizations of a physical world — fed to the brain via computer prompts and mimicking such encounters — suffice for the awareness of experience.

The aim of this what-if experiment is to test questions not about science or even ‘Matrix’-like science fiction, but about epistemology — queries such as what do we know, how do we know it, with what certainty do we know it, and why does what we know matter? Specifically, issues to do with scepticism, truth, mind, interpretation, belief, and reality-versus-illusion — influenced by the lack of irrefutable evidence that we are not, in fact, brains in vats. We might regard these notions as solipsistic, where the mind believes nothing (no mental state) exists beyond what it alone experiences and thinks it knows.

In the brain-in-a-vat scenario, the mind cannot differentiate between experiences of things and events in the physical, external world and those virtual experiences electrically prompted by the scientists who programmed the computer. Yet, since the brain is in all ways experiencing a reality, whether or not illusory, then even in the absence of a body the mind bears the complement of higher-order qualities required to be a person, invested with full-on human-level consciousness. To the brain suspended in a vat and to the brain housed in a skull sitting atop a body, the mental life experienced is presumed to be the same.

But my question, then, is this: Is either reality — that for which the computer provides evidence and that for which external things and events provide evidence — more convincing (more real, that is) than the other? After all, are not both experiences of, say, a blue sky with puffy clouds qualitatively and notionally the same: whereby both realities are the product of impulses, even if the sources and paths of the impulses differ?

If the experiences are qualitatively the same, the philosophical sceptic might maintain that much of what we surmise to be true about the external world, like the briskness of a winter morning or the aroma of fresh-baked bread, is in fact hard to nail down. The reason is that, in the case of a brain in a vat, the evidence of a reality provided by scientists is assumed to resemble that provided by a material external world, yet may result in a different interpretation of someone’s experiences. We might wonder how many descriptions there are of how the conceptualized world corresponds to what we ambitiously call ultimate reality.

So, for example, the sceptical hypothesis asserts that if we are unsure about not being a brain in a vat, then we cannot disregard the possibility that all our propositions (alleged knowledge) about the outside physical world would not hold up to scrutiny. This argument can be expressed by the following syllogism:

1. If I know any proposition of external things and events, then I know that I am not a brain in a vat;

2. I do not know that I am not a brain in a vat;

3. Therefore, I do not know any proposition of external things and events.
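
To make the logical structure of the sceptic’s argument explicit, here is a minimal propositional sketch (the notation is introduced here purely for illustration): read K(x) as ‘I know that x’, v as ‘I am a brain in a vat’, and p as any proposition about external things and events.

\[
\begin{aligned}
&\text{(1)}\quad K(p) \rightarrow K(\lnot v)\\
&\text{(2)}\quad \lnot K(\lnot v)\\
&\text{(3)}\quad \therefore\ \lnot K(p)
\end{aligned}
\]

The inference from (1) and (2) to (3) is a straightforward modus tollens, so the sceptical force of the syllogism rests on premise (2): the claim that we cannot know we are not envatted.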


Further, given that a brain in a vat and a brain in a skull would receive identical stimuli — and that such stimuli are the only means by which either brain can relate to its surroundings — neither brain can determine whether it is the one bathed in a vat or the one embodied in a skull. Neither mind can be sure of the soundness of what it thinks it knows, even knowledge of a world of supposedly mind-independent things and events. This is the case even though computer-generated impulses realistically substitute for direct bodily interaction with a material external world. So, for instance, when a brain in a vat believes that ‘wind is blowing’, there is no wind — no rushing movement of air molecules — but rather the computer-coded, mental simulation of wind. That is, a replication of the qualitative state of physical reality.

I would argue that the world experienced by the brain in a vat is not fictitious or inauthentic, but rather is as real to the disembodied brain and mind as the external, physical world is to the embodied brain. Both brains fashion valid representations of truth. I therefore propose that each brain is ‘sufficient’ to qualify as a person: where, notably, the brains’ housing (vat or skull) and signal pathways (digital or sensory) do not matter.

Monday, 12 September 2022

The Uncaused Multiverse: And What It Signifies


By Keith Tidman

Here’s an argument that seems like commonsense: everything that exists has a cause; the universe exists; and so, therefore, the universe has a cause. A related argument goes on to say that the events that led to the universe must themselves ultimately originate from an uncaused event, bringing the regress of causes to a halt.

But is such a model of cosmic creation right?


Cosmologists assert that our universe was created by the Big Bang, an origin story developed by the Belgian physicist and Catholic priest Georges Lemaître in 1931. However, we ought not to confuse the so-called singularity — a tiny point of infinite density — and the follow-on Big Bang event with creation or causation per se, as if those events preceded the universe. Rather, they were early components of a universe that by then already existed, though in its infancy.

It’s often considered problematic to ask ‘what came before the Big Bang’, given the event is said to have led to the creation of space and time (I address ‘time’ in some detail below). By extension, the notion of nothingness prior to the Big Bang is equally problematic, because, correctly defined, nothingness is the total, absolute absence of everything — even energy and space. Although cosmologists claim that quantum fluctuations, or short bursts of energy in space, allowed the Big Bang to happen, we are surely then obliged to ask what allowed those fluctuations to happen.

Yet it’s generally agreed that you can’t get something from nothing. Which makes it all the more important that by nothingness we mean not space that happens to be empty, but rather the absence of space itself.

I therefore propose, instead, that there has always been something, an infinity where something is the default condition, corresponding to the impossibility of nothingness. Further, nothingness is inconceivable, in that we are incapable of visualising it. As soon as we attempt to imagine nothingness, the act of thinking about it turns the abstraction of ‘nothingness’ into the concreteness of ‘something’: a thing with features. We can’t resist that outcome, for we have no basis in reality or in experience that we can match up with this absolute absence of everything, including space, no matter how hard we try to picture it in our mind’s eye.

The notion of infinity in this model of being excludes not just a ‘first universe’, but likewise excludes a ‘first cause’ or ‘prime mover’. By its very definition, infinity has no starting point: no point of origin; no uncaused cause. That’s key; nothing and no one turned on some metaphorical switch, to get the ball rolling.

What I wish to convey is a model of multiple universes existing — each living and dying — within an infinitely bigger whole, where infinity excludes a ‘first cause’ or ‘first universe’.

In this scenario, where something has always prevailed over nothingness, the topic of time inevitably raises its head and needs to be addressed. We cannot ignore it. But, I suggest, time appears problematic only because it’s misconceived. Time is not something that suddenly lurches out of the starting gate upon the occurrence of a Big Bang, in the manner cosmologists and philosophers have typically described. Instead, when properly understood, time is best reflected in the unfolding of change.

The so-called ‘arrow of time’ traditionally appears to us in the three-way guise of the past leading to (causing) the present leading to the future. Allegorically, like a river. However, I propose that past and future are artificial constructs of the mind that simply give us a handy mechanism by which to live with the consequences of what we customarily call time: by that, meaning the consequences of change, and thus of causation. Accordingly, it is change through which time (temporal duration) is made visible to us; that is, the neurophysiological perception of change in human consciousness.

As such, only the present — a single, seamless ‘now’ — exists in context of our experience. To be sure, future and past give us a practical mental framework for modeling a world in ways that conveniently help us to make sense of it on an everyday level. Such as for hypothesising about what might be ahead and chronicling events for possible retrieval in the ‘now’. However, future and past are figments, of which we have to make the best. ‘Time reflected as change’ fits the cosmological model described here.

A process called ‘entropy’ lets us look at this time-as-change model on a cosmic scale. How? Well, entropy is the irresistible increase in net disorder — that is, evolving change — in a single universe. Despite spotty semblances of increased order in a universe — from the formation of new stars and galaxies to someone baking an apple pie — such localised instances of increased order are more than offset by the governing physical laws of thermodynamics.

These physical laws result in increasing net disorder, randomness, and uncertainty during the life cycle of a universe. That is, the arrow of change playing out as universes live and peter out because of heat death — or as a result of universes reversing their expansion and unwinding, erasing everything, only to rebound. Entropy, then, is really super-charged change running its course within each universe, giving us the impression of something we dub time.
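
For reference, and not as part of the original argument, the governing physical laws mentioned above can be compressed into the second law of thermodynamics: the total entropy S of an isolated system, such as a universe taken as a whole, never decreases as change unfolds.

\[
\Delta S_{\text{total}} \ge 0
\]

Local pockets of increased order, the new star or the baked pie, are paid for by greater disorder elsewhere, which is why the net trend over a universe’s life cycle is the one described here.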

I propose that in this cosmological model, the universe we inhabit is no more unique and alone than our solar system or beyond it our spiral galaxy, the Milky Way. The multiplicity of such things that we observe and readily accept within our universe arguably mirrors a similar multiplicity beyond our universe. These multiple universes may be regarded as occurring both in succession and in parallel, entailing variants of Big Bangs and entropy-driven ‘heat deaths’, within an infinitely larger whole of which they are a part.

In this multiverse of cosmic roiling, the likelihood that natural laws differ from one universe to another, across the infinite many, matters for each world’s developmental direction. For example, in both the science and philosophy of cosmology, the so-called ‘fine-tuning principle’ — known, too, as the anthropic principle — argues that with enough different universes, there’s a high probability some worlds will have natural laws and physical constants allowing for the kick-start and evolution of complex, intelligent forms of life.

There’s one last consequence of the infinite, uncaused multiverse described here: the absence of intent, and thus of intelligent design, when it comes to the physical laws and the materialisation of sophisticated, conscious species pondering their home worlds. I propose that the fine-tuning of constants within these worlds does not undo the incidental nature of such reality.

The special appeal of this kind of multiverse is that it alone allows for the entirety of what can exist.

Monday, 25 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. This means that if, for example, a zombie were to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, by reacting very differently than a person normally would. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain.

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’, which is whether or not each human has individually unique feelings while experiencing things – whereby each person produces his or her own reactions to stimuli, unlike everyone else’s. Such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory also known as dualism. And that by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, it is argued by some that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though natural law strongly tells us such brain function is the root of everyone's subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument from philosophical zombies, whose clever posing makes us assume there are no differences between them and us, seems problematic. Surely, there is insufficient evidence that the brain does not give rise to consciousness and individual experience. Yet many people who argue against a material basis of experience, residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But is being imaginable actually enough to conclude that something exists in the real world? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable, in the same sense that they’re conceivable, yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, on the same supposedly logical framework, it is logically imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore cannot happen if the complex functionality of the brain were not to exist. A blob that only looks like a brain (as in the case for philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


It’s those brain functions (contrary to theories like dualism, which assert the separation of mind from body) that make consciousness and individualised sentience possible. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how all that happens.


The idea of philosophical zombies existing among us thus collapses. Yet, very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but instead, as new issues arising longer term as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.



 

Monday, 21 March 2022

Would You Plug Into Nozick’s ‘Experience Machine’?

Clockwork Eyes by Michael Ryan

By Keith Tidman

 

Life may have emotionally whipsawed you. Maybe to the extent that you begin to imagine how life’s experiences might somehow be ‘better’. And then you hear about a machine that ensures you experience only pleasure, and no pain. What’s not to like!


It was the American philosopher Robert Nozick who, in 1974, hypothesised a way to fill in the blanks of our imaginings of a happier, more fulfilled life by creating his classic Experience Machine thought experiment.

 

According to this, we can choose to be hooked up to such a machine that ensures we experience only pleasure, and eliminates pain. Over the intervening years, Nozick offered different versions of the scenario, as did other writers, but here’s one that will serve our purposes:

 

‘Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine [floating in a tank, with electrodes attached to your brain], you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things. . . . While in the tank you won’t know that you’re there; you’ll think it’s all actually happening’.

 

At which point, Nozick went on to ask the key question. If given such a choice, would you plug into the machine for the rest of your life?

 

Maybe if we assume that our view of the greatest intrinsic good is a state of general wellbeing, referred to as welfarism, then on utilitarian grounds it might make sense to plug into the machine. But this theory might itself be a naïve, incomplete summary of what we value — what deeply matters to us in living out our lives — and the totality of the upside and downside consequences of our desires, choices, and actions.

 

Our pursuit of wellbeing notwithstanding, Nozick expects most of us would rebuff his invitation and by extension rebuff ethical hedonism, with its origins reaching back millennia, opting instead to live a life ‘in contact with reality’, as Nozick put it. That is, to take part in experiences of the world authentically — reflecting a reality of greater consequence than a manufactured illusion. It is a choice that originates, at least in part, from a bias toward the status quo. This so-called status quo bias leads some people, if told to imagine that their lives to date have been produced by an ‘experience machine’, to choose not to detach from the machine.

 

However, researchers have found many people are reluctant to plug into the machine. This seems to be due to several factors. Factors beyond individuals finding the thought of plugging in ‘too scary, icky, or alien’, as philosopher Ben Bramble interestingly characterised the prospect. And beyond such prosaic grounds as apprehension that something could go askew. For example, the complex technology could malfunction, the technicians overseeing the process might be sloppy one day, or there might be malign human intrusion (along the lines of the ‘fundamentalist zealots’ that Bramble invented) — any of which might cause a person’s experience in the machine to go terribly awry.

 

A philosophical reason to refuse being plugged in is that we prefer to do things, not just experience things, the former bringing deeper meaning to life than simply figuring out how to maximise pleasure and minimise pain. So, for example, it’s more rewarding to objectively (actually) write great plays, visit a foreign land, win chess championships, make new friends, compose orchestral music, terraform Mars, love one’s children, have a conversation with Plato, or invent new thought experiments than only to subjectively think we did. It’s an intuitive preference for tangible achievements and experiences over machine-made, simulated sensations.

 

Another factor in choosing not to plug into the machine may be that we’re apprehensive about the resulting loss of autonomy and free will in sorting choices, making decisions, taking action, and being accountable for consequences. People don’t want to be deprived of the perceived dignity that comes from self-regulation and intentional behaviour. That is, we wouldn’t want to defer to the Experience Machine to make determinations about life on our behalf, such as how to excel at or enjoy activities, without giving us the opportunity to intervene, to veto, to remold as we see fit. An autonomy or agency we prefer, even if all that might cause far more aggrievement than the supposed bliss provided by Nozick’s thought experiment.

 

Further in that vein, sensations are often understood, appreciated, and made real by their opposites. That is to say, in order for us to feel pleasure, arguably we must also experience its contrast: some manner of disappointment, obstacles, sorrow, and pain. So, to feel the pride of hearing our original orchestral composition played to an audience’s adulation, our journey getting there might have been dotted by occasional stumbles, even the occasional critical review. Besides, it’s conceivable that a menu only of successes and pleasure might grow tedious, and less and less satisfying with time, in the face of its interminable predictability.

 

Human connections deeply matter, too, of course, all part of a life that conforms with Nozick’s notion of maintaining ‘contact with reality’. Yes, as long as we’re plugged in we’d be unaware of the inauthenticity of relationships with the family members and friends simulated by the machine. But the nontrivial fact is that family and friends in the real world — outside the machine — would remain unreachable.

 

Because we’d be blithely unaware of the sadness of not being reachable by family and friends for as long as we’re hooked up to the electrodes, we would have no reason to be concerned once embedded in the experience machine. Yet real family and friends, in the outside world, whom we care about may indeed grieve. The anticipation of such grief by loved ones in the real world may well lead most of us to reject lowering ourselves into the machine for a life of counterfeit relationships.

 

In light of these sundry factors, especially the loss of relationships outside of the device, Nozick concludes that the pursuit of hedonic pleasure in the form of simulations — the constructs of the mind that the Experience Machine would provide in place of objective reality – makes plugging into the machine a lot less attractive. Indeed, he says, it begins to look more like ‘a kind of suicide’.

 

Monday, 10 February 2020

What Is It to Be Human?

Hello, world!
Posted by Keith Tidman

Consciousness is the mental anchor to which we attach our larger sense of reality.

We are conscious of ourselves — our minds pondering themselves in a curiously human manner — as well as being intimately conscious of other people, other species, and everything around us, near and remote.

We’re also aware that in reflecting upon ourselves and upon our surroundings, we process experiences absorbed through our senses — even if filtered and imagined imperfectly. This intrinsically empirical nature of our being is core, nourishing our experience of being human. It is our cue: to think about thinking. To ponder the past, present, and future. To deliberate upon reality. And to wonder — leaving no stone unturned: from the littlest (subatomic particles) to the cosmic whole. To inspire and be inspired. To intuit. To poke into the possible beginning, middle, and end of the cosmos. To reflect on whether we behave freely or predeterminedly. To conceptualise and pick from alternative futures. To learn from being wrong as well as from being right. To contemplate our mortality. And to tease out the possibility of purpose from it all.

Perception, memory, interpretation, imagination, emotion, logic, and reason are among our many tools for extracting order out of disorder, to quell chaos. These and other properties, collectively essential to distinguishing humanity, enable us to model reality, as best we can.

There is perhaps no more fundamental investigation than this one into consciousness, touching as it does upon what it means to be human.

To translate the world in which we’re thoroughly immersed. To use our rational minds as the gateway to that understanding — to grasp the dimensions of reality. For humans, the transmission of thought, through the representational symbols of language, gestures, and expressions — representative cognition — provides a tool for chiseling out our place in the world. In the twentieth century, Ludwig Wittgenstein laconically but pointedly framed the germaneness of these ideas:
‘The limits of my language mean the limits of my world’.
Crucially, Wittgenstein grounds language as a tool for communication in shared experiences. 

Language not only provides an opening through which to peer into human nature but also combines with other cognitive attributes, fuelling and informing what we believe and know. Well, at least what we believe we know. The power of language — paradoxically both revered and feared, yet imperative to our success — stems from its channelling of human instincts: fundamentally, what we think we need and want.

Language, which humankind has developed and learned to use to an extraordinary, singular level of complexity as a manifestation of thought, emanates from a form of social learning. That is, we experiment with language in utilitarian fashion, for best effect; use it to construct and contemplate what-ifs, venturing into the concrete and abstract to unspool reality; and observe, interact with, and learn from each other in associative manner. Accumulative adaptation and innovation. It’s how humanity has progressed — sometimes incrementally, sometimes by great bounds; sometimes as individuals, sometimes as elaborate networks. Calibrating and recalibrating along the way. Accomplished, deceptively simply, by humans emitting sounds and scribbling streams of symbols to drive progress — in a manner that makes us unique.

Language — sophisticated, nuanced, and elastic — enables us to meaningfully absorb what our brains take in. Language helps us to decode and make sense of the world, and recode the information for imaginatively different purposes and gain. To interpret and reinterpret the assembly of information in order to shape the mind’s new perspectives on what’s real — well, at least the glowing embers of what’s real — in ways that may be shared to benefit humankind on a global, community, and individual level. Synaptic-like, social connections of which we are an integral part.

Fittingly, we see ourselves simultaneously as points connected to others, while also as distinct identities for which language proves essential in tangibly describing how we self-identify. Human nature is such that we have individual and communal stakes. The larger scaffolding is the singularly different cultures where we dwell, find our place, and seek meaning — a dynamically frothing environment, where we both react to and shape culture, with its assortment of both durably lasting and other times shifting norms.

Monday, 28 January 2019

Is Mathematics Invented or Discovered?



Posted by Keith Tidman

I’m a Platonist. Well, at least insofar as how mathematics is presumed ‘discovered’ and, in its being so, serves as the basis of reality. Mathematics, as the mother tongue of the sciences, is about how, on one important epistemological level, humankind seeks to understand the universe. To put this into context, the American physicist Eugene Wigner published a paper in 1960 whose title even referred to the ‘unreasonable effectiveness’ of mathematics, before trying to explain why it might be so. His English contemporary, Paul Dirac, dared to go a step farther, declaring, in a phrase with a theological and celestial ring, that ‘God used beautiful mathematics in creating the world’. All of which leads us to this consequential question: Is mathematics invented or discovered, and does mathematics underpin universal reality?
‘In every department of physical science, there is only so much science … as there is mathematics’ — Immanuel Kant
If mathematics is simply a tool of humanity that happens to align with and helps to describe the natural laws and organisation of the universe, then one might say that mathematics is invented. As such, math is an abstraction that reduces to mental constructs, expressed through globally agreed-upon symbols. In this capacity, these constructs serve — in the complex realm of human cognition and imagination — as a convenient expression of our reasoning and logic, to better grasp the natural world. According to this ‘anti-realist’ school of thought, it is through our probing that we observe the universe and that we then build mathematical formulae in order to describe what we see. Isaac Newton, for example, developed calculus to explain such things as the acceleration of objects and planetary orbits. Mathematicians sometimes refine their formulae later, to increasingly conform to what scientists learn about the universe over time. Another way to put it is that anti-realist theory is saying that without humankind around, mathematics would not exist, either. Yet, the flaw in this paradigm is that it leaves the foundation of reality unstated. It doesn’t meet Galileo’s incisive and ponderable observation that:
‘The book of nature is written in the language of mathematics.’
If, however, mathematics is regarded as the unshakably fundamental basis of the universe — whereby it acts as the native language of everything (embodying universal truths) — then humanity’s role becomes to discover the underlying numbers, equations, and axioms. According to this view, mathematics is intrinsic to nature and provides the building blocks — both proximate and ultimate — of the entire universe. An example consists of that part of the mathematics of Einstein’s theory of general relativity predicting the existence of ‘gravitational waves’; the presence of these waves would not be proven empirically until this century, through advanced technology and techniques. Per this ‘Platonic’ school of thought, the numbers and relationships associated with mathematics would nonetheless still exist, describing phenomena and governing how they interrelate, bringing a semblance of order to the universe — a math-based universe that would exist even absent humankind. After all, this underlying mathematics existed before humans arrived upon the scene — awaiting our discovery — and this mathematics will persist long after us.

If this Platonic theory is the correct way to look at reality, as I believe it is, then it’s worth taking the issue to the next level: the unique role of mathematics in formulating truth and serving as the underlying reality of the universe — both quantitative and qualitative. As Aristotle summed it up, the ‘principles of mathematics are the principles of all things’. Aristotle’s broad stroke foreshadowed the possibility of what millennia later became known in the mathematical and science world as a ‘theory of everything’, unifying all forces, including the still-defiant unification of quantum mechanics and relativity. 

As the Swedish-American cosmologist Max Tegmark provocatively put it, ‘There is only mathematics; that is all that exists’ — an unmistakably monist perspective. He colorfully goes on:
‘We all live in a gigantic mathematical object — one that’s more elaborate than a dodecahedron, and probably also more complex than objects with intimidating names such as Calabi-Yau manifolds, tensor bundles and Hilbert spaces, which appear in today’s most advanced physics theories. Everything in our world is purely mathematical — including you.’
The point is that mathematics doesn’t just provide ‘models’ of physical, qualitative, and relational reality; as Descartes suspected centuries ago, mathematics is reality.

Mathematics thus doesn’t care, if you will, what one might ‘believe’; it dispassionately performs its substratum role, regardless. The more we discover the universe’s mathematical basis, the more we build on an increasingly robust, accurate understanding of universal truths, and get ever nearer to an uncannily precise, clear window onto all reality — foundational to the universe. 

In this role, mathematics has enormous predictive capabilities that pave the way to its inexhaustibly revealing reality. An example is the mathematical hypothesis stating that a particular fundamental particle exists whose field is responsible for the existence of mass. The particle was theoretically predicted, in mathematical form, in the 1960s by British physicist Peter Higgs. Existence of the particle — named the Higgs boson — was confirmed by tests some fifty-plus years later. Likewise, Fermat’s famous last theorem, conjectured in 1637, was not proven mathematically until some 360 years later, in 1994 — yet the ‘truth value’ of the theorem nonetheless existed all along.
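
For readers who want the precise claim behind the Fermat example: Fermat’s Last Theorem states that the equation

\[
x^n + y^n = z^n
\]

has no solutions in positive integers x, y, z for any integer exponent n greater than 2. On the Platonist view, that truth held for the roughly three and a half centuries between Fermat’s conjecture and Andrew Wiles’s proof just as surely as it does today.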

Underlying this discussion is the unsurprising observation by the early-20th-century philosopher Edmund Husserl, who noted, in understated fashion, that ‘Experience by itself is not science’, while elsewhere referring to ‘the profusion of insights’ that could be obtained from mathematical research. That process is one of discovery. Discovery, that is, of things that are true, even if we had not hitherto known them to be so. The ‘profusion of insights’ obtained in that mathematical manner renders a method that is complete and consistent enough to direct us to a category of understanding whereby all reality is mathematical reality.

Monday, 26 November 2018

How Language Connects Mind, World, and Reality


The Chinese characters for not only ‘meaning’ but for ‘connotation, denotation, import, gist, substance, significance, signification, implication, suggestion, consequence, worth, nuance, association, subtext, sense’  and more!

Posted by Keith Tidman

‘The limits of my language mean the limits of my world’, observed the Austrian-British philosopher Ludwig Wittgenstein in his 1922 book Tractatus Logico-Philosophicus. To that point, we might ask: How does language relate to the world? And, more particularly, does language shape human experience — our shared reality and our individual reality? Built into these questions is another — about how language connects mind and world, and in doing so arbitrates our experience of what’s around us.

At a fundamental level, words and ideas describe the world through things (people, horses, pomegranates), properties (purple, octagon, scratchy surface), relations (the moon is 384,000 kilometres from Earth, the flu virus infects millions of people globally, the calamari sits on her mezze plate), and abstractions (thought, value, meaning, belief). That is, language serves to create and aggregate knowledge, understanding, and experience. That’s broadly how we know what we know about reality. But language — the sounds made as people talk and the inscriptions made as they write — is more than just, say, a meta-tool for informational exchanges.

That is, people issue commands, share jokes, welcome visitors, pledge allegiances, pose questions, admonish, lie, explain feelings, threaten, share stories, exaggerate, sing, and so on. Body language (a suddenly raised eyebrow, perhaps) and tone (gruffness or ecstasy, perhaps) add an important layer. An observation by Willard Van Orman Quine, the 20th-century American philosopher, that ‘Language is a social art’, rightly captures this function of language in our lives. There’s a complex harmonising between what we infer and internalize about purported reality and the various kinds of things, properties, and relations that actually exist.

Language thus shapes our thoughts and changes how we think. The relation between thought (mind) and language is synergistic — that is, the combined effect of language and thought is greater than their separate effects. In this manner, a Chickasaw speaker, a Tagalog speaker, an Urdu speaker, a Russian speaker, and an English speaker perceive reality differently — the fundamental building blocks of which are words. As the British philosopher J.L. Austin noted:
‘Going back into the history of a word . . . we come back pretty commonly to pictures or models of how things happen or are done’.
The tie, we might say, between language and perceptions (‘pictures’ and ‘models’) — both concrete and abstract — of how reality, in all its nuance and complexity, plays out.

Correspondingly, the many subtle differences across the world’s roughly 7,000 languages — across vocabularies and other linguistic elements — frame and constrain the way we experience the world. That is, languages differ enough to lead to singularly dissimilar views of reality. Word choice, meaning (both denotation and connotation), syntax, metaphors, grammar, gender, figures of speech, correlation and causality, intent and expectation, and context all influence our perception of the world.

It is thus understandable, amidst this mix of languages’ ingredients, for the German-American philosopher Rudolf Carnap, writing in the mid-20th century, to have counselled, ‘Let us . . . be tolerant in permitting linguistic forms’. Whether despite or because of this mix, language directly influences culture, which in turn bears on how we talk and what we talk about. Cultural norms influence this process. Yet, notwithstanding the power of perceptions, there is a world independent of language — empirically knowable — even if external reality may not be independent of observation and measurement. Galaxies and microbes exist.

As one illustration where language intervenes upon reality, it has been pointed out that the Native American language Nootka has actions as its principal classification of words. Emphasis is on verbs that describe reality not as physical objects (where subjects act upon objects) but as transitory occurrences — like ‘a meal occurs’ — or longer lived — like ‘shelter occurs’. The result ‘delineates’ the Nootka notion of reality, distinguishing it from others’. It is in the context of this rather expansive view of language that Noam Chomsky, the American linguist, is surely right in saying:
‘A language is not just words. It’s a culture, a tradition, a unification of a community, a whole history that creates what a community is. It’s all embodied in a language’.
Extending this theme, of tying together usage and perspective, in some languages there is no front, back, left, and right; instead, there is north, south, east, or west of something — a geographical kind of view of place. Two languages with just such a sense of location and cardinal direction are Guugu Yimithirr, which is an aboriginal language from Australia, and Sambali, spoken in a province of the Philippines. Another example entails agency for an accidental action: ‘Sebastian, the lead lab scientist, dropped the test tube’ (agency pinpointed, as in English) versus ‘The test tube dropped’ (agency hidden, as in Japanese). These rich differences among languages have implications that ripple across society, affecting, for example, values, norms, law, economics, and political policy.

We might argue that the plasticity of language — and the consequential differences in how language, over time, shapes our understanding of reality — affects how the mind distinguishes between fact and fiction. This observation hints at the subjectivity associated with postmodernism in defining the truth and falsity of perceived reality — at least in a linguistic context. In this view, a subjectively conscious reality — differing among the native speakers of diverse languages — and the external world do not intersect, or if they do, it is but imperfectly.

As such, purported knowledge, understanding, and belief are likely to be contested among partisan cultures, each embracing its own conventions regarding how the mind might describe the world. Writing in the mid-20th century, Algerian-French philosopher Jacques Derrida pointed to this issue of defensively shielding one’s own language, saying:
‘No one gets angry at . . . someone who speaks a foreign language, but rather with someone who tampers with your own language’.
And yet, with Derrida’s cautionary words in mind, whose truth and falsity is it? And whose perspective is valid, or at least the most valid (that is, the least flawed)? Does it come down to simply a catalogue of rules for usage prescribed within each community speaking and writing a particular language? Perhaps J.L. Austin got it right in opining, ‘Sentences are not as such either true or false’.

Perhaps, too, it is as Humpty Dumpty famously declared in Lewis Carroll’s book, Through the Looking Glass, when he said:
‘When I use a word, it means just what I choose it to mean, neither more nor less’.
That’s not too far off from the latest thinking about language, actually. Why so? It’s not only that different languages seem to lead to a different knowledge, understanding, and experience of reality within the mind. Rather, the effects of language seem more granular than that: users within each of the world’s thousands of languages have a different understanding of reality than even their fellow native speakers of those languages.

There are thus two levels of reality in the mind’s eye: one based on shared languages, such as Norwegian, Khmer, and Maori. And one based on individuals within each language group, whose personalised understanding and application of language uniquely and subtly differs from one person to another — quite apart from the differences in how, as individuals, we stamp our customs and norms on language.