
Sunday 26 February 2023

Universal Human Rights for Everyone, Everywhere

Jean-Jacques Rousseau

By Keith Tidman


Human rights exist only if people believe that they do and act accordingly. To that extent, we are, collectively, architects of our destiny — taking part in an exercise in the powers of human dignity and sovereignty. Might we, therefore, justly consider human rights as universal?

To presume that there are such rights, governments must be fashioned according to the people’s freely subscribed blueprints, in such ways that policymaking and the consignment of authority in society represent citizens’ choices and that power is willingly shared. Such individual autonomy is itself a fundamental human right: a norm to be exercised by all, in all corners, despite scattered conspicuous headwinds. Respect for and attachment to human rights in relations with others is binding, prevailing over the mercurial whimsy of institutional dictates.

For clarity, universal human rights are inalienable norms that apply to everyone, everywhere. No nation ought to self-immunise as an exception. These human rights are not mere privileges. By definition they represent the natural order of things; that is, these rights are naturally, not institutionally, endowed. There’s no place for governmental, legal, or social neglect or misapplication of those norms, heretically violating human dignity. This point about dignity is redolent of Jean-Jacques Rousseau’s notions of civil society, explained in his Social Contract (1762), which provocatively opens with the famous ‘Man was born free, and he is everywhere in chains’. Rousseau was referring here to the tradeoff by which people defer to government authority over moral behaviour in exchange for whatever freedoms civilisation might grant as part of the social contract. The contrary notion, however, asserts that human rights are natural, protected from government caprice in their unassailability — claims secured by the humanitarianism of citizens in all countries, regardless of cultural differences.

The idea that everyone has a claim to immutable rights has the appeal of providing a platform for calling out wrongful behaviour and a moral voice for preventing or remedying harms, in compliance with universal standards. The standards act as moral guarantees and assurance of oversight. Differences among cultures should not translate into the warped misplacement of relativism when weighing otherwise clear-cut universal rights intended to protect everyone.

International nongovernmental organisations (such as Human Rights Watch) have laboured to protect fundamental liberties around the world, investigating abuses. Other international bodies, most notably the United Nations, have sought to codify people’s rights, like those spelled out in the UN’s Universal Declaration of Human Rights. The many universal human rights listed by the declaration include these:
‘All human beings are born free; everyone has the right to life, liberty, and security; no one shall be subjected to torture; everyone has the right to freedom of thought, conscience, and religion; everyone has the right to education; no one shall be held in slavery; all are equal before the law’.
(Here’s the full UN declaration, for a grasp of its breadth.) 

These aims have been ‘hallowed’ by the several documents spelling out moral canon, in aggregate amounting to an international bill of rights to which countries are to commit and by which they are to abide. This has been done without regard to appeals to national sovereignty or cultural differences, which might otherwise prejudice the process, skew policy, undermine moral universalism, lay claim to government dominion, or cater to geopolitical bickering — such things always threatening to cut the legs out from under citizens’ human rights.

These kinds of organisations have set the philosophical framework for determining, spelling out, justifying, and promoting the implementation of human rights on as global a scale as possible. Aristotle, in the Nicomachean Ethics, wrote to this core point, saying:
‘A rule of justice is natural that has the same validity everywhere, and does not depend on our accepting it’.
That is, natural justice foreruns social, historical, and political institutions shaped to bring about conformance to their arbitrary, self-serving systems of fairness and justice. Aristotle goes on:
‘Some people think that all rules of justice are merely conventional, because whereas a law of nature is immutable and has the same validity everywhere, as fire burns both here and in Persia, rules of justice are seen to vary. That rules of justice vary is not absolutely true, but only with qualifications. Among the gods indeed it is perhaps not true at all; but in our world, although there is such a thing as Natural Justice, all rules of justice are variable. But nevertheless there is such a thing as Natural Justice as well as justice not ordained by nature’.
Natural justice accordingly applies to everyone, everywhere, where moral beliefs are objectively corroborated as universal truths and certified as profound human goods. In this model, it is the individual who shoulders the task of appraising the moral content of institutional decision-making.

Likewise, it was John Locke, the 17th-century English philosopher, who argued, in his Two Treatises of Government, that individuals enjoy natural rights entirely independent of the nation-state, and that whatever authority the state might lay claim to rested in guarding, promoting, and serving the natural rights of citizens. The natural rights to life, liberty, and property set clear limits to the power of the state. There was no mystery as to Locke’s position: states existed singularly to serve the natural rights of the people.

A century later, Immanuel Kant was in the vanguard in similarly taking a strong moral position on validating the importance of human rights, chiefly the entangled ideals of equality and the moral autonomy and self-determination of rational people.

The combination of the universality and moral heft of human rights clearly imparts greater potency to people’s rights, untethered from any legal or institutional act of acknowledgment. As such, human rights are enjoyed equally, by everyone, all the time. It makes sense to conclude that everyone is therefore responsible for guarding the rights of fellow citizens, not just their own. Yet, in practice it is the political regime and perhaps international organisations that bear that load.

And within the ranks of philosophers, human-rights universalism has sometimes clashed with relativists, who reject universal (objective) moral canon. They paint human rights as influenced contingently by social, historical, and cultural factors. The belief is that rights are considered appropriate only in those countries whose cultures allow for them. Yet, surely, relativism still permits the universality of numerous rights. We instinctively know that not all rights are relative. At the least, societies must parse which rights endure as universal and which endure as relative, and hope the former are favoured.

That optimism notwithstanding, many national governments around the world choose not to uphold, either in part or in whole, fundamental rights in their countries. Perhaps the most transfixing case for universal human rights, as entitlements, is the inhumanity that haunts swaths of the world today, instigated for the most trifling of reasons.

Tuesday 24 January 2023

‘Brain in a Vat’: A Thought Experiment


By Keith Tidman

Let’s hypothesise that someone’s brain has been removed from the body and immersed in a vat of fluids essential for keeping the brain not only alive and healthy but functioning normally — as if it is still in a human skull sustained by other bodily organs.

A version of this thought experiment was laid out by René Descartes in 1641 in the Meditations on First Philosophy, as part of inquiring whether sensory impressions are delusions, an investigation that ultimately led to his celebrated conclusion, ‘Cogito, ergo sum’ (‘I think, therefore I am’). Fast-forward to the American philosopher Gilbert Harman, who modernised the what-if experiment in 1973. Harman’s update included introducing the idea of a vat (in place of the allegorical device of information being fed to someone by an ‘evil demon’, originally conceived by Descartes) in order to acknowledge the contemporary influences of neuroscience in understanding the brain and mind.

In this thought experiment, a brain separated from its body and sustained in a vat of chemicals is assumed to possess consciousness — that is, the neuronal correlates of perception, experience, awareness, wonderment, cognition, abstraction, and higher-order thought — with its nerve endings attached by wires to a quantum computer and a sophisticated program. Scientists feed the disembodied brain with electrical signals, identical to those that people are familiar with receiving during the process of interacting through the senses with a notional external world. Hooked up in this manner, the brain (mind) in the vat therefore does not physically interact with what we otherwise perceive as a material world. Conceptualizations of a physical world — fed to the brain via computer prompts and mimicking such encounters — suffice for the awareness of experience.

The aim of this what-if experiment is to test questions not about science or even ‘Matrix’-like science fiction, but about epistemology — queries such as what do we know, how do we know it, with what certainty do we know it, and why does what we know matter? Specifically, issues to do with scepticism, truth, mind, interpretation, belief, and reality-versus-illusion — influenced by the lack of irrefutable evidence that we are not, in fact, brains in vats. We might regard these notions as solipsistic, where the mind believes nothing (no mental state) exists beyond what it alone experiences and thinks it knows.

In the brain-in-a-vat scenario, the mind cannot differentiate between experiences of things and events in the physical, external world and those virtual experiences electrically prompted by the scientists who programmed the computer. Yet, since the brain is in all ways experiencing a reality, whether or not illusionary, then even in the absence of a body the mind bears the complement of higher-order qualities required to be a person, invested with full-on human-level consciousness. To the brain suspended in a vat and to the brain housed in a skull sitting atop a body, the mental life experienced is presumed to be the same.

But my question, then, is this: Is either reality — that for which the computer provides evidence and that for which external things and events provide evidence — more convincing (more real, that is) than the other? After all, are not both experiences of, say, a blue sky with puffy clouds qualitatively and notionally the same: whereby both realities are the product of impulses, even if the sources and paths of the impulses differ?

If the experiences are qualitatively the same, the philosophical sceptic might maintain that much about the external world that we surmise is true, like the briskness of a winter morning or the aroma of fresh-baked bread, is in fact hard to nail down. The reason is that, in the case of a brain in a vat, the evidence of a reality provided by the scientists is assumed to resemble that provided by a material external world, yet it may yield a different interpretation of someone’s experiences. We might wonder how many descriptions there are of how the conceptualized world corresponds to what we ambitiously call ultimate reality.

So, for example, the sceptical hypothesis asserts that if we are unsure about not being a brain in a vat, then we cannot disregard the possibility that all our propositions (alleged knowledge) about the outside physical world would not hold up to scrutiny. This argument can be expressed by the following syllogism:

1. If I know any proposition of external things and events, then I know that I am not a brain in a vat;

2. I do not know that I am not a brain in a vat;

3. Therefore, I do not know any proposition about external things and events.
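
For readers who want the bare logical form, the argument is a modus tollens. The notation below is merely an illustrative shorthand of my own (K for ‘I know that’, p for any proposition about external things and events, v for ‘I am a brain in a vat’), not something drawn from Descartes or Harman:

\[
\begin{aligned}
&\text{(1)}\quad K(p) \rightarrow K(\lnot v)\\
&\text{(2)}\quad \lnot K(\lnot v)\\
&\text{(3)}\quad \therefore\ \lnot K(p)
\end{aligned}
\]

Premise (1) says that knowing any such proposition would require knowing that the vat scenario is false; premise (2) denies that knowledge; the conclusion (3) then follows by modus tollens.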


Further, given that a brain in a vat and a brain in a skull would receive identical stimuli — and that such stimuli are the only means by which either brain is able to relate to its surroundings — then neither brain can determine if it is the one bathed in a vat or the one embodied in a skull. Neither mind can be sure of the soundness of what it thinks it knows, even knowledge of a world of supposed mind-independent things and events. This is the case even though computer-generated impulses realistically substitute for direct bodily interaction with a material external world. So, for instance, when a brain in a vat believes that ‘wind is blowing’, there is no wind — no rushing movement of air molecules — but rather the computer-coded, mental simulation of wind. That is, replication of the qualitative state of physical reality.

I would argue that the world experienced by the brain in a vat is not fictitious or unauthentic, but rather is as real to the disembodied brain and mind as the external, physical world is to the embodied brain. Both brains fashion valid representations of truth. I therefore propose that each brain is ‘sufficient’ to qualify as a person: where, notably, the brains’ housing (vat or skull) and signal pathways (digital or sensory) do not matter.

Monday 9 January 2023

The Philosophy of Science


The solar eclipse of May 29, 1919, forced a rethink of fundamental laws of physics

By Keith Tidman


Science aims at uncovering what is true. And it is equipped with all the tools — natural laws, methods, technologies, mathematics — that it needs to succeed. Indeed, in many ways, science works exquisitely. But does science ever actually arrive at reality? Or is science, despite its persuasiveness, paradoxically consigned to forever wending closer to its goal, yet not quite arriving — as theories are either amended to fit new findings, or they have to be replaced outright?

It is the case that science relies on observation — especially measurement. Observation confirms and grounds the validity of contending models of reality, empowering critical analysis to probe the details. The role of analysis is to scrutinise a theory’s scaffolding, to better visualise the coherent whole, broadening and deepening what is understood of the natural world. To these aims, science, at its best, has a knack for abiding by the ‘law of parsimony’, Occam’s razor — describing complexity as simply as possible, with the fewest suppositions needed to get the job done.

To be clear, other fields attempt this self-scrutiny and rigour, too, in one manner or another, as they fuel humanity’s flame of creative discovery and invention. They include history, languages, aesthetics, rhetoric, ethics, anthropology, law, religion, and of course philosophy, among others. But just as these fields are unique in their mission (oriented in the present) and their vision (oriented in the future), so is science — the latter heralding a physical world thought to be rational.

Accordingly, in science, theories should agree with evidence-informed, objective observations. Results should be replicated every time that tests and observations are run, confirming predictions. This bottom-up process is driven by what is called inductive reasoning: where a general principle — a conclusion, like an explanatory theory — is derived from multiple observations in which a pattern is discerned. An example of inductive reasoning at its best is Newton’s Third Law of Motion, which states that for every action (force) there is an equal and opposite reaction. It is a law that has worked unfailingly in uncountable instances.
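
Stated symbolically (this is the standard textbook rendering of the law, not anything specific to this essay), if body A exerts a force on body B, then body B exerts a force on A that is equal in magnitude and opposite in direction:

\[
\mathbf{F}_{A \to B} = -\,\mathbf{F}_{B \to A}
\]

Every interaction observed so far fits this pattern, which is precisely what lends the inductive generalisation its force, and what a single well-attested counterexample would undermine.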

But such successes do not eliminate inductive reasoning’s sliver of vulnerability. Karl Popper, the 20th-century Austrian-British philosopher of science, considered all scientific knowledge to be provisional. He illustrated his point with the example of a person who, having seen only white swans, concludes all swans are white. However, the person later discovers a black swan, an event conclusively rebutting the universality of white swans. Of course, abandoning this latter principle has little consequence. But what if an exception to Newton’s universal law governing action and reaction were to appear, instead?

Perhaps, as Popper suggests, truth, scientific and otherwise, should therefore only ever be parsed as partial or incomplete, where hypotheses offer different truth-values, and our striving for unconditional truth remains a task in the making. This is of particular relevance in complex areas: like the nature of being and existence (ontology); or of universal concepts, transcendental ideas, metaphysics, and the fundamentals of what we think we know and understand (epistemology). (Areas also known to attempt to reveal the truth of unobserved things.)

And so, Popper introduced a new test of truth: ‘falsifiability’. That is, all scientific assertions should be subjected to the test of being proven false — the opposite of seeking confirmation. Einstein, too, was more interested in whether experiments disagreed with his bold conjectures, as such experiments would render his theories invalid — rather than merely provide further evidence for them.

Nonetheless, as human nature would have it, Einstein was jubilant when his prediction that massive objects bend light was confirmed by astronomical observations of light passing close to the sun during the total solar eclipse of 1919, the observation thereby requiring revision of Newton’s formulation of the laws of gravity.

Testability is also central to another aspect of epistemology. That is, to draw a line between true science — whose predictions are subject to rigorous falsification and thus potential disproof — and pseudoscience — seen as speculative, untestable predictions relying on uncontested dogma. Pseudoscience balances precariously, depending as it does on adopters’ fickle belief-commitment rather than on rigorous tests and critical analyses.

On the plus side, if theories are not successfully falsified despite earnest efforts to do so, the claims may have a greater chance of turning out true. Well, at least until new information surfaces to force change to a model. Or, until ingenious thought experiments and insights lead to the sweeping replacement of a theory. Or, until investigation explains how to merge models formerly considered defyingly unalike, yet valid in their respective domains. An example of this last point is the case of general relativity and quantum mechanics, which have remained irreconcilable in describing reality (in matters ranging from spacetime to gravity), despite physicists’ attempts. 

As to the wholesale switching out of scientific theories, it may appear compelling to make the switch, based on accumulated new findings or the sense that the old theory has major fault lines, suggesting it has run its useful course. The 20th-century American philosopher of science, Thomas Kuhn, was influential in this regard, coining the formative expression ‘paradigm shift’. The shift occurs when a new scientific theory replaces its problem-ridden predecessor, based on a consensus among scientists that the new theory (paradigm) better describes the world, offering a ‘revolutionarily’ different understanding that requires a shift in fundamental concepts.


Among the great paradigm shifts of history is Copernicus’s sun-centred (heliocentric) model of planetary motion, replacing Ptolemy’s Earth-centred model. Another was Charles Darwin’s theory of natural selection as key to the biological sciences, informing the origins and evolution of species. Additionally, Einstein’s theories of relativity ushered in major changes to Newton’s understanding of the physical universe. Also significant was recognition that plate tectonics explain large-scale geologic change. Significant, too, was the development by Niels Bohr and others of quantum mechanics, replacing classical mechanics at microscopic scales. The story of paradigm shifts is long and continues.


Science’s progress in unveiling the universe’s mysteries entails dynamic processes: One is the enduring sustainability of theories, seemingly etched in stone, that hold up under unsparing tests of verification and falsification. Another is the implementation of amendments as contrary findings chip away at the efficacy of models. And another still is the revolutionary replacement of scientific models as legacy theories become frail and fail. These are reasons for belief in the methods of positivism.


In 1960, the physicist Eugene Wigner wrote what became a famous paper in philosophy and other circles, coining the evocative expression ‘unreasonable effectiveness’. This was in reference to the role of mathematics in the natural sciences, but he could well have been speaking of the role of science itself in acquiring understanding of the world.


Monday 12 December 2022

Determinism and Accountability

Dominoes falling

By Keith Tidman


People assume that free will and moral responsibility are mutually and inextricably interwoven. That is, the default belief tends to be that people make decisions and act on them freely. On the grounds of that conviction, society condemns and punishes, or lauds and rewards, people on the basis of their actions’ supposed morality. It’s how accountability for behaviour intersects with matters like retributive and distributive justice. 

 

But what if decisions and actions are already decided – predetermined? Such that if an event has transpired, it is impossible it could not have happened. Might society still need to parse people’s deeds on the basis of some arbitrary construct — a community’s self-prescribed code of right and wrong — in order for society to function in an orderly fashion?

 

With the objective, then, of preserving social orderliness, all the while holding people responsible, does society have any option but to submit to at least the pretense of free will? Where even that pretense is itself predetermined. That is, to make-believe — for the sake of convenience, pragmatic expediency, and the evasion of disorder — that people enjoy unfettered decisions, choices, and deeds.

 

Okay, so far I’ve summarised what free will means by way of libertarian agency in choosing and behaving in particular ways, with the presumption, however faulty, that people could have acted otherwise. But what about its counterpoint, determinism: especially what in academic circles is often referred to as ‘hard determinism’, where determinism and freedom unreservedly conflict (called incompatibilism)?

 

According to determinism, for example, acting benevolently rather than selfishly (or the reverse) may be no more the exercise of unconstrained free agency than naturally having brunette hair or 20/20 vision. We may not really be ‘free’ to decide which job candidate to hire, which book to read, which model car to buy, which investment to make, which country to visit — or which political candidate to vote for.

 

Rather, the argument states that all decisions and deeds are predicated on the laws of nature, which inform, describe, and animate the stuff of our universe. The proposition is that people’s choices and actions are shaped (are predetermined) by all that has happened over the course of the cosmos’s entire lifespan. The basis is an unremitting regress of successive causes and outcomes recursively branching and branching in incalculable directions, nonstop. A causal determinism, sourced all the way back to the beginning of the universe.

 

That is, decisions and deeds inescapably result from a timeless accretion of precedents. The tumbling buildup, over far-ranging generations, of influences: like culture, genetic makeup, experiences, parenting, evolution, intelligence, identity, emotions, disposition, surroundings. As well as, every bit crucially, what naturally occurred throughout the entirety of history and prehistory.

 

Such factors, among others, have powerful, compelling influences, canceling out moral agency — our ability to make choices based on our sense of right and wrong. After all, in the deterministic model, the events that occurred as antecedents of current and future events did so necessarily. Indeed, we might imagine that if fissures were ever to show up in determinism’s cause-and-effect procession of happenings, the laws of nature and of human behaviour would pitch toward systemic failure — the undoing of events’ inevitability. We thus justify judging and punishing people who behave antisocially, on grounds induced by predetermination, where there is only one possible course of events.

 

If, however, because of the absence of free agency and volitional intent, people cannot be regarded as morally accountable, ought they be held responsible anyway, subject to legal or other kinds of sanction? To go through the motions — despite determinism dangling menacingly over systems of criminal justice everywhere. And similarly, ought people be lauded and rewarded for things deemed to have been done right? With implications for assigned guilt, sin, and evil, and other verdicts pertinent to actions freely chosen.

 

One answer to the two preceding questions about responsibility has been ‘yes’, on the basis of a belief system referred to as compatibilism. This asserts that free will and determinism can compatibly coexist. But this is a challenging — arguably impossible — needle to thread, short of arbitrarily warping definitions, assumptions, and preconceived conditions.

 

My position goes in a different, even simpler, direction from compatibilism. It is that accountability is necessitated by society having to prescribe ethical norms, no matter how contrived — and attempt to force human behaviour to fit those engineered norms — in order to avoid society alternatively sinking into chaos. In this manner, society learns, perhaps kicking and screaming, to cope with a deterministic world — a world where people cannot act otherwise than they do, and events are inevitable.

 

It’s difficult for us to shake intuitively favouring free will, despite its illusory nature. People feel as if in control; they zealously covet being in control; they recoil unsettlingly at the prospect of not being in control. Fundamentally, they sense that personal agency and volitional intent define humanity. They can’t easily discard the pretense that only freely willed actions meet the criterion of warranting tribute, on the one hand, or fault, on the other.

 

But even if they’re not in control, and determinism routed free will from the start, society must behave otherwise: it must hold people responsible, both to deter and punish — censure — and to reward — validate — decisions and actions as if free choice had indeed sparked them. 



 

Monday 7 November 2022

Free Will, the ‘Block Universe’, and Eternalism

In this image, the light trail left by traffic illustrates an idea central to the growing block universe theory of time, that the past, present, and future coexist.  

By Keith Tidman

The block universe is already filled with every event that ever happens. It is where what are traditionally dubbed the past, present, and future exist simultaneously, not as classically flowing linearly from one to the other. As such, these three distinct aspects to time, which by definition exclude the notion of tense, are equally real. None is in any way advantaged over the others.


The orthodox model of a ‘block universe’ describes a four-dimensional universe, resembling a cube, which merges the three dimensions of space and one of time, along the lines that Albert Einstein theorised in his special relativity.


Might this tell us something about the possibility of free will in such a universe? Before we try to answer, let’s explore more particulars about the block universe itself.

 

If observed from outside, the block would appear to hold all of space and time. The spacetime coordinates of someone’s birth and death — and every occurrence bracketed in between — accordingly exist concurrently somewhere within the block. The occurrences are inalterably and forever in the block. This portrayal of foreverness is sometimes referred to as ‘eternalism’, defined as a complete history of all possible events.

 

Conventionally, the block is considered static. But maybe it’s not. What if, for example, what we ordinarily call ‘time’ is better called change? After all, the second law of thermodynamics tells us that the entropy of the entire universe — a measure of its disorder — always increases on net. It never decreases, until, that is, the universe ultimately ends. This demonstrates how change, as in the case of entropy, moves inexorably in one direction. The inevitability of such change has a special place for humankind, as reality transforms.
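
Put compactly (a standard formulation of the second law rather than anything unique to this essay, and assuming the universe as a whole may be treated as an isolated system), the claim is that total entropy S never decreases:

\[
\Delta S_{\text{universe}} \geq 0
\]

Equality holds only for idealised, fully reversible processes; every real process nudges net disorder upward, giving change the one-way character described here.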

 

Entropy is thus consummate change, on a cosmic scale, which is how the illusion of something we call ‘the arrow of time’ manifests itself in our conscious minds. As such, change, not time, is what is truly fundamental in nature. Change defines our world. Which, in turn, means that what the block universe comprises is necessarily dynamical and fluid, rather than frozen and still. By extension, the block universe challenges the concept of eternalism.

 

This also means that cause and effect exist (as do correlation and effect) as fundamental features of a universe in which ‘becoming’, in the form of change, is rooted. Despite past, present, and future coexisting within the block universe, causes still necessarily precede and can never follow the effects of what appears as relentless change. Such change serves, in place of illusory time, as one axis matched up with three-dimensional space. The traditional picture of the block universe comprising nondynamical events would contradict the role of cause in making things happen.

 

So, let’s return to the issue of free will within the block universe.

 

First off, the block universe has typically been described as deterministic. That is, if every event within the universe happens simultaneously according to the precise space and time coordinates the model calls for, then everything has been inescapably preordained, or predetermined. It all just is. Free will in such a situation becomes every bit as much an illusion as time.

 

But there’s a caveat pushing back against that last point. In the absence of free will, humans would resemble automatons. We would be contraption-like assemblages of parts that move but lack agency, and would be devoid of meaningful identity and true humanity. We, and events, could be seen as two-dimensional set pieces on a stage, deterministically scripted. With no stage direction or audience — and worse, no meaning. Some might proclaim that our sense of autonomy is yet another illusion, along with time. But I believe, given our species’ active role within this dynamical cosmos, that reality is otherwise.

 

Further, determinism would take us off the hook of accountability and consequences. Fate, bubbling up from the capriciousness of nature’s supposed mechanistic forces, would situate us in a world stripped of responsibility. A world in which our lives are pointlessly set to automatic. Where the distinction between good and evil becomes fuzzy. In this world, ethical norms are arbitrary and fickle — a mere stage prop, giving the appearance of consequences to actions.

 

And yet, the blueprint above replacing the concept of time with that of change puts free will back into play, allowing a universe in which our conscious minds freely make decisions and behave accordingly. Or, at least, seemingly so. In particular, for there to be events at the space-change coordinates of the block universe, there must be something capable of driving (causing) change. The events aren’t simply fated. That ‘something’ can only be choice associated with truly libertarian free will.

 

There’s one other aspect to free will that should be mentioned. Given that motion within the three-dimensional space of the block universe can occur, not only the what but also the where of events can be changed. Again, agency is required to freely choose. It’s like shuffling cards: the cards remain the same, but their ‘coordinates’ (location) change.

 

In refutation of determinism, the nature of change as described above allows that what decisions we make and actions we take within the block universe are expressions of libertarian free will. Our choices become new threads woven through the block universe’s fabric — threads that prove dissoluble, however, through the ceaselessness of change.

 

Monday 31 October 2022

Beetle in a Box: A Thought Experiment


By Keith Tidman


Let’s hypothesise that everyone in a community has a box containing a ‘beetle’. Each person can peer into only his or her box, and never into anyone else’s. Each person insists, upon looking into their own box, that they know what a ‘beetle’ is.

But there’s a catch: Each box might contain something different from some or all the others; each box might contain something that continually changes; or each box might actually contain nothing at all. Yet upon being asked, each person resolutely continues to use the word ‘beetle’ to describe what’s in their box, refusing, even if probed, to describe more fully what they see, and never showing it. The word ‘beetle’ thus comes simply to mean ‘that thing inside a person’s box’.

So, what does the thought experiment, set out by the influential twentieth-century philosopher Ludwig Wittgenstein in his book Philosophical Investigations, tell us about language, mind, and reality?

As part of this experiment, Wittgenstein introduced the concept of a ‘private language’. That is, a language with a vocabulary and structure that only its originator and sole user understands, all the while untranslatable and obscure to everyone else. The original notion of a private (personal) language was of something analogous to what a person might use in attempting to convey his or her unique experiences, perceptions, and senses — the person’s individualised mental state. However, one criticism of such a personal language is that, being mostly unfathomable to others, it fails the definitional purpose of a working language as we commonly know it: to communicate with others, using mutually agreed-upon and comprehended guidelines.

Notably, however, the idea of a ‘private language’ has been subject to different interpretations over the years — besides in expressing to others one’s own mental state — on account of what some people have held are its inherent ambiguities. Even on its surface, such a private language does seem handicapped, inadequate for faithfully representing external reality among multiple users. A language unable to tie external reality to ‘internal’ reality — to a person’s ‘immediate private sensations’, as Wittgenstein put it, such as pain the individual feels. That is, to the user’s subjective, qualitative state of mind. Yet, the idea that people’s frames of mind, subjective experiences, and sense of awareness are unknowable by others, or at least uncertainly known, seems to come to us quite naturally.

Conventionally speaking, we become familiar with what something is because of its intrinsic physical characteristics. That ‘something’ has an external, material reality, comfortably and knowingly acknowledged by others in abidance to norms within the community. The something holds to the familiar terms of the ‘public language’ we use to describe it. It conveys knowledge. It denotes the world as we know it, precipitated by the habitual awareness of things and events. There’s a reassuringly objective concreteness to it.

So, if you were to describe to someone else some of the conventional features of, say, a sheet of paper or of an airplane or of a dog, we would imagine that other people could fathom, with minimal cognitive effort and without bewilderment, what the item you were describing was. A ‘private language’ can’t do any of that, denying us a universally agreed-upon understanding of what Wittgenstein’s beetle-in-the-box might actually be. To the point about effectiveness, a ‘private language’ — where definitions of terms may be adversely arbitrary, unorthodox, imprecise, and unfamiliar — differs greatly from a ‘public language’ — where definitions of terms and syntactical form stick to conventional doctrine.

Meanwhile, such a realisation about the shortcomings of a ‘private language’ points to an analogy applicable to a ‘shared’ (or public) language: What happens in the case of expressing one’s personal, private experiences? Is it even possible to do so in an intelligible fashion? The discussion now pivots to the realm of the mind, interrogating aspects such as perception, appearance, attention, awareness, understanding, belief, and knowledge.

For example, if someone is in pain, or feeling joy, fear, or boredom, what’s actually conveyed and understood in trying to project their situation to other people? It’s likely that only they can understand their own mental state: their pain, joy, fear, or boredom. And any person with whom they are speaking, while perhaps genuinely empathetic and commiserative, in reality can only infer the other individual’s pain while understanding only their own.

Put another way, neither person can look into the other’s ‘box’; neither can reach into the other’s mind and hope to know. There are epistemic (knowledge-related) limits to how familiar we can be with another person’s subjective experience, even to the extent of the experience’s validation. Pain, joy, fear, and boredom are inexpressible and incomprehensible, beyond rough generalizations and approximations, whether resorting to either a ‘private’ or public language.

What’s important is that subjective feelings obscurely lack form — like the mysterious ‘beetle’. They lack the concrete, external reality mentioned previously. The reason is that your feelings and those of the other person are individualised, qualitative, and subjective. They are what philosophy of mind calls qualia. Your worry, pleasure, pride, and anxiety thus likely don’t squarely align with mine or the next person’s. We default, as Wittgenstein put it, to a ‘language game’ with consequences, one with its own puzzling syntactical rules and lexicon, and with that comes the challenge of translating reality into precise, logical, decipherable meaning.

All of which echoes Wittgenstein’s counsel against the inchoate, rudimentary notion of a ‘private language’, precisely because of its lacking necessary social, cultural, historical, and semiotic context. A social backdrop whereby a language must be predictably translatable into coherent concepts (with the notable exception of qualia). Such as giving things identifiable, inherent form readily perceived by others, according to the norms of social engagement and shared discourse among people within a community.

Shape-shifting ‘beetles’ are a convenient analogue of shape-shifting mental states. They reflect the altering ways our qualitative, subjective states of mind influence our choices and behaviours, through which other people develop some sense of our states of mind and of how they may define us — a process that, because of its mercurial nature, is seldom reliable. The limitations of Wittgenstein’s ‘private language’ discussed here arguably render such a medium of communication unhelpful to this process.

We make assumptions, based on looking in the box at our metaphorical beetle (the thing or idea or sensation inside), that will uncover a link: a connection between internal, subjective reality — like the pain that Wittgenstein’s theorising demonstrably focused on, but also happiness, surprise, sadness, enthrallment, envy, boredom — and external, objective reality. However, the dynamics of linguistically expressing qualitative, individualised mental states like pain need to be better understood.

So, what truths about others’ states of mind are closed off from us, because we’re restricted to looking at only our own ‘beetle’ (experience, perception, sensation)? And because we have to reconcile ourselves to trying to bridge gaps in our knowledge by imperfectly divining, based on externalities like behaviour and language, what’s inside the ‘boxes’ (minds) of everyone else?

Monday 26 September 2022

Where Do Ideas Come From?


By Keith Tidman

Just as cosmic clouds of dust and gas, spanning many light-years, serve as ‘nurseries’ of new stars, could it be that the human mind similarly serves as a nursery, where untold thought fragments coalesce into full-fledged ideas?

At its best, this metaphor for bringing to bear creative ideas would provide us with a different way of looking at some of the most remarkable human achievements in the course of history.

These are things like Michelangelo’s inspired painting, sculpting, architecture, and engineering. The paradigm-shifting science of Niels Bohr and Max Planck developing quantum theory. The remarkable compositions of Mozart. The eternal triumvirate of Socrates, Plato, and Aristotle — whose intellectual hold remains to today. The piercing insights into human nature memorably expressed by Shakespeare. The democratic spread of knowledge achieved through Gutenberg’s printing press. And so many more, of course.

To borrow from Newton (with his nod to the generations of luminaries who set the stage for his own influences upon science and mathematics), might humbler souls, too, learn to ‘stand on the shoulders of such giants’, even if in less remarkable ways? Yet still to reach beyond the rote? And, if so, how might that work?

I would say that, for a start, it is essential for the mind to be unconstrained by conformance and orthodox groupthink in viewing and reconceiving the world: a quest for patterns. The creative process must not be sapped by concern over not getting endeavours right the first or second or third time. Doubting ideas, putting them to the test through decomposition and recomposition, adds to the rigour of those that optimally survive exploitation and scrutiny.

To find solutions that move significantly beyond the prevailing norms requires the mind to be undaunted, undistracted, and unflagging. Sometimes, how the creative process starts out — the initial conditions, as well as the increasing numbers of branching paths along which those conditions travel — greatly shapes eventual outcomes; other times, not. All part of the interlacing of analysis and serendipitous discovery. I think that tracing the genealogy of how ideas coalesce informs that process.

For a start, there’s a materialistic aspect to innovative thought, where the mind is demystified from some unmeasurable, ethereal other. That is, ideas are the product of neuronal activity in the fine-grained circuitry of the brain, where hundreds of trillions of synapses, acting like switches and routers and storage devices, sort out and connect thoughts and deliver clever solutions. Vastly more synapses, one might note, than there are stars in our Milky Way galaxy!

The whispering unconscious mind, present in reposed moments such as twilight or midnight or simply gazing into the distance, associated with ‘alpha brain waves’, is often where creative, innovative insights dwell, being readied to emerge. It’s where the critical mass of creative insights is housed, rising to challenge rigid intellectual canon. This activity finds a force magnifier in the ‘parallel processing’ of others’ minds during the frothy back and forth of collaborative dialogue.

The panoply of surrounding influences helps the mind set up stencils for transitioning inspiration into mature ideas. These influences may germinate from individuals in one’s own creative orbit, or as inspiration derived from the culture and community of which one is a part. And synthesising creative ideas across fields, as in multidisciplinary teams whose members complement one another, works effectively to kindle fresh insights and solutions.

Thoughts may be collaboratively exchanged within and among teams, pushing boundaries and inciting vision and understanding. It’s incremental, with ideas stepwise building on ideas in the manner famously acknowledged by Newton. Ultimately, at its best the process leads to the diffusion of ideas, across communities, as grist for others engaged in reflection and the generation of new takes on things. Chance happenings and spontaneous hunches matter, too, with blanks cooperatively filled in with others’ intuitions.

As an example, consider that, in a 1959 talk, the Nobel prize winning physicist, Richard Feynman, challenged the world to shrink text to such an extent that the entire twenty-four-volume Encyclopedia Britannica could fit onto the head of a pin. (A challenge perhaps reminiscent of the whimsical question about ‘the number of angels fitting on the head of a pin’, at the time intended to mock medieval scholasticism.) Meanwhile, Feynman believed there was no reason technology couldn’t be developed to accomplish the task. The challenge was met, through the scaling of nanotechnology, two and a half decades later. Never say never, when it comes to laying down novel intellectual markers.

I suggest that the most-fundamental dimension to the origination of such mind-stretching ideas as Feynman’s is curiosity — to wonder at the world as it has been, as it is now, and crucially as it might become. To doggedly stay on the trail of discovery through such measures as what-if deconstruction, reimagination, and reassembly. To ferret out what stands apart from the banal. And to create ways to ensure the right-fitting application of such reinvention.

Related is a knack for spotting otherwise secreted links between outwardly dissimilar and disconnected things and circumstances. Such links become apparent as a result of combining attentiveness, openness, resourcefulness, and imagination. A sense that there might be more to what’s locked in one’s gaze than what immediately springs to mind. Where, frankly, the trite expression ‘thinking outside-the-box’ is itself an ironic example of ‘thinking inside-the-box’.

Forging creative results from the junction of farsightedness and ingenuity is hard — to get from the ordinary to the extraordinary is a difficult, craggy path. Expertise and extensive knowledge are the metaphorical cosmic dust required to coalesce into the imaginatively original ideas sought.

Case in point is the technically grounded Edison, blessed with vision and critical-thinking competencies, experiencing a prolific string of inventive, life-changing eureka moments. Another example is Darwin, prepared to arrive at his long-marinating epiphany into the brave world of ‘natural selection’. Such incubation of ideas, venturing into uncharted waters, has proven immensely fruitful.

Thus, the ‘nurseries’ of thought fragments, coalescing into complex ideas, can provide insight into reality — and grist for future visionaries.

Monday 12 September 2022

The Uncaused Multiverse: And What It Signifies


By Keith Tidman

Here’s an argument that seems like commonsense: everything that exists has a cause; the universe exists; and so, therefore, the universe has a cause. A related argument goes on to say that the events that led to the universe must themselves ultimately originate from an uncaused event, bringing the regress of causes to a halt.

But is such a model of cosmic creation right?


Cosmologists assert that our universe was created by the Big Bang, an origin story developed by the Belgian physicist and Catholic priest Georges Lemaître in 1931. However, we ought not to confuse the so-called singularity — a tiny point of infinite density — and the follow-on Big Bang event with creation or causation per se, as if those events preceded the universe. Rather, they were early components of a universe that by then already existed, though in its infancy.

It’s often considered problematic to ask ‘what came before the Big Bang’, given the event is said to have led to the creation of space and time (I address ‘time’ in some detail below). By extension, the notion of nothingness prior to the Big Bang is equally problematic, because, correctly defined, nothingness is the total, absolute absence of everything — even energy and space. Although cosmologists claim that quantum fluctuations, or short bursts of energy in space, allowed the Big Bang to happen, we are surely then obliged to ask what allowed those fluctuations to happen.

Yet, it’s generally agreed you can’t get something from nothing. Which makes it all the more meaningful that by nothingness, we are not talking about space that happens to be empty, but rather the absence of space itself.

I therefore propose, instead, that there has always been something, an infinity where something is the default condition, corresponding to the impossibility of nothingness. Further, nothingness is inconceivable, in that we are incapable of visualising nothingness. As soon as we attempt to imagine nothingness, our minds — the very act of thinking about it — cause the otherwise abstract ‘nothingness’ to turn into the concreteness of ‘something’: a thing with features. We can’t resist that outcome, for we have no basis in reality and in experience that we can match up with this absolute absence of everything, including space, no matter how hard we try to picture it in our mind’s eye.

The notion of infinity in this model of being excludes not just a ‘first universe’, but likewise excludes a ‘first cause’ or ‘prime mover’. By its very definition, infinity has no starting point: no point of origin; no uncaused cause. That’s key; nothing and no one turned on some metaphorical switch, to get the ball rolling.

What I wish to convey is a model of multiple universes existing — each living and dying — within an infinitely bigger whole, where infinity excludes a ‘first cause’ or ‘first universe’.

In this scenario, where something has always prevailed over nothingness, the topic of time inevitably raises its head, needing to be addressed. We cannot ignore it. But, I suggest, time appears problematic only because it’s misconceived. Time is not something that suddenly lurches out of the starting gate upon the occurrence of a Big Bang, in the manner that cosmologists and philosophers have typically described. Instead, when properly understood, time is best reflected in the unfolding of change.

The so-called ‘arrow of time’ traditionally appears to us in the three-way guise of the past leading to (causing) the present leading to the future. Allegorically, like a river. However, I propose that past and future are artificial constructs of the mind that simply give us a handy mechanism by which to live with the consequences of what we customarily call time: by that, meaning the consequences of change, and thus of causation. Accordingly, it is change through which time (temporal duration) is made visible to us; that is, the neurophysiological perception of change in human consciousness.

As such, only the present — a single, seamless ‘now’ — exists in context of our experience. To be sure, future and past give us a practical mental framework for modeling a world in ways that conveniently help us to make sense of it on an everyday level. Such as for hypothesising about what might be ahead and chronicling events for possible retrieval in the ‘now’. However, future and past are figments, of which we have to make the best. ‘Time reflected as change’ fits the cosmological model described here.

A process called ‘entropy’ lets us look at this time-as-change model on a cosmic scale. How? Well, entropy is the irresistible increase in net disorder — that is, evolving change — in a single universe. Despite spotty semblances of increased order in a universe — from the formation of new stars and galaxies to someone baking an apple pie — such localised instances of increased order are more than offset by the governing physical laws of thermodynamics.

These physical laws result in increasing net disorder, randomness, and uncertainty during the life cycle of a universe. That is, the arrow of change playing out as universes live and peter out because of heat death — or as a result of universes reversing their expansion and unwinding, erasing everything, only to rebound. Entropy, then, is really super-charged change running its course within each universe, giving us the impression of something we dub time.

I propose that in this cosmological model, the universe we inhabit is no more unique and alone than our solar system or beyond it our spiral galaxy, the Milky Way. The multiplicity of such things that we observe and readily accept within our universe arguably mirrors a similar multiplicity beyond our universe. These multiple universes may be regarded as occurring both in succession and in parallel, entailing variants of Big Bangs and entropy-driven ‘heat deaths’, within an infinitely larger whole of which they are a part.

In this multiverse reality of cosmic roiling, the likelihood of dissimilar natural laws from one universe to another, across the infinite many, matters as to each world’s developmental direction. For example, in both the science and philosophy of cosmology, the so-called ‘fine-tuning principle’ — known, too, as the anthropic principle — argues that with enough different universes, there’s a high probability some worlds will have natural laws and physical constants allowing for the kick-start and evolution of complex intelligent forms of life.

There’s one last consequence of the infinite, uncaused multiverse described here. Which is the absence of intent, and thus absence of intelligent design, when it comes to the physical laws and materialisation of sophisticated, conscious species pondering their home worlds. I propose that the fine-tuning of constants within these worlds does not undo the incidental nature of such reality.

The special appeal of this kind of multiverse is that it alone allows for the entirety of what can exist.

Monday 15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone had ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that aspect of moral worth, withdrawing others’ ability to exercise autonomy and make logical decisions, as we presume they could if in possession of the truth.

 

Kant’s ground-level belief in these regards was that we should value others strictly ‘as ends’, and never see people ‘as merely means to ends’. A maxim that’s valued and commonly espoused in human affairs today, too, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to the inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding the place and time of a meeting to be held by the intended target. Deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Its reasoning, though, is that lying violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable.

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth- and nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, with its emphasis on the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, as long as outcomes on net balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool refusal to forgive lies invites examples that challenge its wisdom. Take the historical case of a Jewish woman concealed from Nazi military occupiers under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than doom her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie elicits a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to gauge correctly, turning out to differ in significant ways from what was anticipated or from the greater good. One may overvalue or undervalue benefits — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say they merely ‘bent the truth’, as if that were more palatable.) But how about from the standpoint of either consequentialism or virtue ethics?

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, of little real value because it is wholly unworkable outside Kant’s absolutist disposition toward the matter; no one could unswervingly meet so rigorous a standard. Indeed, a study by the psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and of tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral. Not allowing for reasonable exceptions may have been obvious folly. Yet that may be cold comfort for some people, as lapses into excessive risk — weaving evermore tangled webs — court danger for unwary souls.


Meantime, those who feel they have been cut more slack than others might be advised to keep Quintilian’s advice close.




* ‘O what a tangled web we weave / When first we practise to deceive’, Sir Walter Scott, from his poem ‘Marmion: A Tale of Flodden Field’.

 

Monday 25 July 2022

‘Philosophical Zombies’: A Thought Experiment

Zombies are essentially machines that appear human.

By Keith Tidman
 

Some philosophers have used the notion of ‘philosophical zombies’ in a bid to make a point about the source and nature of human consciousness. Have they been on the right track?

 

One thought experiment begins by hypothesising the existence of zombies who are indistinguishable in appearance and behaviour from ordinary people. These zombies match our comportment, seeming to think, know, understand, believe, and communicate just as we do. Or, at least, they appear to. You and a zombie could not tell each other apart. 

 

Except, there is one important difference: philosophical zombies lack conscious experience. Which means that if, for example, a zombie were to drop an anvil on its foot, it might give itself away by not reacting at all or, perhaps, by reacting very differently from normal. It would not have the inward, natural, individualised experience of actual pain the way the rest of us would. On the other hand, a smarter kind of zombie might know what humans would do in such situations and pretend to recoil and curse as if in extreme pain.

 

Accordingly, philosophical zombies lead us to what’s called the ‘hard problem of consciousness’: whether each human has individually unique feelings while experiencing things — whereby each person produces his or her own reactions to stimuli, unlike everyone else’s. Such as the taste of a tart orange, the chilliness of snow, the discomfort of grit in the eye, the awe in gazing at ancient relics, the warmth of holding a squirming puppy, and so on.

 

Likewise, they lead us to wonder whether or not there are experiences (reactions, if you will) that humans subjectively feel in authentic ways that are the product of physical processes, such as neuronal and synaptic activity as regions of the brain fire up. Experiences beyond those that zombies only copycat, or are conditioned or programmed to feign, the way automatons might, lacking true self-awareness. If there are, then there remains a commonsense difference between ‘philosophical zombies’ and us.

 

Zombie thought experiments have been used by some to argue against the notion called ‘physicalism’, whereby human consciousness and subjective experience are considered to be based in the material activity of the brain. That is, an understanding of reality, revealed by philosophers of mind and neuroscientists who are jointly peeling back how the brain works as it experiences, imagines, ponders, assesses, and decides.

 

The key objection to such ‘physicalism’ is the contention that mind and body are separable properties, the venerable philosophical theory known as dualism, and that, by extrapolation, the brain is not (cannot be) the source of conscious experience. Instead, some argue that conscious experience — like the pain from the dropped anvil or joy in response to the bright yellow of fields of sunflowers — is separate from brain function, even though the natural sciences strongly suggest that such brain function is the root of everyone’s subjective experience.

 

But does the ‘philosophical zombie’ argument against brain function being the seed of conscious experience hold up?

 

After all, the argument from philosophical zombies, whose clever posing makes us assume there are no differences between them and us, seems problematic. Surely, there is insufficient evidence that the brain does not give rise to consciousness and individual experience. Yet many who argue against a material basis of experience, one residing in brain function, rest their case on the notion that philosophical zombies are at least conceivable.

 

They argue that ‘conceivability’ is enough to make zombies possible. However, such arguments neglect that being conceivable is really just another expression for something ‘being imaginable’. Isn’t that the reason young children look under their beds at night? But, is being imaginable actually enough to conclude something’s real-world existence? How many children actually come face to face with monsters in their closets? There are innumerable other examples, as we’ll get to momentarily, illustrating that all sorts of irrational, unreal things are imaginable  in the same sense that they’re conceivable  yet surely with no sound basis in reality.

 

Proponents of conceivability might be said to stumble into a dilemma: that of logical incoherence. Why so? Because, by the same supposedly logical framework, it is imaginable that garden gnomes come to life at night, or that fire-breathing dragons live on an as-yet-undiscovered island, or that the channels scoured on the surface of Mars are signs of an intelligent alien civilisation!

 

Such extraordinary notions are imaginable, but at the same time implausible, even nonsensical. Imagining something doesn’t make it so. These ‘netherworld notions’ simply don’t hold up. Philosophical zombies arguably fall into this group. 

 

Moreover, zombies wouldn’t (couldn’t) have free will; that is, free will and zombiism conflict with one another. Yes, zombies might fabricate self-awareness and free will convincingly enough to trick a casual, uncritical observer — but this would be a sham, insufficient to satisfy the conditions for true free will.

 

The fact remains that the authentic experience of, for example, peacefully listening to gentle waves splashing ashore could not happen if the complex functionality of the brain did not exist. A blob that only looks like a brain (as in the case of philosophical zombies) would not be the equivalent of a human brain if, critically, those functions were missing.


It’s those brain functions, contrary to theories like dualism that assert the separation of mind from body, that make consciousness and individualised sentience possible. The emergence of mind from brain activity is the likeliest explanation of experienced reality. Contemporary philosophers of mind and neuroscientists would agree on this, even as they continue to work jointly on figuring out the details of how it all happens.


The idea of philosophical zombies existing among us thus collapses. Yet very similar questions of mind, consciousness, sentience, experience, and personhood could easily pop up again. Likely not as recycled philosophical zombies, but as new issues arising over the longer term, as developments in artificial intelligence begin to match and perhaps eventually exceed the vast array of abilities of human intelligence.