
Monday, 1 May 2023

Problems with the Problem of Evil


By Keith Tidman

  

Do we really reside in what the German polymath Gottfried Wilhelm Leibniz referred to as ‘the best of all possible worlds’, picked by God from among an infinite variety of world orders at God’s disposal, based on the greatest number of supposed perfections? (A claim that the French Enlightenment writer Voltaire satirised in his novella Candide.)

 

How do we safely arrive at Leibniz’s sweeping assessment of ‘best’ here, given the world’s harrowing circumstances, from widespread violence to epidemics to famine, of which we’re reminded every day? After all, the Augustinian faith-based explanation for the presence of evil has been punishment for Adam and Eve’s original sin and expulsion from the Garden of Eden. Against this backdrop Leibniz coined the term ‘theodicy’, from the Greek words for ‘god’ and ‘justice’, roughly ‘justifying God’ (Theodicy: Essays on the Goodness of God, the Freedom of Man and the Origin of Evil, 1710).


No, there’s a problem … the ‘problem of evil’. If God is all powerful (omnipotent), all knowing (omniscient), present in all places (omnipresent), all good and loving (omnibenevolent), and all wise, then why is there evil in the very world that God is said to have designed and created? Why would such a God neither avert nor fix the problem, instead giving evil free rein and abiding by noninterventionism? There is not just one form of evil, but at least two: moral evil (volitionally wrongful human conduct) and natural evil (ranging from illnesses and other human suffering, to natural law causing ruinous and lethal calamities).

 

There are competing explanations for evil, of course, like that developed by the second-century Greek bishop Saint Irenaeus, whose rationalisation was that evil presents humanity with incentives and opportunities to learn, develop, and evolve toward ever-greater perfection. The shortcoming of this Irenaean account, however, is that it fails to explain the ubiquity and diversity of natural disasters, like tsunamis, volcanoes, earthquakes, wildfires, hurricanes, and the many other manifestations of natural law taking their toll around the globe.

 

Yet, it has been argued that even harmful natural hazards like avalanches and lightning, not just moral indiscretions, are part of the plan for people’s moral and epistemic growth, spurring virtues like courage, charity, gratitude, patience, and compassion. Both the Augustinian and Irenaean models, it seems, adhere to the anthropic principle that the cosmic constants are finely tuned (balanced on a knife’s edge) to allow human life to exist at this location, at this point in time.

 

Meanwhile, although some people might conceivably respond to natural hazards and pressing moral hardships by honing their moral awareness, as some claim, other people are simply overcome by the hazards’ devastating effects. These outcomes point to another in the battery of explanations for evil, in the reassuring form of a spiritual life after death. Some assert that such rewards may be expected to tower over mundane earthly challenges and suffering, and that the suffering that moral and natural evil evokes conditions people for the enlightenment of an afterlife.

 

Only at that stage, the argument goes, do the worldly reasons for natural hazards and moral torment (purportedly the intentions behind a god’s strategy) become apparent. Meanwhile, others argue that the searing realities of, say, the Holocaust or any other genocidal atrocity, savagery, or warring in this world are not even remotely mitigated, let alone vindicated, by the anticipated jubilation of life after death, no matter the form that the latter might take.

 

Still another contending explanation is that what we label evil in human conduct is not a separate ‘thing’ that happens to be negative, but rather the absence of a particular good, such as the absence of hope, integrity, forbearance, friendship, altruism, prudence, principle, or generosity, among other virtues. In short, evil isn’t the opposite of good, but the absence of good. Not so simple to resolve in this model, however, is the following: would not a god, as original cause, have had to create the conditions for that absence of good to come to be?

 

Others have asserted that God’s design and the presence of evil are in fact compatible, not a contradiction or intrinsic failing, and not preparation either for development in the here and now or for post-death enlightenment. American philosopher Alvin Plantinga has supported this denial of a contradiction between the existence of an all-capable and all-benevolent (almighty) god and the existence of evil:

 

‘There are people who display a sort of creative moral heroism in the face of suffering and adversity — a heroism that inspires others and creates a good situation out of a bad one. In a situation like this the evil, of course, remains evil; but the total state of affairs — someone’s bearing pain magnificently, for example — may be good. If it is, then the good present must outweigh the evil; otherwise, the total situation would not be good’ (God, Freedom, and Evil, 1977).

 

Or then, as British philosopher John Hick imagines, perhaps evil exists only as a corruption of goodness. Here is Hick’s version of the common premises stated and conclusion drawn: ‘If God is omnipotent, God can prevent evil. If God is perfectly good, God must want to prevent all evil. Evil exists. Thus, God is either not omnipotent or not perfectly good, or both’. Many arguments, it appears, cycle back to similarly couched observations of seeming inconsistency.
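
Hick’s premises can be laid out schematically. What follows is a minimal propositional sketch of the argument as quoted above; the letter labels are mine, not Hick’s notation:

```latex
% A schematic of Hick's argument as quoted above (labels are mine, not Hick's):
%   O = God is omnipotent        C = God can prevent evil
%   G = God is perfectly good    W = God wants to prevent all evil
%   E = evil exists
\begin{align*}
&\text{P1: } O \rightarrow C                  && \text{omnipotence brings the ability to prevent evil} \\
&\text{P2: } G \rightarrow W                  && \text{perfect goodness brings the desire to prevent evil} \\
&\text{P3: } (C \land W) \rightarrow \lnot E  && \text{what God can and wants to prevent does not occur} \\
&\text{P4: } E                                && \text{evil exists} \\
&\therefore\; \lnot O \lor \lnot G            && \text{God is not omnipotent, or not perfectly good, or both}
\end{align*}
```

Seen this way, the tacit bridging premise P3 is where responses such as Plantinga’s, above, press back: perhaps a god can have good reason to permit an evil that serves a greater good.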

 

Yet others have taken an opposite view, seeing incompatibilities between a world designed by a god figure and the commonness of evil. Here, the word ‘design’ conveys similarities between the evidence of complex (intelligent) design behind the cosmos’s existence and the complex (intelligent) design behind many things made by humans, from particle accelerators, quantum computers, and space-based telescopes, to cuneiform clay tablets and the carved law code of Hammurabi.


Unknowability matters, however, to this aspect of design and evil. For the presence, even prevalence, of evil does not necessarily contradict the logical or metaphysical possibility of a transcendental being as designer of our world. That said, some people postulate that the very existence of any such overarching designer, as well as its categorically abstract qualities and intentions, is likely to remain incurably unknowable, beyond confirmation or falsification.

 

Although the argument from design has circulated for millennia, it was popularised by the English theologian William Paley early in the nineteenth century. Before him, the Scottish philosopher David Hume shaped his criticism of the design argument by paraphrasing Epicurus: ‘Is God willing to prevent evil, but not able? Then he is impotent. Is he able, but not willing? Then he is malevolent. Is he both able and willing? Whence then is evil? Is he neither able nor willing? Then why call him God?’ (Dialogues Concerning Natural Religion, 1779).

 

Another in the catalogue of explanations of moral evil is itself tied to a provocative claim: that we have free will. That is, we are presented with the possibility, not the inevitability, of moral evil. Left to their own unconstrained devices, people are empowered freely to choose or freely to reject immoral decisions and actions, from among a large constellation of vices like venality, malice, and injustice. As such, free will is essential to human agency and, by extension, to moral evil (for obvious reasons, leaving natural evil out). Plantinga is among those who promote this free-will defence of the existence of moral evil.

 

Leibniz was wrong about ours being ‘the best of all possible worlds’. Better worlds are indeed imaginable, in which evil in its sundry guises plausibly pales in comparison. The gauntlet as to what those better worlds might look like, among myriad possibilities, lies provocatively on the ground, for us to dare to pick up, perhaps. However, reconciling evil with professed theistic attributes like omnipotence, omniscience, and omnibenevolence remains problematic. As Candide asked, ‘If this is the best ... what are the others?’

 

Monday, 28 June 2021

Our Impulse Toward Anthropomorphism

‘Animal Farm’, as imagined in the 1954 film, actually described human politics.

Posted by Keith Tidman

 

The Caterpillar and Alice looked at each other for some time in silence: at last, the Caterpillar took the hookah out of its mouth and addressed her in a languid, sleepy voice.

    ‘Who are YOU?’ said the Caterpillar.

    This was not an encouraging opening for a conversation. Alice replied, rather shyly, ‘I--I hardly know, sir, just at present--at least I know who I WAS when I got up this morning, but I think I must have been changed several times since then.’

    ‘What do you mean by that?’ said the Caterpillar sternly. ‘Explain yourself!’

    ‘I can't explain MYSELF, I’m afraid, sir,’ said Alice, ‘because I’m not myself, you see.’

 

This exchange from Alice’s Adventures in Wonderland, by Lewis Carroll, is just one example of the book’s rich portrayal of nonhumans — like the Caterpillar — all of whom exhibit humanlike properties and behaviours. It is a literary device, and also a form of anthropomorphism: from the Greek anthropos, meaning ‘human’, and morphe, meaning form or shape. Humans have a long history of attributing both physical and mental human qualities to a wide array of things, ranging from animals to inanimate objects and gods. Such anthropomorphism has been common since the earliest mythologies.

 

Anthropomorphism has also been grounded in commonplace usage as metaphor. We ‘see’ agency, intentionality, understanding, thought, and humanlike conduct in all sorts of things: pets, cars, computers, tools, musical instruments, boats, favourite toys, and so forth. These are often items with which we develop a special rapport, and that we soon regard as possessing the deliberateness and quirkiness of human instinct. Items with which we ‘socialise’, such as through affectionate communication; to which we assign names that express their character; that we blame for vexing us if, for example, they don’t work according to expectations; and that, in the case of gadgets, we might view as extensions of our own personhood.

 

Today, we’ve become accustomed to thinking of technology as having humanlike agency and features — and we behave accordingly. Common examples in our device-centric lives include assigning a human name to a car, robot, or ‘digital personal assistant’. Siri pops up here, Alexa there… This penchant has become all the more acute in light of the ‘cleverness’ of computers and artificial intelligence. We react to ‘capriciousness’ and ‘letdowns’: beseeching a car to start in the bitter cold, expressing anger toward a smart phone that fell and shattered, or imploring the electricity to come back on during a storm. 

 

Anthropomorphism has been deployed in art and literature throughout the ages to portray natural objects, such as animals and plants, as speaking, reasoning, feeling beings with human qualities. Even to have conscious minds. One aim is to turn the unfamiliar into the comfortably familiar; another to pique curiosity and achieve dramatic effect; another to build relatability; another to distinguish friend from foe; and yet another simply to explain natural phenomena.


Take George Orwell’s Animal Farm as another example. The 1945 book’s characters, though complexly nuanced, are animals representing people, or perhaps, to be more precise, political and social groups. The cast includes pigs, horses, dogs, a goat, sheep, a raven, and chickens, among others, with human language, emotions, intentions, personalities, and thoughts. The aim is to warn of the consolidation of power, denial of rights, manipulation of language, and exploitation and control of the masses associated with authoritarianism. The characters are empathetic and relatable in both positive and flawed ways. Pigs, so often portrayed negatively, indeed are the bad guys here too: they represent key members of the Soviet Union’s Bolshevik leadership. Napoleon represents Joseph Stalin, Snowball represents Leon Trotsky, and Squealer represents Vyacheslav Molotov. 

Children’s stories, familiar to parents who have read to their young children, abound with simpler examples. Among the many favourites are the fairy tales by the Brothers Grimm, The Adventures of Pinocchio by Carlo Collodi, The Jungle Book by Rudyard Kipling, The Tale of Peter Rabbit by Beatrix Potter, and Winnie-the-Pooh by A.A. Milne. Such stories often have didactic purposes, conveying lessons about life, such as ethical choices, while remaining accessible, interpretable, and affable to young minds. The use of animal characters aids this purpose.

 

More generally, too, the predisposition toward anthropomorphism undergirds some religions. Indeed, anthropomorphic gods appear in assorted artifacts, thousands of years old, unearthed by archaeologists across the globe. This notion of gods possessing human attributes came to full expression among the ancient Greeks.

 

Their pantheon of deities exhibited qualities of both appearance and thought resembling those of everyday people: wrath, jealousy, lust, greed, vengeance, quarrelsomeness, and deception. Or they represented valued attributes like fertility, love, war, wisdom, power, and beauty. These qualities, sometimes admirable and sometimes dreadful, make the gods oddly approachable, even if warily so.

 

As to this, the eighteenth-century philosopher David Hume, in his wide-reaching reproach of religions, struggled to come to grips with the faithful lauding and symbolically putting deities on pedestals, all the while incongruously ascribing flawed human emotions to them.

 

In the fifth century BCE, the philosopher Xenophanes also recoiled from the practice of anthropomorphism, observing, ‘Mortals deem that the gods are begotten as they are [in their own likeness], and have clothes like theirs, and voice and form’. He underscored his point about partiality — the modelling of deities’ features on humans’ own — by observing that ‘Ethiopians say that their gods are snub-nosed and black; Thracians that they are pale and red-haired’. Xenophanes concluded that ‘the greatest God’ resembles people ‘neither in form nor in mind’.

 

That said, this penchant toward seeing a god in humans’ own likeness, moored to familiar humanlike qualities, rather than as an unmanifested, metaphysical abstraction whose reality lies forever and inalterably out of reach (whether by human imagination, definition, or description), has long been favoured by many societies.

 

We see it up close in Genesis, the first book of the Old Testament, where it says: ‘So God created humankind in His image, in the image of God He created them; male and female He created them’, as well as frequently elsewhere in the Bible. Such reductionism to human qualities, while still somehow allowing for God to be transcendent, makes it easier to rationalise and shed light on perplexing, even inexplicable, events in the world and in our lives.

 

In this way, anthropomorphism is a stratagem for navigating life. It reduces reality to accessible metaphors and reduces complexity to safe, easy-to-digest analogues, where intentions and causes become both more vivid and easier to make sense of. Above all, anthropomorphism is often how we arrive at empathy, affiliation, and understanding.

 

Monday, 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle used the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation.

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis, theory, or proposition must be falsifiable, that is, capable of being shown wrong; in other words, testable through evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist Peter Higgs mathematically predicted the existence of a specific subatomic particle, the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (Earning Higgs and the Belgian physicist François Englert the 2013 Nobel prize in physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only about 1 chance in 3.5 million that random fluctuation alone, rather than the then-named Higgs boson, would have produced a signal at least that strong. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty:

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
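
As an aside, the ‘1 chance in 3.5 million’ figure is just the arithmetic of a five-sigma threshold: the one-sided tail probability of a standard normal distribution beyond five standard deviations. A minimal sketch in Python reproduces it (the use of SciPy here is my own convenience, not anything CERN prescribes):

```python
# Reproduce the 'five-sigma' figure cited above: the one-sided tail
# probability of a standard normal distribution beyond 5 standard
# deviations, i.e., the chance of a fluctuation at least this large
# assuming there is NO new particle (Cranmer's 'conditional').
from scipy.stats import norm

p_value = norm.sf(5.0)  # survival function: 1 - CDF(5)
print(f"p-value beyond 5 sigma: {p_value:.3g}")   # ~2.87e-07
print(f"roughly 1 chance in {1 / p_value:,.0f}")  # ~1 in 3,490,000
```

Note where Cranmer’s conditional lives: the probability concerns the data given no new particle, not the particle given the data.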

 

Of course, in everyday life we seldom have to calculate the probability of reality explicitly. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood of what we choose to act on as being right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain elusive. Yet, we’re aware that we know some things with greater certainty than others. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday, 20 July 2020

Miracles: Confirmable, or Chimerical?

Posted by Keith Tidman

Multiplication of the Loaves, by Georges, Mount Athos.
We are often passionately told of claims to experienced miracles, in both the religious and secular worlds. The word ‘miracle’ comes from the Latin mirari, meaning to wonder. But what are these miracles that some people wonder about, and do they happen as told?

Scottish philosopher David Hume, a sceptic on this matter, defined a miracle as ‘a violation of the laws of nature’ — with much else to say on the issue in his An Enquiry Concerning Human Understanding (1748). He proceeded to define the transgression of nature as due to a ‘particular volition of the Deity, or by the interposition of some invisible agent’. Though how much credence might one place in ‘invisible agents’?

Other philosophers, like Denmark’s Søren Kierkegaard in his pseudonymous persona Johannes Climacus, also placed themselves in Hume’s camp on the matter of miracles. Earlier, Dutch philosopher Baruch Spinoza wrote of miracles as events whose source and cause remain unknown to us (Tractatus Theologico-Politicus, 1670). Yet, countless other people around the world, of many religious persuasions, earnestly assert that the entreaty to miracles is one of the cornerstones of their faith. Indeed, some three-fourths of survey respondents indicated they believe in miracles, while nearly half said they have personally experienced or seen a miracle (Princeton Survey Research Associates, 2000; Harris poll, 2013).

One line of reasoning as to whether miracles are credible might start with the definition of miracles as transgressions of natural events that scientists or other specialists cannot convincingly contest. The sufficiency of proof that a miracle really did occur, and was not just imagined or born of a lack of understanding of the laws underlying nature, is a very tall order, as surely it should be.

Purported proof would come from people who affirm they witnessed the event, raising questions about witnesses’ reliability and motives. In this regard, it would be required to eliminate obvious delusions, fraud, optical illusions, distortions, and the like. The testimony of witnesses in such matters is, understandably, often suspect. There are demanding conditions regarding definitions and authentication — such as of ‘natural events’, where scientific hypotheses famously, but for good reason, change to conform to new knowledge acquired through disciplined investigation. These conditions lead many people to dismiss the occurrence of miracles as pragmatically untenable, requiring by extension nothing less than a leap of faith.

But a leap of faith suggests that the alleged miracle happened through the interposition of a supernatural power, like a god or other transcendent, creative force of origin. This notion of an original source gives rise, I argue, to various problematic aspects to weigh.

One might wonder, for example, why a god would have created the cosmos to conform to what by all measures is a finely grained set of natural laws regarding cosmic reality, only later to decide, on rare occasion, to intervene. That is, where a god suspends or alters original laws in order to allow miracles. The assumption being that cosmic laws encompass all physical things, forces, and the interactions among them. So, a god choosing not to let select original laws remain in equilibrium, uninterrupted, seems selective — incongruously so, given theistic presumptions about a transcendent power’s omniscience and omnipotence and omniwisdom.

One wonders, thereby, what’s so peculiarly special about humankind to deserve to receive miracles — symbolic gestures, some say. Additionally, one might reasonably ponder why it was necessary for a god to turn to the device of miracles in order for people to extract signals regarding purported divine intent.

One might also wonder, in this theistic context, whether something was wrong with the suspended law to begin with, to necessitate suspension. That is, perhaps it is reasonable to conclude from miracles-based change that some identified law is not, as might have been supposed, inalterably good in all circumstances, for all eternity. Or, instead, maybe nothing was in fact defective in the original natural law, after all, there having been merely an erroneous read of what was really going on and why. A rationale, thereby, for alleged miracles — and the imagined compelling reasons to interfere in the cosmos — to appear disputable and nebulous.

The presumptive notion of the ‘god of the gaps’ seems tenuously to pertain here, where a god is invoked to fill the gaps in human knowledge — what is not yet known at some point in history — and thus by extension allows miracles to substitute for what reason and confirmable empirical evidence might otherwise and eventually tell us.

As Voltaire further ventured, ‘It is . . . impious to ascribe miracles to God; they would indicate a lack of forethought, or of power, or both’ (Philosophical Dictionary, 1764). Yet, unsurprisingly, contentions like Voltaire’s aren’t definitive as a closing chapter to the accounting. There’s another facet to the discussion that we need to get at — a nonreligious aspect.

In a secular setting, the list of problematic considerations regarding miracles doesn’t grow easier to resolve. The challenges remain knotty. A reasonable assumption, in this irreligious context, is that the cosmos was not created by a god, but rather was self-caused (causa sui). In this model, there were no ‘prior’ events pointing to the cosmos’s lineage. A cosmos that possesses integrally within itself a complete explanation for its existence. Or, a cosmos that has no beginning — a boundless construct having existed infinitely.

One might wonder whether a cosmos’s existence is the default, stemming from the cosmological contention that ‘nothingness’ cannot exist, implying no beginning or end. One might further ponder how such a cosmos — in the absence of a transcendent force powerful enough to tinker with it — might temporarily suspend or alter a natural law in order to accommodate the appearance of a happening identifiable as a miracle. I propose there would be no mechanism to cause such an alteration to the cosmic fabric to happen. On those bases, it may seem there’s no logical reason for (no possibility of) miracles. Indeed, the scientific method does itself call for further examining what may have been considered a natural law whenever there are repeated exceptions or contradictions to it, rather than assuming that a miracle is recurring.

Hume proclaimed that ‘no testimony is sufficient to establish a miracle’; centuries earlier, Augustine of Hippo had articulated a third, and broader, take on the subject. He pointedly asked, ‘Is not the universe itself a miracle?’ (The City of God, 426 AD). Here, one might reasonably interpret ‘a miracle’ as synonymous with a less emotionally charged, temporal superlative like ‘remarkable’. I suspect most of us agree that our vast, roiling cosmos is indeed a marvel, though debatably not one necessitating an originating spiritual framework like Augustine’s.

No matter how supposed miracles are perceived, internalised, and retold, the critical issue of what can or cannot be confirmed dovetails to an assessment of the ‘knowledge’ in hand: what one knows, how one knows it, and with what level of certainty one knows it. So much of reality boils down to probabilities as the measuring stick; the evidence for miracles is no exception. If we’re left with only gossamer-thin substantiation, or no truly credible substantiation, or no realistically potential path to substantiation — which appears the case — claims of miracles may, I offer, be dismissed as improbable or even phantasmal.
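
That probabilistic measuring stick can be made explicit. Hume’s testimony maxim is often given a Bayesian gloss along the following lines; the notation is a modern reconstruction, not anything Hume himself wrote:

```latex
% A Bayesian gloss on Hume's maxim (a modern reconstruction, not Hume's notation).
% M = the miracle occurred; T = testimony that it occurred.
\[
P(M \mid T) \;=\; \frac{P(T \mid M)\,P(M)}
                       {P(T \mid M)\,P(M) \;+\; P(T \mid \lnot M)\,P(\lnot M)}
\]
```

If the prior probability of the miracle, P(M), is vanishingly small, the posterior P(M | T) stays small unless P(T | not-M), the chance of such testimony arising from error, fraud, or illusion alone, is smaller still. Testimony establishes a miracle, on this reading, only if its falsehood would be more improbable than the miracle itself, which is precisely the bar Hume thought no testimony had met.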

Monday, 29 June 2020

The Afterlife: What Do We Imagine?

Posted by Keith Tidman


‘The real question of life after death isn’t whether 
or not it exists, but even if it does, what 
problem this really solves’

— Wittgenstein, Tractatus Logico-Philosophicus, 1921

Our mortality, and how we might transcend it, has been one of humanity’s central preoccupations since prehistory. One much-pondered possibility is that of an afterlife. This would potentially serve a variety of purposes: to buttress fraught quests for life’s meaning and purpose; to dull unpleasant visions of what happens to us physically upon death; to switch out fear of the void of nothingness with hope and expectation; and, to the point here, to claim continuity of existence through a mysterious hereafter thought to defy and supplant corporeal mortality.

And so, the afterlife, in one form or another, has continued to garner considerable support to the present. An Ipsos/Reuters poll in 2011 of the populations of twenty-three countries found that a little over half believe in an afterlife, with a wide range of outcomes correlated with how faith-based or secular a country is considered. The Pew Center’s Religious Landscape Study polling found, in 2014, that almost three-fourths of people seem to believe in heaven and more than half said that they believed in hell. The findings cut across most religions. Separately, research has found that some one-third of atheists and agnostics believe in an afterlife — one imagined to include ‘some sort of conscious existence’, as the survey put it. (This was the Austin Institute for the Study of Family and Culture, 2014.) 

Other research has corroborated these survey results. Researchers based at Britain’s Oxford University in 2011 examined forty related studies conducted over the course of three years by a range of social-science and other specialists (including anthropologists, psychologists, philosophers, and theologians) in twenty countries and different cultures. The studies revealed an instinctive predisposition among people to an afterlife — whether of a soul or a spirit or just an aspect of the mind that continues after bodily death.

My aim here is not to exhaustively review all possible variants of an afterlife subscribed to around the world, like reincarnation — an impracticality for this essay. However, many beliefs in a spiritual afterlife, or continuation of consciousness, point to the concept of dualism, entailing a separation of mind and body. As René Descartes explained back in the 17th century:
‘There is a great difference between the mind and the body, inasmuch as the body is by its very nature always divisible, whereas the mind is clearly indivisible. For when I consider the mind, or myself insofar as I am only a thinking thing, I cannot distinguish any parts within myself. . . . By contrast, there is no corporeal or extended thing that I can think of which in my thought I cannot easily divide into parts. . . . This one argument would be enough to show me that the mind is completely different than the body’ (Sixth Meditation, 1641).
However, in the context of modern research, I believe that one may reasonably ask the following: Are the mind and body really two completely different things? Or are the mind and the body indistinct — the mind reducible to the brain, where the brain and mind are integral, inseparable, and necessitating each other? Mounting evidence points to consciousness and the mind as the product of neurophysiological activity. As to what’s going on when people think and experience, many neuroscientists favour the notion that the mind — consciousness and thought — is entirely reducible to brain activity, a concept sometimes variously referred to as physicalism, materialism, or monism. But the idea is that, in short, for every ‘mind state’ there is a corresponding ‘brain state’, a theory for which evidence is growing apace.

The mind and brain are today often considered, therefore, not separate substances. They are viewed as functionally indistinguishable parts of the whole. Mind-body dualism, consequently, no longer commands broad conviction. Contrary to Cartesian dualism, the brain, from which thought comes, is physically divisible according to hemispheres, regions, and lobes — the brain’s architecture; by extension, the mind is likewise divisible — the mind’s architecture. What happens to the brain physically (from medical or other tangible influences) affects the mind. Consciousness arises from the entirety of the brain. A brain — a consciousness — that remarkably is conscious of itself, demonstrably curious and driven to contemplate its origins, its future, its purpose, and its place in the universe.

The contemporary American neuroscientist, Michael Gazzaniga, has described the dynamics of such consciousness in this manner:
‘It is as if our mind is a bubbling pot of water. . . . The top bubble ultimately bursts into an idea, only to be replaced by more bubbles. The surface is forever energized with activity, endless activity, until the bubbles go to sleep. The arrow of time stitches it all together as each bubble comes up for its moment. Consider that maybe consciousness can be understood only as the brain’s bubbles, each with its own hardware to close the gap, getting its moment’. (The Consciousness Instinct, 2018)
Moreover, an immaterial mind and a material world (such as the brain in the body), as dualism typically frames reality, would be incapable of acting upon each other: what’s been dubbed the ‘interaction problem’. Therefore the physicalist model — strengthened by research in fields like neurophysiology, which continue apace to deepen our understanding — has, arguably, superseded the dualist model.

People’s understanding that, of course, they will die one day has spurred the search for a spiritual continuation of earthbound life. Apprehension motivates. The yearning for purpose motivates. People have thus sought evidence, empirical or faith-based or other, to underprop their hope for otherworldly survival. However, the modern picture of the material, naturalistic basis of the mind may prove an injurious blow to notions of an out-of-body afterlife. After all, if we are our bodies and our bodies are us, death must end hope for survival of the mind. As David Hume graphically described our circumstances in Of the Immortality of the Soul (1755), our ‘common dissolution in death’. That some people nonetheless evoke dualistic spectral spirits — stretching from disembodied consciousness to immortal souls — as a pretext for wishing away the interruption of life doesn’t change the finality of existence.

And so, my conclusion is that perhaps we’d be better served to find ingredients for an ‘afterlife’ in what we leave by way of influences, however ordinary and humble, upon others’ welfare. That is, a legacy recollected by those who live on beyond us, in its ideal a benevolent stamp upon the present and the future. This earthbound, palpable notion of what survives us goes to answer Wittgenstein’s challenge we started with, regarding ‘what problem’ an afterlife ‘solves’, for in this sense it solves the riddle of what, realistically, anyone might hope for.

Monday, 29 October 2018

How Life Has Value, Even Absent Overarching Purpose

Wherein lies value?
Posted by Keith Tidman

Among the most common questions in philosophy is, ‘What is the purpose of life?’ After all, as Plato pithily said, humans are ‘beings in search of meaning’. But what might be the real reason for the question about the purpose of life? I suggest that what fundamentally lurks behind this age-old head-scratcher is an alternative query: might not life still have value, even if there is no sublimely overarching purpose? So, instead, let’s start with ‘purpose’ and only then work our way to ‘value’.

Is an individual's existence best understood scientifically — more particularly, in biological terms? The purpose of biological life, in strictly scientific terms, might be reduced to survival and passing along genes — to propagate, for continuation of the familial line and (largely unconsciously) the species. More broadly, scientists have typically steered clear of deducing ‘higher purpose’ and are more comfortable restricting themselves to explanations of empirically, rationally grounded physical models — however inspiring those peeks into presumed reality may be — that relate to the ‘what’ and ‘how’ of existence. The list is familiar:
  • the heliocentric construct of Copernicus and the mechanistic universes of René Descartes and Isaac Newton
  • the Darwinian theories of evolution and natural selection
  • the laws of thermodynamics and the theory of general relativity of Albert Einstein 
  • the quantum mechanics of Niels Bohr, Max Planck, Werner Heisenberg, and Erwin Schrödinger. 
But grand as these theories are, they still don’t provide us with purpose.

Rather, such theories focus on better understanding the emergence and evolution of the cosmos and humankind, in all their wonder and complexity. The (not uncommonly murky) initial conditions and necessary parameters to make intelligent life possible add a challenge to relying on conclusions from the models. As to this point about believability and deductions drawn, David Hume weighed in during the 18th century, advising,

             ‘A wise man proportions his belief to the evidence’.

Meanwhile, modern physics doesn’t yet rule in or rule out some transcendent, otherworldly dimension of the universe — disproof is always tough, as we know, and thus the problem is perhaps unanswerable — but the physical–spiritual dualism implied by such an ethereal dimension is extraordinarily questionable. Yet one cannot deduce meaning or purpose, exceptional or ordinary, simply from mere wonder and complexity; the latter are not enough. Suggested social science insights — about such things as interactions among people, examining behaviours and means to optimise social constructs — arguably add only a pixel here and a pixel there to the larger picture of life’s quintessential meaning.

Religious belief — from the perspectives of revelation, enlightenment, and doctrine — is an obvious place to turn to next in this discussion. Theists start with a conviction that God exists — and conclude that it was God who therefore planted the human species amidst the rest of His creation of the vast universe. In this way, God grants humankind an exalted overarching purpose. In no-nonsense fashion, the 17th-century Dutch philosopher Baruch Spinoza took the point to another declarative level, writing:
‘Whatever is, is in God, and without God nothing can be, or be conceived’. 
This kind of presumed God-given plan or purpose seems to instill in humankind an inspirational level of exceptionalism. This exceptionalism in turn leads human beings toward such grand purposes as undiminished love toward and worship of God, fruitful procreation, and dominion over the Earth (with all the environmental repercussions of that dominion), among other things. These purposes include an implied contract of adding value to the world, within one’s abilities, as prescribed by religious tenets.

One takeaway may be a comfortable feeling that humankind, and each member of our species, has meaning — and, in a soul-based model, a roadmap for redemption, perhaps to an eternal afterlife. As to that, in the mid-20th century, Jean-Paul Sartre wrote in characteristically unsparing fashion:
‘Life has no meaning the moment you lose the illusion of being eternal’. 
Worldviews constructed around a belief in God thereby attempt to allay the dread of mortality and the terror of dying and of death. Yet, even where God is the prime mover of everything, is it unreasonable to conceive of humankind as perhaps still lacking any lofty purpose, after all? Might, for example, humankind share the universe with other brainy species on our own planet — or even much brainier ones cosmically farther flung?

Because if humankind has no majestically overarching purpose — or, put another way, even if existentially it might not materially matter to the cosmos were the human species to tip into extinction — we can, crucially, still have value. Ultimately value, not exceptionalism or eternity, is what matters. There’s an important difference between ‘purpose’ — an exalted reason that soars orders of magnitude above ordinary explanations of why we’re riding the rollercoaster of creation — and value, which for an individual might require only a benevolent role in continuously improving the lot of humankind, or perhaps of other animals and the ecosphere. It may come through empathically good acts without the expectation of any manner of reward. Socrates hewed close to those principles, succinctly pointing out,

            ‘Not life, but a good life, is to be chiefly valued’.

Value, then, is anchored to our serving as playwrights scribbling, if you will, on pieces of paper how our individual, familial, community, and global destiny unfolds into the future. And what the quality of that future is, writ large. At minimum, we have value based on humanistic grounds: people striving for natural, reciprocal connections, to achieve hope and a range of benefits — the well-being of all — and disposing of conceits to instead embrace our interdependence in order not only to survive but, better, to thrive. This defines the intrinsic nature of ‘value’; and perhaps it is to this that we owe our humanity.


Monday, 11 April 2016

Farmer Hogget, the Limited God


Posted by Eduardo Frajman

One beautiful autumn afternoon not too long ago, my daughters and I were coming home from an errand. They ran ahead of me, headed for our front yard to climb our knobby, twisted tree, or jump headfirst onto a leaf pile, or some other such wholesome activity that would add a tiny brick to the edifice of their innocent, golden childhoods. 

As I reached them I saw my eldest had stopped. She was prodding at something with her foot, nudging it back and forth. Though half-buried, I immediately recognized it for what it was. “What is it?” my freckle-faced cherub asked. I saw her little sister step towards us curiously, an expectant smile on her face. The thing was roundish, about the size of a plum. Two blade-like stalks protruded out of one end. Amid the black dirt, I could make out patches of fur and a rigid, unseeing eye. “It’s a rock,” I said. My daughter shot me an incredulous, accusatory look as she wailed “Then why does it have ears?!”


Monday, 14 March 2016

Eastern and Western Philosophy: Personal Identity

With acknowledgement to the CeramiX Art Collection
Posted by John Hansen
Once, when our world was not so small, major philosophies rarely made contact with one another. Further, being embedded in different languages, different concepts, different cultures, and different religions, on the surface of it they seemed to hold little in common.  
Yet as our world has become smaller, and as scholars have devoted more careful attention to distant ideas, so we have discovered, to our surprise, that our philosophies may be much the same.

A case in point is David Hume, the Scottish philosopher of the 18th century, and Vasubandhu, the Indian philosopher of (about) the 5th – in particular, their views on personal identity.

From one point of view, there were enormous differences between these two men. Hume was an agnostic, and probably an atheist. He was, in the words of Julian Baggini, ‘as godless a man as can be imagined.’ Vasubandhu, on the other hand, was deeply religious. He was a Buddhist monk who spent much of his life writing commentaries on the teachings of the Buddha.

Yet Hume and Vasubandhu came remarkably close, on core philosophical issues. How then did they diverge so completely on matters of religion? What may this tell us about philosophy – above all about metaphysics? But first, let us survey a few examples of the central concepts common to both men, in the area of personal identity.

Vasubandhu believed that the self is a continuum of ‘aggregates’, the physiological elements that constitute the individual person. Similarly, Hume equated the self with a conglomeration of perceptions in a constant state of flux. Both Hume and Vasubandhu therefore believed that, because of the constant transition of our mental states, these states form a continuum that moves in temporal sequence from perception to perception.

Vasubandhu believed that one's memory of an object is aroused when a special function of the mind connects to, and identifies objects from, earlier occurrences. Similarly, Hume believed that whatever the changes a person’s mental state may go through, older perceptions influence newer, and the vehicle for continuity is found in our memory, which acquaints us with a succession of perceptions.

For Vasubandhu, the 'self' which possesses a memory is equivalent to that which generated the memory. He argues that the only constant is that of perceived causal connection. Hume, similarly, argues that our memory helps us discover our personal identity by showing us associations among our different perceptions – and these produce the impression of identity.

Vasubandhu, however, did not distinguish between material objects and our mental sensation of them. Hume, on the other hand, did separate the two. Therefore Vasubandhu presumed the existence of objects outside of our mental state of being – allowing for religious belief. But Hume focused almost entirely on empirical comparisons and observations, believing it to be an abuse of the notion of personal identity that the idea of an unchanging substance should be added to it.

Hume the skeptic, and Vasubandhu the monk. How did they come so close on core philosophical questions, yet on the basis of such vastly different presuppositions? How could they so completely diverge on matters of religion, while in basic concepts they so largely agreed? What was it that – as it were – switched on religious corollaries in Vasubandhu, and switched them off in Hume?

Was Hume right? Was Vasubandhu wrong? Were there cracks in the coherence of their philosophies? Did their very languages shape their conceptual associations? Do religious belief or godlessness serve as mere garnish to real philosophy? The answers could have crucial consequences for philosophy.



By the same author:  The Pleasures of Idle Thought?