
Monday, 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalisingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. Such trust in certainty stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect prevents us from spotting the fallacies in what we think we know, and from discerning problems with the conclusions, decisions, predictions, and policies that grow out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask.

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning is someone concluding that, because they have only ever observed white swans, all swans must be white — shifting from the specific to the general. Indeed, Aristotle used the white swan as an example of a logically necessary relationship. Yet someone spotting just one black swan disproves the generalisation.
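Put in standard first-order notation (a rendering added here for clarity, not the author’s own formalism), the inductive leap and its undoing look like this:

    Observed:  White(swan_1), White(swan_2), ..., White(swan_n)
    Inferred:  ∀x (Swan(x) → White(x))     ('all swans are white')
    Falsifier: ∃x (Swan(x) ∧ ¬White(x))    (a single black swan)

However many confirming instances accumulate, one counterexample suffices to overturn the universal claim: confirmation and refutation are logically asymmetric.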

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis or theory or proposition must be falsifiable, that is, capable of being shown wrong. In other words, it must be testable against evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist Peter Higgs mathematically predicted the existence of a specific subatomic particle: the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose associated field is claimed to give all other particles their mass. (The discovery earned Higgs, along with the Belgian physicist François Englert, the Nobel prize in physics.)

 

The CERN scientists’ announcement said that their result met the ‘five-sigma’ standard of certainty. That is, there was only about 1 chance in 3.5 million that a statistical fluctuation alone would mimic so strong a signal, rather than the then-named Higgs boson being real. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though, as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, an element of uncertainty remains:

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
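As a rough check of that arithmetic (a minimal sketch in Python, added here for illustration; it is not part of CERN’s analysis), the ‘five-sigma’ figure is the one-tailed tail probability of a standard normal distribution at five standard deviations:

    import math

    # P(Z > 5) for Z ~ N(0, 1): the chance that background fluctuation
    # alone produces a signal at least five standard deviations out.
    z = 5.0
    p = 0.5 * math.erfc(z / math.sqrt(2))

    print(f"p-value at five sigma: {p:.3e}")   # about 2.87e-07
    print(f"about 1 chance in {1 / p:,.0f}")   # about 1 in 3.5 million

Note that this is the probability of so extreme a signal if there were no new particle, not the probability that the particle exists: precisely the conditional Cranmer says cannot be removed.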

 

Of course, in everyday life we seldom have to calculate the probability that our picture of reality is right. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood that what we choose to act on is right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain elusive. Yet we’re aware that we know some things with greater certainty than others. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday, 24 September 2018

Why Is There Something Rather Than Nothing?

For scientists, space is not empty but full of quantum energy
Posted by Keith Tidman

Gottfried Wilhelm Leibniz introduced this inquiry more than three hundred years ago, saying, ‘The first question that should rightly be asked is, “Why is there something rather than nothing?”’ Since then, many philosophers and scientists have likewise pondered this question. Perhaps the most famous restatement of it came in 1929 when the German philosopher, Martin Heidegger, placed it at the heart of his book What Is Metaphysics?: ‘Why are there beings at all, and why not rather nothing?’

Of course, many people around the world turn to a god as a sufficient reason (explanation) for the universe’s existence. Aristotle believed, as did his forerunner Heraclitus, that the world was mutable — everything undergoing perpetual change — which he characterised as movement. He argued that there was a sequence of predecessor causes that led back deep into the past, until reaching an unmoved mover, or Prime Mover (God). An eternal, immaterial, unchanging god exists necessarily, Aristotle believed, itself independent of cause and change.

In the 13th century Saint Thomas Aquinas, a Christian friar, advanced this so-called cosmological view of universal beginnings, likewise perceiving God as the First Cause. Leibniz, in fact, was only proposing something similar, with his Contingency Argument, in the 17th century:

‘The sufficient reason [for the existence of the universe] which needs not further reason must be outside of this series of contingent things and is found in a substance which . . . is a necessary being bearing the reason for its existence within itself. . . .  This final reason for things is called God’ — Leibniz, The Principles of Nature and Grace

However, invoking God as the prime mover or first cause or noncontingent being — arbitrarily, on a priori rather than empirical grounds — does not inescapably make it so. Far from it. The common counterargument maintains that positing a god raises a corresponding question: if a god exists — has a presence — what was its cause? Assuming, that is, that any thing — ‘nothing’ being the sole exception — must have a cause. So we are still left with the question, famously posed by the theoretical physicist Stephen Hawking: ‘What is it that breathes fire into the equations and makes a universe for them to describe?’ To posit the existence of a god does not, as such, get around the ‘hard problem’: why there is a universe at all, not just why our universe is the way it is.





 
Science has not fared much better in this challenge. The British mathematician and philosopher Bertrand Russell ended up merely declaring in 1948, ‘I should say that the universe is just there, and that’s all’. A ‘brute fact’, as some have called it. Many scientists have embraced similar sentiments, concluding that ‘something’ was inevitable and that ‘nothingness’ would be impossible. Some go so far as to say that nothingness is unstable, hence again impossible. But these are difficult positions to support unquestioningly, given that, like many of their scientific and philosophical predecessors and contemporaries, they do not adequately explain why and how. This was, for example, the outlook of Baruch Spinoza, the 17th-century Dutch philosopher who maintained that the universe (with its innumerable initial conditions and subsequent properties) had to exist. Leaping forward to the 20th century, Albert Einstein, himself an admirer of Spinoza’s philosophy, seemed to concur.

Quantum mechanics offers an interesting illustration of the science debate, informing us that empty space is not really empty — not in any absolute sense, anyway. Even what we might consider the most perfect vacuum is actually filled by churning virtual particles — quantum fluctuations — that almost instantaneously flit in and out of existence. Some theoretical physicists have suggested that this so-called ‘quantum vacuum’ is as close to nothingness as we might get. But quantum fluctuations do not equate to nothingness; they are not some modern-day scientific equivalent of the non-contingent Prime Mover discussed above. Rather, however fleeting and insubstantial, virtual quantum particles are still something.

It is therefore reasonable to inquire into the necessary origins of these quantum fluctuations — an inquiry that requires us to return to an Aristotelian-like chain of causes upon causes, traceable back in time. The notion of a supposed quantum vacuum still doesn’t get us to what might have garnered something from nothing. Hence, the hypothesis that there has always been something — that the quantum vacuum was the universe’s nursery — peels away as an unsupportable claim. Meanwhile, other scientific hypotheses, such as string theory, bid to take the place of the Prime Mover. At the heart of that theory is the hypothesis that the fundamental particles of physics are not really ‘points’ as such, but rather differently vibrating energy ‘strings’ existing in many more than the familiar dimensions of space-time. Yet these strings, too, do not get us over the hump of something in place of nothing; strings are still ‘something’, whose origins (causes) would beg to be explained.

In addressing these questions, we are not talking about something emerging from nothing, as nothingness by definition would preclude the initial conditions required for the emergence of a universe. Also, ‘nothingness’ is not the mere absence (or opposite) of something; rather, it is possible to regard ‘nothingness’ as theoretically having been just as possible as ‘something’. In light of such modern-day challenges in both science and philosophy, Ludwig Wittgenstein was at least partially right in saying, early in the 20th century (Tractatus Logico-Philosophicus, section 6.4, on what he calls ‘the mystical’), that the real mystery was, ‘Not how the world is . . . but that it is’.