
Monday 15 April 2024

Models, Metaphysics and Reality: How Philosophy keeps science on track


By Rob Hamilton

Does God exist? What is consciousness? How can we know what is real?

Questions such as these have always perplexed humanity and despite the great advances made over recent centuries in understanding the behaviour of the world around us, we seem to be no closer to answering these core questions about the nature of existence.

In my new book Anything Goes – A Philosophical Approach to Answering the God Question, I argue that, paradoxically, answers to these questions can be obtained – but only once we recognise that no knowledge of the true structure of reality is possible. What do I mean by this? Essentially, that claims about the structure of reality are models that describe how the world of our experience behaves. It is these models that then become our reality.

In short, all the world is models

The popular notion of how science progresses is that we are steadily, if slowly, getting closer to the truth about the nature of the world around us. Indubitably, as time has gone on, scientific advances have been made and, yes, we have reached the stage where two great theories, Einstein’s General Relativity and the Standard Model of particle physics, provide us with a nearly complete description of the universe. We just need some clever physicists to iron out a few wrinkles like dark matter and dark energy in a Theory of Everything, and then we will have arrived at the Truth of how reality is structured.

The naivety of this belief is striking, a point highlighted by the 20th-century philosopher of science Karl Popper, who pointed out that scientific theories can never be proven to be true. Rather, they are working assumptions about the way the world is, supported by the evidence. Until they aren’t.

Take Newton’s theory of gravity: this was thought to be true until anomalies like the precession of the perihelion of the planet Mercury were discovered. Nowadays, it is Einstein’s theory that provides the correct answer. But this raises a question: if we do manage to come up with a Theory of Everything, who is to say that one day we will not conduct an experiment or make an observation that contradicts that theory too? For this reason, even if physicists were to discover the true structure of reality, they could never know it!
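As an aside, the size of that Mercury anomaly is easy to check against Einstein’s prediction. Here is a minimal sketch in Python of the standard first-order general-relativistic formula for perihelion precession, δφ ≈ 6πGM/(c²a(1−e²)) per orbit; the numerical values are common textbook figures, supplied purely for illustration:

import math

# Standard textbook values (illustrative only)
GM_sun = 1.327e20      # gravitational parameter of the Sun, m^3/s^2
c = 2.998e8            # speed of light, m/s
a = 5.791e10           # semi-major axis of Mercury's orbit, m
e = 0.2056             # eccentricity of Mercury's orbit
period_days = 87.97    # Mercury's orbital period

# Precession per orbit predicted by general relativity, in radians
dphi = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))

orbits_per_century = 36525 / period_days
arcsec_per_century = math.degrees(dphi) * 3600 * orbits_per_century
print(f"About {arcsec_per_century:.0f} arcseconds per century")  # ~43

Those 43 arcseconds per century, tiny as they are, were enough to unseat Newton.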

“Okay”, some might say. “Although we can never know that we have reached the truth, at least we can say that our current theories are ‘more true’ than the previous ones”. This view is known as Convergent Realism and was powerfully critiqued in a 1981 paper by the philosopher Larry Laudan. 

At the everyday level, Einstein’s theory actually provides only very slightly different results to Newton’s, but the way it characterises the universe is completely different. Newton’s theory is set in the common-sense world of three-dimensional space plus a separate conception of time. Einstein’s is based on the notion of curved four-dimensional spacetime. Who can say what the universe will look like according to the next theory? As Schrödinger quipped, quantum mechanics tells us that cats can be alive and dead at the same time and that the building blocks of our universe can be both waves and particles. Weird, yes, but might it be that the true nature of the universe is just as weird and perhaps even beyond our ability to comprehend? 

Ultimately, scientific theories are models of the way the universe works. They allow us to understand the universe in terms of its behaviour, and we can use them to predict how the macroscopic objects of our experience, such as tables, stars and light bulbs behave. They do this by characterising the universe in a certain way that helps us get to grips with it. Because, as humans, we just do not have the tools to find out what the universe is ‘really like’.

The map is the territory

Now comes the plot twist. The surprising but unavoidable consequence of this conceptual speed limit is that the structure or make-up of the reality we are modelling is irrelevant! It is only reality’s behaviour that matters. It is reality’s behaviour that we are modelling, and a good model will predict that behaviour well. But if reality’s structure is unknowable and elusive, then it will forever remain a shadowy, mysterious thing lying behind the veil. It is only the structure and objects of our models that can be known to us. These are the things that we live by and that give our lives meaning. And so these are the only objects that can be considered ‘real’ in any meaningful sense – if the objects of our models are not real, then nothing is real.

And so, what we have here, I would argue, is a case of The Emperor’s New Clothes. Many scientists and physicists are aware that all of our understanding is in terms of our models, but avoid engaging with the implications of this, because it is unnecessary for day-to-day work and raises difficult questions. They cling to the idea that there must be a ‘right answer’ out there, because if there isn’t, then doesn’t everything fall apart? Where are the standards of correctness? What is to stop us from just claiming that whatever we like is true? In my book, I argue that these worries are unfounded. Although the structure of reality is unknowable, the good news is that it does behave in a certain way. And so not all models are created equal.

Anything Goes

I like to call this way of thinking the ‘Anything Goes’ method, because with no knowable reality to assess our models against, the only standard of correctness is whether your model produces sensible results. And there is more to modelling reality than just the laws of physics. Even the idea that there is some kind of external reality that is the source of our experiences is part of the model, giving us an explanation for why our experiences behave the way they do. Ultimately, each of us needs to find a way of making sense of our experiences that works for us. In that sense, Anything Goes.

I think that this way of thinking is revolutionary! Once we recognise that it’s all a matter of perspective – that there are no disembodied facts about the universe in any useful sense – we can make progress in all sorts of areas that have previously proved intractable. Does God exist? It depends on your model. Is Schrödinger’s Cat alive or dead? Well, from whose perspective? Schrödinger’s or the cat’s? How would we tell if an Artificial Intelligence model attained consciousness?

In my book, in order to answer questions like this last one, I ask what it means to say that an entity that exists only as part of your model of reality has a mind of its own. Along the way, I also ask whether solipsism could be true, what it’s like to be a bat, and whether you could be a brain in a vat!



All these questions and more are addressed in Anything Goes – A Philosophical Approach to Answering the God Question, due to be released on Amazon on 3 June 2024.

Visit http://www.anythinggoesmetaphysics.com/  to find out more as well as for details of how to get a free advance copy.

Monday 9 January 2023

The Philosophy of Science


The solar eclipse of May 29, 1919, forced a rethink of fundamental laws of physics

By Keith Tidman


Science aims at uncovering what is true. And it is equipped with all the tools — natural laws, methods, technologies, mathematics — that it needs to succeed. Indeed, in many ways, science works exquisitely. But does science ever actually arrive at reality? Or is science, despite its persuasiveness, paradoxically consigned to forever wending closer to its goal, yet not quite arriving — as theories are either amended to fit new findings, or they have to be replaced outright?

Science relies on observation — especially measurement. Observation confirms and grounds the validity of contending models of reality, empowering critical analysis to probe the details. The role of analysis is to scrutinise a theory’s scaffolding, to better visualise the coherent whole, broadening and deepening what is understood of the natural world. To these ends, science, at its best, has a knack for abiding by the ‘laws of parsimony’ of Occam’s razor — describing complexity as simply as possible, with the fewest suppositions needed to get the job done.

To be clear, other fields attempt this self-scrutiny and rigour, too, in one manner or another, as they fuel humanity’s flame of creative discovery and invention. They include history, languages, aesthetics, rhetoric, ethics, anthropology, law, religion, and of course philosophy, among others. But just as these fields are unique in their mission (oriented in the present) and their vision (oriented in the future), so is science — the latter heralding a physical world thought to be rational.

Accordingly, in science, theories should agree with evidence-informed, objective observations. Results should be replicated every time that tests and observations are run, confirming predictions. This bottom-up process is driven by what is called inductive reasoning: where a general principle — a conclusion, like an explanatory theory — is derived from multiple observations in which a pattern is discerned. An example of inductive reasoning at its best is Newton’s Third Law of Motion, which states that for every action (force) there is an equal and opposite reaction. It is a law that has worked unfailingly in uncountable instances.

But such successes do not eliminate inductive reasoning’s sliver of vulnerability. Karl Popper, the 20th-century Austrian-British philosopher of science, considered all scientific knowledge to be provisional. He illustrated his point with the example of a person who, having seen only white swans, concludes all swans are white. However, the person later discovers a black swan, an event conclusively rebutting the universality of white swans. Of course, abandoning this particular generalisation has little consequence. But what if an exception to Newton’s universal law governing action and reaction were to appear instead?

Perhaps, as Popper suggests, truth, scientific and otherwise, should therefore only ever be parsed as partial or incomplete, where hypotheses offer different truth-values, and our striving for unconditional truth remains a task in the making. This is of particular relevance in complex areas: like the nature of being and existence (ontology); or of universal concepts, transcendental ideas, metaphysics, and the fundamentals of what we think we know and understand (epistemology) — areas that also attempt to reveal the truth of unobserved things.

And so, Popper introduced a new test of truth: ‘falsifiability’. That is, all scientific assertions should be subjected to the test of being proven false — the opposite of seeking confirmation. Einstein, too, was more interested in whether experiments disagreed with his bold conjectures, as such experiments would render his theories invalid — rather than merely provide further evidence for them.
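As a toy illustration of this asymmetry between confirmation and falsification, here is a minimal sketch in Python. The swan ‘observations’ are invented for the example: no number of white swans proves the universal claim, while a single black swan refutes it.

def all_swans_are_white(observed_swans):
    """Test the universal hypothesis against the observations so far."""
    for swan in observed_swans:
        if swan != "white":
            return False   # a single black swan conclusively refutes it
    return True            # consistent so far, but never thereby proven

observations = ["white"] * 10_000                       # ten thousand confirmations...
print(all_swans_are_white(observations))                # True, but only provisionally
print(all_swans_are_white(observations + ["black"]))    # False, conclusively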

Nonetheless, as human nature would have it, Einstein was jubilant when his prediction that massive objects bend light was confirmed by astronomical observations of light passing close to the sun during the total solar eclipse of 1919, the observation thereby requiring revision of Newton’s formulation of the laws of gravity.

Testability is also central to another aspect of epistemology. That is, to draw a line between true science — whose predictions are subject to rigorous falsification and thus potential disproof — and pseudoscience — seen as speculative, untestable predictions relying on uncontested dogma. Pseudoscience balances precariously, depending as it does on adopters’ fickle belief-commitment rather than on rigorous tests and critical analyses.

On the plus side, if theories are not successfully falsified despite earnest efforts to do so, the claims may have a greater chance of turning out true. Well, at least until new information surfaces to force change to a model. Or, until ingenious thought experiments and insights lead to the sweeping replacement of a theory. Or, until investigation explains how to merge models formerly considered defiantly unalike, yet valid in their respective domains. An example of this last point is the case of general relativity and quantum mechanics, which have remained irreconcilable in describing reality (in matters ranging from spacetime to gravity), despite physicists’ attempts.

As to the wholesale switching out of scientific theories, it may appear compelling to make the switch, based on accumulated new findings or the sense that the old theory has major fault lines, suggesting it has run its useful course. The 20th-century American philosopher of science, Thomas Kuhn, was influential in this regard, coining the formative expression ‘paradigm shift’. The shift occurs when a new scientific theory replaces its problem-ridden predecessor, based on a consensus among scientists that the new theory (paradigm) better describes the world, offering a ‘revolutionarily’ different understanding that requires a shift in fundamental concepts.


Among the great paradigm shifts of history is Copernicus’s sun-centered (heliocentric) model of planetary motion, replacing Ptolemy’s Earth-centered model. Another was Charles Darwin’s theory of natural selection as key to the biological sciences, informing the origins and evolution of species. Additionally, Einstein’s theories of relativity ushered in major changes to Newton’s understanding of the physical universe. Also significant was the recognition that plate tectonics explains large-scale geologic change. Significant, too, was the development by Niels Bohr and others of quantum mechanics, replacing classical mechanics at microscopic scales. The story of paradigm shifts is long and continues.


Science’s progress in unveiling the universe’s mysteries entails dynamic processes. One is the enduring sustainability of theories, seemingly etched in stone, that hold up under unsparing tests of verification and falsification. Another is the implementation of amendments as contrary findings chip away at the efficacy of models. And another still is the revolutionary replacement of scientific models, as legacy theories become frail and fail. All are reasons for belief in the methods of positivism.


In 1960, the physicist Eugene Wigner wrote what became a famous paper in philosophy and other circles, coining the evocative expression ‘unreasonable effectiveness’. This was in reference to the role of mathematics in the natural sciences, but he could well have been speaking of the role of science itself in acquiring understanding of the world.


Monday 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalizingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle uses the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation. 

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by the philosopher of science Karl Popper. The heart of this principle is that, in science, a hypothesis or theory or proposition must be falsifiable, that is, capable of being shown wrong. Or, in other words, testable through evidence. For Popper, a claim that is untestable is simply not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist, Peter Higgs, mathematically predicted the existence of a specific subatomic particle — the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (Earning Higgs, and François Englert, the Nobel prize in physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only 1 chance in 3.5 million that what was sighted was a fluke, or something other than the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
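For the curious, the five-sigma figure converts into the odds quoted above roughly as follows, reading it in the conventional one-tailed way: the probability, under the background-only model, of a fluctuation at least five standard deviations above the mean. A minimal sketch in Python:

from scipy.stats import norm

p_fluke = norm.sf(5.0)                              # tail probability beyond five sigma
print(f"p is about {p_fluke:.2e}")                  # ~2.87e-07
print(f"roughly 1 chance in {1 / p_fluke:,.0f}")    # ~1 in 3.5 million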

 

Of course, in few instances in everyday life do we have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood that what we choose to act on is right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain evasively slippery. Yet, we’re aware we know some things with greater certainty than other things. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday 19 October 2020

Is Technology ‘What Makes us Human’?


Posted by Keith Tidman

Technology and human behaviour have always been intertwined, defining us as the species we are. Today, technology’s ubiquity means that our lives’ ever-faster turn toward it, and its multiplicity of forms, have given it stealth-like properties. Increasingly, for many people, technology seems just to happen, and the human agency behind it appears veiled. Yet at the same time, perhaps counterintuitively, what appears to us to happen ‘behind the curtain’ hints that technology is fundamentally rooted in human nature.


Certainly, there is a delicate affinity between science and technology: the former uncovers how the world happens to be, while the latter helps science to convert those realities into artefacts. As science changes, technologists see opportunities: through invention, design, engineering, and application. This restlessly visionary process is not just incidental, I suggest, but rather is intrinsic to us.

 

Our species comprises enthusiastic toolmakers. The coupling of science and technology has led to humanity’s rich array of transformative products, from particle accelerators to world-spanning aircraft, to magnetic-resonance imaging devices, to the space-station laboratory and universe-imaging space telescopes. The alliance has brought us gene-editing technologies and bioengineering, robotics driven by artificial intelligence, energy-generating solar panels, and multifunctional ‘smart phones’.

 

There’s an ‘everywhereness’ of many such devices in the world, reaching into our lives, increasingly creating a one-world community linked by mutual interdependence on many fronts. The role of toolmaker-cum-technologist has become integrated, metaphorically speaking, into our species’ biological motherboard. In this way, technology has become the tipping point of globalisation’s irrepressibility.

 

René Descartes went so far as to profess that science would enable humankind to ‘become the masters and possessors of nature’. An overreach, perhaps — the despoiling of aspects of nature, such as the air, land, and ecosystems, at our over-eager hands convinces us of that — but the trend line today points in the direction Descartes declared, just as electric light frees swaths of the world’s population from dependence on daylight.

 

Technology was supercharged by the science of the Newtonian world, which saw the universe as a machine, and its subsequent vaulting to the world of digits has had obvious magnifying effects. These will next become amplified as the world of machine learning takes center stage. Yet human imagination and creativity have had a powerfully galvanizing influence over the transformation. 

 

Technology itself is morally impartial, and as such neither blameworthy nor praiseworthy. Despite how ‘clever’ it becomes, for the foreseeable future technology does not have agency — or preference of any kind. However, on the horizon, much cleverer, even self-optimising technology might start to exhibit moral partiality. But as to responsibility and accountability, it is how technology is employed by its users that gives rise to considerations of morality.

 

A car, for example, is a morally impartial technology. No nefarious intent can be fairly ascribed to either inventor or owner. However, as soon as someone chooses to exercise his agency and drive the car into a crowd with the intent to hurt, he turns the vehicle from its original purpose as an empowering tool for transportation into an empowering weapon of sorts. But no one wags their finger remonstratively at the car.

 

Technology influences our values and norms, prompting culture to morph — sometimes gradually, other times hurriedly. It’s what defines us, at least in large part, as human beings. At the same time, the incorporation and acceptance of technology is decidedly seductive. Witness the new Digital Revolution. Technology’s sway is hard to discount, and even harder to rebuff, especially once it has established roots deep into culture’s rich subsurface soil. But this sway can also be overstated.

 

To that last point, despite technology’s ubiquity, it has not entirely pulled the rug from under other values, like those around community, spirituality, integrity, loyalty, respect, leadership, generosity, and accountability, among others. Indeed, technology might be construed as serving as a multiplier of opportunities for development and improvement, empowering individuals, communities, and institutions alike. How the fifteenth-century printing press democratised access to knowledge, became a tool that spurred revolutions, and helped spark the Enlightenment was one instance of this influential effect.


Today, rockets satisfy our impulse to explore space; the anticipated advent of quantum computers promises dramatic advances in machine learning as well as the modeling of natural events and behaviours, unbreakable encryption, and the development of drugs; nanotechnology leads to the creation of revolutionary materials — and all the time the Internet increasingly connects the world in ways once beyond the imagination.

 

In this manner, there are cascading events that work both ways: human needs and wants drive technology; and technology drives human needs and wants. Technological change thus is a Janus figure with two faces: one looking toward the past, as we figure out what is important and which lessons to apply; and the other looking toward the future, as we innovate. Accordingly, both traditional and new values become expressed, more than just obliquely, by the technology we invent, in a cycle of generation and regeneration.

 

Despite technology’s occasional fails, few people are really prepared to live unconditionally with nature, strictly on nature’s terms. To do so remains a romanticised vision, worthy of the likes of the American idealist Henry David Thoreau. Rather, whether rightly or wrongly, we have more often seen it as in our higher interests to make life yet a bit easier, a bit more palatable.

 

The philosopher Martin Heidegger declared, rather dismally, that we are relegated to ‘remain unfree and chained to technology’. But his is an unappreciative, undeservedly dismissive view of technology’s advantages across domains: agriculture, education, industry, medicine, business, sanitation, transportation, building, entertainment, materials, information, and communication, among others. Domains where considerations like resource sustainability, ethics, and social justice have been key.

 

For me, in its reach, technology’s pulse has a sociocultural aspect, both shaping and drawing upon social, political, and cultural values. And to get the right balance among those values is a moral, not just a pragmatic, responsibility — one that requires being vigilant in making choices from among alternative priorities and goals. 

 

In innumerable ways, it is through technology, incubated in science, that civilisation has pushed back against the Hobbesian ‘nastiness and brutishness’ of human existence. That’s the record of history. In the meantime, we concede the paradox of complex technology championing a simplified, pleasanter life. And as such, our tool-making impulse toward technological solutions, despite occasional fails, will continue to animate what makes us deeply human.

 

Monday 21 September 2020

‘What Are We?’ Self-reflective Consciousness, Cooperation, and the Agents of Our Future Evolution

Cueva de las Manos, Río Pinturas

Posted by John Hands 

‘What are we?’ This is arguably the fundamental philosophical question. Indeed, ‘What are we?’ along with ‘Where do we come from?’ and ‘Why do we exist?’ are questions that humans have been asking for at least 25,000 years. During all of this time we have sought answers from the supernatural. About 3,000 years ago, however, we began to seek answers through philosophical reasoning and insight. Then, around 150 years ago, we began to seek answers through science: through systematic, preferably measurable, observation or experiment. 

As a science graduate and former tutor in physics for Britain's ‘Open University’*, I wanted to find out what answers science currently gives. But I couldn’t find any book that did so. There are two reasons for this.

  • First, the exponential increase in empirical data generated by rapid developments in technology had resulted in the branching of science into increasingly narrow, specialized fields. I wanted to step back from the focus of one leaf on one branch and see what the whole evolutionary tree shows us. 
  • Second, most science books advocate a particular theory, and often present it as fact. But scientific explanations change as new data is obtained and new thinking develops. 

And so I decided to write ‘the book that hadn’t been written’: an impartial evaluation of the current theories that explain how we evolved, not just from the first life on Earth, but where that came from, right back to the primordial matter and energy at the beginning of the universe of which we ultimately consist. I called it COSMOSAPIENS Human Evolution from the Origin of the Universe* and in the event it took more than 10 years to research and write. What’s more, the conclusions I reached surprised me. I had assumed that the Big Bang was well-established science. But the more I investigated the more I discovered that the Big Bang Theory had been contradicted by observational evidence stretching back 60 years. Cosmologists had continually changed this theory as more sophisticated observations and experiments produced ever more contradictions with the theory.

The latest theory is called the Concordance Model. It might more accurately be described as ‘The Inflationary-before-or-after-the-Hot Big Bang-unknown-27% Dark Matter-unknown-68% Dark Energy model’. Its central axiom, that the universe inflated at a trillion trillion trillion times the speed of light in a trillion trillion trillionth of a second, is untestable. Hence it is not scientific.

The problem arises because these cosmological theories are mathematical models. They are simplified solutions of Einstein’s field equations of general relativity applied to the universe. They are based on assumptions that the latest observations show to be invalid. That’s one surprising conclusion I found. 

Another surprise came when I examined the theory of how and why life on Earth evolved into so many different species that has been orthodox in the UK and the USA for the last 65 years. It is known as NeoDarwinism, and was popularised by Richard Dawkins in his bestselling book, The Selfish Gene, which says that biological evolution is caused by genes selfishly competing with each other to survive and replicate.

NeoDarwinism is based on the fallacy of ascribing intention to an acid, deoxyribonucleic acid, of which genes are composed. Dawkins admits that this language is sloppy and says he could express it in scientific terms. But I’ve read the book twice and he never does manage to do this. Moreover, the theory is contradicted by substantial behavioural, genetic, and genomic evidence. When confronted with such evidence, instead of modifying the theory to take account of it, as a scientist should do, Dawkins lamely says “genes must have misfired”.

The fact is, he couldn’t modify the theory because the evidence shows that Darwinian competition causes not the evolution of species but the destruction of species. It is cooperation, not competition, that has caused the evolution of successively more complex species.

Today, most biologists assert that we differ only in degree from other animals. I think that this too is wrong. What marked our emergence as a distinct species some 25,000 years ago wasn’t the size or shape of our skulls, or that we walked upright, or that we lacked bodily hair, or the genes we possess. These are differences in degree from other animals. What made us unique was reflective consciousness.

Consciousness is a characteristic of a living thing as distinct from an inanimate thing like a rock. It is possessed in rudimentary form by the simplest species like bacteria. In the evolutionary lineage leading to humans, consciousness increased with increasing neural complexity and centration in the brain until, with humans, it became conscious of itself. We are the only species that not only knows but also knows that it knows. We reflect on ourselves and our place in the cosmos. We ask questions like: What are we? Where did we come from? Why do we exist? 

This self-reflective consciousness has transformed existing abilities and generated new ones. It has transformed comprehension, learning, invention, and communication, which all other animals have in varying degrees. It has generated new abilities, like imagination, insight, abstraction, written language, belief, and morality that no other animal has. Its possession marks a difference in kind, not merely degree, from other animals, just as there is a difference in kind between inanimate matter, like a rock, and living things, like bacteria and animals. 

Moreover, Homo sapiens is the only known species that is still evolving. Our evolution is not morphological—physical characteristics—or genetic, but noetic, meaning ‘relating to mental activity’. It is an evolution of the mind, and has been occurring in three overlapping phases: primeval, philosophical, and scientific. 

Primeval thinking was dominated by the foreknowledge of death and the need to survive. Accordingly, imagination gave rise to superstition, which is a belief that usually arises from a lack of understanding of natural phenomena or fear of the unknown. 

It is evidenced by legends and myths, ranging from the beliefs in animism, totemism, and ancestor worship of hunter-gatherers, to polytheism in city-states in which the pantheon of gods reflected the social hierarchy of their societies, and finally to a monotheism in which other gods were demoted to angels or subsumed into one God, reflecting the absolute power of king or emperor.

The instinct for competition and aggression, which had been ingrained over millions of years of prehuman ancestry, remained a powerful characteristic of humans, interacting with, and dominating, reflective consciousness. 

The second phase of reflective consciousness, philosophical thinking, emerged roughly 1500 to 500 BCE. It was characterised by humans going beyond superstition to use reasoning and insight, often after disciplined meditation, to answer questions. In all cultures it produced the ethical view that we should treat all others, including our enemies, as ourselves. This ran counter to the predominant instinct of aggression and competition. 

The third phase, scientific thinking, gradually emerged from natural philosophy around 1600 CE. It branched into the physical sciences, the life sciences, and medical sciences. 

Physics, the fundamental science, then started to converge, rapidly so over the last 65 years, towards a single theory that describes all the interactions between all forms of matter. According to this view, all physical phenomena are lower energy manifestations of a single energy at the beginning of the universe. This is similar in very many respects to the insight of philosophers of all cultures that there is an underlying energy in the cosmos that gives rise to all matter and energy. 

During this period, reflective consciousness has produced an increasing convergence of humankind. The development of technology has led to globalisation, both physically and electronically, in trade, science, education, politics (United Nations), and altruistic activities such as UNICEF and Médecins Sans Frontières. It has also produced a ‘complexification’ of human societies, a reduction in aggression, an increase in cooperation, and the ability to determine humankind’s future. 

This whole process of human evolution has been accelerating. Primeval thinking emerges roughly 25,000 years ago, philosophical thinking emerges about 3,000 years ago, scientific thinking emerges some 400 years ago, while convergent thinking begins barely 65 years ago. 

I think that when we examine the evidence of our evolution from primordial matter and energy at the beginning of the universe, we see a consistent pattern. This shows that we humans are the unfinished product of an accelerating cosmic evolutionary process characterised by cooperation, increasing complexity and convergence, and that – uniquely as far as we know – we are the self-reflective agents of our future evolution.


 

*For further details and reviews of John’s new book, see https://johnhands.com 

Editor's note. The UK’s ‘Open University’ differs from other universities through its policy of open admissions and its emphasis on distance and online learning programs.

Monday 3 February 2020

Picture Post #53 Buckled Rails


'Because things don’t appear to be the known thing; they aren’t what they seemed to be neither will they become what they might appear to become.'

Posted by Thomas Scarborough

Buckled railway line near Glasgow, 25 June 2018.

The thermal expansion of railway lines is governed most simply by the formula

ΔL ≈ αLΔT

where ΔL is the change in length, α is the coefficient of linear expansion, L is the original length, and ΔT is the change in temperature.

This formula failed near Glasgow on 25 June 2018, when railway lines buckled in the heat.  In fact they buckled in heatwaves all across Europe in the 2010s.  Why?  The answer is simple.  This formula, and versions of it, failed to include the environmental factors which mattered.
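To see the scale of the effect the formula does capture, here is a minimal worked example in Python, with assumed but typical values: the expansion coefficient of steel, a kilometre of rail, a 25-degree rise.

# A worked example of the expansion formula above.
# All values are assumed but typical.
alpha = 1.2e-5   # coefficient of linear expansion for steel, per degree Celsius
L = 1000.0       # unconstrained rail length, in metres
dT = 25.0        # temperature rise, in degrees Celsius

dL = alpha * L * dT
print(f"Free expansion: {dL:.2f} m over 1 km")   # 0.30 m

Welded track cannot expand freely, so the mismatch builds up instead as compressive stress; and a rail in direct sunlight can run far hotter than the surrounding air.  These are just the kinds of environmental factors the simple formula leaves out.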

It is not only railway lines which buckle.  Oceans are polluted, glaciers retreat, bees are poisoned, toads go blind, groundwater is poisoned, people suffocate—in fact, thousands if not millions of things go wrong besides—all without their being included in the formulae.

Here is the problem.  We take it at face value that physical laws are true of this world.  It is the heresy of Plato.  Ordinary things, held Plato, imitate forms.  We hold up forms, which is to say formulae, to reality: 'This is how it is!'  It is not.  And so the world is continually bedevilled by negative consequences.