
Monday, 9 January 2023

The Philosophy of Science


The solar eclipse of May 29, 1919, forced a rethink of fundamental laws of physics

By Keith Tidman


Science aims at uncovering what is true. And it is equipped with all the tools — natural laws, methods, technologies, mathematics — that it needs to succeed. Indeed, in many ways, science works exquisitely. But does science ever actually arrive at reality? Or is science, despite its persuasiveness, paradoxically consigned to forever wending closer to its goal, yet not quite arriving — as theories are either amended to fit new findings, or they have to be replaced outright?

Science relies on observation — especially measurement. Observation confirms and grounds the validity of contending models of reality, empowering critical analysis to probe the details. The role of analysis is to scrutinise a theory’s scaffolding, to better visualise the coherent whole, broadening and deepening what is understood of the natural world. To these ends, science, at its best, abides by the ‘law of parsimony’, Occam’s razor — describing complexity as simply as possible, with the fewest suppositions needed to get the job done.

To be clear, other fields attempt this self-scrutiny and rigour, too, in one manner or another, as they fuel humanity’s flame of creative discovery and invention. They include history, languages, aesthetics, rhetoric, ethics, anthropology, law, religion, and of course philosophy, among others. But just as these fields are unique in their mission (oriented in the present) and their vision (oriented in the future), so is science — the latter heralding a physical world thought to be rational.

Accordingly, in science, theories should agree with evidence-informed, objective observations. Results should be replicated every time that tests and observations are run, confirming predictions. This bottom-up process is driven by what is called inductive reasoning: where a general principle — a conclusion, like an explanatory theory — is derived from multiple observations in which a pattern is discerned. An example of inductive reasoning at its best is Newton’s Third Law of Motion, which states that for every action (force) there is an equal and opposite reaction. It is a law that has worked unfailingly in uncountable instances.
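Stated formally (the standard textbook form, offered here only as an illustrative sketch), the law says that if body A exerts a force on body B, then B exerts an equal and opposite force on A:

$$\vec{F}_{A \to B} = -\,\vec{F}_{B \to A}$$

Every observed push, collision, and recoil to date has fit this pattern, which is precisely what gives the inductive generalisation its force.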

But such successes do not eliminate inductive reasoning’s sliver of vulnerability. Karl Popper, the 20th-century Austrian-British philosopher of science, considered all scientific knowledge to be provisional. He illustrated his point with the example of a person who, having seen only white swans, concludes all swans are white. However, the person later discovers a black swan, an event conclusively rebutting the universality of white swans. Of course, abandoning that generalisation has little consequence. But what if an exception to Newton’s universal law governing action and reaction were to appear, instead?
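Put in logical form (a minimal sketch of the asymmetry Popper emphasised, not anything drawn from the essay itself), the generalisation

$$\forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big)$$

can never be verified by any finite number of white-swan sightings, yet a single observation satisfying $\mathrm{Swan}(a) \wedge \neg\mathrm{White}(a)$ refutes it outright.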

Perhaps, as Popper suggests, truth, scientific and otherwise, should therefore only ever be parsed as partial or incomplete, with hypotheses carrying different truth-values; our striving for unconditional truth remains a task in the making. This is of particular relevance in complex areas: like the nature of being and existence (ontology); or of universal concepts, transcendental ideas, metaphysics, and the fundamentals of what we think we know and understand (epistemology) — areas that also attempt to reveal the truth of unobserved things.

And so, Popper introduced a new test of truth: ‘falsifiability’. That is, all scientific assertions should be subjected to the test of being proven false — the opposite of seeking confirmation. Einstein, too, was more interested in whether experiments disagreed with his bold conjectures, as such experiments would render his theories invalid — rather than merely provide further evidence for them.

Nonetheless, as human nature would have it, Einstein was jubilant when his prediction that massive objects bend light was confirmed by astronomical observations of light passing close to the Sun during the total solar eclipse of 1919 — an observation that required revising Newton’s formulation of the laws of gravity.
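The quantitative stakes of that 1919 test can be sketched with the standard figures (not given in the essay itself): for starlight grazing the Sun, general relativity predicts a deflection of roughly twice the Newtonian estimate,

$$\alpha_{\mathrm{GR}} = \frac{4GM_\odot}{c^2 R_\odot} \approx 1.75'' \qquad \text{versus} \qquad \alpha_{\mathrm{Newton}} = \frac{2GM_\odot}{c^2 R_\odot} \approx 0.87'',$$

and Eddington’s eclipse measurements favoured the larger, relativistic value.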

Testability is also central to another aspect of epistemology: drawing a line between true science — whose predictions are subject to rigorous falsification and thus potential disproof — and pseudoscience, which offers speculative, untestable claims resting on uncontested dogma. Pseudoscience balances precariously, depending as it does on adopters’ fickle belief-commitment rather than on rigorous tests and critical analyses.

On the plus side, if theories are not successfully falsified despite earnest efforts to do so, the claims may have a greater chance of turning out true. Well, at least until new information surfaces to force change to a model. Or until ingenious thought experiments and insights lead to the sweeping replacement of a theory. Or until investigation explains how to reconcile models formerly considered incompatible, yet valid in their respective domains. An example of this last point is the case of general relativity and quantum mechanics, which have remained irreconcilable in describing reality (in matters ranging from spacetime to gravity), despite physicists’ attempts.

As to the wholesale switching out of scientific theories, it may appear compelling to make the switch, based on accumulated new findings or the sense that the old theory has major fault lines, suggesting it has run its useful course. The 20th-century American philosopher of science, Thomas Kuhn, was influential in this regard, coining the formative expression ‘paradigm shift’. The shift occurs when a new scientific theory replaces its problem-ridden predecessor, based on a consensus among scientists that the new theory (paradigm) better describes the world, offering a ‘revolutionarily’ different understanding that requires a shift in fundamental concepts.


Among the great paradigm shifts of history is Copernicus’s sun-centered (heliocentric) model of planetary motion, replacing Ptolemy’s Earth-centered model. Another was Charles Darwin’s theory of natural selection as key to the biological sciences, informing the origins and evolution of species. Additionally, Einstein’s theories of relativity ushered in major changes to Newton’s understanding of the physical universe. Also significant was the recognition that plate tectonics explains large-scale geologic change. Significant, too, was the development by Niels Bohr and others of quantum mechanics, replacing classical mechanics at microscopic scales. The story of paradigm shifts is long and continues.


Science’s progress in unveiling the universe’s mysteries entails dynamic processes. One is the enduring sustainability of theories, seemingly etched in stone, that hold up under unsparing tests of verification and falsification. Another is the implementation of amendments as contrary findings chip away at the efficacy of models. And another still is the revolutionary replacement of scientific models as legacy theories become frail and fail. All are reasons for belief in the methods of positivism.


In 1960, the physicist Eugene Wigner wrote what became a famous paper in philosophy and other circles, coining the evocative expression ‘unreasonable effectiveness’. This was in reference to the role of mathematics in the natural sciences, but he could well have been speaking of the role of science itself in acquiring understanding of the world.


Monday, 28 January 2019

Is Mathematics Invented or Discovered?



Posted by Keith Tidman

I’m a Platonist. Well, at least insofar as how mathematics is presumed ‘discovered’ and, in its being so, serves as the basis of reality. Mathematics, as the mother tongue of the sciences, is about how, on one important epistemological level, humankind seeks to understand the universe. To put this into context, the Hungarian-American physicist Eugene Wigner published a paper in 1960 whose title even referred to the ‘unreasonable effectiveness’ of mathematics, before trying to explain why it might be so. His English contemporary, Paul Dirac, dared to go a step further, declaring, in a phrase with a theological and celestial ring, that ‘God used beautiful mathematics in creating the world’. All of which leads us to this consequential question: Is mathematics invented or discovered, and does mathematics underpin universal reality?
‘In every department of physical science, there is only so much science … as there is mathematics’ — Immanuel Kant
If mathematics is simply a tool of humanity that happens to align with and helps to describe the natural laws and organisation of the universe, then one might say that mathematics is invented. As such, mathematics is an abstraction that reduces to mental constructs, expressed through globally agreed-upon symbols. In this capacity, these constructs serve — in the complex realm of human cognition and imagination — as a convenient expression of our reasoning and logic, to better grasp the natural world. According to this ‘anti-realist’ school of thought, it is through our probing that we observe the universe and then build mathematical formulae to describe what we see. Isaac Newton, for example, developed calculus to explain such things as the acceleration of objects and planetary orbits. Mathematicians sometimes refine their formulae later, to conform ever more closely to what scientists learn about the universe over time. Another way to put it is that anti-realist theory holds that, without humankind around, mathematics would not exist either. Yet the flaw in this paradigm is that it leaves the foundation of reality unstated. It fails to answer Galileo’s incisive observation that:
‘The book of nature is written in the language of mathematics.’
If, however, mathematics is regarded as the unshakably fundamental basis of the universe — whereby it acts as the native language of everything (embodying universal truths) — then humanity’s role becomes to discover the underlying numbers, equations, and axioms. According to this view, mathematics is intrinsic to nature and provides the building blocks — both proximate and ultimate — of the entire universe. An example consists of that part of the mathematics of Einstein’s theory of general relativity predicting the existence of ‘gravitational waves’; the presence of these waves would not be proven empirically until this century, through advanced technology and techniques. Per this ‘Platonic’ school of thought, the numbers and relationships associated with mathematics would nonetheless still exist, describing phenomena and governing how they interrelate, bringing a semblance of order to the universe — a math-based universe that would exist even absent humankind. After all, this underlying mathematics existed before humans arrived upon the scene — awaiting our discovery — and this mathematics will persist long after us.
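As a sketch of the prediction just mentioned (the standard linearised treatment, assuming the usual trace-reversed perturbation and Lorenz gauge, rather than anything specific to this essay): writing the metric as a small ripple on flat spacetime reduces Einstein’s vacuum equations to a wave equation,

$$g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}, \qquad |h_{\mu\nu}| \ll 1, \qquad \Box\,\bar{h}_{\mu\nu} = 0,$$

whose solutions propagate at the speed of light: the gravitational waves first detected directly by LIGO in 2015, a century after the mathematics predicted them.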

If this Platonic theory is the correct way to look at reality, as I believe it is, then it’s worth taking the issue to the next level: the unique role of mathematics in formulating truth and serving as the underlying reality of the universe — both quantitative and qualitative. As Aristotle summed it up, the ‘principles of mathematics are the principles of all things’. Aristotle’s broad stroke foreshadowed what millennia later became known in the mathematical and scientific world as a ‘theory of everything’, unifying all forces — including the still-elusive reconciliation of quantum mechanics and relativity.

As the Swedish-American cosmologist Max Tegmark provocatively put it, ‘There is only mathematics; that is all that exists’ — an unmistakably monist perspective. He colorfully goes on:
‘We all live in a gigantic mathematical object — one that’s more elaborate than a dodecahedron, and probably also more complex than objects with intimidating names such as Calabi-Yau manifolds, tensor bundles and Hilbert spaces, which appear in today’s most advanced physics theories. Everything in our world is purely mathematical — including you.’
The point is that mathematics doesn’t just provide ‘models’ of physical, qualitative, and relational reality; as Descartes suspected centuries ago, mathematics is reality.

Mathematics thus doesn’t care, if you will, what one might ‘believe’; it dispassionately performs its substratum role, regardless. The more we discover the universe’s mathematical basis, the more we build on an increasingly robust, accurate understanding of universal truths, and get ever nearer to an uncannily precise, clear window onto all reality — foundational to the universe. 

In this role, mathematics has enormous predictive capabilities that pave the way to its inexhaustibly revealing reality. An example is the mathematical hypothesis stating that a particular fundamental particle exists whose field is responsible for the existence of mass. The particle was theoretically predicted, in mathematical form, in the 1960s by the British physicist Peter Higgs. Existence of the particle — named the Higgs boson — was confirmed experimentally nearly fifty years later, in 2012. Likewise, Fermat’s famous last theorem, conjectured in 1637, was not proven mathematically until some 360 years later, in 1994 — yet the ‘truth value’ of the theorem nonetheless existed all along.
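For the record (the standard statement of the theorem, not anything unique to this essay), Fermat’s claim is that the equation

$$a^n + b^n = c^n$$

has no solutions in positive integers $a$, $b$, $c$ for any integer exponent $n > 2$; Andrew Wiles’s proof finally settled the matter in the mid-1990s, confirming a truth that, on the Platonist reading, had been there all along.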

Underlying this discussion is the unsurprising observation by the early-20th-century philosopher Edmund Husserl, who noted, in understated fashion, that ‘Experience by itself is not science’ — while elsewhere referring to ‘the profusion of insights’ that could be obtained from mathematical research. That process is one of discovery — discovery, that is, of things that are true, even if we had not hitherto known them to be so. The ‘profusion of insights’ obtained in that mathematical manner yields a method complete and consistent enough to direct us to a category of understanding whereby all reality is mathematical reality.