
Monday, 9 January 2023

The Philosophy of Science


The solar eclipse of May 29, 1919, forced a rethink of fundamental laws of physics

By Keith Tidman


Science aims at uncovering what is true. And it is equipped with all the tools — natural laws, methods, technologies, mathematics — that it needs to succeed. Indeed, in many ways, science works exquisitely. But does science ever actually arrive at reality? Or is science, despite its persuasiveness, paradoxically consigned to forever wending closer to its goal, yet not quite arriving — as theories are either amended to fit new findings, or they have to be replaced outright?

Science relies on observation — especially measurement. Observation confirms and grounds the validity of contending models of reality, empowering critical analysis to probe the details. The role of analysis is to scrutinise a theory’s scaffolding, to better visualise the coherent whole, broadening and deepening what is understood of the natural world. To these ends, science, at its best, has a knack for abiding by the ‘laws of parsimony’ of Occam’s razor — describing complexity as simply as possible, with the fewest suppositions needed to get the job done.

To be clear, other fields attempt this self-scrutiny and rigour, too, in one manner or another, as they fuel humanity’s flame of creative discovery and invention. They include history, languages, aesthetics, rhetoric, ethics, anthropology, law, religion, and of course philosophy, among others. But just as these fields are unique in their mission (oriented in the present) and their vision (oriented in the future), so is science — the latter heralding a physical world thought to be rational.

Accordingly, in science, theories should agree with evidence-informed, objective observations. Results should be replicated every time that tests and observations are run, confirming predictions. This bottom-up process is driven by what is called inductive reasoning: where a general principle — a conclusion, like an explanatory theory — is derived from multiple observations in which a pattern is discerned. An example of inductive reasoning at its best is Newton’s Third Law of Motion, which states that for every action (force) there is an equal and opposite reaction. It is a law that has held unfailingly in countless instances.

But such successes do not eliminate inductive reasoning’s sliver of vulnerability. Karl Popper, the 20th-century Austrian-British philosopher of science, considered all scientific knowledge to be provisional. He illustrated his point with the example of a person who, having seen only white swans, concludes all swans are white. However, the person later discovers a black swan, an event conclusively rebutting the universality of white swans. Of course, abandoning this latter principle has little consequence. But what if an exception to Newton’s universal law governing action and reaction were to appear, instead?

Perhaps, as Popper suggests, truth, scientific and otherwise, should therefore only ever be parsed as partial or incomplete, where hypotheses offer different truth-values. Our striving for unconditional truth remains a task in the making. This is of particular relevance in complex areas: like the nature of being and existence (ontology); or of universal concepts, transcendental ideas, metaphysics, and the fundamentals of what we think we know and understand (epistemology). (Areas also known to attempt to reveal the truth of unobserved things.)

And so, Popper introduced a new test of truth: ‘falsifiability’. That is, all scientific assertions should be subjected to the test of being proven false — the opposite of seeking confirmation. Einstein, too, was more interested in whether experiments disagreed with his bold conjectures, as such experiments would render his theories invalid — rather than merely provide further evidence for them.

Nonetheless, as human nature would have it, Einstein was jubilant when his prediction that massive objects bend light was confirmed by astronomical observations of light passing close to the sun during the total solar eclipse of 1919, the observation thereby requiring revision of Newton’s formulation of the laws of gravity.

Testability is also central to another aspect of epistemology. That is, to draw a line between true science — whose predictions are subject to rigorous falsification and thus potential disproof — and pseudoscience — seen as speculative, untestable predictions relying on uncontested dogma. Pseudoscience balances precariously, depending as it does on adopters’ fickle belief-commitment rather than on rigorous tests and critical analyses.

On the plus side, if theories are not successfully falsified despite earnest efforts to do so, the claims may have a greater chance of turning out true. Well, at least until new information surfaces to force change to a model. Or, until ingenious thought experiments and insights lead to the sweeping replacement of a theory. Or, until investigation explains how to merge models formerly considered stubbornly unalike, yet valid in their respective domains. An example of this last point is the case of general relativity and quantum mechanics, which have remained irreconcilable in describing reality (in matters ranging from spacetime to gravity), despite physicists’ attempts.

As to the wholesale switching out of scientific theories, it may appear compelling to make the switch, based on accumulated new findings or the sense that the old theory has major fault lines, suggesting it has run its useful course. The 20th-century American philosopher of science, Thomas Kuhn, was influential in this regard, coining the formative expression ‘paradigm shift’. The shift occurs when a new scientific theory replaces its problem-ridden predecessor, based on a consensus among scientists that the new theory (paradigm) better describes the world, offering a ‘revolutionarily’ different understanding that requires a shift in fundamental concepts.


Among the great paradigm shifts of history are Copernicus’s sun-centered (heliocentric) model of planetary motion, replacing Ptolemy’s Earth-centered model. Another was Charles Darwin’s theory of natural selection as key to the biological sciences, informing the origins and evolution of species. Additionally, Einstein’s theories of relativity ushered in major changes to Newton’s understanding of the physical universe. Also significant was the recognition that plate tectonics explains large-scale geologic change. Significant, too, was the development by Niels Bohr and others of quantum mechanics, replacing classical mechanics at microscopic scales. The story of paradigm shifts is long and continues.


Science’s progress in unveiling the universe’s mysteries entails several dynamic processes. One is the enduring sustainability of theories, seemingly etched in stone, that hold up under unsparing tests of verification and falsification. Another is the implementation of amendments as contrary findings chip away at the efficacy of models. And another still is the revolutionary replacement of scientific models as legacy theories become frail and fail. All are reasons for belief in the methods of positivism.


In 1960, the physicist Eugene Wigner wrote what became a famous paper in philosophy and other circles, coining the evocative expression ‘unreasonable effectiveness’. This was in reference to the role of mathematics in the natural sciences, but he could well have been speaking of the role of science itself in acquiring understanding of the world.


Monday, 9 November 2020

The Certainty of Uncertainty


Posted by Keith Tidman
 

We favour certainty over uncertainty. That’s understandable. Our subscribing to certainty reassures us that perhaps we do indeed live in a world of absolute truths, and that all we have to do is stay the course in our quest to stitch the pieces of objective reality together.

 

We imagine the pursuit of truths as comprising a lengthening string of eureka moments, as we put a check mark next to each section in our tapestry of reality. But might that reassurance about absolute truths prove illusory? Might it be, instead, ‘uncertainty’ that wins the tussle?

 

Uncertainty taunts us. The pursuit of certainty, on the other hand, gets us closer and closer to reality, that is, closer to believing that there’s actually an external world. But absolute reality remains tantalizingly just beyond our fingertips, perhaps forever.

 

And yet it is uncertainty, not certainty, that incites us to continue conducting the intellectual searches that inform us and our behaviours, even if imperfectly, as we seek a fuller understanding of the world. Even if the reality we think we have glimpsed is one characterised by enough ambiguity to keep surprising and sobering us.

 

The real danger lies in an overly hasty, blinkered turn to certainty. This trust stems from a cognitive bias — the one that causes us to overvalue our knowledge and aptitudes. Psychologists call it the Dunning-Kruger effect.

 

What’s that about then? Well, this effect precludes us from spotting the fallacies in what we think we know, and discerning problems with the conclusions, decisions, predictions, and policies growing out of these presumptions. We fail to recognise our limitations in deconstructing and judging the truth of the narratives we have created, limits that additional research and critical scrutiny so often unmask. 

 

The Achilles’ heel of certainty is our habitual resort to inductive reasoning. Induction occurs when we conclude from many observations that something is universally true: that the past will predict the future. Or, as the Scottish philosopher, David Hume, put it in the eighteenth century, our inferring ‘that instances of which we have had no experience resemble those of which we have had experience’. 

 

A much-cited example of such reasoning consists of someone concluding that, because they have only ever observed white swans, all swans are therefore white — shifting from the specific to the general. Indeed, Aristotle uses the white swan as an example of a logically necessary relationship. Yet, someone spotting just one black swan disproves the generalisation. 

 

Bertrand Russell once set out the issue in this colourful way:

 

‘Domestic animals expect food when they see the person who usually feeds them. We know that all these rather crude expectations of uniformity are liable to be misleading. The man who has fed the chicken every day throughout its life at last wrings its neck instead, showing that more refined views as to uniformity of nature would have been useful to the chicken’.

 

The person’s theory that all swans are white — or the chicken’s theory that the man will continue to feed it — can be falsified, which sits at the core of the ‘falsification’ principle developed by philosopher of science Karl Popper. The heart of this principle is that in science a hypothesis or theory or proposition must be falsifiable, that is, capable of being shown wrong; in other words, testable against evidence. For Popper, a claim that is untestable is not scientific.

 

However, a testable hypothesis that is proven through experience to be wrong (falsified) can be revised, or perhaps discarded and replaced by a wholly new proposition or paradigm. This happens in science all the time, of course. But here’s the rub: humanity can’t let uncertainty paralyse progress. As Russell also said: 

 

‘One ought to be able to act vigorously in spite of the doubt. . . . One has in practical life to act upon probabilities’.

 

So, in practice, whether implicitly or explicitly, we accept uncertainty as a condition in all fields — throughout the humanities, social sciences, formal sciences, and natural sciences — especially if we judge the prevailing uncertainty to be tiny enough to live with. Here’s a concrete example, from science.

 

In the 1960s, the British theoretical physicist, Peter Higgs, mathematically predicted the existence of a specific subatomic particle: the last missing piece in the Standard Model of particle physics. But no one had yet seen it, so the elusive particle remained a hypothesis. Only several decades later, in 2012, did CERN’s Large Hadron Collider reveal the particle, whose field is claimed to have the effect of giving all other particles their mass. (Earning Higgs and François Englert the 2013 Nobel Prize in Physics.)

 

The CERN scientists’ announcement said that their confirmation bore ‘five-sigma’ certainty. That is, there was only 1 chance in 3.5 million that what was sighted was a fluke, or something other than the then-named Higgs boson. A level of certainty (or of uncertainty, if you will) that physicists could very comfortably live with. Though as Kyle Cranmer, one of the scientists on the team that discovered the particle, appropriately stresses, there remains an element of uncertainty: 

 

“People want to hear declarative statements, like ‘The probability that there’s a Higgs is 99.9 percent,’ but the real statement has an ‘if’ in there. There’s a conditional. There’s no way to remove the conditional.”
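For readers curious where a figure like ‘1 chance in 3.5 million’ comes from, here is a minimal sketch in Python, assuming the usual particle-physics convention of quoting a one-sided Gaussian tail probability (that convention is an assumption on my part, not something spelled out above):

import math

# One-sided tail probability of a standard normal distribution beyond 5 sigma,
# assuming the 'five-sigma' claim refers to a one-sided Gaussian tail.
sigma = 5.0
p_value = 0.5 * math.erfc(sigma / math.sqrt(2.0))
print(f"p-value at {sigma:.0f} sigma: {p_value:.3e}")   # about 2.87e-07
print(f"roughly 1 chance in {1 / p_value:,.0f}")        # about 1 in 3.5 million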

 

Of course, in everyday life we rarely have to calculate the probability of reality. But we might, through either reasoning or subconscious means, come to conclusions about the likelihood of what we choose to act on as being right, or safely right enough. The stakes of being wrong matter — sometimes a little, other times consequentially. Peter Higgs got it right; Bertrand Russell’s chicken got it wrong.

  

The takeaway from all this is that we cannot know things with absolute epistemic certainty. Theories are provisional. Scepticism is essential. Even wrong theories kindle progress. The so-called ‘theory of everything’ will remain evasively slippery. Yet, we’re aware we know some things with greater certainty than other things. We use that awareness to advantage, informing theory, understanding, and policy, ranging from the esoteric to the everyday.

 

Monday, 20 January 2020

Environmental Ethics and Climate Change

Posted by Keith Tidman

The signals of a degrading environment are many and on an existential scale, imperilling the world’s ecosystems. Rising surface temperature. Warming oceans. Shrinking Greenland and Antarctic ice sheets. Glacial retreat. Decreased snow cover. Sea-level rise. Declining Arctic sea ice. Increased atmospheric water vapour. Permafrost thawing. Ocean acidification. And not least, supercharged weather events (more often, longer lasting, more intense).

Proxy (indirect) measurements — ice cores, tree rings, corals, ocean sediment — show that concentrations of carbon dioxide, a heat-trapping gas that plays an important role in creating the greenhouse effect on Earth, have spiked dramatically since the beginning of the Industrial Revolution. The measurements underscore that the recent increase far exceeds the natural ups and downs of the previous several hundred thousand years. Human activity — use of fossil fuels to generate energy and run industry, deforestation, cement production, land use changes, modes of travel, and much more — continues to be the accelerant.

The reports of the United Nations’ Intergovernmental Panel on Climate Change, contributed to by some 1,300 independent scientists and other researchers from more than 190 countries worldwide, state that concentrations of carbon dioxide, methane, and nitrous oxides ‘have increased to levels unprecedented in at least 800,000 years’. The level of certainty that human activity is the leading cause, referred to as the anthropogenic cause, has been placed at more than 95 percent.

That probability figure has legs, in terms of scientific method. Early logical positivists like A.J. Ayer had asserted that for validity, a scientific proposition must be capable of proof — that is, ‘verification’. Later, however, Karl Popper, in his The Logic of Scientific Discovery, argued that in the case of verification, no number of observations can be conclusive. As Popper said, no matter how many instances of white swans we may have observed, this does not justify the conclusion that all swans are white. (Lo and behold, a black swan shows up.) Instead, Popper said, the scientific test must be whether in principle the proposition can be disproved — referred to as ‘falsification’. Perhaps, then, the appropriate test is not ability to prove that mankind has affected the Earth’s climate; rather, it’s incumbent upon challengers to disprove (falsify) such claims. Something that  hasn’t happened and likely never will.

As for the ethics of human intervention into the environment, utilitarianism is the usual measure. That is to say, the consequences of human activity upon the environment govern the ethical judgments one makes of behavioural outcomes to nature. However, we must be cautious not to translate consequences solely in terms of benefits or disadvantages to humankind’s welfare; our welfare appropriately matters, of course, but not to the exclusion of all else in our environment. A bias to which we have repeatedly succumbed.

The danger of such skewed calculations may be in sliding into what the philosopher Peter Singer popularised as ‘speciesism’. This is where, hierarchically, we place the worth of humans above all else in nature, as if the latter is solely at our beck and call. This anthropocentric favouring of ourselves is, I suggest, arbitrary and too narrow. The bias is also arguably misguided, especially if it disregards other species — depriving them of autonomy and inherent rights — irrespective of the sophistication of their consciousness. To this point, the 18th/19th-century utilitarian Jeremy Bentham argued that the question about animals is not ‘Can they reason?’ nor ‘Can they talk?’ but ‘Can they suffer?’; if they can, then they deserve moral consideration.

Assuredly, human beings are endowed with cognition that’s in many ways vastly more sophisticated than that of other species. Yet, without lapsing into speciesism, there seem to be distinct limits to the comparison, to avoid committing what’s referred to as a ‘category mistake’ — in this instance, assigning qualities to species (from orangutans and porpoises to snails and amoebas) that belong only to humans. In other words, an overwrought egalitarianism. Importantly, however, that’s not the be-all of the issue. Our planet is teeming not just with life, but with other features — from mountains to oceans to rainforest — that are arguably more than mere accouterments for simply enriching our existence. Such features have ‘intrinsic’ or inherent value — that is, they have independent value, apart from the utilitarianism of satisfying our needs and wants.

For perspective, perhaps it would be better to regard humans as nodes in what we consider a complex ‘bionet’. We are integral to nature; nature is integral to us; in their entirety, the two are indissoluble. Hence, while skirting implications of panpsychism — where everything material is thought to have at least an element of consciousness — there should be prima facie respect for all creation: from animate to inanimate. These elements have more than just the ‘instrumental’ value of satisfying the purposes of humans; all of nature is itself intrinsically the ends, not merely the means. Considerations of aesthetics, culture, and science, though important and necessary, aren’t sufficient.

As such, there is an intrinsic moral imperative not only to preserve Earth, but for it and us jointly to flourish — per Aristotle’s notion of ‘virtue’, with respect and care, including for the natural world. It’s a holistic view that concedes, on both the utilitarian and intrinsic sides of the moral equation, mutually serving roles. This position accordingly pushes back against the hubristic idea that human-centrism makes sense if the rest of nature collectively amounts only to a backstage for our purposes. That is, a backstage that provides us with a handy venue where we act out our roles, whose circumstances we try to manage (sometimes ham-fistedly) for self-satisfying purposes, where we tinker ostensibly to improve, and whose worth (virtue) we believe we’re in a position to judge rationally and bias-free.

It’s worth reflecting on a thought experiment, dubbed ‘the last man’, that the Australian philosopher Richard Routley introduced in the 1970s. He envisioned a single person surviving ‘the collapse of the world system’, choosing to go about eliminating ‘every living thing, animal and plant’, knowing that there’s no other person alive to be affected. Routley concluded that ‘one does not have to be committed to esoteric values to regard Mr. Last Man as behaving badly’. Whether Last Man was, or wasn’t, behaving unethically goes to the heart of intrinsic versus utilitarian values regarding nature — and presumptions about human supremacy in that larger calculus.

Groups like the UN Intergovernmental Panel on Climate Change have laid down markers as to tipping points beyond which extreme weather events might lead to disastrously runaway effects on the environment and humanity. Instincts related to the ‘tragedy of the commons’ — where people rapaciously consume natural resources and pollute, disregarding the good of humanity at large — have not yet been surmounted. That some other person, or other community, or other country will shoulder accountability for turning back the wave of environmental destruction and the upward-spiking curve of climate extremes has hampered the adequacy of attempted progress. Nature has thrown down the gauntlet. Will humanity pick it up in time?

Sunday, 26 May 2019

Is Popper a ‘modest’ Leo?


Posted by Martin Cohen

A few years ago, astrologer-aesthete Mark Shulgasser asked us this revealing question about one of the 20th century's most under-rated philosophers. Popper, we should first recall, is admired for at least two big ideas: first, that science proceeds by testing hypotheses and discarding those that fail the test (‘falsification’); and second, his critique of ‘historicism’ (the idea that history is marching towards a fine goal) and linked defence of liberal values and what he calls ‘the open society’. His point is that too many philosophers, from Plato down, think that they are exceptional beings: ‘philosopher kings’.

And yet... Shulgasser throws the charge back at him!

Those (like Popper) born under the astrological sign of Leo think they are kings. Do Leo philosophers think like that too?

Shulgasser continues:
‘Popper himself, so Napoleonic, the overcompensating short man. Popper's philosophical ambitions are overweening. He conquers continents. No one talks about Popper the person without noting his autocratic behavior and intransigence in contrast to his ethic of openness. Here's the Leo dilemma — the autocratic, central I versus the right of every peripheral being to claim to be the same.’
Certainly, in later years, it seems that Professor Popper lived in a house ‘supremely large in area, and adorned with numerous books, works of art, and a Steinway concert grand piano’...  But does that make him ‘Napoleonic’? Consider Bryan Magee (broadcaster, politician, author, and popularizer of philosophy) on Popper, taken from his Confessions of a Philosopher. Magee starts by accepting Popper as ‘the outstanding philosopher of the twentieth century’, indeed the ‘foremost philosopher of the age’!
‘My chief impression of him at our early meetings was of an intellectual aggressiveness such as I had never encountered before [Napoleonism]. Everything we argued about he pursued relentlessly, beyond the limits of acceptable aggression in conversation. As Ernst Gombrich—his closest friend, who loved him—once put it to me, he seemed unable to accept the continued existence of different points of view, but went on and on and on about them with a kind of unforgivingness until the dissenter, so to speak, put his signature to a confession that he was wrong and Popper was right. 
In practice this meant he was trying to subjugate people. And there was something angry about the energy and intensity with which he made the attempt. This unremittingly fierce, tight focus, like a flame, put me in mind of a blowtorch, and that image remained the dominant one I had of him for many years, until he mellowed with age. . . 
He behaved as if the proper thing to do was to think one’s way carefully to a solution by the light of rational criteria and then, having come as responsibly and critically as one can to a liberal-minded view of what is right, impose it by an unremitting exercise of will, and never let up until one gets one’s way. ‘The totalitarian liberal’ was one of his nicknames at the London School of Economics, and it was a perceptive one.’
Popper, it seems, ‘turned every discussion into the verbal equivalent of a fight, and appeared to become almost uncontrollable with rage, and would tremble with anger’.

Yet central to his philosophy is the claim that criticism does more than anything else to bring about the growth and improvement of our knowledge, and his political writings contain the best statement ever made of the case for freedom and tolerance in human affairs.

So who is the ‘real’ Karl Popper? Does it matter if he failed to live up to his own writings? There's a revealing story told about Popper, in which he was invited to give a talk at Cambridge University's Moral Sciences Club.

Who did wave the poker during the acrimonious debate? I understood the Popper version of the poker incident to put him in a meek and philosophical light and Wittgenstein in a boorish, intolerant one. Maybe I got this wrong (alas, I committed myself to this in print, in my book Philosophical Tales).

Anyway, what is known is that Popper was there to present his paper entitled ‘Are There Philosophical Problems?’ at a meeting chaired by Wittgenstein. The two started arguing vehemently over whether there existed substantial problems in philosophy, or merely linguistic puzzles—the position taken by Wittgenstein. In Popper’s account, Wittgenstein gestured at him with a fireplace poker to emphasise his points. When challenged by Wittgenstein to state an example of a moral rule, Popper claims to have replied: ‘Not to threaten visiting lecturers with pokers’, after which (according to Popper) Wittgenstein threw down the poker and stormed out.

My guess is that Popper was indeed a little bit Napoleonic. Mind you, he faced a world in which he was passed over by others all the time, not least Wittgenstein, partly on some kind of unspoken notion of his not being ‘one of us’, not being quite posh enough. Popper was denied access to Oxbridge, and had to graze on the outskirts of academia as a ‘not-quite-great’ philosopher.

And elsewhere Magee himself makes it clear he believes Popper is colossally underrated. Why, it’s enough to give anyone a Napoleon complex!

Monday, 20 November 2017

Freedom of Speech in the Public Square

Posted by Keith Tidman

Free to read the New York Times forever, in Times Square
What should be the policy of a free society toward the public expression of opinion? The First Amendment of the U.S. Constitution required few words to make its point:
‘Congress shall make no law . . . abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the government for a redress of grievances.’
It reveals much about the republic, and the philosophical primacy of freedom of speech, that this was the first of the ten constitutional amendments collectively referred to as the Bill of Rights.

As much as we like to convince ourselves, however, that the public square in the United States is always a bastion of unbridled free speech, lamentably sometimes it’s not. Although we (rightly) find solace in our free-speech rights, at times and in every forum we are too eager to restrict someone else’s privilege, particularly where monopolistic and individualistic thinking may collide. Hot-button issues have flared time and again to test forbearance and deny common ground.

And it is not only liberal ideas but also conservative ones that have come under assault in recent years. When it comes to an absence of tolerance of opinion, there’s ample responsibility to share, across the ideological continuum. Our reaction to an opinion often is swayed by whose philosophical ox is being gored rather than by the rigor of argument. The Enlightenment thinker Voltaire purportedly pushed back against this parochial attitude, coining this famous declaration:

‘I don’t agree with what you have to say, but I’ll defend to the death your right to say it.’
Yet still, the avalanche of majority opinion, and overwrought claims to ‘unique wisdom’, poses a hazard to the fundamental protection of minority and individual points of view — including beliefs that others might find specious, or even disagreeable.

To be clear, these observations about intolerance in the public square are not intended to advance moral relativism or equivalency. There may indeed be, for want of a better term, ‘absolute truths’ that stand above others, even in the everyday affairs of political, academic, and social policymaking. This reality should not fall prey to pressure from the more clamorous claims of free speech: that the loudest, angriest voices are somehow the truest, as if decibel count and snarling expressions mattered to the urgency and legitimacy of one’s ideas.

Thomas Jefferson like-mindedly referred to ‘the safety with which error of opinion may be tolerated where reason is left free to combat it’. The key is not to fear others’ ideas, as blinkered censorship concedes defeat: that one’s own facts, logic, and ideas are not up to the task of effectively putting others’ opinions to the test, without resort to vitriol or violence.

The risk to society of capriciously shutting down the free flow of ideas was powerfully warned against some one hundred fifty years ago by that Father of Liberalism, the English philosopher John Stuart Mill:
‘Strange it is that men should admit the validity of the arguments for free speech but object to their being “pushed to an extreme”, not seeing that unless the reasons are good for an extreme case, they are not good for any case.’
Mill’s observation is still germane to today’s society: from the halls of government to university campuses to self-appointed bully pulpits to city streets, and venues in-between.

Indeed, as recently as the summer of 2017, the U.S. Supreme Court underscored Mill’s point, setting a high bar in affirming bedrock constitutional protections of even offensive speech. Justice Anthony Kennedy, considered a moderate, wrote:
‘A law that can be directed against speech found offensive to some portion of the public can be turned against minority and dissenting views to the detriment of all. . . . The First Amendment does not entrust that power to the government’s benevolence. Instead, our reliance must be on the substantial safeguards of free and open discussion in a democratic society.’
It is worth noting that the high court opinion was unanimous: both liberal and conservative justices concurred. The long and short of it is that even the shards of hate speech are protected.

As to this issue of forbearance, the 20th-century philosopher Karl Popper introduced his paradox of tolerance: ‘Unlimited tolerance must lead to the disappearance of tolerance’. Popper goes on to assert, with some ambiguity,
‘I do not imply . . . that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be unwise. But we should claim the right to suppress them if necessary even by force’.
The philosopher John Rawls agreed, asserting that a just society must tolerate the intolerant, to avoid itself becoming guilty of intolerance and appearing unjust. However, Rawls evoked reasonable limits ‘when the tolerant sincerely and with reason believe that their own security and that of the institutions of liberty are in danger’. Precisely where that line would be drawn is unclear — left to Supreme Court justices to dissect and delineate, case by case.

Open-mindedness — honoring ideas of all vintages — is a cornerstone of an enlightened society. It allows for the intellectual challenge of contrarian thinking. Contrarians might at times represent a large cohort of society; at other times they simply remain minority (yet influential) iconoclasts. Either way, the power of contrarians’ nonconformance is in serving as a catalyst for transformational thinking in deciding society’s path leading into the future.

That’s intellectually healthier than the sides of debates getting caught up in their respective bubbles, with tired ideas ricocheting around without discernible purpose or direction.

Rather than cynicism and finger pointing across the philosophical divide, the unfettered churn of diverse ideas enriches citizens’ minds, informs dialogue, nourishes curiosity, and makes democracy more enlightened and sustainable. In the face of simplistic patriarchal, authoritarian alternatives, free speech releases and channels the flow of ideas. Hyperbole that shuts off the spigot of ideas dampens inventiveness; no one’s ideas are infallible, so no one should have a hand at the ready to close that spigot. As Benjamin Franklin, one of America’s Founding Fathers, prophetically and plainly pronounced in the Pennsylvania Gazette, 17 November 1737:
‘Freedom of speech is a principal pillar of a free government.’
Adding that ‘... when this support is taken away, the constitution of a free society is dissolved, and tyranny is erected on its ruins’. Franklin’s point is that the erosion or denial of unfettered speech threatens the foundation of a constitutional, free nation that holds government accountable.

With determination, the unencumbered flow of ideas, leavened by tolerance, can again prevail as the standard of every public square — unshackling discourse, allowing dissent, sowing enlightenment, and delivering a foundational example and legacy of what’s possible by way of public discourse.

Monday, 11 May 2015

What is a philosophical problem? The irrefutable metahypothesis

By Matthew Blakeway

If we ban speculation about metahypotheses, does philosophical debate simply evaporate? 



Karl Popper explained how scientific knowledge grows in his book Conjectures and Refutations. A conjecture is a guess as to an explanation of a phenomenon. And an experiment is an attempt to refute a conjecture. Experiments can never prove a conjecture correct, but if successive experiments fail to refute it, then gradually it becomes accepted by scientists that the conjecture is the best available explanation. It is then a scientific theory. Scientists don’t like the word “conjecture” because it implies that it is merely a guess. They prefer the word “hypothesis”. Popper’s rule is that, for a hypothesis to be considered scientific, it must be empirically falsifiable.

When scientists consider a phenomenon that is truly mystifying, it seems reasonable to ask “what might a hypothesis for this look like?” At this point, scientists are hypothesising about hypotheses. Metahypothetical thinking is the first step in any scientific journey. When this produces no results, frustration gets the upper hand and they pursue the following line of reasoning: “the phenomenon is an effect, and must have a cause. But since we don’t know what that cause is, let’s give it a name ‘X’ and then speculate about its properties.” A metahypothesis is now presumed to be 'A Thing', rather than merely an idea about an idea.

The problem is the irrefutability of its existence.