
Sunday, 26 February 2023

Universal Human Rights for Everyone, Everywhere

Jean-Jacques Rousseau

By Keith Tidman


Human rights exist only if people believe that they do and act accordingly. To that extent, we are, collectively, architects of our destiny — taking part in an exercise in the powers of human dignity and sovereignty. Might we, therefore, justly consider human rights as universal?

To presume that there are such rights, governments must be fashioned according to the people’s freely subscribed blueprints, in such ways that policymaking and the consignment of authority in society represent citizens’ choices and that power is willingly shared. Such individual autonomy is itself a fundamental human right: a norm to be exercised by all, in all corners, despite scattered conspicuous headwinds. Respect for and attachment to human rights in relations with others is binding, prevailing over the mercurial whimsy of institutional dictates.

For clarity, universal human rights are inalienable norms that apply to everyone, everywhere. No nation ought to self-immunise as an exception. These human rights are not mere privileges. By definition they represent the natural order of things; that is, these rights are naturally, not institutionally, endowed. There’s no place for governmental, legal, or social neglect or misapplication of those norms in violation of human dignity. This point about dignity is redolent of Jean-Jacques Rousseau’s notions of civil society, explained in his Social Contract (1762), which provocatively opens with the famous ‘Man was born free, and he is everywhere in chains’. By this, Rousseau meant the tradeoff in which people defer to government authority over moral behaviour in exchange for whatever freedoms civilisation might grant as part of the social contract. The contrary notion, however, asserts that human rights are natural, protected from government caprice in their unassailability — claims secured by the humanitarianism of citizens in all countries, regardless of cultural differences.

The idea that everyone has a claim to immutable rights has the appeal of providing a platform for calling out wrongful behaviour and a moral voice for preventing or remedying harms, in compliance with universal standards. The standards act as moral guarantees and assurances of oversight. Differences among cultures should not license a warped relativism that undercuts otherwise clear-cut universal rights and the protections they are meant to afford.

International nongovernmental organisations (such as Human Rights Watch) have laboured to protect fundamental liberties around the world, investigating abuses. Other bodies, notably the United Nations, have sought to codify people’s rights, like those spelled out in the Universal Declaration of Human Rights. The many universal human rights listed by the declaration include these:
‘All human beings are born free; everyone has the right to life, liberty, and security; no one shall be subjected to torture; everyone has the right to freedom of thought, conscience, and religion; everyone has the right to education; no one shall be held in slavery; all are equal before the law’.

These aims have been ‘hallowed’ by the several documents spelling out moral canon, in aggregate amounting to an international bill of rights to which countries are to commit and abide by. This has been done without regard to appeals to national sovereignty or cultural differences, which might otherwise prejudice the process, skew policy, undermine moral universalism, lay claim to government dominion, or cater to geopolitical bickering — such things always threatening to pull the legs out from under citizens’ human rights.

These kinds of organisations have set the philosophical framework for determining, spelling out, justifying, and promoting the implementation of human rights on as broad a global scale as possible. Aristotle, in the Nicomachean Ethics, wrote to this core point, saying:
‘A rule of justice is natural that has the same validity everywhere, and does not depend on our accepting it’.
That is, natural justice predates the social, historical, and political institutions shaped to bring about conformance to their arbitrary, self-serving systems of fairness and justice. Aristotle goes on:
‘Some people think that all rules of justice are merely conventional, because whereas a law of nature is immutable and has the same validity everywhere, as fire burns both here and in Persia, rules of justice are seen to vary. That rules of justice vary is not absolutely true, but only with qualifications. Among the gods indeed it is perhaps not true at all; but in our world, although there is such a thing as Natural Justice, all rules of justice are variable. But nevertheless there is such a thing as Natural Justice as well as justice not ordained by nature’.
Natural justice accordingly applies to everyone, everywhere, where moral beliefs are objectively corroborated as universal truths and certified as profound human goods. In this model, it is the individual who shoulders the task of appraising the moral content of institutional decision-making.

Likewise, it was John Locke, the 17th-century English philosopher, who argued, in his Two Treatises of Government, that individuals enjoy natural rights, entirely independent of the nation-state. And that whatever authority the state might lay claim to rested in guarding, promoting, and serving the natural rights of citizens. The natural rights to life, liberty, and property set clear limits on the power of the state. There was no mystery as to Locke’s position: states existed singularly to serve the natural rights of the people.

A century later, Immanuel Kant was in the vanguard in similarly taking a strong moral position on validating the importance of human rights, chiefly the entangled ideals of equality and the moral autonomy and self-determination of rational people.

The combination of the universality and moral heft of human rights clearly imparts greater potency to people’s rights, untethered from legal or institutional acts of acknowledgment. As such, human rights are enjoyed equally, by everyone, all the time. It makes sense to conclude that everyone is therefore responsible for guarding the rights of fellow citizens, not just their own. Yet, in practice it is the political regime and perhaps international organisations that bear that load.

And within the ranks of philosophers, human-rights universalists have sometimes clashed with relativists, who reject universal (objective) moral canons. Relativists paint human rights as contingent on social, historical, and cultural factors: rights being considered apropos only in those countries whose cultures allow them. Yet, surely, relativism still permits the universality of numerous rights. We instinctively know that not all rights are relative. At the least, societies must parse which rights endure as universal and which as relative, and hope the former are favoured.

That optimism notwithstanding, many national governments around the world choose not to uphold, either in part or in whole, fundamental rights in their countries. Perhaps the most transfixing case for universal human rights, as entitlements, is the inhumanity that haunts swaths of the world today, instigated for the most trifling of reasons.

Monday, 15 August 2022

The Tangled Web We Weave


By Keith Tidman
 

Kant believed, as a universal ethical principle, that lying was always morally wrong. But was he right? And how might we decide that?

 

The eighteenth-century German philosopher asserted that everyone had ‘intrinsic worth’: that people are characteristically rational and free to make their own choices. Lying, he believed, degrades that aspect of moral worth, withdrawing others’ ability to exercise autonomy and make logical decisions, as we presume they might in possessing truth. 

 

Kant’s ground-level belief in these regards was that we should value others strictly ‘as ends’, and never see people ‘as merely means to ends’. It is a maxim valued and commonly espoused in human affairs today, too, even if people sometimes come up short.

 

The belief that judgements of morality should be based on universal principles, or ‘directives’, without reference to the practical outcomes, is termed deontology. For example, according to this approach, all lies are immoral and condemnable. There are no attempts to parse right and wrong, to dig into nuance. It’s blanket censure.

 

But it’s easy to think of innumerable drawbacks to the inviolable rule of wholesale condemnation. Consider how you might respond to a terrorist demanding the place and time of a meeting to be held by the intended target. Deontologists like Kant would consider such a lie immoral.

 

Virtue ethics, to this extent compatible with Kant’s beliefs, also says that lying is morally wrong. Their reasoning, though, is that it violates a core virtue: honesty. Virtue ethicists are concerned to protect people’s character, where ‘virtues’ — like fairness, generosity, compassion, courage, fidelity, integrity, prudence, and kindness — lead people to behave in ways others will judge morally laudable. 

 

Other philosophers argue that, instead of turning to the rules-based beliefs of Kant and of virtue ethicists, we ought to weigh the (supposed) benefits and harms of a lie’s outcomes. This principle is called consequentialist ethics, mirroring the utilitarianism of the eighteenth/nineteenth-century philosophers Jeremy Bentham and John Stuart Mill, with its emphasis on the greatest happiness.

 

Advocates of consequentialism claim that actions, including lying, are morally acceptable when the results of behaviour maximise benefits and minimise harms. A tall order! A lie is not always immoral, as long as outcomes on net balance favour the stakeholders.

 

Take the case of your saving a toddler from a burning house. Perhaps, however, you believe in not taking credit for altruism, concerned about being perceived as conceitedly self-serving. You thus tell the emergency responders a different story about how the child came to safety, a lie that harms no one. Per Bentham’s utilitarianism, the ‘deception’ in this instance is not immoral.

 

Kant’s dyed-in-the-wool unforgiveness of lies invites examples that challenge the concept’s wisdom. Take the historical case of a Jewish woman concealed, from Nazi military occupiers, under the floorboards of a farmer’s cottage. The situation seems clear-cut, perhaps.

 

If grilled by enemy soldiers as to the woman’s whereabouts, the farmer lies rather than dooming her to being shot or sent to a concentration camp. The farmer chooses good over bad, echoing consequentialism and virtue ethics. His choice answers the question of whether the lie yields a better outcome than the truth would. It would have been immoral not to lie.

 

Of course, the consequences of lying, even for an honourable person, may sometimes be hard to get right, differing in significant ways from reality or from what is subjectively the greater good. One may overvalue or undervalue benefits — nontrivial possibilities.

 

But maybe what matters most in gauging consequences are motive and goal. As long as the purpose is to benefit, not to beguile or harm, then trust remains intact — of great benefit in itself.

 

Consider two more cases as examples. In the first, a doctor knowingly gives a cancer-ridden patient and family false (inflated) hope for recovery from treatment. In the second, a politician knowingly gives constituents false (inflated) expectations of benefits from legislation he sponsored and pushed through.

 

The doctor and politician both engage in ‘deceptions’, but critically with very different intent: Rightly or wrongly, the doctor believes, on personal principle, that he is being kind by uplifting the patient’s despondency. And the politician, rightly or wrongly, believes that his hold on his legislative seat will be bolstered, convinced that’s to his constituents’ benefit.

 

From a deontological — rules-focused — standpoint, both lies are immoral. Both parties know that they mislead — that what they say is false. (Though both might prefer to say something like they ‘bent the truth’, as if more palatable.) But how about from the standpoint of either consequentialism or virtue ethics? 

 

The Roman orator Quintilian is supposed to have advised, ‘A liar should have a good memory’. Handy practical advice, for those who ‘weave tangled webs’, benign or malign, and attempt to evade being called out for duplicity.

 

And damning all lies seems like a crude, blunt tool, with no real value: wholly unworkable outside Kant’s absolutist disposition toward the matter, since no one could unswervingly meet that rigorous standard. Indeed, a study by the psychologist Robert Feldman claimed that people lie two to three times, in trivial and major ways, for every ten minutes of conversation!

 

However, consequentialism and virtue ethics have their own shortcomings. They leave us with the problematic task of figuring out which consequences and virtues matter most in a given situation, and tailoring our decisions and actions accordingly. No small feat.

 

So, in parsing which lies on balance are ‘beneficial’ or ‘harmful’, and how to arrive at those assessments, ethicists still haven’t ventured close to crafting an airtight model: one that dots all the i’s and crosses all the t’s of the ethics of lying. 


At the very least, we can say that, no, Kant got it wrong in overbearingly rebuffing all lies as immoral; refusing to admit reasonable exceptions was obvious folly. Yet, that may be cold comfort for some people, since lapses into excessive risk, weaving evermore tangled webs, court danger for unwary souls.


Meantime, while some more than others may feel they have been cut some slack, they might be advised to keep Quintilian’s advice close.




* ’O what a tangled web we weave / When first we practice to deceive’, Sir Walter Scott, poem, ‘Marmion: A Tale of Flodden Field’.

 

Monday, 27 April 2020

The Curiosity of Creativity and Imagination

In Chinese mythology, dragon energy is creative. It is a magical energy, the fire of the soul itself. The dragon is the symbol of our power to transmute and create with imagination and purpose.
Posted by Keith Tidman

Most people would agree that ‘creativity’ is the facility to produce ideas, artifacts, and performances that are both original and valuable. ‘Original’ as in novel, where new ground is tilled. While the qualifier ‘valuable’ is considered necessary in order to address German philosopher Immanuel Kant’s point in The Critique of Judgment (1790) that:

‘Since there can also be original nonsense, its products [creativities] must at the same time be models, i.e., be exemplary’.

An example of lacking value or appropriateness in such context might be a meaningless sequence of words, or gibberish.

Kant believed that creativity pertains mostly to the fine arts, or matters of aesthetics — a narrower perspective than today’s inclusive view. He contended, for example, that genius could not be found in science, believing (mistakenly, I would argue) that science only ever adheres to preset methods, and does not allow for the exercise of imagination. He even excluded Isaac Newton from history’s pantheon of geniuses, despite respecting him as a great man of science.

Today, however, creativity’s reach extends along vastly broader lines, encompassing fields like business, economics, history, philosophy, language, physics, biology, mathematics, technology, psychology, and social, political, and organisational endeavours. Fields, that is, that lend themselves to being, at their creative best, illuminative, nontraditional, gestational, and transformational, open to abstract ideas that prompt pondering novel possibilities. The clue as to the greatness of such endeavours is provided by the 16th/17th-century English philosopher Francis Bacon in the Novum Organum (1620), where he says that:

‘By far the greatest obstacle to the progress . . . and undertaking of new tasks and provinces therein is found in this — that men despair and think things impossible’.

Accordingly, such domains of human activity have been shown to involve the same explorative and generative functions associated with the brain’s large-scale neural networks. A paradigm of creative cognition that is flexible and multidimensional, and one that calls upon several features:
  • an unrestricted vision of what’s possible,
  • ideation, 
  • images, 
  • intuitions,
  • thought experiments, 
  • what-if gaming, 
  • analogical reasoning, 
  • metaphors, 
  • counterfactual reasoning, 
  • inventive free play, 
  • hypotheses, 
  • knowledge reconceptualisation, 
  • and theory selection.
Collectively, these are the cognitive wellspring of creative attainment. To those extents, creativity appears fundamental to defining humanity — what shapes us, through which individual and collective expression occurs — and humanity’s seemingly insatiable, untiring quest for progress and attainment.

Societies tend to applaud those who excel at original thought, both for its own sake and for how it advances human interests. That said, these principles are as relevant to the creative processes of everyday people as to those who eventually are recorded in the annals of history as geniuses. However, the creative process does not start out with the precise end (for example, a poem) and the precise means to getting there (for example, the approach to writing that poem) already known. Rather, both the means and the end product are discoverable only as the creative process unfolds.

Above all, imagination sits at the core of creativity. Imagination is representational, of circumstances not yet real but that nevertheless can evoke emotions and behaviours in people. The world of imagination is, of course, boundless in theory and often in practice, depending on the power of one’s mind to stretch. The American philosopher John Dewey spoke to this point, chalking up every major leap in science, as he boldly put it in The Quest for Certainty, to ‘a new audacity of the imagination’. Albert Einstein’s thoughts paralleled these sentiments; he declared in a 1929 interview that ‘Imagination is more important than knowledge’. It is in imagination that new possibilities take shape. Accordingly and importantly, imagination yields ideas that surpass what’s already supposed.

Imagination is much more, however, than a mere synonym for creativity, otherwise the term would simply be redundant. Imagination, rather, is a tool: freeing up, even catalysing, creativity. To those ends, imagination entails visualisation (including thought experiments, engaged across disciplines) that enables a person to reach out for assorted, and changing, possibilities — of things, times, places, people, and ideas unrestricted by what’s presumed already experienced and known concerning external reality. Additionally, ‘mirroring’ might occur in the imaginative process, where absent features of a mental scenario are filled in with analogues plucked from the external world around us. Ultimately, new knowledge and beliefs emerge, in a progressive loop of creation, validation, application, and re-imagination.

Imagination might revolve around diverse domains, like unconstrained creative thought, play, pretense, the arts, allegorical language, predictive possibilities, and imagery, among others. Imagination cannot, however, guarantee creative outcomes — nor can the role of intuition in human cognition — but imagination is essential (if not always sufficient) for creative results to happen. As explained by Kant, imagination has a ‘constitutive’ role in creativity. Something demonstrated by a simple example offered by the 17th-century English philosopher Thomas Hobbes:

‘as when from the sight of a man at one time, and a horse at another, we conceive in our mind a Centaur’. 

Such imaginative, metaphorical playfulness being the stuff not only of absorbed, undaunted children, of course — though they are notably gifted with it in abundance — but also of freethinking adults. Adults whose minds marvel at alternatives in starting from scratch (tabula rasa), or from picking apart (divergence) and reassembling (convergence) presumed reality.

The complexities of imagination best nourish what one might call ‘purposeful creativity’ — where a person deliberately aims to achieve a broad, even if initially indeterminate outcome. Such imagining might happen either alone or with the involvement of other participants. With purposeful creativity, there’s agency and intentionality and autonomy, as is quintessentially the case of the best of thought experiments. It occasions deep immersion into the creative process. ‘Passive creativity’, on the other hand, is where someone has a spontaneous, unsought solution (a Eureka! moment) regarding a matter at hand.

Purposeful, or directed, creativity draws on both conscious and unconscious mechanisms. Passive creativity — with mind open to the unexpected — largely depends on unconscious mental apparatuses, though with the mind’s executive function not uncommonly collaboratively and additively ‘editing’ afterwards, in order to arrive at the final result. To be sure, either purposeful or passive creativity is capable of summoning remarkable insights.

The 6th-century BC Chinese spiritual philosopher Laozi perhaps most pithily described people’s capacity for creativity, and its sometimes-companion genius, with this figurative depiction in the Tao Te Ching, the context being to define ‘genius’ as the ability to see potential: ‘To see things in the seed’ — long before germination eventually makes those ‘things’ apparent, even obvious, to everyone else and stitches them into the fabric of society and culture.

Monday, 11 March 2019

Are ‘Designer Offspring’ Our Destiny?

The promise of gene editing and designer offspring may prove irresistible

Posted by Keith Tidman

It’s an axiom that parents aspire to the best for their children — from good health to the best of admired traits. Yet our primary recourse is to roll the dice in picking a spouse or partner, hoping that the resulting blend of chromosomes will lead to offspring who are healthy, smart, happy, attractive, fit, and a lot else. Gene editing, now concentrated on medical applications, will offer ways to significantly raise the probability of human offspring manifesting the traits parents seek: ‘designer offspring’. What, then, are the philosophical and sociological implications of using gene editing to influence the health-related wellbeing of offspring, as well as to intervene into the complex traits that define those offspring under the broader rubric of human enhancement and what we can and ought to do?
‘All the interests of my reason, speculative as well as practical, combine in the three following questions: What can I know? What ought I to do? What may I hope?’
— Immanuel Kant
The idea is to alter genes for particular outcomes, guided by the previous mapping of every gene in the human body. To date, these selected outcomes have targeted averting or curing disorders stemming from gene mutations, like cystic fibrosis, Huntington’s, and sickle-cell disease. As such, one of the central bioethical issues is for parents to freely decide which disorders are ‘unacceptable’ and thus to be prevented or fixed through gene editing. The public, and the medical field, already make similar medical decisions all the time in the course of treatments: stem cells to grow transplantable organs, AI-controlled robotic surgery, and vaccinations, among innumerable others. The aim is to avoid or cure health disorders, or minimally to mitigate symptoms.

As a matter of societal norms, these decisions reflect people’s basic notions about the purpose of health science. Yet, if informed parents knowingly choose to give birth to, say, an infant with Down syndrome, believing philosophically and sociologically that such children can live happy, productive lives and are a ‘blessing’, then as a matter of ethics, humanitarianism, and sovereign agency they retain that right. A potential wrinkle in the reasoning is that such a child has no say in the decision, which might deny her the ‘natural right’ not to go through a lifetime with the quality-of-life conditions the disorder hands her. The child is denied freely choosing her own destiny: the absence of consent traditionally associated with medical intervention. As a corollary, the aim is not to deprive society of heterogeneity; sameness is not an ideal. That is not equivalent, however, to contending that a particular disorder must remain a forever variation of the human species.
‘We are going from being able to read our genetic code to the ability to write it. This gives us the … ability to do things never contemplated before’
— Craig Venter, writing in ‘Heraclitean Fire: Sketches from a Life Before Nature’.
Longer term, people won’t be satisfied limited to health-related measures. They will turn increasingly to more-complex traits: cognition (intelligence, memory, comprehension, talent, etc.), body type (eye and hair colour, height, weight, mesomorphism, etc.), athleticism (speed, strength, agility, endurance, etc.), attractiveness, gender, lifespan, and personality. The ‘designer offspring’, that is, mentioned above. Nontrivially, some changes may be heritable, passed from one generation to the next. This will add to the burden of getting each intervention right, in a science that’s briskly evolving. Thus, gene editing will not only give parents offspring that conform to their ideals; it may also alter the foundational features of our very species. These transhumanist choices will give rise to philosophical and sociological issues with which society will grapple. Claims that society is skating close to eugenics — a practice rightly discredited as immoral — as well as specious charges of ‘playing God’ and assertions of dominion may lead to pockets of public backlash, but not fatally so for human-enhancement programmes.

Debates will confront thorny issues: risk–reward balance in using gene editing to design offspring; comparative value among alternative human traits; potential inequality in access to procedures, exacerbating classism; tipping point between experimentation and informed implementation; which embryos to carry to term and childhood; cultural norms and values that emerge from designer offspring; individual versus societal rights; society’s intent in adopting what one might call genetic engineering, and the basis of family choice; acceleration and possible redirection of the otherwise-natural evolution of the human species; consequences of genetic changes for humanity’s future; the need for ongoing programmes to monitor children born as a result of gene editing; and possible irreversibility of some adverse effects. It won't be easy.
‘It is an important point to realize that the genetic programming of our lives is not fully deterministic. It is statistical … not deterministic’ 
— Richard Dawkins
The promise of gene editing and designer offspring (and by extension, human enhancement writ large) may prove irresistible and irreversible — our destiny. To light the way, nations and supranational institutions should arrange ongoing collaboration among philosophers, scientists, the humanities, medical professionals, theologians, policymakers, and the public. Self-regulation is not enough. Oversight is key, where malleable guidelines take account of improved knowledge and procedures. What society accepts (or rejects) today in human gene editing and human enhancements may well change dramatically from decade to decade. Importantly, introducing gene editing into selecting the complex traits of offspring must be informed and unrushed. Overarching moral imperatives must be clear. Yet, as parents have always felt a compelling urge and responsibility to advantage their children in any manner possible, eventually they may muse whether genetic enhancements are a ‘moral obligation’, not just a ‘moral right’.


Monday, 28 May 2018

Occam's Razor: On the Virtue of Simplicity

As a Franciscan monk, William put simplicity at the heart of his daily life.
Posted by Keith Tidman

The English philosopher and monk William of Occam (c. 1287–1347) surely got it about right with his ‘law of parsimony’, which asserts, as a general principle, that of two competing explanations or theories, the one with the fewest assumptions (and fewest guesses or variables) is more often to be preferred. As the ‘More than Subtle Doctor’ couched the concept in his Summa Logicae, ‘It is futile to do with more what can be done with fewer’ — itself an example of ‘economy’. William’s law is typically referred to as Occam’s razor — the word ‘razor’ signifying a slicing away of arguably unnecessary postulates. In many instances, Occam’s razor is indeed right; in other examples, well, perhaps not. Let’s explore the ideas further.

Although the law of parsimony has always been most closely associated with William of Occam (Occam, now spelled ‘Ockham’, being the village where he was born), he hasn’t been the principle’s only proponent. Just as famously, a millennium and a half earlier, the Greek philosopher Aristotle said something similar in his Posterior Analytics:
‘We may assume the superiority ceteris paribus [other things being equal] of the demonstration which derives from fewer postulates or hypotheses.’
And seven centuries after William, Albert Einstein, perhaps thinking of his own formulation of special relativity, noted that ‘the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible’. Many other philosophers, scientists, and thinkers have also admired the concept.

Science’s favouritism toward the parsimony of Occam’s razor is nowhere more apparent than in the search for a so-called ‘theory of everything’ — an umbrella theory harmoniously unifying all the physical forces of the cosmos, including the two cornerstones of 20th-century physics: the general theory of relativity (describing the macro scale) and quantum theory (describing the micro scale). This holy grail of science has proven an immense but irresistible challenge, having occupied much of Einstein’s life, as it has the imagination of other physicists. The appeal to scientists lies in a unified (presumed final or all-encompassing) theory being condensed into a single set of equations, or perhaps just one equation, to describe all physical reality. The appeal of the theory’s potential frugality in coherently and irreducibly explaining the universe remains immense.

Certainly, philosophers too often regard parsimony as a virtue — although there have been exceptions. For clarity, we must first note that parsimony and simplicity are usually, as a practical matter, considered one and the same thing — that is, largely interchangeable. For its part, simplicity comes in at least two variants: one concerns the number and complexity of the kinds of things hypothesised, sometimes referred to as ‘elegance’ or ‘qualitative parsimony’; the second concerns the number and complexity of individual, independent things (entities) hypothesised, sometimes referred to as ‘quantitative parsimony’. Intuitively, people in their daily lives usually favour simpler hypotheses; so do philosophers and scientists. For example, we assume that Earth’s gravity will always apply rather than suddenly ceasing — that is, rather than objects falling upward unassisted.
Among the philosophers who weighed in on the principle was Thomas Aquinas, who noted in Summa Theologica in the 13th century, ‘If a thing can be done adequately by means of one, it is superfluous to do it by means of several; for we observe that nature does not employ two instruments where one suffices.’ And the 18th-century German philosopher Immanuel Kant, in the Critique of Pure Reason, similarly observed that ‘rudiments or principles must not be unnecessarily multiplied.’ In this manner, philosophers have sometimes turned to Occam’s razor to criticise broad metaphysical hypotheses that purportedly include the baggage of unnecessary ontological concepts. An example of falling under such criticism via the application of Occam’s razor is Cartesian dualism, which physicalists argue is flawed by an extra category — that is, the notion that the mind is entirely apart from the neuronal and synaptic activity of the brain (the physical and mental purportedly being two separate entities).

Returning to Einstein, his iconic equation, E = mc², is an example of Occam’s razor at work. This ‘simple’ mathematical formula, which had more complex precursors, has only two variables and one constant, relating (via conversion) an amount of energy to an amount of matter (mass) multiplied by the speed of light squared. It allows one to calculate how much energy is tied up in the mass of any given object, whether a chickpea or a granite boulder. The result is a perfectly parsimonious snapshot of physical reality. But simplicity isn’t always enough, of course. There must also be consistency with the available data, and the model must accommodate new (better) data as they become available.
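The conversion the formula describes is straightforward to sketch in a few lines of code. The mass used below (0.3 grams for a chickpea) is an assumed, illustrative figure, not a value given in the text:

```python
# Sketch of E = m * c**2: energy locked in the mass of a small object.
c = 299_792_458          # speed of light in metres per second (exact, by definition)
mass_kg = 0.0003         # assumed mass of a chickpea: 0.3 grams, expressed in kilograms

energy_joules = mass_kg * c ** 2
print(f"Energy equivalent: {energy_joules:.2e} joules")
```

Even so tiny a mass corresponds to roughly 27 trillion joules, which is the sense in which the equation’s two variables and one constant compress an enormous physical fact into a very small formula.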

Other eminent scientists, such as the 17th-century physicist and mathematician Isaac Newton, similarly valued this principle of frugality. The first of Newton’s three ‘rules of reasoning in philosophy’, set out in his Principia Mathematica, offers:
‘We are to admit no more causes of natural things than such as are both true and sufficient to explain their appearances. . . . Nature is pleased with simplicity, and affects not the pomp of superfluous causes.’
But, as noted above, Occam’s razor doesn’t always lead to truth per se. Nor, importantly, does ‘simplicity’ necessarily equate to ease of explanation or ease of understanding. Here are two examples where frugality arguably doesn’t win the day. One theory presents a complex cosmological explanation of the Big Bang and the physical evolution of a 13.8-billion-year-old universe; a single, very late-arriving thread of that cosmological account is the intricate biological evolution of modern human beings. A second, creationist explanation of the current universe and of human beings, resting on far fewer assumptions and hypotheses, describes both as having roots in a single event some 6,000 to 10,000 years ago, with the cosmos conveniently made to look older. The available evidence suggests, however, that the first explanation is correct, despite the second’s parsimony.

In broad ways, Occam’s razor has been supported by the empirical successes of theories that proved parsimonious in their explanations: fewer causes, entities, properties, variables, and processes embedded in fewer assumptions and hypotheses. However, even though people tend instinctively and understandably to be drawn toward simpler accounts of reality, simplicity hasn’t always triumphed. For example, the earlier nature-versus-nurture debate posed a simpler, albeit false, either-or dichotomy: it sought to understand a person’s development and behaviour on the basis either of the environment (the influence of external factors, such as experience and learning, on an otherwise blank slate or perhaps a set of instincts) or of genes and heritability (that is, biological pre-wiring). Reality is, of course, a complex mix of both nature and nurture, each influencing the other.

To avoid such pitfalls, as the English mathematician and philosopher Alfred North Whitehead pointedly (and parsimoniously) suggested:
‘. . . every natural philosopher should seek simplicity and distrust it.’