
Monday 11 July 2022

Religions as World History

Religious manuscripts in the fabulous library of Timbuktu. Such texts are a storehouse of ancient knowledge.
By Keith Tidman

Might it be desirable to add teaching about world religions to the history curriculum in schools?


Religions have been deeply instrumental in establishing the course of human civilisation, from the earliest stirrings of community and socialisation thousands of years ago. Yet, even teaching about the world’s religions has often been guardedly held at arm’s length, out of concern that instruction might lapse into proselytising.


Or at least, out of apprehension that instructors’ actions might be seen as such.


The pantheon of religions that might be taught spans the breadth: from Hinduism, Islam, Zoroastrianism, and Judaism to Buddhism, Christianity, and Sikhism, as well as indigenous faiths. The richness of their histories, the literary and sacred quality of their storytelling, the complexities and directive principles held among their adherents, and religions’ seminal influences upon the advancement of human civilisation are truly consequential.


This suggests that religions might be taught as a version of world history. Done so without exhortation, judgment, or stereotyping. And without violating religious institutions’ desire to be solely responsible for nurturing the pureness of their faith. School instruction ought to be straightforwardly scholarly and factual, that is, without presumption, spin, or bias. Most crucially, both subject-matter content and manner of presentation should avoid transgressing the beliefs and faiths of students or their families and communities. And avoid challenging what theologians may consider axiomatic about the existence and nature of God, the word of authoritative figures, the hallowed nature of practices like petitionary prayer, normative canon, or related matters.

 

Accordingly, the aim of such an education would not be to evangelise or favour any religion’s doctrine over another’s; after all, we might agree that choice in paving a child’s spiritual foundation is the province of families and religious leaders.


Rather, the vision I wish to offer here is a secularised, scholarly teaching of religious literacy in the context of broader world histories. Adding a philosophical, ideas-based, dialogue-based layer to the historical explanation of religions may ensure that content remains subject to the rationalism (critical reflection) seen in educational content generally: as, for example, in literature, art, political theory, music, civics, rhetoric, geography, classics, science and math, and critical thinking, among other fields of enquiry.

 

You see, there is, I propose, a kind of DNA inherent in religion. This is rooted in origin stories, of course, but also revealed by religions’ proclivity toward change achieved through a kind of natural selection not dissimilar to that of living organisms. An evolutionary change in which the faithful — individuals, whole peoples, and formal institutions — are the animating force. Where change is what’s constant. We have seen this dynamic process result in the shaping and reshaping of values, moral injunctions, institutions, creeds, epistemologies, language, organisation, orthodoxies, practices, symbols, and cultural trappings.

 

In light of this evolutionary change, a key supporting pillar of an intellectually robust, curious society is to learn — through the power of unencumbered ideas and intellectual exploration — what matters to the development and practice of the world’s many religions. The aim being to reveal how doctrine has been reinterpreted over time, as well as to help students shed blinkers to others’ faith, engage in free-ranging dialogue on the nature, mindset, and language of religion writ large, and assume greater respect and tolerance.


Democracies are one example of where teaching about religion as an academic exercise can take firmest hold. One goal would be to round out understanding, insights, skills, and even greater wisdom for effective, enlightened citizenship. Such a program’s aim would be to encompass all religions on a par with one another in importance and solemnity, including those spiritual belief systems practised by maybe only a few — all such religious expression nonetheless enriched by the study of their scriptures, ideologies, philosophies, and primary texts.

 

The objective should be to teach religious tenets as neutral, academic concepts, rather than doctrinal matters of faith, the latter being something individuals, families, and communities can and should choose for themselves. Questions of whose moral code and doctrinal fundamentals one ought to adopt, and whose to shy away from, would thereby be avoided, these values-based issues being regarded as improper for a public-education forum. Although history has shown that worship is a common human impulse around the world, promoting worship per se ought not be part of teaching about religions. That’s for another time and place.


Part and parcel, the instructional program should respect secularist philosophies, too. Like those individuals and families who philosophically regard faith and notions of transcendentalism as untenable, and see morality (good works) in humanistic terms. And like those who remain agnostically quizzical while grappling with what they suppose is the unknowability of such matters as higher-order creators, yet are at peace with their personal indecision about the existence of a deity. People come to these philosophical camps on equal footing, through deliberation and well-intentioned purposes — seekers of truth in their own right.

 

In paralleling how traditional world histories are presented, the keystone to teaching about religions should be intellectual honesty. Knowing what judiciously to put in and leave out, while understanding that tolerance and inclusion are core to a curious society and informed citizenry. Opening minds, in factual and scholarly fashion, to a range of new perspectives as windows on reality.

 

As such, teaching’s focus should be rigorously academic and instructional, not devotional. In that regard, it’s imperative that schools sidestep breaching the exclusive prerogative of families, communities, and religious institutions to frame whose ‘reality’ — whose truth, morality, orthodoxy, ritual, holy ambit, and counsel — to live by.


These cautions notwithstanding, it seems to me that schools ought indeed seek to teach the many ways in which the world’s religions are a cornerstone to humanity’s cultural, anthropological, and civilisational ecology, and thus a core component of the millennia-long narratives of world history.

 

Monday 13 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are rare but nowhere near as valuable. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher, Adam Smith, to the development of modern economics concerned the so-called ‘paradox of value’.

That is, the question of why one of the most-critical items in people’s lives, water, is typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life? As Smith couched the issue in his magnum opus, titled An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour. Where access to an additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. A scenario that captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on a perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those sacks is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
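Böhm-Bawerk’s illustration lends itself to a simple computational sketch. In the minimal Python example below, the utility figures are invented purely for illustration; the point is that the value of any one interchangeable sack equals the value of the least important use it currently serves (its marginal utility), which is why the farmer gives up the birds first.

```python
# A minimal sketch of Böhm-Bawerk's farmer, with hypothetical utility values.
# The value of any one (interchangeable) sack equals the value of the *least*
# important use it currently serves - its marginal utility.

uses = [                      # ordered from highest to lowest priority
    ("bread for survival", 100),
    ("bread for strength to work", 60),
    ("feed for farm animals", 40),
    ("distilling alcohol", 20),
    ("feeding the birds", 5),
]

def marginal_utility(sacks: int) -> int:
    """Utility forgone if one of `sacks` identical sacks were lost."""
    served = uses[:sacks]     # the farmer covers his top priorities first
    return served[-1][1] if served else 0

for n in range(5, 0, -1):
    print(f"With {n} sack(s), losing one forgoes {marginal_utility(n)} units "
          f"({uses[n-1][0]})")
```

Running the sketch shows the forgone utility shrinking as the stock of sacks grows, the law of diminishing marginal utility in miniature.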

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so, even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Far removed from their original purpose, these commodities no longer have use-value. Yet, ‘a very great quantity of other goods may frequently be had in exchange for them’, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value, when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby knowledge, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally in its use and in its exchange. In the former, that is in its use, knowledge is applied to meet one’s own needs and wants; in the latter, that is in its exchange, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

Monday 23 May 2022

Are There Limits to Human Knowledge?


By Keith Tidman

‘Any research that cannot be reduced to actual visual observation is excluded where the stars are concerned…. It is inconceivable that we should ever be able to study, by any means whatsoever, their chemical or mineralogical structure’.
A premature declaration of the end of knowledge, made by the French philosopher, Auguste Comte, in 1835.
People often take delight in saying dolphins are smart. Yet, does even the smartest dolphin in the ocean understand quantum theory? No. Will it ever understand the theory, no matter how hard it tries? Of course not. We have no difficulty accepting that dolphins have cognitive limitations, fixed by their brains’ biology. We do not anticipate dolphins even asking the right questions, let alone answering them.

Some people then conclude that for the same reason — built-in biological boundaries of our species’ brains — humans likewise have hard limits to knowledge. And that, therefore, although we acquired an understanding of quantum theory, which has eluded dolphins, we may not arrive at solutions to other riddles. Like the unification of quantum mechanics and the theory of relativity, both effective in their own dominions. Or a definitive understanding of how and from where within the brain that consciousness arises, and what a complete description of consciousness might look like.

The thinking isn’t that such unification of branches of physics is impossible or that consciousness doesn’t exist, but that supposedly we’ll never be able to fully explain either one, for want of natural cognitive capacity. It’s argued that because of our allegedly ill-equipped brains, some things will forever remain a mystery to us. Just as dolphins will never understand calculus or infinity or the dolphin genome, human brains are likewise closed off from categories of intractable concepts.

Or at least, as it has been said.

Some who hold this view have adopted the self-describing moniker ‘mysterians’. They assert that, as members of the animal kingdom, Homo sapiens are subject to the same kinds of insuperable cognitive walls, and that it is hubris, self-deception, and pretension to proclaim otherwise. There’s a needless resignation in this.

After all, the fact that early hominids did not yet understand the natural order of the universe does not mean that they were ill-equipped to eventually acquire such understanding, or that they were suffering so-called ‘cognitive closure’. Early humans were not fixed solely on survival, subsistence, and reproduction, where existence was defined solely by a daily grind over the millennia in a struggle to hold onto the status quo.

Instead, we were endowed from the start with a remarkable evolutionary path that got us to where we are today, and to where we will be in the future. With dexterously intelligent minds that enable us to wonder, discover, model, and refine our understanding of the world around us. To ponder our species’ position within the cosmic order. To contemplate our meaning, purpose, and destiny. And to continue this evolutionary path for however long our biological selves ensure our survival as opposed to extinction at our own hand or by external factors.

How is it, then, that we even come to know things? There are sundry methods, including (but not limited to) these: Logical, which entails the laws (rules) of formal logic, as exemplified by the iconic syllogism in which a conclusion follows from premises. Semantic, which entails the denotative and connotative definitions and context-based meanings of words. Systemic, which entails the use of symbols, words, and operations/functions related to the universally agreed-upon rules of mathematics. And empirical, which entails evidence, information, and observation that come to us through our senses and through tools like those described below, used to confirm, fine-tune, or discard hypotheses.

Sometimes the resulting understanding is truly paradigm-shifting; other times it’s progressive, incremental, and cumulative — contributed to by multiple people assembling elements from previous theories, not infrequently stretching over generations. Either way, belief follows — that is, until the cycle of reflection and reinvention begins again. Even as one theory is substituted for another, we remain buoyed by belief in the commonsensical fundamentals of attempting to understand the natural order of things. Theories and methodologies might both change; nonetheless, we stay faithful to the task, embracing the search for knowledge. Knowledge acquisition is thus fluid, persistently fed by new and better ideas that inform our models of reality.

We are aided in this intellectual quest by five baskets of ‘implements’: Physical devices like quantum computers, space-based telescopes, DNA sequencers, and particle accelerators. Tools for smart simulation, like artificial intelligence, augmented reality, big data, and machine learning. Symbolic representations, like natural languages (spoken and written), imagery, and mathematical modeling. The multiplicative collaboration of human minds, functioning like a hive of powerful biological parallel processors. And, lastly, the nexus among these implements.

This nexus among implements continually expands, at a quickening pace; we are, after all, consummate crafters of tools and collaborators. We might fairly presume that the nexus will indeed lead to an understanding of the ‘brass ring’ of knowledge, human consciousness. The cause-and-effect dynamic is cyclic: theoretical knowledge driving empirical knowledge driving theoretical knowledge — and so on indefinitely, part of the conjectural froth in which we ask and answer the tough questions. Such explanations of reality must take account, in balance, of both the natural world and metaphysical world, in their respective multiplicity of forms.

My conclusion is that, uniquely, the human species has boundless cognitive access rather than bounded cognitive closure. Such that even the long-sought ‘theory of everything’ will actually be just another mile marker on our intellectual journey to the next theory of everything, and the next one — all transient placeholders, extending ad infinitum.

There will be no end to curiosity, questions, and reflection; there will be no end to the paradigm-shifting effects of imagination, creativity, rationalism, and what-ifs; and there will be no end to answers, as human knowledge incessantly accrues.

Monday 9 May 2022

Peering into the World's Biggest Search Engine


If you type “cat” into Google, some of the top results are for Caterpillar machinery


By Martin Cohen and Keith Tidman


How does Google work? The biggest online search engine has long been ubiquitous in everyday personal and professional life, accounting for an astounding 70 percent of searches globally. It’s a trillion-plus-dollar company with the power to influence, even disrupt, other industries. And yet exactly how it works, beyond broad strokes, remains somewhat shrouded.

So, let’s pull back the curtain a little, if we can, to try observing the cogs whirring behind that friendly webpage interface. At one level, Google’s approach is every bit as simple as imagined. An obvious instance being that many factual queries simply direct you to Wikipedia near the top of the first displayed page.

Of course, every second, Google performs extraordinary feats, such as searching billions of pages in the blink of an eye. However, that near-instantaneity on the computing dimension is, these days, arguably the easiest to get a handle on — and something we have long since taken for granted. What’s more nuanced is how the search engine appears to evaluate and weigh information.

That’s where search results can hint at what motivates the company: possibly prioritizing commercial partners, and on occasion seeming to favor particular social and political messages. Or so it seems. Given the stakes in company revenue, those relationships are an understandable approach to running a business. Indeed, it has been reported that some 90% of earnings come from keyword-driven, targeted advertising.

It’s no wonder Google plays up the idea that its engineers are super-smart at what they do. What Google wants us to understand is that its algorithm is complex and constantly changing, for the better. We are allowed to know that when Google decides which search results are most important, pages are ranked by how many other sites link to them — with those sites in turn weighted in importance by their own links.
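That link-weighting idea is popularly associated with the original PageRank scheme. The toy Python sketch below is only an illustration under that assumption: the pages, links, and damping value are hypothetical, and Google’s real ranking machinery is vastly more elaborate and undisclosed. It simply shows how pages can be ranked by incoming links that are themselves weighted by their own incoming links.

```python
# Illustrative PageRank-style power iteration on a hypothetical four-page web.
# Not Google's actual algorithm; just the link-weighting idea described above.

links = {            # page -> pages it links out to
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}   # start with equal rank
    for _ in range(iterations):
        new_rank = {}
        for p in pages:
            # Sum the rank flowing in from every page that links to p,
            # splitting each linking page's rank across its outbound links.
            incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
            new_rank[p] = (1 - damping) / len(pages) + damping * incoming
        rank = new_rank
    return rank

print(pagerank(links))   # "C" ranks highest: the most, and best-connected, pages point to it
```

In this miniature web, “C” comes out on top because more pages, themselves carrying weight, link to it, mirroring the description of sites weighted in importance by their own links.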

It’s also obvious that Google performs common-sense concordance searches on the exact text of your query. If you straightforwardly ask, “What is the capital of France?” you will reliably and just as straightforwardly be led to a page saying something like “Paris is the capital of France.” All well and good, and unpretentious, as far as those sorts of one-off queries go.

But what might raise eyebrows among some Google users is the placing of commercial sites above or at least sprinkled amidst factual ones. If you ask, “What do cats eat?” you are led to a cat food manufacturer’s website close to the top of the page, with other informational links surrounding it as if to boost credibility. And if you type “cat” into Google, the links that we recently found near the top of the first page took us not to anything furry and feline, but to clunking great Caterpillar machinery.

Meanwhile, take a subject that off and on over the last two-plus years has been highly polarizing and politicized — rousing ire, so-called conspiracy theories, and presumptuousness that cleave society across several fronts — like the topical query: “Do covid vaccines have side effects?” Let’s put aside for a moment what you might already be convinced is the answer, either way — whether a full-throated yea or nay.

As a general matter, people might want search engines to reflect the range of context and views — to let searchers ultimately do their own due diligence regarding conflicting opinions. Yet, the all-important first page at Google started, at the time of this particular search, with four sites identified as ads. Followed by several other authoritative links, bunched under ‘More results’, pointing to the vaccine indeed being safe. So, let’s say, you’ll be reassured, but have you been fully informed, to help you understand background and accordingly to make up your own mind?

When we put a similar query to Yahoo!, for comparison, the results were a bit more diverse. Sure, two links were from one of the same sources as Google’s, but a third link was quite a change of pace: a blog suggesting there might be some safety issues, including references to scholarly papers to make sense of the data and conclusions. Might one, in the spirit of avoiding prejudgment, conclude that diversity of information better honours searchers’ agency?

Some people suggest that the technology at Google is rooted in its procedural approach to the science behind it. As a result, it seems that user access to the best information may play second fiddle to mainstream opinion and commercialization, supported, as it has been said, by harvested user data. Yet, isn’t all that the adventurist economic and business model many countries embrace in the name of individual agency and national growth?

Google has been instrumental, of course, in globally democratising access to information in ways undreamt of by history’s cleverest minds. Impressively vast knowledge at the world’s fingertips. But as author Ken Auletta said, “Naïveté and passion make a potent mix; combine the two with power and you have an extraordinary force, one that can effect great change for good or for ill.” Caveat emptor, in other words, despite what one might conclude are good intentions.

Might the savvy technical and business-theoretical minds at Google therefore continue parsing company strategies and search outcomes, as they inventively reshape the search engine’s operational model? And will that continual reinvention help to validate users’ experiences in quests for information intended not only to provide definitive answers but to inform users’ own prioritization and decision-making?

Martin Cohen investigates ‘How Does Google Think’ in his new book, Rethinking Thinking: Problem Solving from Sun Tzu to Google, which was published by Imprint Academic last month.

Monday 18 April 2022

What Is Love? An Inquiry Reexamined


By Keith Tidman


Someone might say, I love my wife or husband. I love my children and grandchildren. I love my extended family. I love my friends.

All the while, that same someone might also avidly announce, I love…

Conversation. Mozart’s music. Cherry blossoms. Travel abroad. Ethnic cuisine. Democracy. Memories of parents. Sipping espresso. Paradoxes. Animal kingdom. Mysteries of quantum theory. Hiking trails. Absence of war. A baby’s eye contact. Language of mathematics. Theatre performances. History. African savanna. Freedom. Daydreaming on the beach. Loving love. And, yes, philosophy.

We’re free to fill in the blanks with endless personal possibilities: people, events, occasions, experiences, and things we care deeply about, which happen providentially to be elevated by their singular meaning to us on an individual level. The neurons that get triggered in each of us as yet unexplainably make what you uniquely experience by way of love different from what everyone else definably feels — the subjectivism of sensation.

A hazard in applying the word ‘love’ across manifold dimensions like this is that we may start to cloud the concept, making it harder to distinguish love from competitor sentiments — such as simply ‘liking’, ‘fancying a lot’, or maybe ‘yearning’. Uncertainty may intrude as we bracket sentiments. The situation is that love itself comes in many different kinds. Steeped in a historical, cultural, spiritual, scientific, rational, and emotional melting pot. Three of the best-known semantic variants for love, whose names originate from Greek, descend to us from early philosophers.

They are Eros (pictured above on his pedestal in London), which is intensely passionate, romantic, and sexual love (famously fêted by the arts). Intended also for species proliferation. Agape, which is a transcendent, reciprocated love for God and for all humanity, sometimes couched as a form of brotherly love. And philia, which is unconditional love for family and friends, and even one’s country. As well as ‘companionate’ love enjoyed, for example, by a couple later in life, when passion’s embers may have cooled. Philia evokes a mix of virtues, like integrity, fairness, parity, and acquaintance.

Those terms and definitions imply a rational tidiness that may not be deserved when it comes to the everyday, sometimes-fickle interpretation of love: when and how to appropriately apply the word. The reality is that people tend to parse ‘love’ along sundry lengths, widths, and heights, which can be subjective, even idiosyncratic, and often self-servingly changeable to suit the moment and the mood. Individual, family, and community values are influential here.

Love may even be outright ineffable: that is, beyond logical explanation and the search for the source of societal norms. Enough so, perhaps, to make the likes of Aristotle, St. Augustine, Friedrich Nietzsche, Arthur Schopenhauer, Bertrand Russell, and Simone de Beauvoir — among other romantics and misanthropes, who thought about and critiqued the whimsicality of love — turn in their graves.

At the very least, we know that love, in its different kinds, can be heady, frenzied stuff, seemingly hard-wired, primal, and distractingly preoccupying. Of course, the category of love might shift — progressively, or abruptly — in accordance with evolving experiences, interactions, and relationships, as well as the sprouting of wholly novel circumstances. Arguably the biology, chemistry, and synapses of the brain, creating the complexities of mind, deterministically calling the shots.

Some contend that the love that others may claim to feel is not actually love, but something akin to it: either friendship, or impassioned obsession, or veneration, or lust, or appreciation of companionship, or esteem, or simply liking someone or something a whole lot. Distinctions between love and alternative sensations, as they wax and wane over time, are for the individual person to decide. We correctly accede to this element of individuality.

Love, as with all the other emotions just mentioned, has a flipside. Together, opposites make wholes, serving as the source of what’s possible. Along with love can come dispiriting negatives, like possessiveness, insecurity, distrust, noxiousness, suspicion, sexist hindrances, jealousy, and objectification.

There can be a tension between these latter shadowy forces and such affirmative forces as bright-spiritedness, cleverness, romanticism, enchantment, physical attractiveness, empathy, humour, companionability, magnetism, kindness, and generosity. Such a tension usually lessens with the passage of time, as the distinctions between the good and the bad become less hazy and easier to sort out.

There’s another form of tension, too: Individual values — acquired through personal reflection, and through family and community convictions, for example — may bump up against the stressors of love. Among love’s influences is sometimes having to rethink values. To refine norms in order to accommodate love. There may be justifiable reasons to believe we gain when we inspiringly and aspiringly love someone or something.

The gradations of moral and behavioural values challenge our autonomy — how we calculatedly manage life — as the effects of love invade our moment-to-moment decision-making. Choices become less intentional and less free, as we deferentially strive to preserve love. We might anxiously attempt to evade what we perceive, rightly or misguidedly, as the vulnerabilities of love.

When all is weighed, love appears wittingly compelling: not to cosset self-seeking indulgences, but rather to steer us toward a life affectionately moored to other people and experiences that serve as the fount of inspiration and authentic meaning. In this way, rationality and love become mutually inclusive.

Monday 21 March 2022

Would You Plug Into Nozick’s ‘Experience Machine’?

Clockwork Eyes by Michael Ryan

By Keith Tidman

 

Life may have emotionally whipsawed you. Maybe to the extent that you begin to imagine how life’s experiences might somehow be ‘better’. And then you hear about a machine that ensures you experience only pleasure, and no pain. What’s not to like!


It was the American philosopher Robert Nozick who, in 1974, hypothesised a way to fill in the blanks of our imaginings of a happier, more fulfilled life by creating his classic Experience Machine thought experiment.

 

According to this, we can choose to be hooked up to such a machine that ensures we experience only pleasure, and eliminates pain. Over the intervening years, Nozick offered different versions of the scenario, as did other writers, but here’s one that will serve our purposes:

 

‘Imagine a machine that could give you any experience (or sequence of experiences) you might desire. When connected to this experience machine [floating in a tank, with electrodes attached to your brain], you can have the experience of writing a great poem or bringing about world peace or loving someone and being loved in return. You can experience the felt pleasures of these things. . . . While in the tank you won’t know that you’re there; you’ll think it’s all actually happening’.

 

At which point, Nozick went on to ask the key question. If given such a choice, would you plug into the machine for the rest of your life?

 

Maybe if we assume that our view of the greatest intrinsic good is a state of general wellbeing, referred to as welfarism, then on utilitarian grounds it might make sense to plug into the machine. But this theory might itself be a naïve, incomplete summary of what we value — what deeply matters to us in living out our lives — and the totality of the upside and downside consequences of our desires, choices, and actions.

 

Our pursuit of wellbeing notwithstanding, Nozick expects most of us would rebuff his invitation and by extension rebuff ethical hedonism, with its origins reaching back millennia. We would opt instead to live a life ‘in contact with reality’, as Nozick put it. That is, to partake of experiences authentically of the world — reflecting a reality of greater consequence than a manufactured illusion. A choice that originates, at least in part, from a bias toward the status quo. This so-called status quo bias leads some people, if told to imagine their lives to date having been produced by an ‘experience machine’, to choose not to detach from the machine.

 

However, researchers have found many people are reluctant to plug into the machine. This seems to be due to several factors. Factors beyond individuals finding the thought of plugging in ‘too scary, icky, or alien’, as philosopher Ben Bramble interestingly characterised the prospect. And beyond such prosaic grounds as apprehension of something askew happening. For example, either the complex technology could malfunction, or the technicians overseeing the process might be sloppy one day, or there might be malign human intrusion (along the lines of the ‘fundamentalist zealots’ that Bramble invented) — any of which might cause a person’s experience in the machine to go terribly awry.

 

A philosophical reason to refuse being plugged in is that we prefer to do things, not just experience things, the former bringing deeper meaning to life than simply figuring out how to maximise pleasure and minimise pain. So, for example, it’s more rewarding to objectively (actually) write great plays, visit a foreign land, win chess championships, make new friends, compose orchestral music, terraform Mars, love one’s children, have a conversation with Plato, or invent new thought experiments than only subjectively think we did. An intuitive preference we have for tangible achievements and experiences over machine-made, simulated sensations.

 

Another factor in choosing not to plug into the machine may be that we’re apprehensive about the resulting loss of autonomy and free will in sorting choices, making decisions, taking action, and being accountable for consequences. People don’t want to be deprived of the perceived dignity that comes from self-regulation and intentional behaviour. That is, we wouldn’t want to defer to the Experience Machine to make determinations about life on our behalf, such as how to excel at or enjoy activities, without giving us the opportunity to intervene, to veto, to remold as we see fit. An autonomy or agency we prefer, even if all that might cause far more aggrievement than the supposed bliss provided by Nozick’s thought experiment.

 

Further in that vein, sensations are often understood, appreciated, and made real by their opposites. That is to say, in order for us to feel pleasure, arguably we must also experience its contrast: some manner of disappointment, obstacles, sorrow, and pain. So, to feel the pride of hearing our original orchestral composition played to an audience’s adulation, our journey getting there might have been dotted by occasional stumbles, even occasional critical reviews. Besides, it’s conceivable that a menu only of successes and pleasure might grow tedious, and less and less satisfying with time, in the face of its interminable predictability.

 

Human connections deeply matter, too, of course, all part of a life that conforms with Nozick’s notion of maintaining ‘contact with reality’. Yes, as long as we’re plugged in we’d be unaware of the inauthenticity of relationships with the family members and friends simulated by the machine. But the nontrivial fact is that family and friends in the real world — outside the machine — would remain unreachable.

 

Because we’d be blithely unaware of the sadness of not being reachable by family and friends for as long as we’re hooked up to the electrodes, we would have no reason to be concerned once embedded in the experience machine. Yet real family and friends, in the outside world, whom we care about may indeed grieve. The anticipation of such grief by loved ones in the real world may well lead most of us to reject lowering ourselves into the machine for a life of counterfeit relationships.

 

In light of these sundry factors, especially the loss of relationships outside of the device, Nozick concludes that the pursuit of hedonic pleasure in the form of simulations — the constructs of the mind that the Experience Machine would provide in place of objective reality – makes plugging into the machine a lot less attractive. Indeed, he says, it begins to look more like ‘a kind of suicide’.

 

Monday 14 February 2022

The Ethics of ‘Opt-out, Presumed-Consent’ Organ Donation

By Keith Tidman

According to current data, in the United States alone, some 107,000 people are now awaiting a life-saving organ transplant. Many times that number are of course in similar dire need worldwide, a situation found exasperating by many physicians, organ-donation activists, and patients and their families.


The trouble is that there’s a yawning lag between the number of organs donated in the United States and the number needed. The result is that by some estimates 22 Americans die every day, totaling 8,000 a year, while they desperately wait for a transplant that isn’t available in time.

 

It’s both a national and global challenge to balance the parallel exigencies — medical, social, and ethical — of recycling cadaveric kidneys, lungs, livers, pancreas, hearts, and other tissues in order to extend the lives of those with poorly functioning organs of their own, and more calamitously with end-stage organ failure.

 

The situation is made worse by the following discrepancy: Whereas 95% of adult Americans say they support organ donation upon a donor’s brain death, only slightly more than half actually register. Deeds don’t match bold proclamations. The resulting bottom line is there were only 14,000 donors in 2021, well shy of need. Again, the same worldwide, but in many cases much worse and fraught.

 

Yet, at the same time, there’s the following encouraging ratio, which points to the benefits of deceased-donor programs and should spur action: The organs garnered from one donor can astoundingly save eight lives.

 

Might the remedy for the gaping lag between need and availability therefore be to switch the model of cadaveric organ donation: from the opt-in, or expressed-consent, program to an opt-out, or presumed-consent, program? There are several ways that America, and other opt-in countries, would benefit from this shift in organ-donation models.

 

One is that among the many nations having experienced an opt-out program — from Spain, Belgium, Japan, and Croatia to Colombia, Norway, Chile, and Singapore, among many others — presumed-consent rates in some cases reach over 90%.

 

Here’s just one instance of such extraordinary success: Whereas Germany, with an opt-in system, hovers around a low 12% consent rate, its neighbour, Austria, with an opt-out system, boasts a 99% presumed-consent rate.

 

An alternative approach that, however, raises new ethical issues might be for more countries to incentivise their citizens to register as organ donors, and stay on national registers for a minimum number of years. The incentive would be to move them up the queue as organ recipients, should they need a transplant in the future. Registered donors might spike, while patients’ needs have a better hope of getting met.

 

Some ethical, medical, and legal circles acknowledge there’s conceivably a strong version and a weak version of presumed-consent (opt-out) organ recovery. The strong variant excludes the donor’s family from hampering the donation process. The weak variant of presumed consent, meanwhile, requires the go-ahead of the donor’s family, if the family can be found, before organs may be recovered. How well all that works in practice is unclear.

 

Meanwhile, whereas people might believe that donating cadaveric organs to ailing people is an ethically admissible act, indeed of great benefit to communities, they might well draw the ethical line at donation somehow being mandated by society.


Another issue raised by some bioethicists concerns whether the organs of a brain-dead person ought to be kept artificially functional, so as to maximise the odds of successful recovery and donation. The question affects both the expressed-consent and presumed-consent models of donation, each of which may at times require keeping organs animate.

 

An ethical benefit of the opt-out model is that it still honours the principles of agency and self-determination, as core values, while protecting the rights of objectors to donation. That is, if some people wish to decline donating their cadaveric organs — perhaps because of religion (albeit many religions approve organ donation), personal philosophy, notions of what makes a ‘whole person’ even in death, or simple qualms — those individuals can freely choose not to donate organs.

 

In line with these principles, it’s imperative that each person be allowed to retain autonomy over his or her organs and body, balancing perceived goals around saving lives and the actions required to reach those goals. Decision-making authority continues to rest primarily in the hands of the individual.

 

From a utilitarian standpoint, an opt-out organ-donation program entailing presumed consent provides society with the greatest good for the greatest number of people — the classic utilitarian formula. Yet, the formula needs to account for the expectation that some people, who never wished for their cadaveric organs to be donated, simply never got around to opting out — which may be the entry point for family intervention in the case of the weak version of presumed consent.

 

From a consequentialist standpoint, there are many patients, with lives hanging by a precariously thinning thread, whose wellbeing is greatly improved (life giving) by repurposing valuable, essential organs through cadaveric organ transplantation. This consequentialist calculation points to the care needed to reassure the community that every medical effort is of course still made to save prospective, dying donors.

 

From the standpoint of altruism, the calculus is generally the same whether a person, in an opt-in country, in fact does register to donate their organs; or whether a person, in an opt-out country, chooses to leave intact their status of presumed consent. In either scenario, informed permission — expressed or presumed — to recover organs is granted and many more lives saved.

 

For reasons such as those laid out here, in my assessment the balance of the life-saving medical, pragmatic (supply-side efficiency), and ethical imperatives means that countries like the United States ought to switch from the opt-in, expressed-consent standard of cadaveric organ donation to the opt-out, presumed-consent standard.

 

Monday 17 January 2022

Are ‘Ideas’ the Bulwark of Democracy?

Caricature of Alexis de Tocqueville by Honoré Daumier (1849).

By Keith Tidman


Recently, Joe Biden asserted that ‘democracy doesn’t happen by accident. We have to defend it, fight for it, strengthen it, renew it’. And so, America’s president, along with leaders from over a hundred other similarly minded democratic countries, held the first of two summits, to tackle the ‘greatest threats faced by democracies today’.

Other thought leaders have weighed in, even calling democracy ‘fragile’. But is democracy really on its heels? I don’t think so; democracy is stouter than it’s given credit for, able to fend off prodigious threats. And here, in my view, are some reasons why.

First, let’s briefly turn to America’s founding fathers: James Madison famously said that ‘If men were angels, no government would be necessary’. A true-enough maxim, which led to establishing the United States’ particular form of national governance: a democratic republic. With ‘inalienable’, natural rights.

Many aspects of democracy helped to define the constitutional and moral character of Madison’s new nation. But few factors rise to the level of unencumbered ideas. 

Ideas compose the pillar that binds together democracies, standing alongside those other worthy pillars: voting rights, free and fair elections, rule of law, human-rights advocacy, free press, power vested in people, self-determination, religious choice, peaceful protest, individual agency, freedom of assembly, petition of the government, and protection of minority voices, among others. 

Ideas are the pillar that keeps democracy resilient and rooted, on which its norms are based. They constitute a gateway to progress. Democracy allows for the unhindered flow of different social and political philosophies, in intellectual competition. Ideas flourish or wither by virtue of their content and persuasion. Democracy allows its citizens to choose which ideas frame the standards of society through debate and the willingness to subject ideas to inspection and criticism. Litmus tests of ideas’ rigour. Debate thereby inspires policy, which in turn inspires social change.

Sure, democracy can be messy and noisy. Yet, democracies do not, and should not, fear ideas as a result. The fear of ideas is debilitating and more deleterious than the content of ideas, even in the presence of disinformation aimed to cleave society. Countenancing opposing, even hard-to-swallow points of view ought to be how the seeds of policy sprout. Tolerance in competition, while sieving out ideas most antithetical to the ideals of society, helps to lubricate the political positions of true leaders.


Democracy makes sure that ideas are not just a matter for the academy, but for everyone. A notion that heeds Thomas Jefferson’s observation that ‘Government is the strongest of which every man feels himself a part’. Inclusivity is thus paramount; exclusivity aims to trivialize the force-multiplying power of common, shared interests, and in the process risks polarizing.

Admittedly, these days our airwaves and social media are rife with hand-wringing over the crisis or outrage of the moment. There’s plenty of self-righteousness. On the domestic front, people stormed the Capitol building just over a year ago, unsuccessfully attempting to interrupt the peaceful handover of presidential power. Extremists of various ideological vintage shadow the nation. Yet, it’s easy to forget that the nation has been immersed in such roiling politics and social hostilities earlier in its history. There’s a familiarity. All the while, powerful foreign antagonists challenge America’s role as the beacon of democracy. The leaders of authoritarian, ultranationalistic regimes delight in poking their thumb into America’s and Europe’s eye.

Lessons of what not to do come from these authoritarian regimes. Their first rule is not to brook objection to viewpoints prescribed by the monopolistic leader. Opinions that run counter to regimes’ authorised ‘truth’ — shades of Orwell’s 1984 — threaten authoritarians’ survival. They race to erase history, to control the narrative. Insecurities simmer. If the chestnut ‘existential crisis’ applies anywhere, it’s there — in autocrats’ insecurities — to be exploited. Yet, they’re aware that ‘People rarely take to the streets demanding autocracy’, as recently pointed out by the former Danish prime minister, Anders Fogh Rasmussen. Contrarianism menaces the authoritarians’ laser focus on power and control: their imposition of will.

The free flow of ideas is democracy’s nursery of innovation. The constructive exchange of opinions is essential for testing hypotheses, to determine which ideas are refutable or confirmable, and thus discarded or kept. Ideas are commanding; they are democracy’s bulwark against the paternalism and disingenuousness of hollowed-out constitutional rights, which have been autocracies’ fraudulent claim to mirror democracies’ bills of rights.

All this leads to the cautionary words of the nineteenth-century political philosopher and statesman Alexis de Tocqueville: 
‘…that men may reach a point where they look at every new theory as a danger, every innovation as a toilsome trouble, every social advance as a first step toward revolution, and that they may absolutely refuse to move at all’.
Democracy thus far has resisted the affliction of which de Tocqueville warned. It is the emboldened churn of ideas, as spurs to vision, experimentation, innovation, and constructive criticism, that has enabled democracy to maintain its firm footing. A point that might, therefore, inform the second global summit on democracy, now slated for year’s end, is how this power of enlightened ideas underscores the untruth of democracy’s supposed fragility.

Monday 27 December 2021

Can Thought Experiments Solve Ethical Dilemmas?


In ethics, the appeal to expand the “moral circle” typically requires moving from consideration of yourself to that of all of nature.

By Keith Tidman

What, in ethical terms, do we owe others, especially when lives are at stake? This is the crux of the ‘Drowning Child’ thought experiment posed by the contemporary philosopher, Peter Singer.

Singer illustrates the question to his students in this way:

You are walking to class when you spot a child drowning in a campus pond. You know nothing of the child’s life; and there is no personal affiliation. The pond is shallow, so it would be easy to wade in and rescue her. You would not endanger yourself, or anyone else, by going into the water and pulling the child out.
But, he adds, there are two catches. One is that your clothes will become saturated, caked in mud, and possibly ruined. The other is that taking the time to go back to your dorm to dry off and change clothes will mean missing the class you were crossing the campus for.

Singer then asks his students, ‘Do you have an obligation to rescue the child?’

The students, without exception and as one might expect, think that they do. The circumstances seem simple. Including that events are just yards away. The students, unprompted, recognise their direct responsibility to save the flailing child. The students’ moral, and even pragmatic, calculus is that the life of the child outweighs the possibility of ruined clothes and a missed class. And, for that matter, possibly the sheer ‘nuisance’ of it all. To the students, there is no ambiguity; the moral obligation is obvious; the costs, even to the cash-strapped students, are trivial.

The students’ answer to the hypothetical about saving a drowning child, as framed above, is straightforward — a one-off situation, perhaps, whose altruistic consequences end upon saving the drowning child who is then safe with family. But ought the situation be so narrowly prescribed? After all, as the stakes are raised, the moral issues, including the range of consequences, arguably become more ambiguous, nuanced, and soul-searching.

At this point, let’s pivot away from Singer’s students and toward the rest of us more generally. In pivoting, let’s also switch situations.

Suppose you are walking on the grounds of a ritzy hotel, to celebrate your fiftieth anniversary in a lavish rented ballroom, where many guests gleefully await you. Because of the once-in-a-lifetime situation, you’re wearing an expensive suit, have a wallet filled with several one-hundred-dollar bills, and are wearing a family legacy watch that you rarely wear.
Plainly, the stakes, at least in terms of potential material sacrifices, are much higher than in the first scenario.

If, then, you spot a child drowning in the hotel’s shallow pond nearby, would you wade in and save the child? Even if the expensive suit will be ruined, the paper money will fall apart from saturation, the family antique watch will not be repairable, and the long-planned event will have to be canceled, disappointing the many guests who expectantly flew in at significant expense?

 

The answer to ‘Do you have an obligation to rescue the child?’ is probably still a resounding yes — at least, let’s hope, for most of us. The moral calculus arguably doesn’t change, even if what materially is at risk for you and others does intensify. Sure, there may be momentary hesitation because of the costlier circumstances. Self-interests may marginally intrude, perhaps causing a pause to see if someone else might jump in instead. But hesitation is likely quickly set aside as altruistic and humanitarian instincts kick in.

To ratchet up the circumstances further, Singer turns to a child starving in an impoverished village, in a faraway country whose resources are insufficient to sustain its population, many of whom live in wretched conditions. Taking moral action to give that child a chance to survive, through a donation, would still be within most people’s finances in the developed world, including the person about to celebrate his anniversary. However, there are two obvious catches: one is that the child is far off, in an unfamiliar land; the other is that remoteness makes it easier to avert eyes and ears, in an effort at psychological detachment.

We might further equivocate based on other grounds, as we search for differentiators that may morally justify not donating to save the starving child abroad, after all. Platitudinous rationales might enter our thinking, such as the presence of local government corruption, the excessive administrative costs of charities, or the bigger, systemic problem of over-population needing to be solved first. Intended to trick and assuage our consciences, and repress urges to help.

Strapped for money and consumed by tuition debt, Singer’s students likely won’t be able to afford donating much, if anything, toward the welfare of the faraway starving child. Circumstances matter, like the inaccessibility; there’s therefore seemingly less of a moral imperative. However, the wealthier individual celebrating his anniversary arguably has a commensurately higher moral obligation to donate, despite the remoteness. A donation equal, let’s say, to the expense of the suit, money, and watch that would be ruined in saving the child in the hotel pond.

So, ought we donate? Would we donate? Even if there might appear to be a gnawing conflict between the morality of altruism and the hard-to-ignore sense of ostensible pointlessness in light of the systemic conditions in the country that perpetuate widespread childhood starvation? Under those circumstances, how might we calculate ‘effective altruism’, combining the empathy felt and the odds of meaningful utilitarian outcomes?

After all, what we ought to do and how we actually act not infrequently diverge, even when we are confronted with stark images on television, in social media, and in newspapers of toddlers with distended stomachs and flies hovering around their eyes.

For most people, the cost of a donation to save the starving child far away is reasonable and socially just. But the concept of social justice might seem nebulous as we hurry on in the clamour of our daily lives. We don’t necessarily equate, in our minds, saving the drowning child with saving the starving child; moral dissonance might influence choices.

To summarise, Singer presented the ethical calculus in all these situations this way: ‘If it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it’. That includes saving the life of a stranger’s child from a preventable death.

For someone like the financially comfortable anniversary celebrator — if not for the financially struggling college students, who would nevertheless feel morally responsible for saving the child drowning on campus — there’s an equally direct line of responsibility in donating to support the starving child far away. Both situations entail moral imperatives in their own fashion, though again circumstances matter.

The important core of these ethical expectations is the idea of ‘cosmopolitanism’: simply, to value everyone equally, as citizens of the world. Idealistic, yes; but in the context of personal moral responsibility, there’s an obligation to the welfare of others, even strangers, and to treat human life reverentially. Humanitarianism and the ‘common good’ writ large, we suppose.

To this critical point, Singer directs us to the political theorist William Lecky, who wrote of an ‘expanding circle of concern’. It is a circle that starts with the individual and family, and then widens to encompass ‘a class, then a nation, then a coalition of nations, then all humanity’. A circle that is a reflection of our rapid globalisation.

Perhaps the ‘Drowning Child’ thought experiment exposes the divide between how we hypothesise about doing right and how we actually behave, as well as the ambiguity surrounding the consistency of our moral decision-making.


Monday 29 November 2021

Whose Reality Is It Anyway?

Thomas Nagel wondered if the world a bat perceives is fundamentally different to our own

By Keith Tidman

Do we experience the world as it objectively is, or only as an approximation shaped by the effects of information passing through our mind’s interpretative sieve? Does our individual reality align with anyone else’s, or is it exclusively ours, dwelling like a single point amid other people’s experienced realities?


We are swayed by our senses, whether through direct sensory observation of the world around us, or indirectly as we use apparatuses to observe, record, measure, and decipher. Either way, our minds filter the information absorbed, funnelling and fashioning it into the experiences that make up a reality which, in turn, is affected by sundry factors. These influences include our life experiences and interpretations, our mental models of the world, how we sort and assimilate ideas, our unconscious predilections, our imaginings and intuitions untethered to particular facts, and our expectations of outcomes drawn from encounters with the world.


We believe that what serves as the lifeline in this modelling of personal reality is the presence of agency and ‘free will’. The tendency is to regard free will as orthodoxy. We assume we can freely reconsider and alter that reality, to account for new experiences and information that we mould through reason. To a point, that’s right; but to one degree or another we grapple with biases, some of which are hard-wired or at least deeply entrenched, that predispose us to particular choices and behaviours. So, how freely we can actually surmount those preconceptions and predispositions is problematic, in turn bearing on the limits of how we perceive the world.


The situation is complicated further by the vigorous debate over free will versus how much of what happens does so deterministically, where life’s course is set by forces beyond our control. Altering the models of reality to which we cling is hard; resistance to change is tempting. We shun hints of doubt in upholding our individual (subjective) representations of reality. The obscurity and inaccessibility of any single, universally accepted objective world exacerbates the circumstances. We realise, though, that subjective reality is not an illusion to be casually dismissed to suit ourselves, but is lastingly tangible.


In 1974, in his essay ‘What Is It Like to Be a Bat?’, the American philosopher Thomas Nagel developed a classic metaphor to address these issues of conscious experience. He proposed that some knowledge is limited to what we acquire through our subjective experiences, differentiating those from underlying objective facts. To show how, Nagel turned to a bat’s conscious use of echoed sounds, the equivalent of our vision, in perceiving its surroundings for navigation. He argued that although we might be able to imagine some aspects of what it’s like to be a bat, such as hanging upside down or flying, we cannot truly know what a bat experiences as physical reality. The bat’s experiences are its alone and, for the same reasons of filtering and interpretation, are likewise distinguishable from objective reality.


Sensory experience, however, does more than just filter objective reality. The very act of human observation (in particular, measurement) can also create reality. What do I mean? Repeated experiments have shown that a potential object remains in what’s called ‘superposition’, a state of suspension. What stays in superposition is an abstract mathematical description, called a ‘wavefunction’, of all the possible ways an object can become real. On some interpretations, there is no distinction between the wavefunction and the physical thing it describes.


While in superposition, the object can be in any number of places until measurement causes the wavefunction to ‘collapse’, resulting in the object being in a single location. Observation thus has implications for the nature of reality and the role of consciousness in bringing that about. According to the quantum physicist John Wheeler, ‘No ... property is a property until it is observed’, a notion presaged by the philosopher George Berkeley three centuries earlier, when he declared ‘Esse est percipi’ – to be is to be perceived.
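In standard textbook notation, the idea can be sketched very simply. Take two hypothetical locations, A and B, as stand-ins for where the object might turn up, with weights α and β attached to each possibility:

\[
\lvert \psi \rangle \;=\; \alpha \,\lvert A \rangle \;+\; \beta \,\lvert B \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} \;=\; 1
\]

Before measurement, the wavefunction holds both possibilities at once; upon measurement, the object is found at A with probability |α|² or at B with probability |β|², and only one of the two becomes actual. This is only a schematic illustration of the point about observation, not a full account of quantum measurement.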


Evidence, furthermore, that experienced reality results from a subjective filtering of objective reality comes from how our minds react to externalities. For example, two friends are out for a stroll and look up at the summer sky. Do their individual perceptions of the sky’s ‘blueness’ precisely match each other’s or anyone else’s, or do they experience blueness differently? If those companions then wade into a lake, do their perceptions of ‘chilliness’ exactly match? How about their experiences of ‘roughness’ upon rubbing their hand on the craggy bark of a tree? These are interpretations of objective reality by the senses and the mind.


Despite the physiology of the friends’ brains and physical senses being alike, their filtered experiences nonetheless differ in both small and big ways. All this, even though the objective physical attributes of the sky, the lake, and the tree bark, independent of the mind, are the same for both companions. (The wavelength of the visible light that accounts for the blueness, for instance, is identical for each, even as it is interpretatively, subjectively perceived by the senses and mind.) Notwithstanding the deceptive simplicity of these examples, they are telling of how our minds are attuned to processing sensory input, thereby creating subjective realities that might resemble yet not match other people’s, and importantly don’t directly merge with underlying objective reality.


In this paradigm of experience, there are untold parsed and sieved realities: our own and everyone else’s. That’s not to say objective reality, independent of our mental parsing, is a myth. It exists, at least as backdrop. That is, both objective and subjective reality are credible in their respective ways, as sides of the whole. It’s just that our minds’ unavoidable filtering alters how objective reality appears to us, so objective reality itself stays out of reach. The result is our being left with the personal reality our minds are capable of, a reality nonetheless easily but mistakenly conflated with objective reality.


That’s why our models of the underlying objective reality remain approximations, in states of flux. Because when it comes to understanding the holy grail of objective reality, our search is inspired by the belief that close is never close enough. We want more. Humankind’s curiosity strives to inch closer and closer to objective reality, however unending that tireless pursuit will likely prove.