
Monday 15 November 2021

The Limits of the ‘Unknowable’

In this image, the indeterminacy principle concerns the initial state of a particle. The colour (white, blue, green) indicates the phase, that is, the position and direction of motion of the particle. The position is initially determined with high precision, but the momentum is not.

By Keith Tidman

 

We’re used to talking about the known and unknown. But rarely do we talk about the unknowable, which is a very different thing. The unknowable can make us uncomfortable, yet the shadow of unknowability stretches across all disciplines, from the natural sciences to history and philosophy, as people encounter the limits of their fields in the course of research. For this reason, unknowability invites a closer look.

 

Over the years there has been a noteworthy shift. What I mean is this: human intellectual endeavour has been steadily turning academic disciplines from the islands they had increasingly become over the centuries back into continents of shared interests, where specialised knowledge flows across disciplinary boundaries in recognition of the interconnectedness of ideas and of our understanding of reality.

 

The result is fewer margins and gaps separating the assorted sciences and humanities. Interdependence has been regaining respectability. What we know benefits from these commonalities and this collaboration, allowing knowledge to profit: to expand and evolve across disciplines’ dimensions. And yet, despite this growing matrix of knowledge, unknowables still persist.

 

Consider some examples.

 

Forecasts of future outcomes characteristically fall into the unknowable, with outcomes often different from predictions. Such forecasts range widely, from the weather to political contests, economic conditions, vagaries of language, technology inventions, stock prices, occurrence of accidents, human behaviour, moment of death, demographics, wars and revolutions, roulette wheels, human development, and artificial intelligence, among many others. The longer the reach of a forecast, often the more unknowable the outcome. The ‘now’ and the short term come with improved, though still not absolute, certainty. Reasons for many predictions’ dubiousness may include the following.

 

First, the initial conditions may be too many and indeterminate to acquire a coherent, comprehensive picture of starting points. 


Second, the untold, opaquely diverging and converging paths along which initial conditions travel may overwhelm: too many to trace. 


Third, how forces jostle those pathways in both subtle and large ways is impossible to model and take account of with precision and confidence.


Fourth, chaos and complexity — along with volatility, temperamentality, and imperceptibly tiny fluctuations — may make deep understanding impossible to attain.

 

Ethics is another domain where unknowability persists. The subjectivity of societies’ norms, values, standards, and belief systems — derived from a society’s history, culture, language, traditions, lore, and religions, where change provides a backdraft to ‘moral truths’ — leaves objective ethics outside the realm of what is knowable. Contingencies and indefiniteness can interfere with moral decision-making. Accordingly, no matter how rational and informed individuals might be, there will remain unsettled moral disagreements.


On the level of being, why there is something rather than nothing is similarly unknowable. In principle, ‘nothingness’ is just as possible as ‘something’, but for some unknown reason, apart from the unlikelihood of spontaneous manifestation, ‘something’ demonstrably prevailed over its absence. Conspicuously, ‘nothingness’ would preclude the initial conditions required for ‘something’ to emerge from it. However, we and the universe of course exist; in its fine-tuned balance, the model of being is not just thinkable, it discernibly works. Yet, the reason why ‘something’ won out over ‘nothingness’ is not just unknown, it’s unknowable.

 

Anthropology arguably offers a narrower instance of unknowability, concerning our understanding of early hominids. The inevitable skimpiness of evidence and of fine-grained confirmatory records, compounded by uncertain interpretations stemming from the paucity of physical remains and from their unvalidated connections and meaning in pre-historical context, suggests that the big picture of our more-distant predecessors will remain incomplete. A case of epistemic limits.


Another important instance of unknowability comes out of physics. The Heisenberg uncertainty principle, at the foundation of quantum mechanics, famously tells us that the more precisely we know a subatomic particle’s position, the less precisely we can know its momentum, and vice versa. There is a fundamental limit, therefore, to what one can know about a quantum system.
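In its standard textbook form, the principle bounds the product of the two uncertainties (the symbols below are the conventional ones, included here only to make the trade-off explicit):

$$ \Delta x \, \Delta p \;\geq\; \frac{\hbar}{2} $$

where $\Delta x$ is the uncertainty in position, $\Delta p$ the uncertainty in momentum, and $\hbar$ the reduced Planck constant. Shrinking either factor forces the other to grow.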

 

To be clear, though, seemingly intractable intellectual problems may not ultimately be insoluble, that is, they need not join the ranks of the unknowable. There’s an important distinction. Let me briefly suggest three examples.

 

The first is ‘dark energy and dark matter’, which together compose 95% of the universe. Remarkably, the tiny 5% left over constitutes the entire visible contents of the universe! Science is attempting to learn what dark energy and dark matter are, despite their prevalence compared with observable matter. The direct effects of dark energy and dark matter, such as on the universe’s known accelerating expansion, offer a glimpse. Someday, investigators will understand them; they are not unknowable.

 

Second is Fermat’s ‘last theorem’, the one that he teed up in the seventeenth century as a note in the margin of his copy of an ancient Greek text. He explained, to the dismay of generations of mathematicians, that the page’s margin was ‘too small to contain’ the proof. Fermat did suggest, however, that the proof was short and elegant. More than three centuries passed before a twentieth-century British mathematician proved the theorem. The proof, far from short, turned out not to be unknowable as some had speculated, just terribly difficult.

 

A last instance that I’ll offer involves our understanding of consciousness. For millennia, we’ve been spellbound by the attributes that define our experience as persons, holding that ‘consciousness’ is the vital glue of mind and identity. Yet, a decisive explanation of consciousness, despite earnest attempts, has continued to elude us through the ages. Inventive hypotheses have abounded, though they remain unsettled. Maybe that’s not surprising, in light of the human brain’s physiological and functional complexity.

 

But as the investigative tools that neuroscientists and philosophers of mind wield in the course of collaboration become more powerful in dissecting the layers of the brain and mind, consciousness will probably yield its secrets, such as why and how, through the physical processes of the brain, we have very personalised experiences. It’s likely that one day we will get a sounder handle on what makes us, us. Difficult, yes; unknowable, no.

 

Even as we might take some satisfaction in what we know and anticipate knowing, we are at the same time humbled by two epistemic factors. First is that much of what we presume to know will turn out to be wrong or at most partially right, subject to revised models of reality. But the second humbling factor is a paradox: that the full extent of what is unknowable is itself unknowable.

 

Monday 18 October 2021

On the Appeal of Authoritarianism — and Its Risks

 

On March 30th, Hungary's populist leader, Viktor Orbán, obtained indefinite special powers from his parliament, to the shock of many in Europe, and indeed in Hungary.

By Keith Tidman

Authoritarianism is back in fashion. Seventy years after the European dictators brought the world to the brink of ruin, authoritarian leaders have again ascended across the globe, preaching firebrand nationalism. And there’s again no shortage of zealous supporters, even as there are equally passionate objectors. So, what has spurred authoritarianism’s renewed appeal? Let’s start by briefly looking at how authoritarianism and its adversarial ideology, liberal democracy, differ in their implied ‘social contract’.

 

One psychological factor for authoritarianism’s allure is its paternal claims, based on all-powerful, all-knowing central regimes substituting for the independent thought and responsibility of citizens. Decisions are made and actions taken on the people’s behalf; individual responsibility is confined to conformance and outright obedience. Worrying about getting choices right, and contending with their good and bad consequences, rests in the government’s lap, not in the individual’s. Constitutional principles start to be viewed as an extravagance, one that thwarts efficiency. For some people, this contract, exchanging freedom for reassuring paternalism, may appeal. For others, it’s a slippery slope that rapidly descends from the illiberalism of populists to something much worse.

 

Liberal democracy is hard work. It requires accountability based on individual agency. It requires people to become informed, assess information’s credibility, analyse arguments’ soundness, and arrive at independent choices and actions. Citizens must be vigilant on democracy’s behalf, with vigilance aided by the free flow of diverse, even contentious, ideas that enlighten and fill the intellectual storehouse on which democracy’s vibrancy depends. Often, individuals must get it right for themselves. They bear the consequences, including in their free and fair choice of elected representatives; ultimately, there are fewer options for offloading blame for bad outcomes. The rewards can be large, but so can the downsides. Constitutional bills of rights, the co-equal separation of powers, and the rule of law are democracy’s valued hallmarks. There’s likewise a social contract, though with allowance for revision to account for conditions at the moment. For many people, this model of democratic governance appeals; for others, it’s disorderly and ineffectual, even messy.

 

It requires only a small shift for the tension between authoritarianism and the personal agency and accountability of liberal democracy to end up tilting in authoritarianism’s favour. Individual perspectives and backgrounds, and particular leaders’ cult of personality, matter greatly here. With this in mind, let’s dig a bit deeper into what authoritarianism is all about and try to understand its appeal.

 

Authoritarianism was once seen more as the refuge of poor countries on far-away continents; nowadays we’ve witnessed its ascendancy in many developed nations too, such as in Europe, where the brittleness of former democracies snapped. Countries like Russia and China briefly underwent ‘liberal springs’, inquisitively flirting with the freedoms associated with democracy before becoming disenchanted with what they saw, rolling back the gains and increasing statist control over the levers of power. In other countries, what starts as extreme rightwing or leftwing populism, as in some quarters of Asia and Central and South America, has turned to authoritarianism. Strongmen have surrounded themselves with a carefully chosen entourage, doing their bidding. Security forces, like modern-day praetorians, shield and enforce. Social and political norms alter, to serve the wishes of centralised powers. It’s about power and control; to be in command is paramount. Challenges to officialdom are quick to set off alarms and, as deemed necessary, are met with violence to enforce the restoration of conformity.

 

The authoritarian leader’s rationale is to sideline challengers, democratic or otherwise, turning to mock charges of fraudulence and ineptness to neutralise the opposition. The aim is structural submission and compliance with sanctioned doctrine. The leader asserts he or she ‘knows best’, to which flatterers nod in agreement. Other branches of government, from the legislature to the courts and holders of the nation’s purse strings, along with the country’s intelligentsia and news outlets, are disenfranchised in order to serve the bidding of the charismatic demagogue. Such heads of state may see themselves as the singular wellspring of wise decision-making, raising for some citizens the disconcerting spectre of democratic principles teetering in their supposedly fragile balance.

 

When authoritarian leaders monopolise the messaging for public consumption, for the purpose of swaying behaviour, it commonly becomes an exercise in copycatting the ‘doublespeak’ of George Orwell’s 1984: war is peace; freedom is slavery; ignorance is strength (slogans inscribed by the Party’s Ministry of Truth). Social activism is no longer brooked and thus may be trodden down by heavy-handed trusted handlers. Racism and xenophobia are ever out in front, as has been seen throughout Europe and in the United States, leading to a zealously protective circling of the wagons into increased sectarianism, hyper-partisanship, and the rise of extremist belief systems. In autocracies, criticism — and economic sanctions or withdrawal of official international recognition — from democracies abroad, humanitarian nongovernmental organisations, and supranational unions is scornfully brushed aside.

 

Yet, it may be wrong to suggest that enthusiasts of authoritarian leaders are hapless, prone to make imprudent choices. Populations may feel so stressed by their circumstances that they conclude a populist powerbroker, unhampered by democracy’s imagined rule-of-law ‘manacles’, is attractive. Those stresses on society might range widely: an unnerving haste toward globalisation; fear of an influx of migrants, putting pressure on presumed zero-sum resources, all the while raising hackles over the nation’s majority race or ethnicity becoming the minority; the fierce pitting of social and political identity groups against one another over policymaking; the disquieting sense of lost cohesion and one’s place in society; and a blend of anxiety and suspicion over unknowns about the nation’s future. In such fraught situations, democracy might be viewed as irresolute and clogging problem-solving, whereas authoritarianism might be viewed as decisive.

 

Quashing the voice of the ‘other social philosophy’, the ‘other community’, the ‘other reality’ has become increasingly popular among the world’s growing list of authoritarian regimes. A parallel, ambivalent wariness of the pluralism of democracy has been fuelling this dynamic. It might be that this trend continues indefinitely, with democracy having run its course. Or, perhaps, the world’s nations will cycle unevenly in and out of democracy and authoritarianism, as a natural course of events. Either way, it’s arguable that democracy isn’t anywhere nearly as fragile as avowed, nor is authoritarianism as formidable.

 

Monday 27 September 2021

The Recounting of History: Getting From Then to Now



Double Herm of Thucydides and Herodotus

Thucydides was a historian of the wars between Athens and Sparta, in which he championed the Athenian general Perikles. Herodotus travelled and wrote widely and tried to be more impartial.



Posted by Keith Tidman

 

Are historians obliged to be unwaveringly objective in their telling of the past? After all, as Hegel asserted: ‘Peoples and governments never have learned anything from history or acted on principles deduced from it’.

 

History seems to be something more than just stirring fable, yet less than incontestable reality. Do historians’ accounts live up to the tall order of accurately informing and properly influencing the present and future? Certainly, history is not exempt from human frailty. And we do seem ‘condemned’ to repeat some version of even half-remembered history, such as stumbling repeatedly into unsustainable, unwinnable wars.

 

In broad terms, history has an ambitious task: to recount all of human activity, whether ideological, political, institutional, social, cultural, philosophical, judicial, intellectual, religious, economic, military, scientific, technological or familial. Cicero, who honoured Herodotus with the title ‘the father of history’, seems to have had such a lofty role in mind for the discipline when he pondered: ‘What is the worth of human life, unless it is woven into the life of our ancestors by the records of history?’ The vast scope of that task implies both great challenges and vulnerabilities.

 

History provides the landscape of past events, situations, changes, people, decisions, and actions. Both the big picture and the subtler details of the historical record spur deliberation, and help render what we hope are wise choices about society’s current and future direction. How wise such choices are, and the extent to which they are soundly based on, or at least influenced by, how historians parse and interpret the past, reflects how ‘wise’ the historians are in fulfilment of the task. At its best, the recounting of history tracks the ebb and flow of changes in transparent ways, taking into account context for those moments in time. A pitfall to avoid, however, is tilting conclusions by superimposing on the past the knowledge and beliefs we hold today.

 

To these ends, historians and consumers of history strive to measure the evidence, complexities, inconsistencies, conflicts, and selective interpretations of past events. The challenge of chronicling and interpretation is made harder by the many alternative paths along which events might have unfolded, riven by changes in direction. There is no single linear progression or trajectory to history, extending expediently from the past to the present; twists and turns abound. The resulting tenuousness of causes and effects, and the fact that accounts of history and human affairs might not always align with one another, influence what we believe and how we behave generations later.

 

The fact is, historical interpretations pile up, one upon another, as time passes. These coagulating layers can only make the picture of the past murkier. To recognise and skilfully scrape away the buildup of past interpretations, whether biased or context-bound or a result of history’s confounding ebb and flow, becomes a monumental undertaking. Indeed, it may never fully happen, as the task of cleaning up history is a less alluring pursuit than capturing and recounting it.


Up to a point, it aids accuracy that historians may turn to primary, or original, sources of past happenings. These sources may be judged on their own merits: to assess evidence and widely differing interpretations, assess credibility and ferret out personal agendas, and assess the relative importance of observations to the true fabric of history. Artifacts, icons, and monuments tell a story, too, filling in the gaps of written and oral accounts. Such histories are intended to endure, leading us to insights into how the rhythms of social, reformist, and cultural forces brought society to where it is today.


And yet, contemporaneous chroniclers of events also fall victim to errors of commission and omission. It’s hard for history to be unimpeachably neutral in re-creating themes in human endeavour, like the victories and letdowns of ideological movements, leaders, governments, economic systems, religions, and cultures, as well as of course the imposing, disruptive succession of wars and revolutions. In the worst of instances, historians are the voice of powerful elites seeking to champion their own interests. 

 

When the past is presented to us, many questions remain. Whose voice is allowed to be loudest in the recounting and interpretation? Is it that of the conquerors, elites, powerful, holders of wealth, well-represented, wielders of authority, patrons? And is the softest or silenced voice only that of the conquered, weak, disregarded, disenfranchised, including marginalised groups based on race or gender? To get beyond fable, where is human agency truly allowed to flourish unfettered?

 

Therein lies the historian’s moral test. A history that is only partial and selective risks cementing in the privileges of the elite and the disadvantages of the silenced. ‘Revisionism’ in the best sense of the word is a noble task, aimed at putting flawed historical storytelling right, so that society can indeed then ‘act on the principles deduced from it’.



Monday 23 August 2021

The Case of Hilbert’s Hotel and Infinity

In Hilbert’s infinite hotel, a room can always be found for newly arriving guests.

Posted by Keith Tidman

 

‘No vacancies. Rooms available’. That might as well be the contradictory sign outside the Hilbert Hotel of legend. Yet, is the sign really as nonsensical as it first seems?

 

The Hilbert Hotel paradox was made famous by the German mathematician David Hilbert in the 1920s. The paradox tells of an imaginary hotel with infinite rooms. All the rooms were occupied by an infinite number of guests.

 

However, a traveller wondered if a room might still be available, and approached the receptionist. The receptionist answered that the hotel could indeed accommodate him. To make the solution work, the receptionist asked every current guest simply to move to the next room up, the guest in Room 1 moving to Room 2, the guest in Room 2 to Room 3, and so on, making it possible to assign the new guest to Room 1.

 

This was a scalable manoeuvre, accommodating any number of new lodgers, whether a hundred, a hundred million, or far more. Because the rooms were infinite, there was, importantly, no last room; the receptionist could therefore keep moving the current guests to higher room numbers.

 

But the challenge was to get a bit harder. What showed up next was an infinitely large coach occupied by an infinite number of vacationers. To accommodate these guests, the receptionist shifted the current guests so that only the even-numbered rooms were occupied, the guest in Room 1 moving to Room 2, the guest in Room 2 to Room 4, and so on, leaving the infinitely many odd-numbered rooms free for the newcomers.

 

Increasingly complex scenarios arose. Such as when an infinite number of coaches, each carrying infinite travellers, pulled into the hotel’s infinite parking lot. But we don’t need, for our purposes here, to delve into all the mathematical solutions. Suffice it to say that any number of new travellers could be lodged.
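To make the receptionist’s bookkeeping concrete, here is a minimal sketch in Python of the reassignment rules described above. The function names are purely illustrative, not part of Hilbert’s own telling.

```python
# Hilbert's hotel: illustrative room-reassignment rules (rooms numbered 1, 2, 3, ...).

def room_after_one_new_guest(current_room: int) -> int:
    """One new arrival: each current guest moves up one room, freeing Room 1."""
    return current_room + 1

def room_after_infinite_coach(current_room: int) -> int:
    """Infinite coach: each current guest moves to twice their room number,
    so current guests occupy only the even-numbered rooms."""
    return 2 * current_room

def coach_seat_to_room(seat: int) -> int:
    """The coach passenger in seat k takes the k-th odd-numbered room."""
    return 2 * seat - 1

if __name__ == "__main__":
    print([room_after_one_new_guest(n) for n in range(1, 6)])   # [2, 3, 4, 5, 6]
    print([room_after_infinite_coach(n) for n in range(1, 6)])  # [2, 4, 6, 8, 10]
    print([coach_seat_to_room(k) for k in range(1, 6)])         # [1, 3, 5, 7, 9]
```

Each rule is a one-to-one mapping from old room numbers to new ones, which is why no current guest is ever displaced without a destination even as whole blocks of rooms are freed.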

 

The larger significance of Hilbert’s thought experiment was that an ‘actual infinite’ is indeed logically consistent, even if on the surface it’s counterintuitive. As with Hilbert’s hotel, the infinite exists. Infinity’s logical consistency has further consequence, tying the thought experiment to the cosmological notion of an infinite past. 

 

That is, a beginningless reality. A reality in which our own universe, like infinite other universes, is one bounded part. An unlimited reality that extends even to the ‘far side’ of the Big Bang that gave rise to our universe almost fourteen billion years ago. A universe located within the continuum of the infinite. And a universe in which change gives us the illusion of time’s passage.

 

A common argument in cosmology (origins) is that the string of causes must start with the Big Bang. Or, rooted in a theological origin story, that it must start with a noncontingent divine creator, or so-called ‘first cause’. The claim of such arguments is that reality doesn’t reach back indefinitely into the past, but has a starting point.

 

‘Our minds are finite, and yet even in these circumstances of finitude,

we are surrounded by possibilities that are infinite’.


Alfred North Whitehead, philosopher and mathematician


 There are no grounds, however, to believe that our universe, with its time-stamped beginning (the Big Bang) and its one-way life-cycle toward net disorder, is the entirety of existence. Rather, an infinite history before the Big Bang, or beginningless reality, does make sense. As does the other bookend to that reality, an endless future, where infinity describes both before and after the fleetingly present moment, or what we might think of as the ‘now’. Nothing rules out or contradicts that unlimited scope of reality.

 

With an unchangeable beginningless reality, there is no need to invoke the concept of ‘something coming into being from nothing’; there is no need to interrupt the different laws of physics, or of time, governing each universe’s own separate reality; there is no time zero or insupportable moment of all creation. It’s infinity ‘all the way down’, to paraphrase British philosopher Bertrand Russell’s whimsical reference to infinite regress.

 

We ought to avoid conflating the emergence of things within our bounded universe (like the making of new galaxies, of which there is a finite number) with the emergence of things within the infinite (like the formation of new universes, of which there is an infinite variety, each with its unique properties, life cycle, and natural laws).

 

‘No other question [than the infinite] has ever moved so profoundly the spirit of man; no other idea has so fruitfully stimulated his intellect’, declared David Hilbert. Our bounded universe is simply one part of that infinite, that is, part of beginningless reality. Our universe exists among infinite others, like the infinite rooms in Hilbert’s hotel.

Monday 19 July 2021

The ‘Common Good’ and Equality of Opportunity

Adam Smith, the 18th-century Scottish philosopher, warned against both
monopoly interests and government intervention in private economic arrangements.

Posted by Keith Tidman
 

Every nation grapples with balancing things that benefit the community as a whole — the common good — and those that benefit individuals — the private good. Untangling which things fall under each of the two rubrics is just one of the challenges. Decisions hinge on a nation’s history, political philosophy, approach to governance, and the general will of its citizenry.

 

At the core is recognition that community, civic relationships, and interdependencies matter in building a just society, as what is ‘just’ is a shared enterprise based on liberal Enlightenment principles around rights and ethics. Acting on this recognition drives whether a nation’s social system allows for every individual to benefit impartially from its bounty.

 

Although capitalism has proven to be the most-dynamic engine of nations’ wealth in terms of gross domestic product, it also commonly fosters gaping inequality between the multibillionaires and the many tens of millions of people left destitute. There are those left without homes, without food, without medical care — and without hope. As philosopher and political economist Adam Smith observed: 


‘Wherever there is great property there is great inequality. For one very rich man there must be at least five hundred poor, and the affluence of the few supposes the indigence of the many’.


Today, this gap between the two extreme poles in wealth inequality is widening and becoming uglier in both material and moral terms. Among the worst injustices, however, is inequality not only of income or of wealth — the two traditional standards of inequality — but (underlying them both) inequality of opportunity. Opportunity as in access to education or training, meaningful work, a home in which to raise a family, leisure activity, the chance to excel unhampered by caste or discrimination. Such benefits ultimately stem from opportunity, without which there is little by way of quality of life.

 

I would argue that the presence or absence of opportunity in life is the root of whether society is fair and just and moral. The notion of the common good, as a civically moral imperative, reaches back to the ancient world, adjusting in accordance with the passage and rhythm of history and the gyrations of social composition. Aristotle stated in the Politics that ‘governments, which have a regard to the common interest, are constituted in accordance with strict principles of justice’.

 

The cornerstone of the common good is shared conditions, facilities, and establishments that redound to every citizen’s benefit. A foundation where freedom, autonomy, agency, and self-governance are realised through collective participation. Not as atomised citizens, with narrow self-interests. And not where society myopically hails populist individual rights and liberties. But rather through communal action in the spirit of liberalised markets and liberalised constitutional government institutions.

 

Common examples include law courts and an impartial system of justice, accessible public healthcare, civic-minded policing and order, affordable and sufficient food, a thriving economic system, national defence to safeguard peace, well-maintained infrastructure, a responsive system of governance, accessible public education, libraries and museums, protection of the environment, and public transportation.

 

The cornerstone of the private good is individual rights, with which the common good must be seeded and counterweighted. These rights, or civic liberties, commonly include those of free speech, conscience, public assembly, and religion. As well as rights to life, personal property, petition of the government, privacy, fair trial (due process), movement, and safety. That is, natural, inalienable human rights that governments ought not attempt to take away but rather ought always to protect.

 

One challenge is how to manage the potential pluralism of a society, where there are dissimilar interest groups (constituencies) whose objectives might conflict. In modern societies, these dissimilar groups are many, divided along lines of race, ethnicity, gender, country of origin, religion, and socioeconomic rank. Establishing a common good from such a mix is something society may find difficult.

 

A second challenge is how to settle the predictable differences of opinion over the relative worth of those values that align with the common good and the private good. When it comes to ‘best’ government and social policy, there must be caution not to allow the shrillest voices, whether among the majority or minority of society, to crowd out others’ opinions. The risk is in opportunity undeservedly accruing to one group in society.

 

Just as the common good requires that everyone has access to it, it requires that all of us must help to sustain it. The common good commands effort, including a sharing of burdens and occasional sacrifice. When people benefit from the common good but choose not to help sustain it (perhaps like a manufacturer’s operators ignoring their civic obligation and polluting air and water, even as they expect access themselves to clean resources), they freeload.

 

Merit will always matter, of course, but as only one variable in the calculus of opportunity. And so, to mitigate inequality of opportunity, the common good may call for a ‘distributive’ element. Distributive justice emphasises the allocation of shared outcomes and benefits to uplift the least-advantaged members of society, based on access, participation, proportionality, need, and impartiality.

 

Government policy and social conscience are both pivotal in ensuring that merit doesn’t recklessly eclipse or cancel equality of opportunity. Solutions for access to improved education, work, healthcare, legal justice, and myriad other necessities to establish a floor to quality of life are as much political as social. It is through such measures that we see how sincere society’s concerns really are — for the common good.

Monday 28 June 2021

Our Impulse Toward Anthropomorphism

‘Animal Farm’, as imagined in the 1954 film, actually described human politics.

Posted by Keith Tidman

 

The Caterpillar and Alice looked at each other for some time in silence: at last, the Caterpillar took the hookah out of its mouth and addressed her in a languid, sleepy voice.

    ‘Who are YOU?’ said the Caterpillar.

    This was not an encouraging opening for a conversation. Alice replied, rather shyly, ‘I--I hardly know, sir, just at present--at least I know who I WAS when I got up this morning, but I think I must have been changed several times since then.’

    ‘What do you mean by that?’ said the Caterpillar sternly. ‘Explain yourself!’

    ‘I can't explain MYSELF, I’m afraid, sir,’ said Alice, ‘because I’m not myself, you see.’

 

This passage from Alice’s Adventures in Wonderland, by Lewis Carroll, is just one example of the book’s rich portrayal of nonhumans — like the Caterpillar — all of whom exhibit humanlike properties and behaviours. It is a literary device that is also a form of anthropomorphism — from the Greek anthropos, meaning ‘human’, and morphe, meaning ‘form’ or ‘shape’. Humans have a long history of attributing both physical and mental human qualities to a wide array of things, ranging from animals to inanimate objects and gods. Such anthropomorphism has been common ever since the earliest mythologies.

 

Anthropomorphism has also been grounded in commonplace usage as metaphor. We ‘see’ agency, intentionality, understanding, thought, and humanlike conduct in all sorts of things: pets, cars, computers, tools, musical instruments, boats, favourite toys, and so forth. These are often items with which we develop a special rapport, and that we soon regard as possessing the deliberateness and quirkiness of human instinct. Items with which we ‘socialise’, such as through affectionate communication; to which we appoint names that express their character; that we blame for vexing us if, for example, they don’t work according to expectations; and that, in the case of gadgets, we might view as extensions of our own personhood.

 

Today, we’ve become accustomed to thinking of technology as having humanlike agency and features — and we behave accordingly. Common examples in our device-centric lives include assigning a human name to a car, robot, or ‘digital personal assistant’. Siri pops up here, Alexa there… This penchant has become all the more acute in light of the ‘cleverness’ of computers and artificial intelligence. We react to ‘capriciousness’ and ‘letdowns’: beseeching a car to start in the bitter cold, expressing anger toward a smart phone that fell and shattered, or imploring the electricity to come back on during a storm. 

 

Anthropomorphism has been deployed in art and literature throughout the ages to portray natural objects, such as animals and plants, as speaking, reasoning, feeling beings with human qualities. Even to have conscious minds. One aim is to turn the unfamiliar into the comfortably familiar; another to pique curiosity and achieve dramatic effect; another to build relatability; another to distinguish friend from foe; and yet another simply to explain natural phenomena.


Take George Orwell’s Animal Farm as another example. The 1945 book’s characters, though complexly nuanced, are animals representing people, or perhaps, to be more precise, political and social groups. The cast includes pigs, horses, dogs, a goat, sheep, a raven, and chickens, among others, with human language, emotions, intentions, personalities, and thoughts. The aim is to warn of the consolidation of power, denial of rights, manipulation of language, and exploitation and control of the masses associated with authoritarianism. The characters are empathetic and relatable in both positive and flawed ways. Pigs, so often portrayed negatively, indeed are the bad guys here too: they represent key members of the Soviet Union’s Bolshevik leadership. Napoleon represents Joseph Stalin, Snowball represents Leon Trotsky, and Squealer represents Vyacheslav Molotov. 

Children's stories, familiar to parents who have read to their young children, abound with simpler examples. Among the many favourites are the fairy tales by the Brothers Grimm, The Adventures of Pinocchio by Carlo Collodi, The Jungle Book by Rudyard Kipling, The Tale of Peter Rabbit by Beatrix Potter, and Winnie-the-Pooh by A.A. Milne. Such stories often have didactic purposes, to convey lessons about life, such as ethical choices, while remaining accessible, interpretable, and appealing to young minds. The use of animal characters aids this purpose.

 

More generally, too, the predisposition toward anthropomorphism undergirds some religions. Indeed, anthropomorphic gods appear in assorted artifacts, thousands of years old, unearthed by archeologists across the globe. This notion of gods possessing human attributes came to full expression among the ancient Greeks.

 

Their pantheon of deities exhibited qualities of both appearance and thought resembling those of everyday people: wrath, jealousy, lust, greed, vengeance, quarrelsomeness, and deception. Or they represented valued attributes like fertility, love, war, wisdom, power, and beauty. These qualities, both admirable and sometimes dreadful, make the gods oddly approachable, even if warily.

 

As to this, the eighteenth-century philosopher David Hume, in his wide-reaching reproach of religions, struggled to come to grips with the faithful lauding and symbolically putting deities on pedestals, all the while incongruously ascribing flawed human emotions to them.

 

In the fifth century BCE, the philosopher Xenophanes also recoiled from the practice of anthropomorphism, observing, ‘Mortals deem that the gods are begotten as they are [in their own likeness], and have clothes like theirs, and voice and form’. He underscored his point about partiality (the modelling of deities’ features on humans’ own) by observing that ‘Ethiopians say that their gods are snub-nosed and black; Thracians that they are pale and red-haired’. Xenophanes concluded that ‘the greatest God’ resembles people ‘neither in form nor in mind’.

 

That said, this penchant toward seeing a god in humans’ own likeness, moored to familiar humanlike qualities, rather than as an unmanifested, metaphysical abstraction whose reality lies forever and inalterably out of reach (whether by human imagination, definition, or description), has long been favoured by many societies.

 

We see it up close in Genesis, the first book of the Old Testament, where it says: ‘So God created humankind in His image, in the image of God He created them; male and female He created them’, as well as frequently elsewhere in the Bible. Such reductionism to human qualities, while still somehow allowing for God to be transcendent, makes it easier to rationalise and shed light on perplexing, even inexplicable, events in the world and in our lives.

 

In this way, anthropomorphism is a stratagem for navigating life. It reduces reality to accessible metaphors and reduces complexity to safe, easy-to-digest analogues, where intentions and causes become both more vivid and easier to make sense of. Above all, anthropomorphism is often how we arrive at empathy, affiliation, and understanding.

 

Monday 14 June 2021

Understanding Culture Helps Explain Why It Matters


André Malraux once wrote: “Culture is both the
heritage and the noblest possession of the world.”

Posted by Keith Tidman

What is culture? The answer is that culture is many things. ‘Culture is the sum of all forms of art, of love, and of thought’, as the French writer, André Malraux, defined the term. However, a little burrowing reveals that culture is even more than that. Culture expresses our way of life — from our heritage to our values and traditions. It defines us. It makes sense of the world. 

 

Culture measures the quality of life that society affords us, across sundry dimensions. It’s intended, through learning, experience, and discovery, to foster development and growth. Fundamentally, culture provides the means for members of a society to relate to and empathize with one another, and thereby to form a collective memory and, as importantly, to imagine a collective future to strive for.

 

Those ‘means’ promote an understanding of society’s rich assembly of norms and values: both shared and individual values, which provide the grist for our standards, beliefs, behaviours, and sense of belonging. Culture affords us a guide to socialisation. Culture is a living, anthropologic enterprise, meaning a story of human development and expression over the ages, which chronicles our mores, myths, stories, and narratives. And whatever culture chooses to value — intelligence, wisdom, creativity, relationships, valour, or other — gets rewarded.

 

Although ideas are at the core of culture, the most-visible and equally striking underpinning is physical constructs: cityscapes, statues, museums, monuments, places of worship, seats of government, boulevards, relics, artifacts, theatres, schools, archeological collections.

 

This durable, physical presence in our lives is every bit as key to self-identity, self-esteem, and representation of place as are the ideas-based standards we ascribe to everyday life. In the embodiment and vibrancy of those constructs we see us; in their design and purpose, they are mirrors on our humanity: a humanity that cuts across racial, ethnic, religious, social, and other demographic groups.

 

The culture that lives within us consists of the many core beliefs and customs that people hold close, remaining unchanged across generations. There’s a standard, values-based thread here that the group holds in high enough esteem to resist the corrosive effects of time. The result is a societal master plan for behavioural strategies. Such threads may be based in highly prized religious, historical, or moral traditions. 

 

Still other dogmas, however, don’t retain constancy; they become subject to critical reevaluation and negotiation, resulting in even deeply rooted ancestral practices being upended. Essentially, people contest and reassess what matters, especially as issues relate to values (abstract and concrete) and self-identity. The resulting changes in traditions and habits stem from discovery and learning, and take place either in sudden lurches or as part of a gentle progression. Either way, adaptation to this change is important to survival.

 

This inevitability and unpredictability of cultural change are underscored by the powerful influences of globalisation. Many factors combine to push global change: those that are economic, such as trade and business; those that are geopolitical, such as pacts and security and human rights; and those that accelerate change in technology, travel, and communications. These influences across porous national contours do not threaten cultural sameness per se, which is an occasional refrain, but do quicken the need for societies to adjust. 


As part of this global dynamic, culture’s instinct is to stabilise and routinise people’s lives, which reassures. Opinions, loyalties, apprehensions, ambitions, relationships, creeds, sense of self in time and place, and forms of idolatry become tested in the face of time, but they also comfort the mind. These amount to the collective social capital: the bedrock of what can rightly be called a community.

 

Language, too, is peculiarly adaptive to culture, a tool for varied expression: the reassuring yet unremarkable (everyday); the soaring and imaginative (creatively artistic); and the rigorously, demandingly precise (scientific and philosophic). In these regards, language is simultaneously adaptive to culture and adaptive of culture: a crucible on which the structure and usage of language remain pliant, to serve society’s bidding.

 

Accordingly, language is basic to framing our staple beliefs, values, and rituals — much of what matters to us, and helps to explain how culture enriches life. What we eat, what we wear, whom we marry, what music we listen to, what plays we attend, what locations we travel to, what we find humorous, what recreation we enjoy, what commemorations we observe — these and other ordinary lived experiences are the building blocks of cultural diversification.

 

Culture allows society to define its nature and ultimately prolong its wellbeing. Culture fills in the details of a larger shared reality about the world. We revere the multifaceted features of culture, all the while recognising that we must be prepared to reimagine and reform culture with the passage of time, as conditions shift. 


This evolutionary process brings vigour. To this extent, culture serves as the lifeblood of society.


 

Monday 17 May 2021

On ‘Conceptual Art’: Where Ideas Eclipse Aesthetics

Fifteen Plaster Surrogates


Posted by Keith Tidman

 

The influential French artist Marcel Duchamp once said, ‘I was interested in ideas — not merely in visual products’. As he put it, work ‘in the service of the mind’, not mere ‘retinal’ art intended to gratify visually. In this manner, conceptual art disrupted the art establishment’s long-held traditional expectation of artist as original creator of handmade objects: a painting, drawing, sculpture, or other. Normalised expectations about the roles of artist, art, observer, display venue, and society in conceptual art are defied; boundaries are both blurred and expandable.

 

To the point of ideas-centric artistic expression, the presence of agency and intentionality is all the more essential. The aim is to shift the artist’s focal point away from ‘making something from scratch’ to ‘manipulating the already-manufactured’. Hence the absence of the conceptual artist manipulating the raw materials that we might expect artists to conventionally use, like paint, stone, glass, clay, metal, fabric, wood, and so forth.


Duchamp is regarded as the pioneer and inspiration of conceptual art, whose early-twentieth-century foray in the field included a signed urinal, titled ‘Fountain’. It was a classic example of the avant-garde nature of this art form. Duchamp’s ‘ready-mades’, as they got to be called, became a fixture of conceptual art, up to the present day: where artists select, modify, and position ordinary, everyday manufactured objects as thought-provoking artistic expression. An art of the intellect, where objects are ancillary to concepts.


The heart of ‘conceptual art’ is ideas, inquiry, and intellectual deliberation rather than traditional beauty or aesthetic gratification. The objective is to urge observers to reflect cerebrally on the experience. Conceptual art can thus be seen as sharing a bond with other fields, like philosophy and the social sciences. But what does all that mean in practice? 


The central aspiration to spur observers to reflect upon ideas — not to engage exclusively in the ‘retinal’ experience mentioned above — necessitates such agency. The conceptual artist’s overriding intention is to focus on ‘meaning’ (something with high information content) rather than on the illustration of a scene that’s directed more traditionally toward triggering the senses (something with high experiential content). Art where, as Aristotle once put it, the ‘inward significance of things’ governs.

 

The meaning that the observer takes away from interaction with the art may be solely the observer’s own, or the artist’s, or the professional art critic’s, or the museum’s, or a hybrid of those, depending on how motivated the observer is. Either way, the art and the ideas conjured by it are linked. Conceptual art thereby sees its commission as philosophical, not just another commodity. Notable to this point, all artworks, including conceptual artworks, are created within a social and cultural context. This context exerts influence in defining and nourishing whatever philosophical theories the artwork is intended to convey.

 

What conceptual art underscores, then, is that no single core definition of all art applies to it. Even within any one category or movement of art, attempts to define it authoritatively can prove thorny: Examples of artwork, and the artists’ intent, may be quite dissimilar. Definitions, beyond generalizations, may be fuzzy at best; opinions about what does or doesn’t fit within the category may prove fractious. Conceptual art only magnifies these realities about attempts to craft a universal definition. 

 

A distinguishing factor is that this kind of art rests not only in its provocative appeal to the intellect, but sometimes even more directly in its engagement with issues that strike at the heart of social and political displeasure. Or, perhaps a little less adversarially, in the artist’s unapologetic desire to disruptively probe cultural values and norms. Yet, that’s not to say other visual art movements, including those whose primary tradition is aesthetics, don’t have cognitive, socio-political, or cultural appeal too, for some clearly do.

 

Indeed, one reward we seek from the experience of art broadly is not only to derive aesthetic joy (a matter of taste), but also to incite thought and to better understand the world and ourselves (a matter of judgment and rationalism). There is philosophical history to this view: Immanuel Kant, for example, also differentiated between aesthetic and logical judgment. To aim for the resulting understanding, the interpretation of conceptual artwork may result in sundry appropriate explanations of the art, or a single best explanation with others ranked in order behind it.

 

One thing in particular that we might praise about conceptual art is its unorthodox interpretation of, and verdict on, societal and cultural norms — quite often, rebelling against our philosophical keystones. In this way, conceptual art’s zealous pioneering temperament forces us to rethink the world we have constructed for ourselves.

 

If enlightenment arises from those second-guesses, then conceptual art has met its objective. And if, beyond illumination, action arises to yield social or cultural change, then that is all the better – in the eye of the conceptual artist.

 

Monday 12 April 2021

What Is Wisdom?

Posted by Keith Tidman

Wisdom is often offered as a person’s most-valuable quality, yet even ardent admirers might struggle to define or explain it. Some of philosophy’s giants, whether Confucius, Buddha, Plato, or Socrates, have concluded that wisdom is rooted not so much in what we do know, but in acknowledging what we don’t know — that is, in realising the extent of our own ignorance.

This humbleness about the limits of our knowledge and, further, ability to know — sometimes referred to by academics as ‘epistemic humility’ — seems a just metric as far as it goes. The term ‘epistemic’ referring to matters of knowledge: what we believe we know, and in the particular case of epistemic humility, the limitations of that knowledge. An important thread begins to appear here, which is the role of judgment in explaining the totality of wisdom.

To repudiate boundaries on our knowledge, or just as importantly on our ability to know, would amount to intellectual hubris. But epistemic humility, while arguably one among other qualities of a person we might characterise as wise in some limited capacity, is not nearly enough to explain all that wisdom is.

Consider, for illustration, those people who might assume they know things they do not, despite the supposed knowledge existing outside their proficiency. What I’d call ‘epistemic conceit’ — and again, a key matter of judgment. A case in point might be a neuroscientist, with intimate knowledge of the human brain’s physiology and functions, and maybe of consciousness, concluding that his deep understanding of neuroscience endows him with the critical-thinking skills to invest his money wisely. Or to offer cogent solutions to the mathematical challenges of the physics of ‘string theory’.

Similarly, what about those things falling within the scope of a person’s expertise: theories claimed at the time to be known with a degree of confidence, until the knowledge suddenly proved false? Take the case of the geocentric (Earth-centred) model of the universe, and secondly of the optical illusions leading to belief in the existence of so-called ‘Martian canals’. These are occasions of what we might call ‘epistemic unawareness’, to which we are humanly disposed no matter how wise.

Yet, while humbleness about the limits of our knowledge may provide a narrow window on wisdom, it is not definitive. Notably, there seems to be an inverse association between the number of factors claimed vital to fully explain wisdom, and how successfully the definition of wisdom may hold up as holes are poked into the many variables of the explanation under close scrutiny.

The breadth and depth of knowledge and experience are similarly insufficient to define wisdom in totality, despite people earnestly chronicling such claims through the course of history. After all, we can have little knowledge and experience and still be decidedly wise; and we can have vast knowledge and experience and still be decidedly unwise. To understand the difference between knowledge and wisdom, and to make life’s decisions accordingly, calls on judgment.

Indeed, even exceptionally wise people — regardless of their field of expertise — can and do on occasion harbour false beliefs and knowledge, which one might call ‘epistemic inaccuracy’. History’s equivalents of such intellectual giants as Plato, Sun Tzu, Da Vinci, Beethoven, Goethe, Shakespeare, Fermat, and Einstein are no exception to this encompassing rule. Einstein, for example, proposed that the universe is static, a view he was later disabused of by evidence that the universe is actually expanding (its acceleration was established only decades after his death).

In the same vein, Plato was seemingly wrong about the imperative to define something as an ‘ideal’ before we attempt to achieve it, potentially hobbling efforts to reach practical, real-world goals like implementing remedies for inequitable systems of justice. Meanwhile, Shakespeare made both significant historical and geographical mistakes. And Goethe, wearing his polymath hat, erroneously rejected the Newtonian theory of the decomposition of white light, suggesting instead that colours arise from the mixing of light and darkness.

More generally, how might we assess the wisdom of deep thinkers who lived centuries or even millennia ago, much of whose presumed knowledge has long since been disproved and displaced by new paradigms? I doubt those thinkers’ cogency, insightfulness, prescience, and persuasiveness at the time they lived are any less impressive because of what turned out to be the limited shelf life of their knowledge and insights.

Meanwhile, all this assumes we consider such exceptional intellects as not just exquisitely erudite, but also mindful of their own fallibility. As well as mindful of the uncertainty and contingency of what’s real and true in the world. Both assumptions about the conditions and requirement for critical mindfulness call for judgment, too.

Even a vast store of knowledge and experience, however, does not get us all the way to explaining the first principles of wisdom writ large as opposed to singular instances of acting wisely. A wise person’s knowledge and beliefs ought to match up with her behaviour and ways of living. Yet, that ingredient in what, say, minimally describes ‘a wise person’ likewise falls short of explaining full-on wisdom. Even highly knowledgeable people, if impulsive or incorrigibly immoral or amoral, may act unwisely; as in so many other ways, their putative lack of judgment here matters.

One fallback strategy that some philosophers, psychologists, and others resort to has been to lard the explanation of wisdom with an exhausting catalogue of qualities and descriptors in the hope of deflecting criticism of their definition of wisdom. What I’d call the ‘potpourri theory of wisdom’. Somehow, as the thinking misguidedly goes, the more descriptors or factors they shoehorn into the definition, the sounder the argument.

Alternatively, wisdom might be captured in just one word: judgment. Judgment in what one thinks, decides, opines, says, and does. By which is meant that wisdom entails discerning the presence of patterns, including correspondences and dissimilarities, which may challenge customary canons of reality. Then turning those patterns into understanding, and in step turning understanding into execution (behaviours) — with each fork in this process warranting judgment.

Apart from judgment, notably all other elements that we might imagine to partially explain wisdom — amount and accuracy of knowledge, humility of what one knows and can know, amount and nature of experience — are firmly contingent on each other. Co-dependence is inescapable. Judgment, on the other hand, is the only element that is dependent on no others, in a category of one. I propose that judgment is both enough and necessary to define wisdom.