
Monday, 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortés began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, which meant that he and his men would either conquer the Aztec Empire or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers.



By Keith Tidman

 

The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other: the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (Laches and Symposium), Plato has Socrates, who actually fought in the war, recall the battle, perhaps apocryphally, in ways that bear on combatants’ strategic choices.

 

One episode recalls a soldier on the front line, awaiting the enemy’s attack and pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks being pointlessly killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed anyway, no matter what it does.

 

The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle: fleeing is his ‘dominant strategy’, the choice that keeps him alive and unharmed whatever the enemy does. However, by the same line of reasoning, all the soldier’s fellow men-at-arms should decide to flee too, to avoid the inevitability of being cut down, rather than stand their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.

 

This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of game theory applied to the real world, too. In 1519, the Spanish conqueror Cortés landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who also greatly outnumbered his own force.

 

Instead of counting on individual soldiers’ courage or even group esprit de corps, Cortés scuttled his fleet. His strategy was to remove the temptation the ships offered his men to retreat rather than fight — leaving them no option but to pursue the Aztecs in a fight-or-die (rather than fight-or-flee) scenario. The calculus for each of Cortés’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, by brazenly scuttling his ships, Cortés turned the act into a metaphorical weapon: he wanted to demonstrate to the enemy that, for reasons the latter couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident about the upcoming battle.

 

It’s a striking historical example of one way in which game theory provides a means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at the best strategies in the framework of their own interests — business, economic, political, and so on — while factoring in what they believe to be the thinking (strategising) of opposite players, whose interests may align, differ, or be a blend of both.

 

The term, and the theory behind it, are much more recent, of course, developed formally in the first half of the twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of the field of economics. A few years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes.

 

Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate. These include ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute, and ‘constant-sum’ games, like poker, characterised as pure competition, entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, unfold over successive moves, and give at least one player more information to shape his own strategic choices than his competitors hold in hand.
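
To make the classification concrete, here is a minimal sketch in Python, using invented payoff numbers purely for illustration (the game names and the helper function are mine, not drawn from the essay or any library), that separates constant-sum games (pure conflict, of which zero-sum is the special case) from variable-sum games (a blend of shared and divergent interests) by checking whether every outcome’s payoffs add up to the same total.

```python
# Illustrative two-player payoff matrices (utilities for players A and B).
# The specific numbers are invented for demonstration only.

# A zero-sum game: whatever one player wins, the other loses.
matched_pennies = {
    ("heads", "heads"): (+1, -1),
    ("heads", "tails"): (-1, +1),
    ("tails", "heads"): (-1, +1),
    ("tails", "tails"): (+1, -1),
}

# A variable-sum game (stylised labour dispute): both sides can gain
# (settle) or both can lose (strike), so the total payoff varies.
labour_dispute = {
    ("settle", "settle"): (+2, +2),
    ("settle", "strike"): (-1, -2),
    ("strike", "settle"): (-2, -1),
    ("strike", "strike"): (-3, -3),
}

def is_constant_sum(game: dict) -> bool:
    """True if every outcome's payoffs add up to the same total
    (zero-sum games are the special case where that total is 0)."""
    totals = {sum(payoffs) for payoffs in game.values()}
    return len(totals) == 1

print(is_constant_sum(matched_pennies))  # True  -- pure conflict
print(is_constant_sum(labour_dispute))   # False -- mixed interests
```

In the stylised labour dispute, both sides gain by settling and both lose in a strike, which is exactly what makes it a variable-sum rather than a purely competitive game.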

 

The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, the social sciences, and war. Some features of competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and distinguishing behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.

 

Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to get to a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make them harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, sabre-rattling countries to head off runaway arms spending or outright conflict.

 

Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and possible plays, and in which progress is achieved via a string of alternating single moves. A contestant who anticipates a competitor’s optimal answer to their own move will experience more favourable outcomes than one who merely assumes the opponent will make a particular move with a particular probability.

 

Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each, however, is that the strategising is actively managed by the players rather than left to mere chance, which is why game theory goes several steps farther than probability theory alone.

 

The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.

 

Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 

 

The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and stay silent, with each serving one year. But if they choose to act selfishly in the expectation of outmanoeuvring the unsuspecting (presumed gullible) partner — which is to say, each prisoner pictures himself going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.
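
The same reasoning can be checked with a few lines of arithmetic. Below is a minimal Python sketch, using the prison terms given above (the function and variable names are illustrative, not from any particular library), that finds each prisoner’s best response and confirms that mutual confession is the game’s only equilibrium, even though mutual silence would leave both better off.

```python
# Prisoner's Dilemma: prison terms (in years) from the scenario above.
# Each entry maps (choice_A, choice_B) -> (years_A, years_B); fewer years is better.
PAYOFFS = {
    ("silent", "silent"):   (1, 1),    # neither confesses: one year each
    ("silent", "confess"):  (15, 0),   # A silent, B confesses: A gets fifteen, B goes free
    ("confess", "silent"):  (0, 15),   # A confesses, B silent: A goes free, B gets fifteen
    ("confess", "confess"): (5, 5),    # both confess: five years each
}

CHOICES = ("silent", "confess")

def best_response(player: int, other_choice: str) -> str:
    """Return the choice that minimises this player's sentence,
    given the other player's (fixed) choice."""
    def years(my_choice: str) -> int:
        pair = (my_choice, other_choice) if player == 0 else (other_choice, my_choice)
        return PAYOFFS[pair][player]
    return min(CHOICES, key=years)

# A pair of choices is an equilibrium if each is a best response to the
# other -- neither prisoner can shorten his sentence by deviating alone.
equilibria = [
    (a, b)
    for a in CHOICES
    for b in CHOICES
    if a == best_response(0, b) and b == best_response(1, a)
]

print(equilibria)  # [('confess', 'confess')]
```

Confessing is each prisoner’s dominant strategy: it yields a shorter sentence whatever the other does, which is precisely why the jointly better outcome of mutual silence is so fragile.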


Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they will come to realise there are occasions when their interests are better served by cooperating, the aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude their interests are best served by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the time comes for such help to be reciprocated.

 

Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplying effects of teamwork is broken. Society is worse off — as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish, and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people need a central government — operating with significant authority — to hold them accountable and punish accordingly, keeping citizens and their transactions on the up and up.

 

What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 

 

In dissecting real-world ‘games’, people have rationally intuited workable strategies, and those solutions suffice in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise such intuitions where the outcomes matter more, all the while taking into account the opponent and his anticipated strategy and extracting the highest benefit from choices based on one’s principles and preferences.

 

Monday, 19 October 2020

Is Technology ‘What Makes us Human’?


Posted by Keith Tidman

Technology and human behaviour have always been intertwined, defining us as the species we are. Today, technology’s ubiquity means that our lives’ ever-faster turn toward it, and its multiplicity of forms, have given it stealth-like properties. Increasingly, for many people, technology seems just to happen, and the human agency behind it appears veiled. Yet at the same time, perhaps counterintuitively, what appears to happen ‘behind the curtain’ hints that technology is fundamentally rooted in human nature.


Certainly, there is a delicate affinity between science and technology: the former uncovers how the world happens to be, while the latter converts those findings into artefacts. As science changes, technologists see opportunities: through invention, design, engineering, and application. This restlessly visionary process is not just incidental, I suggest, but intrinsic to us.

 

Our species comprises enthusiastic toolmakers. The coupling of science and technology has led to humanity’s rich array of transformative products, from particle accelerators to world-spanning aircraft, to magnetic-resonance imaging devices, to the space-station laboratory and universe-imaging space telescopes. The alliance has brought us gene-editing technologies and bioengineering, robotics driven by artificial intelligence, energy-generating solar panels, and multifunctional ‘smart phones’.

 

There’s an ‘everywhereness’ of many such devices in the world, reaching into our lives, increasingly creating a one-world community linked by mutual interdependence on many fronts. The role of toolmaker-cum-technologist has become integrated, metaphorically speaking, into our species’ biological motherboard. In this way, technology has become the tipping point of globalisation’s irrepressibility.

 

René Descartes went so far as to profess that science would enable humankind to ‘become the masters and possessors of nature’. An overreach, perhaps — the despoiling of aspects of nature, such as the air, land, and ecosystems, at our over-eager hands convinces us of that — but the trend line today points in the direction Descartes declared, just as electric light frees swaths of the world’s population from dependence on daylight.

 

Technology was supercharged by the science of the Newtonian world, which saw the universe as a machine, and its subsequent vaulting to the world of digits has had obvious magnifying effects. These will next become amplified as the world of machine learning takes centre stage. Yet human imagination and creativity have had a powerfully galvanising influence over the transformation.

 

Technology itself is morally impartial, and as such neither blameworthy nor praiseworthy. However ‘clever’ it becomes, for the foreseeable future technology does not have agency — or preference of any kind. On the horizon, though, much cleverer, even self-optimising technology might start to exhibit moral partiality. But as to responsibility and accountability, it is how users employ technology that gives rise to considerations of morality.

 

A car, for example, is a morally impartial technology. No nefarious intent can be fairly ascribed to either inventor or owner. However, as soon as someone chooses to exercise his agency and drive the car into a crowd with the intent to hurt, he turns the vehicle from its original purpose as an empowering tool for transportation into an empowering weapon of sorts. But no one wags their finger remonstratively at the car.

 

Technology influences our values and norms, prompting culture to morph — sometimes gradually, other times hurriedly. It’s what defines us, at least in large part, as human beings. At the same time, the incorporation and acceptance of technology is decidedly seductive. Witness the new Digital Revolution. Technology’s sway is hard to discount, and even harder to rebuff, especially once it has established roots deep into culture’s rich subsurface soil. But this sway can also be overstated.

 

To that last point, despite technology’s ubiquity, it has not entirely pulled the rug from under other values, like those around community, spirituality, integrity, loyalty, respect, leadership, generosity, and accountability, among others. Indeed, technology might be construed as serving as a multiplier of opportunities for development and improvement, empowering individuals, communities, and institutions alike. How the fifteenth-century printing press democratised access to knowledge, became a tool that spurred revolutions, and helped spark the Enlightenment was one instance of this influential effect.


Today, rockets satisfy our impulse to explore space; the anticipated advent of quantum computers promises dramatic advances in machine learning as well as the modeling of natural events and behaviours, unbreakable encryption, and the development of drugs; nanotechnology leads to the creation of revolutionary materials — and all the time the Internet increasingly connects the world in ways once beyond the imagination.

 

In this manner, there are cascading events that work both ways: human needs and wants drive technology; and technology drives human needs and wants. Technological change thus is a Janus figure with two faces: one looking toward the past, as we figure out what is important and which lessons to apply; and the other looking toward the future, as we innovate. Accordingly, both traditional and new values become expressed, more than just obliquely, by the technology we invent, in a cycle of generation and regeneration.

 

Despite technology’s occasional fails, few people are really prepared to live unconditionally with nature, strictly on nature’s terms. To do so remains a romanticised vision, worthy of the likes of the American idealist Henry David Thoreau. Rather, whether rightly or wrongly, we have more often seen it in our higher interest to make life yet a bit easier, a bit more palatable.

 

The philosopher Martin Heidegger declared, rather dismally, that we are relegated to ‘remain unfree and chained to technology’. But I think his is an unappreciative, undeservedly dismissive view of technology’s advantages across domains: agriculture, education, industry, medicine, business, sanitation, transportation, building, entertainment, materials, information, and communication, among others. These are domains where considerations like resource sustainability, ethics, and social justice have been key.

 

For me, in its reach, technology’s pulse has a sociocultural aspect, both shaping and drawing upon social, political, and cultural values. And to get the right balance among those values is a moral, not just a pragmatic, responsibility — one that requires being vigilant in making choices from among alternative priorities and goals. 

 

In innumerable ways, it is through technology, incubated in science, that civilisation has pushed back against the Hobbesian ‘nastiness and brutishness’ of human existence. That’s the record of history. In the meantime, we concede the paradox of complex technology championing a simplified, pleasanter life. And as such, our tool-making impulse toward technological solutions, despite occasional fails, will continue to animate what makes us deeply human.

 

Monday, 27 April 2020

The Curiosity of Creativity and Imagination

In Chinese mythology, dragon energy is creative. It is a magical energy, the fire of the soul itself. The dragon is the symbol of our power to transmute and create with imagination and purpose.
Posted by Keith Tidman

Most people would agree that ‘creativity’ is the facility to produce ideas, artefacts, and performances that are both original and valuable. ‘Original’ as in novel, where new ground is tilled; the qualifier ‘valuable’ is considered necessary in order to address the German philosopher Immanuel Kant’s point in The Critique of Judgment (1790) that:

‘Since there can also be original nonsense, its products [creativities] must at the same time be models, i.e., be exemplary’.

An example of lacking value or appropriateness in such context might be a meaningless sequence of words, or gibberish.

Kant believed that creativity pertains mostly to the fine arts, or matters of aesthetics — a narrower perspective than today’s inclusive view. He contended, for example, that genius could not be found in science, believing (mistakenly, I would argue) that science only ever adheres to preset methods, and does not allow for the exercise of imagination. He even excluded Isaac Newton from history’s pantheon of geniuses, despite respecting him as a great man of science.

Today, however, creativity’s reach extends along vastly broader lines, encompassing fields like business, economics, history, philosophy, language, physics, biology, mathematics, technology, psychology, and social, political, and organisational endeavours. Fields, that is, that lend themselves to being, at their creative best, illuminative, nontraditional, gestational, and transformational, open to abstract ideas that prompt the pondering of novel possibilities. The clue to the greatness of such endeavours is provided by the 16th/17th-century English philosopher Francis Bacon in the Novum Organum (1620), where he says that:

‘By far the greatest obstacle to the progress . . . and undertaking of new tasks and provinces therein is found in this — that men despair and think things impossible’.

Accordingly, such domains of human activity have been shown to involve the same explorative and generative functions associated with the brain’s large-scale neural networks. This is a paradigm of creative cognition that is flexible and multidimensional, one that calls upon several features:
  • an unrestricted vision of what’s possible,
  • ideation, 
  • images, 
  • intuitions,
  • thought experiments, 
  • what-if gaming, 
  • analogical reasoning, 
  • metaphors, 
  • counterfactual reasoning, 
  • inventive free play, 
  • hypotheses, 
  • knowledge reconceptualisation, 
  • and theory selection.
Collectively, these are the cognitive wellspring of creative attainment. To that extent, creativity appears fundamental to defining humanity — what shapes us, through which individual and collective expression occurs — and to humanity’s seemingly insatiable, untiring quest for progress and attainment.

Societies tend to applaud those who excel at original thought, both for its own sake and for how it advances human interests. That said, these principles are as relevant to the creative processes of everyday people as to those who eventually are recorded in the annals of history as geniuses. However, the creative process does not start out with the precise end (for example, a poem) and the precise means to getting there (for example, the approach to writing that poem) already known. Rather, both the means and the end product are discoverable only as the creative process unfolds.

Above all, imagination sits at the core of creativity. Imagination is representational, of circumstances not yet real but that nevertheless can evoke emotions and behaviours in people. The world of imagination is, of course, boundless in theory and often in practice, depending on the power of one’s mind to stretch. The American philosopher John Dewey spoke to this point, chalking up every major leap in science, as he boldly put it in The Quest for Certainty, to ‘a new audacity of the imagination’. Albert Einstein’s thoughts paralleled these sentiments; he declared in an interview in 1929 that ‘Imagination is more important than knowledge’. It is in imagination that new possibilities take shape. Accordingly and importantly, imagination yields ideas that surpass what’s already supposed.

Imagination is much more, however, than a mere synonym for creativity; otherwise the term would simply be redundant. Imagination, rather, is a tool: freeing up, even catalysing, creativity. To those ends, imagination entails visualisation (including thought experiments, engaged across disciplines) that enables a person to reach out for assorted, and changing, possibilities — of things, times, places, people, and ideas unrestricted by what’s presumed already experienced and known concerning subjective external reality. Additionally, ‘mirroring’ might occur in the imaginative process, where absent features of a mental scenario are filled in with analogues plucked from the external world around us. Ultimately, new knowledge and beliefs emerge, in a progressive loop of creation, validation, application, and re-imagination.

Imagination might revolve around diverse domains, like unconstrained creative thought, play, pretence, the arts, allegorical language, predictive possibilities, and imagery, among others. Imagination cannot, however, guarantee creative outcomes — nor can the role of intuition in human cognition — but imagination is essential (if not always sufficient) for creative results to happen. As explained by Kant, imagination has a ‘constitutive’ role in creativity, something demonstrated by a simple example offered by the 17th-century English philosopher Thomas Hobbes:

‘as when from the sight of a man at one time, and a horse at another, we conceive in our mind a Centaur’. 

Such imaginative, metaphorical playfulness is the stuff not only of absorbed, undaunted children, of course — though they are notably gifted with it in abundance — but also of freethinking adults: adults whose minds marvel at alternatives, whether starting from scratch (tabula rasa) or picking apart (divergence) and reassembling (convergence) presumed reality.

The complexities of imagination best nourish what one might call ‘purposeful creativity’ — where a person deliberately aims to achieve a broad, even if initially indeterminate outcome. Such imagining might happen either alone or with the involvement of other participants. With purposeful creativity, there’s agency and intentionality and autonomy, as is quintessentially the case of the best of thought experiments. It occasions deep immersion into the creative process. ‘Passive creativity’, on the other hand, is where someone has a spontaneous, unsought solution (a Eureka! moment) regarding a matter at hand.

Purposeful, or directed, creativity draws on both conscious and unconscious mechanisms. Passive creativity — with mind open to the unexpected — largely depends on unconscious mental apparatuses, though with the mind’s executive function not uncommonly collaboratively and additively ‘editing’ afterwards, in order to arrive at the final result. To be sure, either purposeful or passive creativity is capable of summoning remarkable insights.

The 6th-century BC Chinese spiritual philosopher Laozi perhaps most pithily described people’s capacity for creativity, and its sometimes-companion genius, with this figurative depiction in the Tao Te Ching, the context being to define ‘genius’ as the ability to see potential: ‘To see things in the seed’ — long before germination eventually makes those ‘things’ apparent, even obvious, to everyone else, and they become stitched into the fabric of society and culture.

Monday, 9 March 2020

Does Power Corrupt?

Mandell Creighton leading his group, ‘The Quadrilateral’, at Oxford University in 1865. (As seen in Louise Creighton’s book, The Life and Letters of Mandell Creighton.)
Posted by Keith Tidman

In 1887, the English historian, Lord John Dalberg-Acton, penned this cautionary maxim in a letter to Bishop Mandell Creighton: ‘Power tends to corrupt, and absolute power corrupts absolutely’. He concluded his missive by sounding this provocative note: ‘Great men are almost always bad men’. Which might lead one to reflect that indeed human history does seem to have been fuller of Neros and Attilas than Buddhas and Gandhis.

Perhaps not unexpectedly, the correlation between power and corruption was amply pointed out before Lord Acton, as evidenced by this 1770 observation by William Pitt the Elder, a former prime minister of Great Britain, in the House of Lords: ‘Unlimited power is apt to corrupt the minds of those who possess it’. To which, the eighteenth-century Irish statesman and philosopher Edmund Burke also seemed to agree:
‘The greater the power, the greater the abuse’.
History is of course replete with scoundrels and tyrants, and worse, rulers who have egregiously and enthusiastically abused power — often with malign, even cruel, brutal, and deadly, consequences. Situations where the Orwellian axiom that ‘the object of power is power’ prevails, with bad outcomes for the world. Indulgent perpetrators have ranged from heads of state like pharaohs to emperors, kings and queens, chancellors, prime ministers, presidents, chiefs, and popes. As well as people scattered throughout the rest of society, from corrupt leaders of industry to criminals to everyday citizens.

In some instances, it does seem that wielding great power has led susceptible people to change, in the process becoming corrupt or unkind in previously uncharacteristic ways. As to the psychology of that observation, a much-cited Stanford University experiment, conducted in 1971, suggested such an effect, though its findings come with caveats. The two-week experiment was intended to show the psychological effects of prison life on behaviour, using university students as pretend prison guards and prisoners in a mock prison on campus.

However, the quickly mounting, distressing maltreatment of ‘prisoners’ in the experiment by those in the authoritative role of guards — behaviour that included confiscating the prisoners’ clothes and requiring them to sleep on concrete flooring — led to the experiment being canceled after only six days. Was that the prospect of ‘abuse’ of which Burke warned us above? Was it the prospect of the ‘perpetual and restless desire of power after power’ of which the seventeenth-century English philosopher Thomas Hobbes warned us?

In many other cases, it has been observed that there seem to be predispositions toward corruption and abuse, which power serves to amplify rather than simply instil. This view seems favoured today. Power (the acquisition of authority) may prompt people to disregard social checks on their natural instincts and shed self-managing inhibitions. Power uncovers the real persona, especially in those whose instinctual character is malignly predisposed.

President Abraham Lincoln seemed to subscribe to this position regarding preexisting behavioural qualities, saying,
‘Nearly all men can stand adversity, but if you want to test a man’s character [true persona], give him power’.
Among people in leadership positions, in any number of social spheres, power can have two edges — good and bad. Decisions, intent, and outcomes matter. So, for example, ‘socialised power’ translates to the beneficial use of power and influence to inspire others toward the articulation and realisation of visions and missions, as well as the accomplishment of tangible goals. The idea being to benefit others: societal, political, corporate, economic, communal, spiritual. All this in a manner that, by definition, presupposes freedom as opposed to coerced implementation.

‘Personalised power’, on the other hand, reflects a focus on meeting one’s own expectations. If personalised power overshadows or excludes the common good, as sometimes seen among autocratic, self-absorbed, and unsympathetic national leaders, that exclusion is concerning, since it may do harm through bad policy. Yet, notably, these two forms of power can be compatible — they aren’t necessarily adversarial, nor does one necessarily force the other to beat a retreat. Jointly, in fact, they’re more likely force-multiplying.

One corollary (a cautionary note, perhaps) has to do with the ‘power paradox’. As a person acquires power through thoughtfulness, respect, and empathetic behaviours, and his or her influence accordingly flourishes, the risk emerges that the person begins to behave less in those constructive ways. Power may paradoxically spark growing self-centredness and less self-restraint. It’s potentially seductive; it can border on the Machiavellian doctrine of control over others, whereby decisions and behaviours become decreasingly framed around laudable principles of morality and, instead, turn to the exertion of coercive power and fear in place of inspiration.

In a turnabout, this diminution of compassionate behaviours — combined with an increase in impulsivity and self-absorption, increase in ethical shortcuts, and decrease in social intelligence — might steadily lessen the person’s power and influence. It returns to a set point. And unless they’re vigilant, leaders — in politics, business, and other venues — may focus less and less on the shareable common good.

As a matter of disputable attribution, Plato summed up the lessons that have come down through history on the matters discussed here, purportedly saying, in few words but without equivocation:
‘The measure of a man is what he does with power’.
Although he doesn’t seem to have actually ever said this as such, it certainly captures the lesson and message of his famous moral tale, about the magic ring of Gyges that confers the power of invisibility on its owner.

Sunday, 26 August 2018

Utopia: An End, or a Quest?

Posted by Keith Tidman

Detail from the original frontispiece for More’s book Utopia
In his 1516 book Utopia, the English statesman and writer Sir Thomas More summed up his imagined, idealised vision of an island society in this manner:

‘Nobody owns anything but everyone is rich — for what greater wealth can there be than cheerfulness, peace of mind, and freedom from anxiety?’

It is a laconic, even breezy counterpoint to the imperfect, in some cases heavily flawed, dystopian societies that actually populated the world — More’s utopia presents a republic confronting much that was wrong in the 16th century, and promulgates the uplifting notion that, despite humankind’s fallibilities, many ills of society have remedies.

Two other writers, Daniel Defoe and Jonathan Swift, whose Robinson Crusoe and Gulliver’s Travels respectively were popular 18th-century stories, took inspiration from the utopian principles of Thomas More.

The word ‘utopia’, coined by More, is from the Greek, meaning ‘no place’. Yet, it seems likely that More was also punning on a different word, pronounced identically, which applies more aptly to history’s descriptions of utopia — like that captured in Plato’s Republic (of ‘philosopher-kings’ fame), Tommaso Campanella’s City of the Sun, and Francis Bacon’s New Atlantis — that word being ‘eutopia’. The word is also of Greek origin, but signifies ‘good place’.

Some see utopias and eutopias alike as heralding the possibility of reforming present society toward some idealised end point — what Herbert Marcuse, the 20th-century German-American philosopher, referred to as ‘the end of utopia’, when ‘material and intellectual forces capable of achieving the transformation are technically present’.

However, long ago, Aristotle pushed back against the concept of utopia, regarding it as an unattainable figment — a chimera. Later political theorists have joined the criticism, notably More’s contemporary, the Italian political philosopher and statesman Niccolò Machiavelli. In The Prince, Machiavelli concurs with More about the cynicism and corruption seen in society generally and in politics specifically. As such, Machiavelli believed that the struggle for political supremacy is conflictual, necessarily lacking morality — the ‘effective truth of the thing’ in power politics. ‘Politics have no relation to morals’, he stated bluntly. Machiavelli thus did not brook what he regarded as illusory social orders like utopias.

Nonetheless, utopias are, in their intriguingly ambitious way, philosophical, sociological, and political thought experiments. They promulgate and proclaim norms that by implication reproachfully differ from all current societies. They are both inspirational and aspirational. As H.G. Wells noted in his 1905 novel, A Modern Utopia:

‘Our business here is to be Utopian, to make vivid and credible, if we can, first this facet and then that, of an imaginary whole and happy world’.

In that vein, many thinkers have taken their definitions to the next level, offering concrete prescriptions: deconstructing society’s shortcomings, and fleshing out blueprints for the improved social order envisioned. These blueprints may include multiple dimensions: political, economic, ecological, moral, educational, customs, judicial, familial, values, communal, philosophical, and scientific and technological, among others.

The 17th-century English philosopher Thomas Hobbes, however, paints a bleak dystopia, even in the highly reformed architecture of utopia:

‘For the laws of nature … of themselves, without the terror of some power, to cause them to be observed, are contrary to our natural passions’.

That is, given that the ‘natural condition of mankind’ is to incurably and quarrelsomely seek ever more power, the civilizing effects of laws and of governance are required to channel people’s energies and ambitions, and to constrain as necessary.

Yet legal constraints can reach too far: this kind of utopian theorizing lapses into a formula for authoritarianism. The German professor of literature Artur Blaim has summed up, as forthrightly as anyone, the suppressive nature associated with a political system of this kind as:

            ‘Utopias die, utopianism does not’.

The apprehension, then, is that even in a declared utopia, powerful leaders might coerce reluctant conformists to fit into a single mold. Dangerously patriarchal, given possibly counterfactual evidence of what’s best for most.

Certainly, there have been occurrences — ‘utopian’ cults, cabals, compounds, religions, and even nation states’ political systems — where heavy-handed pressure to step in line has been administered and violence has erupted. In these scenarios, repressive measures — to preserve society’s structural demands — are at the expense of freedom and liberal drives. As Bertrand de Jouvenel, a 20th-century French philosopher, counseled, if somewhat hyperbolically:
‘There is a tyranny in every utopia.’
So, might ‘utopia’ be defined differently than any single idealised end point, where ‘satisfied’ architects of utopia feel comfortable putting their tools down, hinting ‘it’s the end of history’?

Or instead, might utopianism be better characterised as a dynamic process of change — of a perpetual becoming (emergence) — directed in the search of ever-better conditions? The key to utopianism is thus its catalytic allure: the uninterrupted exploration, trying out, and readjustment of modalities and norms.

As the 20th-century German philosopher Ernst Bloch pointed out,

‘Expectation, hope and intention, directed towards the possibility which has not yet arrived, constitute not only a fundamental property of the human consciousness but also … a fundamental determination at the heart of objective reality itself’.

Monday, 24 July 2017

Identity: From Theseus's Paradox to the Singularity

Posted by Keith Tidman

A "replica" of an ancient Greek merchant ship based on the remains of a ship that wrecked about 2,500 years ago.  With acknowledgements to Donald Hart Keith.
As the legend goes, Theseus was an imposing Greek hero, who consolidated power and became the mythical king of Athens. Along the way, he awed everyone by leading victorious military campaigns. The Athenians honoured Theseus by displaying his ship in the Athenian harbour. As the decades rolled by, parts of the ship rotted. To preserve the memorial, each time a plank decayed, the Athenians replaced it with a new plank of the same kind of wood. First one plank, then several, then many, then all.

As parts of the ship were replaced, at what point was it no longer the ‘ship of Theseus’? Or did the ship retain its unique (undiminished) identity the entire time, no matter how many planks were replaced? Do the answers to those two questions change if the old planks, which had been warehoused rather than disposed of, were later reassembled into the ship? Which, then, is the legendary ‘ship of Theseus’, deserving of reverence — the ship whose planks had been replaced over the years, or the ship reassembled from the stored rotten planks, or neither? The Greek biographer and philosopher Plutarch elaborated on the paradox in the first century in 'Life of Theseus'.

At the core of these questions about a mythical ship is the matter of ‘identity’. Such as how to define ‘an object’; whether an object is limited to the sum of people’s experience of it; whether an object can in some manner stay the same, regardless of the (macro or micro) changes it undergoes; whether the same rules regarding identity apply to all objects, or if there are exceptions; whether gradual and emergent, rather than immediate, change makes a difference in identity; and so forth.

The seventeenth-century English political philosopher Thomas Hobbes weighed in on the conundrum, asking, ‘Which of the two existing ships is numerically one and the same ship as Theseus’s original ship?’ He went on to offer this take on the matter:
‘If some part of the first material has been removed or another part has been added, that ship will be another being, or another body. For, there cannot be a body “the same in number” whose parts are not all the same, because all a body’s parts, taken collectively, are the same as the whole.’
The discussion is not, of course, confined to Theseus’s ship. All physical objects are subject to change over time: suns (stars), trees, houses, cats, rugs, hammers, engines, DNA, the Andromeda galaxy, monuments, icebergs, oceans. So are differently categorised entities, such as societies, institutions, and organisations. And so are people’s bodies, which change with age of course — and, more particularly, whose cells get replaced, in their entirety, roughly every seven years throughout one’s life. Yet, we observe that amidst such change — even radical or wholesale change — the names of things typically don’t change; we don’t start calling them something else. (Hobbes is still Hobbes seven years later, despite cellular replacement.)

The examples abound, as do the issues of identity. It was what led the ancient Greek philosopher Heraclitus to famously question whether, in light of continuous change, one can ‘step into the same river twice’—answering that it’s ‘not the same river and he’s not the same man’. And it’s what led Hobbes, in the case of the human body, to conveniently switch from the ‘same parts’ principle he had applied to Theseus’s ship, saying regarding people, ‘because of the unbroken nature of the flux by which matter decays and is replaced, he is always the same man’. (Or woman. Or child.) By extension of this principle, objects like the sun, though changing — emitting energy through nuclear fusion and undergoing cycles — have what might be called a core ‘persistence’, even as aspects of their form change.
‘If the same substance which thinks be changed,
it can be the same person; or, remaining
the same, it can be a different person?’ — John Locke
But people, especially, are self-evidently more than just bodies. They’re also identified by their minds — knowledge, memories, creative instincts, intentions, wants, likes and dislikes, sense of self, sense of others, sense of time, dreams, curiosity, perceptions, imagination, spirituality, hopes, acquisitiveness, relationships, values, and all the rest. This aspect of ‘personal identity’, which John Locke encapsulates under the label ‘consciousness’ (self) and which undergoes continuous change, underpins the identity of a person over time — what has been referred to as ‘diachronic’ personal identity. In contrast, the body and mind, at any single moment in time, have been referred to as ‘synchronic’ personal identity. We remain aware of both states — continuous change and single moments — in turns (that is, the mind rapidly switching back and forth, analogous to what happens while supposedly 'multitasking'), depending on the circumstance.

The philosophical context surrounding personal identity — what’s essential and sufficient for personhood and identity — relates to today’s several variants of the so-called ‘singularity’, spurring modern-day paradoxes and thought experiments. For example, the intervention of humans to spur biological evolution — through neuroscience and artificial intelligence — beyond current physical and cognitive limitations is one way to express the ‘singularity’. One might choose to replace organs and other parts of the body — the way the planks of Theseus’s ship were replaced — with non-biological components and to install brain enhancements that make heightened intelligence (even what’s been dubbed ultraintelligence) possible. This unfolding may be continuous, undergoing a so-called phase transition.

The futurologist, Ray Kurzweil, has observed, ‘We're going to become increasingly non-biological’ — attaining a tipping point ‘where the non-biological part dominates and the biological part is not important any more’. The process entails the (re)engineering of descendants, where each milestone of change stretches the natural features of human biology. It’s where the identity conundrum is revisited, with an affirmative nod to the belief that mind and body lend themselves to major enhancement. Since such a process would occur gradually and continuously, rather than just in one fell swoop (momentary), it would fall under the rubric of ‘diachronic’ change. There’s persistence, according to which personhood — the same person — remains despite the incremental change.

In that same manner, some blend of neuroscience, artificial intelligence, heuristics, the biological sciences, and transformative, leading-edge technology, with influences from disciplines like philosophy and the social sciences, may allow a future generation to ‘upload the mind’ — scanning and mapping the mind’s salient features — from a person to another substrate. That other substrate may be biological or a many-orders-of-magnitude-more-powerful (such as quantum) computer. The uploaded mind — ‘whole-brain emulation’ — may preserve, indistinguishably, the consciousness and personal identity of the person from whom the mind came. ‘Captured’, in this term’s most benign sense, from the activities of the brain’s tens of billions of neurons and trillions of synapses.

‘Even in a different body, you’d still be you
if you had the same beliefs, the same worldview,
and the same memories.’ — Daniel Dennett
If the process can happen once, it can happen multiple times, for the same person. In that case, reflecting back on Theseus’s ship and notions of personal identity, which intuitively is the real person? Just the original? Just the first upload? The original and the first upload? The original and all the uploads? None of the uploads? How would ‘obsolescence’ fit in, or not fit in? The terms ‘person’ and ‘identity’ will certainly need to be revised, beyond the definitions already raised by philosophers through history, to reflect the new realities presented to us by rapid invention and reinvention.

Concomitantly, many issues will bubble to the surface regarding social, ethical, regulatory, legal, spiritual, and other considerations in a world of emulated (duplicated) personhood. Such as: what might be the new ethical universe that society must make sense of, and what may be the (ever-shifting) constraints; whether the original person and emulated person could claim equal rights; whether any one person (the original or emulation) could choose to die at some point; what changes society might confront, such as inequities in opportunity and shifting centers of power; what institutions might be necessary to settle the questions and manage the process in order to minimise disruption; and so forth, all the while venturing increasingly into a curiously untested zone.

The possibilities are thorny, as well as hard to anticipate in their entirety; many broad contours are apparent, with specificity to emerge at its own pace. The possibilities will become increasingly apparent as new capabilities arise (building on one another) and as society is therefore obliged, by the press of circumstances, to weigh the what and how-to — as well as the ‘ought’, of course. That qualified level of predictive certainty is not unexpected, after all: given sluggish change in the Medieval Period, our twelfth-century forebears, for example, had no problem anticipating what thirteenth-century life might offer. At that time in history, social change was more in line with the slow, plank-by-plank changes to Theseus’s ship. Today, the new dynamic of what one might call precocious change — combined with increasingly successful, productive, leveraged alliances among the various disciplines — makes gazing into the twenty-second century an unprecedentedly challenging briar patch.

New paradoxes surrounding humanity in the context of change, and thus of identity (who and what I am and will become), must certainly arise. At the very least, amidst startling, transformative self-reinvention, the question of what is the bedrock of personal identity will be paramount.

Monday, 12 September 2016

Six Imperatives for Saving Syria?

Posted by Keith Tidman
With many powers exercising their claims in Syria—and demonising one another—the conflict long ago morphed from a civil war to a Hobbesian battleground for international self-interests. And as Thomas Hobbes warned, life for many in Syria is 'nasty, brutish, and short'. 
The dynamics have turned toward ever-more bloodshed, with rivals—kindled by neighbouring and remote states alike—entangled in a brutal, interventionist struggle for preeminence. The outcomes have included civilian casualties, families sundered, and an outpouring of millions of refugees funnelling into other countries, near and far. The economic and security stressors are being felt in the Middle East, Europe, Africa, and elsewhere: exacerbating localised conflicts, rendering borders porous, spurring radicalisation, and destabilising social order.

The war in Syria continues to roil. Stunning images of dazed, bloodied children pulled barely alive from the rubble following air strikes have virally circumnavigated the world time and again. Eyes gazing upon such stark images have welled up. Outrage has been stoked. So, five years since the carnage began, and more than a quarter of a million deaths later, what—in an admittedly ideal world—are the imperatives for Syria? From a philosophical vantage point, there are at least six—both strategic and moral.
Imperative One - is for the powers exercising the greatest leverage—including Iran, Lebanon, the Gulf coast states, Russia, Western Europe, the United States—to agree to bring the worst of the fighting and cyclical escalation to an end. This imperative calls not for yet another disingenuous, short-lived ceasefire in an ongoing series. Rather, without key factions fueling the fighting—with money, arms, logistical support, fresh foreign fighters, tactical direction, leadership on the battlefield, and the like—the flames will scale back to a more manageable intensity. That, in turn, will feed oxygen to efforts not only to shift the course of events in the towns but more crucially to hammer out a longer-lasting, sustainable solution.

Imperative Two -  is to disentangle the flailing limbs of the rival groups that have spent the last half-decade killing each other and pursuing gains in territory and influence—where one nation’s ‘unsavory’ antagonist is another nation’s ally. The message must be that no one’s interests have any hope of prevailing, permanently, in today’s unremitting carnage. Messaging, though necessary, isn’t sufficient, however. Those countries whose proxies are on the front lines must retract their own talons while also reining in their surrogates. Proxy fighting—the worst of a raging hot war, along with a Mideast cold war of hegemons ham-fistedly competing over ideas and power—is cruel cynicism.

Imperative Three - is for power centres like the United Nations, the Arab League, the United States, Russia, and the European Union, as well as nongovernmental organisations like Médecins Sans Frontières and the Red Crescent, to mobilise in order to inject humanitarian relief into Syria. That means doctors, medicine, food, shelter, clothing, and other necessities—including expertise—to allow for at least rudimentarily livable conditions and some semblance of normalcy, as well as to pave the way for more-robust civil affairs. Essential will be countries and organisations avoiding working at cross-purposes—all the while staying the course with sustainable, not just episodic, infusions of resources. With visibly improving conditions will come the provision in shortest supply: hope.

Imperative Four - is for these same power centres not just to arrange for rival groups to ‘stake their flag’ and settle in place, but to disgorge from Syria those non-native elements—foreign interlopers—that embarked on pursuing their own imperial gains at the Syrian people’s expense. The sponsors of these groups—Iran, Lebanon, Turkey, the Kurds, Gulf Cooperation Council members, Russia, the United States, and others—must operate on the basis that ideology, tribalism, sectarianism, spheres of influence, and imperialism are not zero-sum and, moreover, must not come at the Syrian population’s expense.

Imperative Five - is for the global community to begin the massive undertaking of repairing what now lies as rubble. Those repairs to infrastructure—buildings, utilities, services—will require resources that can be met only through collective action. Continued fighting will disincline countries to contribute to the kitty, so first achieving imperative number one is essential. ‘Aid fatigue’ will set in if infrastructural fixes get protracted, if there’s unmitigated corruption, and if gains are destroyed—leading to disenchantment and the mission petering out. Reconstitution of the country will therefore have to happen on a grand scale, with all aware of the consequences of diminishing commitment and exigencies at home and abroad competing for attention. One country’s aid will likely provide a fillip to others, leading to a critical mass of support.

Imperative Six - is to settle on a system of governance for Syria, including leadership. The model doesn’t have to be overtly liberal democracy. Rather, some variant of a ‘benign (enlightened) autocracy’ may suffice, at least in the immediate term, with parties pledging to work toward an enduring system to serve the population’s interests. The eventual system will require a broad-brush makeover: political representation, public debate, formal social contract, human rights, policymaking (domestic, foreign), resource management, rule of law, the environment, civil society, institutional formation . . . the gamut.
The overarching need, however, is actionable ends to set history ‘right’. As Confucius, who himself lived in a time of wars, observed, 'To see the right and not to do it is cowardice.' At the very least, to see the right and not to do it is moral bankruptcy. To see the right and not to do it is a corruption of the obligation of nations to set people’s welfare right—an endeavour paradoxically both mundane and noble. To see the right and not to do it is a corruption of the foundational expectation of Syrian families to go about their lives in the absence of tyranny. Idealism, perhaps—but scaling back the 'continued fear and danger of violent death', described by Hobbes, should be at the core of Syrians’ manifest destiny.

Monday, 18 April 2016

Is Political Science Science?

Leviathan frontispiece by Abraham Bosse
Posted by Bohdana Kurylo
Is political science science? The political philosophy of Thomas Hobbes would seem to present us with a test case par excellence. Claiming that his most influential work, Leviathan, was through and through scientific, Hobbes wrote, ‘Science is the knowledge of consequences, and dependence of one fact upon another.’  His work, he judged, was founded upon ‘geometrical and physical first principles of matter and motion’, combined with logical deductions of the human sciences, psychological and political.
Through his scientific researches, Hobbes came to hold a pessimistic view of human nature in its natural condition, which he called the ‘state of nature’, the ‘Natural Condition of Mankind’: a ruinous state of conflict. Paradoxically, he considered that such conflict arose from equality and rationality. With resources limited, a rational man would try to take as much as possible for himself. At the same time, others would need to do the same, as a defensive measure. The likeliest outcome was a ‘war of every man against every man’, where law and justice have no place. Such a life, he famously wrote, would be ‘solitary, poor, nasty, brutish, and short’.

Hobbes proposed, therefore, a contract between the people and the Sovereign, as a means of creating peace by imposing a single, sovereign rule. It is the fear of punishment, he wrote, that preserves peace and unity, and ties people to the ‘performance of their Covenants’. Following his logic, individuals are likely to reach the conclusion that a social contract is the best alternative to their natural condition, so surrendering their liberties and rights.

On the face of it, Hobbes' logic seems compelling, his deductions persuasive, his arguments admirable. Nonetheless, for a number of reasons, it is questionable whether his analysis of human nature was truly scientific.