
Monday 16 September 2024

Plato’s Allegory of the Cave and the Deception of Perception



By Keith Tidman

 

It is a tribute to the timelessness of Plato’s ideas that his philosophical stories still echo powerfully in the contemporary world. People still live in the flickering shadows of Plato’s cave, mistaking myths for reality and conjecture for evidence. We are metaphorically bound, for example, to watch and assent to the shadows cast by social media, which shape our notions of reality: an increasingly subjective and contested reality, formed by the passing of gossamer shadows flung onto the wall (today, the computer screen) by puppeteers. Today, there’s clearly a risk of deception by partial perception, of information exploited for political ends.


It was in his most-read work, The Republic, written about 380 BCE, that Plato recounted an exchange between Glaucon and Socrates, sometimes called the Allegory of the Cave. Socrates describes how in this cave, seated in a line, are prisoners who have been there since birth, entirely cut off from the outside world. Tightly restrained by chains so that they cannot move, the prisoners can do nothing but stare at the cave wall in front of them; that is the limit of their lived experience.

 

What they cannot know is that just behind where they sit is a parapet and fire, in front of which other people carry variously shaped objects, and it is these that cast the strange shadows. The shadows on the wall, and not the fire or the objects themselves, are the prisoners’ only visible reality — the only world they can know. Of the causes of the moving shadows, of the distinction between the abstract and the real, they can know nothing. 

 

Plato asks us to consider what might happen if one of the prisoners is then unchained and forced, reluctantly, to leave the cave and step into the glaring light of the sun. At first, he says, the brightness would obscure the freed prisoner’s vision, so that he can see only shadows and reflections, much as in the cave. However, after a while, his eyes would grow accustomed to the light, and eventually he would be able to see other people and the objects themselves, not just their shadows. As the former prisoner adjusts, he begins to believe the outside world offers what he construes as a very different, even better reality than the shadows in the dusky cave.

 

But now suppose, Plato asks, that this prisoner decides to return to the cave to share his experience — to try to convince the other prisoners to follow his lead to the sunlight and the ‘forms’ of the outside world. Would they willingly seize the chance? Quite the contrary, Plato warns. Far from welcoming the opportunity to see more clearly, he thinks the other prisoners would defiantly resist, believing the outside world to be harmful and dangerous, and not wanting to leave the security of their cave and the shadows they have grown so familiar with, even so expert at interpreting.

 

The allegory of the cave is part of Plato’s larger theory of knowledge — of ideals and forms. The cave and shadows represent how people usually live, often ensconced within the one reality they’re comfortable with and assume to be of greatest good, all the while confronted by having to interpret, adjust to, and live in a wholly dissimilar world. The so-called truth that people meet with is shaped by the contextual circumstances they happen to have been exposed to (their upbringing, education, and experiences, for example), in turn swaying their interpretations, judgments, beliefs, and norms, all of them often cherished. Change requires overcoming inertia and myopia, which proves arduous, given prevailing human nature.

 

People may wonder which is in fact the most authentic reality. And they may wonder how they might ultimately overcome trepidation, choosing whether or not to turn their backs on their former reality and to understand and embrace the alternative truth. A process that perhaps happens again and again. The undertaking, or journey, from one state of consciousness to another entails conflict and requires parsing the differences between one truth and another, to be edified about the supposed higher levels of reality and to overcome what one might call the deception of perception: the unreal world of blurry appearances.

 

Some two and a half millennia after Plato crafted his allegory of the cave, popular culture has borrowed the core storyline, in literature as well as movies. For example, the plots of both Fahrenheit 451, by Ray Bradbury, and The Country of the Blind, by H.G. Wells, concern eventual enlightened awareness, where key characters come to grips with the shallowness of the world with which they’re familiar every day.


Similarly, in the movie The Matrix, the lead character, Neo, is asked to make a difficult choice: to either take a blue pill and continue living his current existence of comfort but obscurity and ignorance, or take a red pill and learn the hard truth. He opts for the red pill, and in doing so becomes aware that the world he has been living in is merely a contrivance, a computer-generated simulation of reality intended to pacify people.

 

Or take the movie The Truman Show. In this, the lead character, Truman Burbank, lives a suburban, family life as an insurance agent for some thirty years, before the illusion starts to crumble and he suspects his family is made up of actors and everything else is counterfeit. It even turns out that he is living on a set that comprises several thousand hidden cameras producing a TV show for the entertainment of spectators worldwide. It is all a duplicitous manipulation of reality — a deception of perception, again — creating a struggle for freedom. And in this movie, after increasingly questioning the unfathomable goings-on around him, Truman (like the prisoner who leaves Plato’s cave) manages to escape the TV set and to enter the real world.

 

Perhaps, then, what is most remarkable about the Allegory of the Cave is that nothing about it anchors it exclusively to the ancient world in which it was first imagined. Instead, Plato’s cave is, if anything, even more pertinent in the technological world of today, split as it is between spectral appearances and physical reality. Being surrounded today by the illusory shadows of digital technology, our attention guided by algorithm-steered, belief-reinforcing social media, strikes a warning note: that today, more than ever, it is our responsibility continually to question our assumptions.

 

Monday 28 August 2023

A Word to the Wise

Philosophy is a sailboat that deftly catches the fair breeze…


By Andrew Porter


We live in a time in which most people, were you to ask them ‘Do you think you’re wise?’, would look askance or confused and not answer straightforwardly. Nothing in their habits of mind has prepared them for the question. You might hear answers such as ‘I’m wise about some things’ or ‘I’m pretty savvy when it comes to handling people’, but your question would remain unanswered.

Maybe it’s the circles I run in, but it seems that there's little to no hankering for wisdom; it is not prevalent. It is as if many people feel that moral relativism – the common zeitgeist – has taken them off the hook and they are relieved. But choices have a way of illuminating obvious help or harm. There’s really no getting off the hook.

Wisdom can be encapsulated in a reasoned decision by an individual, but it is always in tune with larger reason. One of the great things about Plato as a philosopher is that he walks around and into the thick of the question of wisdom with boldness and perspective. A champion of reason, he grounds human morality in virtue, but emphasises that it is part of a ‘virtue’ of reality: the nature and function of the ontologically real is to be good, true, and beautiful.

This immersion of humankind and personal choices in a larger environment seems a crucial lesson for our times. This odd and ungrounded era we live in does not have a ready and able moral vocabulary; it, more often than not, leaves moral nuance like an abandoned shopping cart in the woods. Why is Plato one of the best voices to re-energise as his philosophy applies to current-day issues and angst?

One of the problems of individuals and institutions in contemporary times is that they think they are wise without ever examining how and if that’s true. So often, they – whether you yourself, a spouse, a boss, politicians, or fellow citizens – assume a virtue they own not. This is exactly what Socrates, in Plato's hands, addresses. What are some of the problems in the world open to reform or transformation?

Certainly, social justice issues continue to rear their head and undermine an equitable society. Entrenched power systems and attendant attitudes are not only slow to respond, but display no moral understanding. Today, it seems there is a raft of problems, from psychological to philosophical, and the consequences turn dire. At the root of all actual and potential catastrophes, it seems, is a lack of that one thing that has been waylaid, discarded, and ignored: wisdom.

Plato crafted his philosophy about soul and virtue, justice and character, in alignment with his metaphysics. This is its genius, making a harmony of inner and outer.

In the Republic, Plato himself oscillates between saying that a philosopher-king, the only assurance the city would be happy and just, would be a lover of wisdom and saying that he would be actually wise. In our time, the problem is a lack of desire to find or inculcate wisdom. Societies have, in general, hamstrung themselves. We do not have ready tools for caring about and valuing wisdom, however far off it may be. We do not, to any cogent degree, educate children to be philosopher-kings of their own lives.

Western societies, and perhaps Eastern ones as well, have not increased in wisdom because they have abandoned the pursuit. The task is left unattended. The current problem is not that the world (or smaller entities such as companies, schools, and individuals) cannot find a truly wise person; it is that so-called civilisation acts wilfully against finding, or even thinking about finding, such a person. It is a mobile home that’s been put up on blocks.

Philosophy can inculcate the kind of consciousness that the twentieth-century Swiss philosopher, Jean Gebser, called integral reality, which perceives a truth that, as he says, ‘transluces’ both the world and humankind (in the sense of shining light through). In short, philosophy holds the promise of educating. It is not a crazy old man on his porch, waving his cane to tell the traffic to slow down; rather, philosophy is a sailboat that deftly catches the fair breeze – and moves us forward.

Monday 12 June 2023

The Euthyphro Dilemma: What Makes Something Moral?

The sixteenth-century nun and mystic, Saint Teresa. In her autobiography, she wrote that she was very fond of St. Augustine … for he was a sinner too

By Keith Tidman  

Consider this: ‘Is the pious being loved by the gods because it is pious, or is it pious because it is being loved by the gods?’ (Plato, Euthyphro)


Plato has Socrates asking just this of the Athenian prophet Euthyphro in one of his most famous dialogues. The characteristically riddlesome inquiry became known as the Euthyphro dilemma. Another way to frame the issue is to flip the question around: Is an action wrong because the gods forbid it, or do the gods forbid it because it is wrong? This version presents what is often referred to as the ‘two horns’ of the dilemma.

 

Put another way, if what’s morally good or bad is simply whatever the gods arbitrarily decree it to be — the position called divine command theory (or divine fiat), to which Euthyphro subscribed — then the gods may be presumed to have agency and omnipotence over these and other matters. However, if, instead, the gods simply point to what’s already, independently good or bad, then there must be a source of moral judgment that transcends the gods, leaving that other, higher source of moral absolutism yet to be explained millennia later.

 

In the ancient world the gods notoriously quarreled with one another, engaging in scrappy tiffs over concerns about power, authority, ambition, influence, and jealousy, on occasion fueled by unabashed hubris. Disunity and disputation were the order of the day. Sometimes making for scandalous recounting, these quarrels comprised the stuff of modern students’ soap-opera-styled mythological entertainment. Yet, even when there is only one god, disagreements over orthodoxy and morality occur aplenty. The challenge mounted by the dilemma is as important to today’s world of a generally monotheistic god as it was to the polytheistic predispositions of ancient Athens. The medieval theologians’ explanations are not enough to persuade:


‘Since good as perceived by the intellect is the object of the will, it is impossible for God to will anything but what His wisdom approves. This is, as it were, His law of justice, in accordance with which His will is right and just. Hence, what He does according to His will He does justly: as we do justly when we do according to the law. But whereas law comes to us from some higher power, God is a law unto Himself’ (St. Thomas Aquinas, Summa Theologica, First Part, Question 21, first article, reply to Obj. 2).


In the seventeenth century, Gottfried Leibniz offered a firm challenge to ‘divine command theory’, asking whether right and wrong can be known only by divine revelation. He suggested, rather, that there ought to be reasons, apart from religious tradition alone, why particular behaviour is moral or immoral:

 

‘In saying that things are not good by any rule of goodness, but sheerly by the will of God, it seems to me that one destroys, without realising it, all the love of God and all his glory. For why praise him for what he has done if he would be equally praiseworthy in doing exactly the contrary?’ (Discourse on Metaphysics, 1686).

 

Meantime, today’s monotheistic world religions offer, among other holy texts, the Bible, the Qur’an, and the Torah, bearing the moral and legal decrees professed to be handed down by God. But despite the dissimilarity of the situations — the ancient world of Greek deities and modern monotheism (as well as some of today’s polytheistic practices) — both serve as examples of ‘divine command theory’. That is, what’s deemed pious is presumed to be the case precisely because God chooses to love it, in line with the theory. That pious something or other is not independently sitting adrift, noncontingently virtuous in its own right, with nothing transcendentally making it so.

 

This presupposes that God commands only what is good. It also presupposes that, for example, things like the giving of charity, the avoidance of adultery, and refraining from stealing, murdering, and making ‘graven images’ have their truth value from being morally good if, and only if, God loves these and other commandments. The complete taxonomy (or classification scheme) of edicts is aimed at placing guardrails on human behaviour in the expectation of a nobler, more sanctified world. But God loving what’s morally good for its own sake — that is, apart from God making it so — clearly denies ‘divine command theory’.

 

For, if the pious is loved by the gods because it is pious, which is one of the interpretations offered by Plato (through the mouth of Socrates) in challenging Euthyphro’s thinking, then it opens the door to an authority higher than God. Where matters of morality may exist outside of God’s reach, suggesting something other than God being all-powerful. Such a scenario pushes back against traditionally Abrahamic (monotheist) conceptualisations.

 

Yet, whether the situation calls for a single almighty God or a yet greater power of some indescribable sort, the philosopher Thomas Hobbes, who, like St. Thomas Aquinas and Averroës, believed that God commands only what is good, argued that God’s laws must conform to ‘natural reason’. Hobbes’s point makes for an essential truism, especially if the universe is to have rhyme and reason. This holds true even if the governing forces of natural law and of objective morality are not entirely understood or, for that matter, not compressible into a singularly encompassing ‘theory of all’.

 

Because of the principles of ‘divine command theory’, some people contend the necessary takeaway is that there can be no ethics in the absence of God to judge something as pious. In fact, Fyodor Dostoyevsky, in The Brothers Karamazov, presumptuously declared that ‘if God does not exist, everything is permitted’. Surely not so; you don’t have to be a theist to spot the shortsighted dismissiveness of his assertion. After all, an atheist or agnostic might recognise the benevolence, even the categorical need, for adherence to manmade principles of morality, to foster the welfare of humanity at large for its own sufficient sake. Secular humanism, in other words, which greatly appeals to many people.

 

Immanuel Kant’s categorical imperative supports these human-centered, do-unto-others notions: ‘Act only in accordance with that maxim through which you can at the same time will that it become a universal law’. An ethic of respect toward all, as we mortals delineate between right and wrong. Even with ‘divine command theory’, it seems reasonable to suppose that a god would have reasons for preferring that moral principles not be arrived at willy-nilly.

  

Monday 15 May 2023

‘Game Theory’: Strategic Thinking for Optimal Solutions

Cortés began his campaign to conquer the Aztec Empire by having all but one of his ships scuttled, which meant that he and his men would either conquer the Aztec Empire or die trying. Initially, the Aztecs did not see the Spanish as a threat. In fact, their ruler, Moctezuma II, sent emissaries to present gifts to these foreign strangers.



By Keith Tidman

 

The Peloponnesian War, chronicled by the historian Thucydides, pitted two major powers of Ancient Greece against each other, the Athenians and the Spartans. The Battle of Delium, which took place in 424 BC, was one of the war’s decisive battles. In two of his dialogues (Laches and Symposium), Plato has Socrates, who actually fought in the war, recall the battle, perhaps apocryphally, in ways that bear on combatants’ strategic choices.

 

One episode recalls a soldier on the front line, awaiting the enemy’s attack, pondering his options in the context of self-interest — what works best for him. For example, if his comrades are believed to be capable of successfully repelling the attack, his own role will contribute only inconsequentially to the fight, yet he risks being pointlessly killed. If, however, the enemy is certain to win the battle, the soldier’s own death is all the more likely and senseless, given that the front line will be routed anyway, no matter what it does.

 

The soldier concludes from these mental somersaults that his best option is to flee, regardless of which side wins the battle. His ‘dominant strategy’ being to stay alive and unharmed. However, based on the same line of reasoning, all the soldier’s fellow men-in-arms should decide to flee also, to avoid the inevitability of being cut down, rather than to stand their ground. Yet, if all flee, the soldiers are guaranteed to lose the battle before the sides have even engaged.
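The soldier’s reasoning can be made concrete with a small payoff table. Below is a minimal sketch in Python; the numbers are purely illustrative assumptions (nothing in Plato fixes them), chosen only to encode the ranking described above, namely that fleeing is safer than fighting whichever way the battle goes:

```python
# Illustrative payoffs for one soldier. The specific numbers are
# assumptions; all that matters is their ordering, which encodes the
# reasoning above: fleeing beats fighting under either battle outcome.
payoffs = {
    # (soldier's choice, battle outcome): payoff to the soldier
    ("fight", "win"): 1,    # survives, but risked death needlessly
    ("fight", "lose"): -10, # front line routed; death likely
    ("flee", "win"): 2,     # safe, and his side wins anyway
    ("flee", "lose"): 0,    # safe, though the battle is lost
}

for outcome in ("win", "lose"):
    best = max(("fight", "flee"), key=lambda c: payoffs[(c, outcome)])
    print(f"If the battle is to be {outcome}, the soldier prefers to {best}")

# 'flee' wins in both cases: a dominant strategy. Yet if every soldier
# reasons identically, the whole line flees and the battle is lost.
```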

 

This kind of strategic analysis is sometimes called game theory. History provides us with many other examples of game theory applied to the real world, too. In 1519, the Spanish conqueror Cortés landed in the Western Hemisphere, intending to march inland and vanquish the Aztec Empire. He feared, however, that his soldiers, exhausted from the ocean journey, might be reluctant to fight the Aztec warriors, who happened also to greatly outnumber his own force.

 

Instead of counting on the motivation of individual soldiers’ courage or even group esprit de corps, Cortés scuttled his fleet. His strategy was to remove the risk of the ships tempting his men to retreat rather than fight — and thus, with no other option, to pursue the Aztecs in a fight-or-die (rather than fight-or-flee) scenario. The calculus for each of Cortés’s soldiers in weighing his survivalist self-interest had shifted dramatically. At the same time, in brazenly scuttling his ships in the manner of a metaphorical weapon, Cortés wanted to demonstrate dramatically to the enemy that, for reasons the latter couldn’t fathom, his outnumbered force nonetheless appeared fearlessly confident about the upcoming battle.

 

It’s a striking historical example of one way in which game theory provides means to assess situations where parties make strategic decisions that take account of each other’s possible decisions. The parties aim to arrive at best strategies in the framework of their own interests — business, economic, political, etc. — while factoring in what they believe to be the thinking (strategising) of opposite players whose interests may align or differ or even be a blend of both.

 

The term, and the formal theory of games, is much more recent, of course, developed in the early twentieth century by the mathematician John von Neumann and the economist Oskar Morgenstern. They focused on the theory’s application to economic decision-making, given what they considered the game-like nature of the field of economics. Some ten years later, another mathematician, John Nash, along with others, expanded the discipline to include strategic decisions applicable to a wide range of fields and scenarios, analysing how competitors with diverse interests choose to contest with one another in pursuit of optimised outcomes.

 

Whereas some of the earliest cases focused on ‘zero-sum’ games involving two players whose interests sharply conflicted, later scenarios and games were far more intricate. Such as ‘variable-sum’ games, where there may be all winners or all losers, as in a labour dispute. Or ‘constant-sum’ games, like poker, characterised as pure competition, entailing total conflict. The more intricately constructed games accommodate multiple players, involve a blend of shared and divergent interests, unfold over successive moves, and include at least one player who holds more information with which to shape his own strategic choices than his competitors hold in hand.

 

The techniques of game theory and the scenarios examined are notable for their range of applications, including business, economics, politics, law, diplomacy, sports, social sciences, and war. Some features of the competitive scenarios are challenging to probe, such as accurately discerning the intentions of rivals and trying to discriminate behavioural patterns. That being said, many features of scenarios and alternative strategies can be studied by the methods of game theory, grounded in mathematics and logic.

 

Among the real-world applications of the methods are planning to mitigate the effects of climate extremes; running management-labour negotiations to reach a new contract and head off costly strikes; siting a power-generating plant to reflect regional needs; anticipating the choices of voter blocs; selecting and rejecting candidates for jury duty during voir dire; engaging in a price war between catty-cornered grocery stores rather than both keeping their prices aligned and high; avoiding predictable plays in sports, to make them harder to defend against; foretelling the formation of political coalitions; and negotiating a treaty between two antagonistic, saber-rattling countries to head off runaway arms spending or outright conflict.

 

Perhaps more trivially, applications of game theory stretch to so-called parlour games, too, like chess, checkers, poker, and Go, which are finite in the number of players and available plays, and in which progress is achieved via a string of alternating single moves. The contestant who presages a competitor’s optimal answer to his own move will experience more favourable outcomes than one who merely guesses that the opponent will make a particular move with a particular probability.

 

Given the large diversity of ‘games’, there are necessarily multiple forms of game theory. Fundamental to each, however, is that features of the strategising are actively managed by the players rather than left to mere chance, which is why game theory goes several steps further than probability theory alone.

 

The classic example of a two-person, noncooperative game is the Prisoner’s Dilemma. This is how it goes. Detectives believe that their two suspects collaborated in robbing a bank, but they don’t have enough admissible evidence to prove the charges beyond a reasonable doubt. They need more on which to base their otherwise shaky case. The prisoners are kept apart, out of hearing range of each other, as interrogators try to coax each into admitting to the crime.

 

Each prisoner mulls their options for getting the shortest prison term. But in deciding whether to confess, they’re unaware of what their accomplice will decide to do. However, both prisoners are mindful of their options and consequences: If both own up to the robbery, both get a five-year prison term; if neither confesses, both are sentenced to a one-year term (on a lesser charge); and if one squeals on the other, that one goes free, while the prisoner who stays silent goes to prison for fifteen years. 
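The logic the prisoners face can be checked mechanically. Here is a minimal sketch in Python (the encoding is mine; only the sentences come from the story above), showing that confessing is each prisoner’s best reply no matter what the accomplice does:

```python
# Prisoner's Dilemma payoffs from the story, as years in prison for
# (my choice, accomplice's choice). Fewer years is better.
years = {
    ("confess", "confess"): (5, 5),
    ("confess", "silent"): (0, 15),
    ("silent", "confess"): (15, 0),
    ("silent", "silent"): (1, 1),
}

choices = ("confess", "silent")

def best_reply(other):
    """My sentence-minimising choice, given the accomplice's choice."""
    return min(choices, key=lambda mine: years[(mine, other)][0])

for other in choices:
    print(f"If the accomplice plays {other}, my best reply is {best_reply(other)}")

# Confessing is the best reply either way (a dominant strategy), so both
# confess and serve five years -- though mutual silence costs only one.
```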

 

The issue of trust is of course central to weighing the options presented by the ‘game’. In terms of sentences, both prisoners are better off choosing to act unselfishly and stay silent, with each serving one year. But if they choose to act selfishly in expectation of outmaneuvering the unsuspecting (presumed gullible) partner — which is to say, both prisoners picture themselves going free by spilling the beans while mistakenly anticipating that the other will stay silent — the result is much worse: a five-year sentence for both.


Presaging these types of game-theoretic arguments, the English philosopher Thomas Hobbes, in Leviathan (1651), described citizens believing, on general principle, that they’re best off with unrestrained freedom. Though, as Hobbes theorised, they come to realise there are occasions when their interests will be better served by cooperating, the aim being to jointly accomplish things not doable by an individual alone. However, some individuals may inconsiderately conclude their interests will be served best by reaping the benefits of collaboration — that is, soliciting help from a neighbour in the form of physical labour, equipment, and time in tilling — but later defaulting when the time comes for such help to be reciprocated.

 

Resentment, distrust, and cutthroat competitiveness take hold. Faith in the integrity of neighbours in the community plummets, and the chain of sharing resources to leverage the force-multiplying effects of teamwork is broken. Society is worse off — where, as Hobbes memorably put it, life then becomes all the more ‘solitary, poor, nasty, brutish and short’. Hobbes’s conclusion, to avoid what he referred to as a ‘war of all against all’, was that people therefore need a central government — operating with significant authority — holding people accountable and punishing accordingly, intended to keep citizens and their transactions on the up and up.

 

What’s germane about Hobbes’s example is how its core themes resonate with today’s game theory. In particular, Hobbes’s argument regarding the need for an ‘undivided’, authoritative government is in line with modern-day game theorists’ solutions to protecting people against what theorists label as ‘social dilemmas’. That is, when people cause fissures within society by dishonourably taking advantage of other citizens rather than cooperating and reciprocating assistance, where collaboration benefits the common good. To Hobbes, the strategic play is between what he refers to as the ‘tyranny’ of an authoritative government and the ‘anarchy’ of no government. He argues that tyranny is the lesser ‘evil’ of the two. 

 

In dissecting real-world ‘games’, people have rationally intuited workable strategies, with those solutions sufficing in many everyday circumstances. What the methodologies of game theory offer are ways to formalise, validate, and optimise the outcomes of such intuitions where outcomes matter more. All the while taking into account the opponent and his anticipated strategy, and extracting the highest benefits from choices based on one’s principles and preferences.

 

Monday 13 June 2022

The Diamond–Water Paradox


All that glitters is not gold! Or at least, is not worth as much as gold. Here, richly interwoven cubic crystals of light metallic golden pyrite – also known as fool’s gold – are rare but nowhere near as valuable. Why’s that?

By Keith Tidman


One of the notable contributions of the Enlightenment philosopher, Adam Smith, to the development of modern economics concerned the so-called ‘paradox of value’.

That is, the question of why one of the most-critical items in people’s lives, water, is typically valued far less than, say, a diamond, which may be a nice decorative bauble to flaunt but is considerably less essential to life. As Smith couched the issue in his magnum opus, An Inquiry Into the Nature and Causes of the Wealth of Nations (1776):
‘Nothing is more useful than water: but it will purchase scarcely anything; scarcely anything can be had in exchange for it. A diamond, on the contrary, has scarcely any use-value; but a very great quantity of other goods may frequently be had in exchange for it’.
It turns out that the question has deep roots, dating back more than two millennia, explored by Plato and Aristotle, as well as later luminaries, like the seventeenth-century philosopher John Locke and eighteenth-century economist John Law.

For Aristotle, the solution to the paradox involved distinguishing between two kinds of ‘value’: the value of a product in its use, such as water in slaking thirst, and its value in exchange, epitomised by a precious metal conveying the power to buy, or barter for, another good or service.

But, in the minds of later thinkers on the topic, that explanation seemed not to suffice. So, Smith came at the paradox differently, through the theory of the ‘cost of production’ — the expenditure of capital and labour. In many regions of the world, where rain is plentiful, water is easy to find and retrieve in abundance, perhaps by digging a well, or walking to a river or lake, or simply turning on a kitchen faucet. However, diamonds are everywhere harder to find, retrieve, and prepare.

Of course, that balance in value might dramatically tip in water’s favour in largely barren regions, where droughts may be commonplace — with consequences for food security, infant survival, and disease prevalence — with local inhabitants therefore rightly and necessarily regarding water as precious in and of itself. So context matters.

Clearly, however, for someone lost in the desert, parched and staggering around under a blistering sun, the use-value of water exceeds that of a diamond. ‘Utility’ in this instance is how well something gratifies a person’s wants or needs, a subjective measure. Accordingly, John Locke, too, pinned a commodity’s value to its utility — the satisfaction that a good or service gives someone.

For such a person dying of thirst in the desert, ‘opportunity cost’, or what they could obtain in exchange for a diamond at a later time (what’s lost in giving up the other choice), wouldn’t matter — especially if they otherwise couldn’t be assured of making it safely out of the broiling sand alive and healthy.

But what if, instead, that same choice between water and a diamond is reliably offered to the person every fifteen minutes rather than as a one-off? It now makes sense, let’s say, to opt for a diamond three times out of the four offers made each hour, and to choose water once an hour. Where access to an additional unit (bottle) of water each hour will suffice for survival and health, securing the individual’s safe exit from the desert. A scenario that captures the so-called ‘marginal utility’ explanation of value.

However, as with many things in life, the more water an individual acquires in even this harsh desert setting, with basic needs met, the less useful or gratifying the water becomes, referred to as the ‘law of diminishing marginal utility’. An extra unit of water gives very little or even no extra satisfaction.

According to ‘marginal utility’, then, a person will use a commodity to meet a need or want, based on perceived hierarchy of priorities. In the nineteenth century, the Austrian economic theorist Eugen Ritter von Böhm-Bawerk provided an illustration of this concept, exemplified by a farmer owning five sacks of grain:
  • The farmer sets aside the first sack to make bread, for the basics of survival. 
  • He uses the second sack of grain to make yet more bread so that he’s fit enough to perform strenuous work around the farm. 
  • He devotes the third sack to feed his farm animals. 
  • The fourth he uses in distilling alcohol. 
  • And the last sack of grain the farmer uses to feed birds.
If one of those sacks is inexplicably lost, the farmer will not then reduce each of the remaining activities by one-fifth, as that would thoughtlessly cut into higher-priority needs. Instead, he will stop feeding the birds, deemed the least-valuable activity, leaving intact the grain for the four more-valuable activities in order to meet what he deems greater needs.

Accordingly, the next least-productive (least-valuable) sack is the fourth, set aside to make alcohol, which would be sacrificed if another sack is lost. And so on, working backwards, until, in a worst-case situation, the farmer is left with the first sack — that is, the grain essential for feeding him so that he stays alive. This situation of the farmer and his five sacks of grain illustrates how the ‘marginal utility’ of a good is driven by personal judgement of least and highest importance, always within a context.
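Böhm-Bawerk’s ordering amounts to a simple greedy rule: each sack goes to the highest-priority use still unserved, and each loss is absorbed by the lowest-priority use currently served. A minimal sketch in Python follows; the priority list is just the farmer’s ranking restated as data:

```python
# The farmer's uses of grain, ordered from highest to lowest priority,
# per Böhm-Bawerk's example above.
uses = [
    "bread for survival",
    "bread for strength to work",
    "feed for farm animals",
    "grain for distilling alcohol",
    "feed for the birds",
]

def allocate(sacks):
    """Greedily assign each sack to the most valuable use still unserved."""
    return uses[:sacks]

for sacks in range(5, 0, -1):
    served = allocate(sacks)
    print(f"{sacks} sack(s): marginal (least-valued) use is {served[-1]!r}")

# Losing a sack always costs the marginal use, never the survival bread:
# the value of any one sack equals the value of its least-valued use.
```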

Life today provides contemporary instances of this paradox of value.

Consider, for example, how society pays individual megastars in entertainment and sports vastly more than, say, school teachers. This is so even though citizens insist they highly value teachers, entrusting them with educating the next generation for society’s future competitive economic development. Megastar entertainers and athletes are of course rare, while teachers are plentiful. According to diminishing marginal utility, acquiring one more teacher is easier and cheaper than acquiring one more top entertainer or athlete.

Consider, too, collectables like historical stamps and ancient coins. Apart from their original purpose, these commodities no longer have use-value.
Yet ‘a very great quantity of other goods may frequently be had in exchange’ for them, to evoke Smith’s diamond analogue. Factors like scarcity, condition, provenance, and subjective constructs of worth in the minds of the collector community fuel value, when swapping, selling, buying — or exchanging for other goods and services.

Of course, the dynamics of value can prove brittle. History has taught us that many times. Recall, for example, the exuberant valuing of tulips in seventeenth-century Holland. Speculation in tulips skyrocketed — with some varieties worth more than houses in Amsterdam — in what was surely one of the most-curious bubbles ever. Eventually, tulipmania came to a sudden end; however, whether the valuing of, say, today’s cryptocurrencies, which are digital, intangible, and volatile, will follow suit and falter, or compete indefinitely with dollars, euros, pounds, and renminbi, remains an unclosed chapter in the paradox of value.

Ultimately, value is demonstrably an emergent construct of the mind, whereby knowledge, as perhaps the most-ubiquitous commodity, poses a special paradoxical case. Knowledge has value simultaneously and equally in its use and in its exchange. In the former, knowledge is applied to acquire one’s own needs and wants; in the latter, knowledge becomes of benefit to others in acquiring their needs and wants. Is there perhaps a solution to Smith’s paradox here?

Monday 8 February 2021

Will Democracy Survive?

Image via https://www.ancient-origins.net/history-famous-people/cleisthenes-father-democracy-invented-form-government-has-endured-over-021247

Cleisthenes, the Father of Democracy, Invented a Form of Government That Has Endured for 2,500 Years


Posted by Keith Tidman

How well is democracy faring? Will democracy emerge from despots’ modern-day assaults unscathed?

Some 2,500 years ago there was a bold experiment: Democracy was born in Athens. The name of this daring form of governance sprang from two Greek words (demos and kratos), meaning ‘rule by the people’. Democracy offered the public a voice. The political reformer Cleisthenes is the acknowledged ‘father of democracy’, setting up one of ancient Greece’s most-lasting contributions to the modern world.

 

In Athens, the brand was direct democracy, where citizens composed an assembly as the governing body, writing laws on which citizens had the right to vote. The assembly also decided matters of war and foreign policy. A council of representatives, chosen by lot from the ten Athenian tribes, was responsible for everyday governance. And the courts, in which citizens brought cases before jurors selected from the populace by lottery, formed the third branch. Aristotle believed the courts ‘contributed most to the strength of democracy’.

 

As the ancient Greek historian, Herodotus, put it, in this democratic experiment ‘there is, first, that most splendid of virtues, equality before the law’. Yet, there was a major proviso to this ‘equality’: only ‘citizens’, limited to free males — less than half of Athens’s population — were qualified to take part; women, immigrants, and slaves were excluded.

 

Nor did every Greek philosopher or historian in the ancient world share Herodotus’s enthusiasm for democracy’s ‘splendid virtues’. Some found various ways to express the idea that one unsavoury product of democracy was mob rule. Socrates, as Plato recalls in the Republic, referred unsparingly to the ‘foolish leaders of democracy . . . full of disorder, and dispensing a sort of equality to equals and unequals alike’.

 

Others, like the historian Thucydides, Aristotle, the playwright Aristophanes, the historian and philosopher Xenophon, and the anonymous writer dubbed the Old Oligarch, expanded on this thinking. They critiqued democracy for dragging with it the citizens’ perceived faults, including ignorance, lack of virtue, corruptibility, shortsightedness, tyranny of the collective, selfishness, and deceptive sway by the specious rhetoric of orators. No matter, Athens’s democracy endured 200 years, before ceding ground to aristocratic-styled rule: what Herodotus labeled ‘the one man, the best’.

 

Many of the deprecations that ancient Greece’s philosophers heaped upon democratic governance and the ‘masses’ are redolent of the problems that democracy, in its representative form, would face again.


Such internal contradictions recently resulted in the United States, the longest-standing democratic republic in the modern world, having its Congress assailed by a mob, in an abortive attempt to stymie the legislators’ certification of the results of the presidential election. However, order was restored that same day (and congressional certification of the democratic will completed). The inauguration of the new president took place without incident, on the date constitutionally laid out. Democracy working.

 

Yet, around the world, in increasing numbers of countries, people doubt democracy’s ability to advance citizens’ interests. Disillusion and cynicism have settled in. Autocrats and firebrands have gladly filled that vacuum of faith. They scoff at democracy. The rule of law has declined, as reported by the World Justice Project. Its index has documented sharp falloffs in the robustness of proscriptions on government abuse and extravagant power. Freedom House has similarly reported on the tenuousness of government accountability, human rights, and civil liberties. ‘Rulers for life’ dot the global landscape.

 

That democracy and freedoms have absorbed body blows around the world has been underscored by attacks from populist leaders who rebuff pluralism and hijack power to nurture their own ambitions and those of closely orbiting supporters. A triumphalism achieved at the public’s expense. In parts of Eastern Europe, the Asia Pacific, sub-Saharan Africa, the Middle East and North Africa, South and Central America, and elsewhere. The result has been to weaken free speech and press, free religious expression, free assembly, independence of judiciaries, petition of the government, checks on corruption, and other rights, norms, and expectations in more and more countries.


Examples of national leaders turning back democracy in favour of authoritarian rule stretch worldwide. Central Europe's populist overreach, of concern to the European Union, has been displayed in abruptly curtailing freedoms, abolishing democratic checks and balances, self-servingly politicising systems of justice, and brazen leaders acquiring unlimited power indefinitely.


Some Latin American countries, too, have experienced waning democracy, accompanied by turns to populist governments and illiberal policies. Destabilised counterbalances to government authority, acute socioeconomic inequalities, attacks on human rights and civic engagement, emphasis on law and order, leanings toward surveillance states, and power-ravenous leaders have symbolised the backsliding.

 

Such cases notwithstanding, people do have agency to dissent and intervene in their destiny, which is, after all, the crux of democracy. Citizens are not confined to abetting or turning a blind eye toward strongmen’s grab for control of the levers of power or ultranationalistic penchants. In particular, there might be reforms, inspired by ancient Athens’s novel experiment, to bolster democracy’s appeal, shifting power from the acquisitive hands of elites and restoring citizens’ faith. 

 

One systemic course correction might be to return to the variant of direct democracy of Aristotle’s Athens, or at least a hybrid of it, where policymaking becomes a far more populous activity. Decisions and policy are molded by what the citizens decide and decree. A counterweight to wholly representative democracy: the latter emboldening politicians, encouraging the conceit of self-styled philosopher-kings who mistakenly presume their judgment surpasses that of citizens.

 

It might behoove democracies to have fewer of these professional politicians, serving as ‘administrators’ clearing roadblocks to the will of the people, while crafting the legal wording of legislation embodying majority public pronouncements on policy. The nomenclature of such a body — assembly, council, congress, parliament, or other — matters little, of course, compared with function: party-less technocrats in direct support of the citizenry.

 

The greatest foe to democracies’ longevity, purity, and salience is often the heavy-handed overreach of elected executives, not insurrectionist armies from within the city gates. Reforms might therefore bear on severe restriction or even elimination of an executive-level figurehead, who otherwise might find the giddy allure of trying to accrete more power irresistible and unquenchable. Other reforms might include:

 

• A return to popular votes and referenda to agree on or reject national and local policies; 

• Normalising of constitutional amendments, to ensure congruence with major social change;

• Fewer terms served in office, to avoid ‘professionalising’ political positions; 

• Limits on campaign length, to motivate focused appeals to electors and voter attentiveness.


Still other reforms might be the public funding of campaigns, to constrain expenditures and, especially, avoid bought candidates. Curtailing of special-interest supplicants, who serve deep-pocketed elites. Ethical and financial reviews to safeguard against corruption, with express accountability. Mandatory voting, on specially designated paid holidays, to solicit all voices for inclusivity. Civic service, based on communal convictions and norms-based standards. And reinvention of public institutions, to amplify pertinence, efficacy, and efficiency.

 

Many more ways to refit democracy’s architecture exist, of course. The starting point, however, is that people must believe democracy works and are prepared to foster it. In the arc of history, democracy is most vulnerable if resignedly allowed to be.

 

Testaments to democracy should be ideas, not majestic buildings or monuments. Despots will not cheerfully yield ground; the swag is too great. Yet ideas, which flourish in liberal democracy, are greater.

 

Above all, an alert, restive citizenry is democracy’s best sentinel: determined to triumph rather than capitulate, despite democracy’s turbulence two and a half millennia after ancient Athens’s audacious experiment. 

Monday 14 December 2020

Persuasion v. Manipulation in the Pandemic


Posted by Keith Tidman

Persuasion and manipulation, as means of steering public behaviour, are more than just special cases of each other. Manipulation, in particular, risks short-circuiting rational deliberation and free agency. So, where is the line drawn between these two ways of appealing to the public to act in a certain way, to ‘adopt the right behaviour’, especially during the current coronavirus pandemic? And where does the ‘common good’ fit into such choices?

 

Consider two related aspects of the current pandemic: mask-wearing and being vaccinated. Based on research, such as that reported on in Nature (‘Face masks: what the data say’, Oct. 2020), mask-wearing is shown to diminish the spread of virus-loaded airborne particles to others, as well as to diminish one’s own exposure to others’ exhaled viruses. 


Many governments, scientists, medical professionals, and public-policy specialists argue that people therefore ought to wear masks, to help mitigate the contagion. A manifestly utilitarian policy position, but one rooted in controversy nonetheless. In the following, I explain why.

 

In some locales, mask-wearing is mandated and backed by sanctions; in other cases, officials seek willing compliance, in the spirit of communitarianism. Implicit in all this is the ethics-based notion of the ‘common good’. That we owe fellow citizens something, in a sense of community-mindedness. And of course, many philosophers have discussed this ‘common good’; indeed, the subject has proven a major thread through Western political and ethical philosophy, dating to ancient thinkers like Plato and Aristotle.


In The Republic, Plato records Socrates as saying that the greatest social good is the ‘cohesion and unity’ that stems from shared feelings of pleasure and pain that result when all members of a society are glad or sorry for the same successes and failures. And Aristotle argues in The Politics, for example, that the concept of community represented by the city-state of his time was ‘established for the sake of some good’, which overarches all other goods.


Two thousand years later, Jean-Jacques Rousseau asserted that citizens’ voluntary, collective commitment — that is, the ‘general will’ or common good of the community — was superior to each person’s ‘private will’. And prominent among recent thinkers to have explored the ‘common good’ is the political philosopher John Rawls, who has defined the common good as ‘certain general conditions that are . . . equally to everyone’s advantage’ (Theory of Justice, 1971).

 

In line with seeking the ‘common good’, many people conclude that being urged to wear a mask falls under the heading of civic-minded persuasion that’s commonsensical. Other people see an overly heavy hand in such measures, which they argue deprives individuals of the right — constitutional, civil, or otherwise — to freely make decisions and take action, or choose not to act. Free agency itself also being a common good, an intrinsic good. For some concerned citizens, compelled mask-wearing smacks of a dictate, falling under the heading of manipulation. Seen, by them, as the loss of agency and autonomous choice.

 

The readying of coronavirus vaccines, including early rollout, has led to its own controversies around choice. Health officials advising the public to roll up their sleeves for the vaccine has run into its own buzzsaw from some quarters. Pragmatic concerns persist: how fast the vaccines were developed and tested, their longer-term efficacy and safety, prioritisation of recipients, assessment of risk across diverse demographics and communities, cloudy public-messaging narratives, cracks in the supply chain, and the perceived politicising of regulatory oversight.


As a result of these concerns, nontrivial numbers of people remain leery, distrusting authority and harbouring qualms. As recent Pew, Gallup, and other polling on these matters unsurprisingly shows, some people might assiduously refuse ever to be vaccinated, or at least resist until greater clarity is shed on what they view as confusing noise or until early results roll in that might reassure. The trend lines will be watched.

 

All the while, officials point to vaccines as key to reaching a high enough level of population immunity to reduce the virus’s threat. Resulting in less contagion and fewer deaths, while allowing besieged economies to reopen with the business, social, and health benefits that entails. For all sorts of reasons — cultural, political, personal — some citizens see officials’ urgings regarding vaccinations as benign, well-intentioned persuasion, while others see it as guileful manipulation. One might consider where the Rawlsian common good fits in, and how the concept sways local, national, and international policy decision-making bearing on vaccine uptake.

 

People are surely entitled to persuade, even intensely. Perhaps on the basis of ethics or social norms or simple honesty: matters of integrity. But they may not be entitled to resort to deception or coercion, even to correct purportedly ‘wrongful’ decisions and behaviours. The worry being that whereas persuasion innocuously induces human behaviour broadly for the common good, coercive manipulation invalidates consent, corrupting the baseline morality of the very process itself. To that point, corrupt means taint ends.

 

Influence and persuasion do not themselves rise to the moral censure of coercive or deceptive manipulation. The word ‘manipulation’, which took on pejorative baggage in the eighteen hundreds, has special usages. Often unscrupulous in purpose, such as to gain unjust advantage. Meantime, persuasion may allow for abridged assumptions, facts, and intentions, to align with community expectations and with hoped-for behavioural outcomes to uphold the common good. A calculation that considers the veracity, sufficiency, and integrity of narratives designed to influence public choices, informed by the behavioural science behind effective public health communications. A subtler way, perhaps, to look at the two-dimensional axes of persuasion versus manipulation.

 

The seedbed of these issues is that people live in social relationships, not as fragmented, isolated, socially disinterested individuals. They live in the completeness of what it means to be citizens. They live within relationships that define the Rawlsian common good. A concept that helps us parse persuasion and manipulation in the framework of inducing societal behaviour: like the real-world cases of mask-wearing and vaccinations, as the global community counterattacks this lethal pandemic.

 

Monday 13 April 2020

When the Punishment Does Not Fit the Crime


by Anonymous

Do many capitalist societies today impose relatively harsher punishments for crimes committed by individuals of low socioeconomic status? If so, how does this fact affect popular theories of just punishment?

It would seem that many of these theories (such as retribution, deterrence and rehabilitation) must fail when applied to these societies. That this really is the case can be illustrated with a simple example:
Two individuals commit the exact same crime in the same American city: they both crash into parked cars while driving under the influence of alcohol. Both of these crimes result in the exact same amount of damage, the levels of intoxication are the same between the two offenders, and this is the first offense committed by either person. 
However, one of these individuals is a high-powered businessman and the other is a middle-aged, relatively poor single woman with no living relatives and two young children. Both individuals are arrested and brought to the police station where they are put in jail with bail set at $5,000. The man immediately bails himself out and hires a team of experienced defense attorneys. 
The single mother, on the other hand, is too poor to post bail herself and knows no one who could help her. Because she is forced to sit in jail for the weeks preceding her trial, she loses both of her jobs which had been the only sources of income for her family. When the trials roll around, the man’s attorneys convince the judge and jury that he should not be held responsible for his action, and he is given only a fine. However, the publicly-appointed defense attorney for the woman, perhaps too over-worked to have been able to consider her case carefully, fails to offer any convincing defense on her behalf. She is sentenced to three years in prison.
I think it should be clear that in this case, the theory of retribution fails to offer a legitimate justification of punishment. Because the offenders in the story are given extremely different punishments for the same crime, at least one (or both) has been given a punishment that, morally speaking, breaks from the jus talionis, or “eye for an eye” principle and thus does not serve any kind of true retribution. In this case it is likely that both punishments would be considered morally inappropriate. On the one hand, the woman in the example is punished before she is even found guilty of a crime by being forced to stay in jail as a result of her inability to post bail. On the other, the wealthy man is given a more lenient punishment only because of the resources to which he has access.

How about deterrence? Jeremy Bentham asserts that “General prevention ought to be the chief end of punishment, as it is its real justification.” Turning back to the example offered above, it becomes clear how Bentham’s deterrence model fails to justify punishments in capitalist societies in which punishments are functions of economic class. The man’s punishment in the hypothetical case would challenge Bentham’s idea that punishments should prevent future crimes from being committed because it would surely allow other wealthy people in the society to think that as long as they can hire expensive attorneys, they will be able to behave recklessly without much consequence. On the whole, a deterrence theory of punishment would not be able to explain how, for wealthier people who get relatively lenient punishment, those punishments have any deterring effects.

Finally, the rehabilitation theory maintains that punishment should include measures aimed at reforming offenders. That is, in giving punishments, societies should keep in mind the ways in which the punishments will allow offenders to change themselves or be changed so they can peacefully re-enter society. Plato conceives of punishment in such a way; he imagines that to suffer punishment is to suffer some good, and evading punishment is often a worse path to go down. Interestingly, it seems that when punishment practices are functions of class, wealthier people who can pay their way out of punishments are actually deprived of opportunities to reform. The man in the above example surely should have had a chance to think about the harms he caused through his crime, and would, for rehabilitation theorists, have been made better off had he had such opportunities.

All this paints a rather dismal picture of punishment and the attempts to morally justify it in the real world. But what would happen if certain measures were put in place in these capitalist societies that guarantee a fair system of punishment? For example, what if cash bail were determined in a manner proportional to the offender’s income (or simply abolished)? What if every defendant were required to use state-appointed attorneys, and what if implicit biases against poorer people were accounted for? It seems that if all these kinds of issues could truly be taken care of (and whether this is even possible is certainly up for debate), punishment would perhaps not exist as a function of economic class.

However, even if all this came to pass, it still would not mean that society’s response to crime would escape the influence of socioeconomic status. Even if the processes surrounding punishment were made completely just and equal, the social and economic inequalities that can lead individuals to commit crimes would still exist. That fact alone would still lead some sections of the population to commit certain kinds of crimes, and to be punished for them, in greater proportion than others. For this reason, it seems that before punishment can truly become morally justifiable in capitalist societies, both the institutions surrounding punishment and the social circumstances that lead individuals into confrontation with those institutions have to be made just.

Monday 9 March 2020

Does Power Corrupt?

Mandell Creighton leading his group, ‘The Quadrilateral’, at Oxford University in 1865. (As seen in Louise Creighton’s book, The Life and Letters of Mandell Creighton.)
Posted by Keith Tidman

In 1887, the English historian Lord Acton (John Dalberg-Acton) penned this cautionary maxim in a letter to Bishop Mandell Creighton: ‘Power tends to corrupt, and absolute power corrupts absolutely’. He concluded his missive on this provocative note: ‘Great men are almost always bad men’. Which might lead one to reflect that human history does indeed seem to have been fuller of Neros and Attilas than of Buddhas and Gandhis.

Perhaps not unexpectedly, the correlation between power and corruption had been amply noted before Lord Acton, as evidenced by this 1770 observation by William Pitt the Elder, a former prime minister of Great Britain, in the House of Lords: ‘Unlimited power is apt to corrupt the minds of those who possess it’. The eighteenth-century Irish statesman and philosopher Edmund Burke seemed to agree:
‘The greater the power, the greater the abuse’.
History is of course replete with scoundrels and tyrants and, worse, rulers who have egregiously and enthusiastically abused power, often with malign, even cruel, brutal, and deadly consequences. These are situations where the Orwellian axiom that ‘the object of power is power’ prevails, with bad outcomes for the world. Indulgent perpetrators have ranged from heads of state, from pharaohs to emperors, kings and queens, chancellors, prime ministers, presidents, chiefs, and popes, to people scattered throughout the rest of society, from corrupt leaders of industry to criminals to everyday citizens.

In some instances, it does seem that wielding great power has led susceptible people to change, becoming corrupt or unkind in ways previously out of character. As to the psychology of that observation, a much-cited Stanford University experiment, conducted in 1971, suggested such an effect, though its findings come with caveats. The two-week experiment was intended to examine the psychological effects of prison life on behaviour, using university students as pretend prison guards and prisoners in a mock prison on campus.

However, the quickly mounting, distressing maltreatment of ‘prisoners’ by those in the authoritative role of guards, behaviour that included confiscating the prisoners’ clothes and requiring them to sleep on concrete flooring, led to the experiment being cancelled after only six days. Was this the ‘abuse’ of which Burke warned us above? Was it the ‘perpetual and restless desire of power after power’ of which the seventeenth-century English philosopher Thomas Hobbes warned?

In many other cases, however, it has been observed that there are predispositions toward corruption and abuse, which power serves to amplify rather than instil. This view seems favoured today: power (the acquisition of authority) may prompt people to disregard social checks on their natural instincts and shed self-managing inhibitions. Power, on this account, uncovers the real persona, exposing those whose instinctual character is malignly predisposed.

President Abraham Lincoln seemed to subscribe to this position regarding preexisting behavioural qualities, reputedly saying,
‘Nearly all men can stand adversity, but if you want to test a man’s character [true persona], give him power’.
Among people in leadership positions, in any number of social spheres, power can have two edges, good and bad; decisions, intent, and outcomes matter. ‘Socialised power’, for example, refers to the beneficial use of power and influence to inspire others toward the articulation and realisation of visions and missions, as well as the accomplishment of tangible goals. The idea is to benefit others, whether societally, politically, corporately, economically, communally, or spiritually, in a manner that, by definition, presupposes freedom rather than coerced implementation.

‘Personalised power’, on the other hand, reflects a focus on meeting one’s own ambitions. When personalised power overshadows or excludes common goods, as sometimes seen among autocratic, self-absorbed, and unsympathetic national leaders, the result can be injurious policy. Yet, notably, these two kinds of power can be compatible: they aren’t necessarily adversarial, nor does one necessarily force the other to beat a retreat. Jointly, in fact, they’re more likely force-multiplying.

One corollary (a cautionary note, perhaps) has to do with the ‘power paradox’. As a person acquires power through thoughtfulness, respect, and empathetic behaviour, and his or her influence accordingly flourishes, the risk emerges that the person begins to behave less in those constructive ways. Power may paradoxically spark growing self-centredness and diminished self-restraint. It is potentially seductive, bordering on the Machiavellian doctrine of control over others, whereby decisions and behaviours become less framed around laudable moral principles and turn instead to the exertion of coercive power and fear in place of inspiration.

In a turnabout, this diminution of compassionate behaviour, combined with increased impulsivity and self-absorption, more ethical shortcuts, and reduced social intelligence, might steadily lessen the person’s power and influence, returning it to a set point. And unless they’re vigilant, leaders in politics, business, and other venues may focus less and less on the shareable common good.

As a matter of disputable attribution, Plato is said to have summed up the lessons that have come down through history on these matters, purportedly saying, in few words but without equivocation:
‘The measure of a man is what he does with power’.
Although he does not seem ever to have actually said this as such, it certainly captures the lesson of his famous moral tale about the magic ring of Gyges, which confers the power of invisibility on its owner.