
Monday 9 September 2019

‘Just War’ Theory: Its Endurance Through the Ages


The Illustrious Hugo Grotius of the Law of Warre and Peace: 
With Annotations, III Parts, and Memorials of the Author’s Life and Death.
Book with title page engraving, printed in London, England, by T. Warren for William Lee in 1654.

Posted by Keith Tidman

To some people, the term ‘just war’ may have the distinct ring of an oxymoron, the more so to advocates of pacifism. After all, as the contention goes, how can the lethal violence and destruction unleashed in war ever be just? Yet not all of the world’s contentiousness, whether historical or current, lends itself to nonmilitary remedies. So, coming to grips with the realpolitik of humankind inevitably waging successive wars over several millennia, philosophers dating back to ancient Greece and Rome — Plato, Aristotle, and Cicero among them — have thought about when and how war might be justified.

Building on such early luminary thinkers, the thirteenth-century philosopher and theologian Saint Thomas Aquinas, in his influential text, Summa Theologica, advanced the principles of ‘just war’ to a whole other level. Aquinas’s foundational work led to the tradition of just-war principles, broken down into jus ad bellum (the right to resort to war to begin with) and jus in bello (the right way to fight once war is underway). Centuries later came a new doctrinal category, jus post bellum (the right way to act after war has ended).

The rules that govern going to war, jus ad bellum, include the following:
• just authority, meaning that only legitimate national rulers may declare war;

• just cause, meaning that a nation may wage war only for such purposes as self-defence, defence of other nations, and intervention against the gravest inhumanity;

• right intentions, meaning the warring state stays focused on the just cause and doesn’t veer toward illegitimate causes, such as material and economic gain, hegemonic expansionism, regime change, ideological-cultural-religious dissimilarities, or unbridled militarism;

• proportionality, meaning that as best can be determined, the anticipated goods outweigh the anticipated evil that war will cause;

• a high probability of success, meaning that the war’s aim is seen as highly achievable; and

• last resort, meaning that viable, peaceful, diplomatic solutions have been explored — not just between potentially warring parties, but also with the intercession of supranational institutions, as fit — leaving no alternative to war in order to achieve the just cause.

The rules that govern the actual fighting of war, jus in bello, include the following: 
• discrimination, meaning to target only combatants and military objectives, and not civilians or fighters who have surrendered, been captured, or are injured; 

• proportionality, meaning that injury to lives and property must be in line with the military advantage to be gained; 

• responsibility, meaning that all participants in war are accountable for their behaviour; and

• necessity, meaning that the least-harmful military means — in the choice of weapons, tactics, and amount of force applied — must be employed.

The rules that govern behaviour following war’s end, jus post bellum, typically include the following: 
• proportionality, meaning the terms to end war and transition to peace should be reasonable and even-handed; 

• discrimination, meaning that the victor should treat the defeated party fairly and not unduly punitively; 

• restoration, meaning promoting stability, mapping infrastructural redevelopment, and guiding institutional, social, security, and legal order; and

• accountability, meaning that determinations of culpability and retribution for wrongful actions (including atrocities) during hostilities are reasonable and measured.

Since the time of early philosophers like Augustine of Hippo and Thomas Aquinas, and of Hugo Grotius, often ascribed the title ‘father of international law’ (The Law of War and Peace, frontispiece above), the principles tied to ‘just war’, and their basis in moral reciprocity, have shifted. One change has entailed the increasing secularisation of ‘just war’ from its largely religious roots.

Meanwhile, the seventeenth-century Peace of Westphalia — which ended Europe’s devastating Thirty Years’ War and Eighty Years’ War and declared that states would henceforth honour one another’s sovereignty — has proved a particularly dreadful failure. As well intentioned as the treaty was, it did not head off repeated bloody military incursions into others’ territory over the following three and a half centuries. Furthermore, the modern means of war have necessitated revisiting the principles of just war — despite the theoretical rectitude of wars’ aims.

One factor is the extraordinary versatility, furtiveness, and lethality of modern means of war — and their remarkably accelerating transformation. None of these ‘modern means’ were, of course, even imaginable as just-war doctrine was being developed over the centuries. The bristling technology is familiar: precision (‘smart’) munitions, nuclear weapons, drones, cyber weapons, long-range missiles, stealthy designs, space-based systems, biological and chemical munitions, global power projection by sea and air, hypervelocity munitions, ever more sophisticated and hard-to-defeat AI weapons, and autonomous weapons (which increasingly take human controllers out of the picture). In their respective ways, these devices are intended to exacerbate the ‘friction and fog’ and lethality of war for the opponent, as well as to lessen the exposure of one’s own combatants to threats.

Weapons of a different ilk, like economic sanctions, are meant to coerce opponents into complying with demands and changing their behaviour, even if civilians are among those most direly affected. Tactics, too, range widely: proxies, asymmetric conflicts, special-forces operations, terrorism (intrinsically episodic), psychological operations, targeted killings of individuals, and mercenary insertion.

So, what does this inventory of weapons and tactics portend regarding just-war principles? The answer hinges on the warring parties: who’s using which weapons in which conflict and with which tactics and objectives. The idea behind precision munitions, for example, is to pinpoint combatant targets while minimising harm to civilians and civilian property.

Intentions aren’t foolproof, however, as demonstrated in any number of currently ongoing wars. Yet, one might argue that, on balance, the results are ‘better’ than in earlier conflicts in which, for example, blankets of inaccurate gravity (‘dumb’) bombs were dropped, and where indifference among combatants as to the effects on innocents — impinging on noncombatant immunity — had become the rule rather than the exception.

There are current ‘hot’ conflicts to which one might readily apply just-war theory. Yemen, Somalia, Libya, Syria, Ukraine, India/Pakistan, Iraq, and Afghanistan, among sundry others, come to mind (as does brinkmanship, such as with Iran, North Korea, and Venezuela). The nature of these conflicts ranges from international to civil to terrorist to hybrid. Their adherence to jus ad bellum and jus in bello narratives and prescriptions differs radically from one to another. These conflicts’ jus post bellum narratives — the right way to act after war has ended — have still to reveal their final chapter in concrete treaties, as for example in the current negotiations between the Taliban and the United States in Afghanistan, almost two decades into that wearyingly ongoing war.

The reality is that the breach left by these sundry wars, whether they end abruptly or simply peter out in exhaustion, will be filled by others. As long as the realpolitik inevitability of war continues to haunt us, humanity needs Aquinas’s guidance.

Just-war doctrine, though developed in another age and necessarily having undergone evolutionary adaptation to parallel wars’ changes, remains enduringly relevant — not to anaesthetise the populace, let alone to entirely cleanse war ethically, but as a practical way to embed some measure of order in the otherwise unbridled messiness of war.

Monday 24 September 2018

Why Is There Something Rather Than Nothing?

For scientists, space is not empty but full of quantum energy
Posted by Keith Tidman

Gottfried Wilhelm Leibniz introduced this inquiry more than three hundred years ago, saying, ‘The first question that should rightly be asked is, “Why is there something rather than nothing?”’ Since then, many philosophers and scientists have likewise pondered the question. Perhaps the most famous restatement of it came in 1929, when the German philosopher Martin Heidegger placed it at the heart of his lecture What Is Metaphysics?: ‘Why are there beings at all, and why not rather nothing?’

Of course, many people around the world turn to a god as a sufficient reason (explanation) for the universe’s existence. Aristotle believed, as did his forerunner Heraclitus, that the world was mutable — everything undergoing perpetual change — which he characterised as movement. He argued that there was a sequence of predecessor causes that led back deep into the past, until reaching an unmoved mover, or Prime Mover (God). An eternal, immaterial, unchanging god exists necessarily, Aristotle believed, itself independent of cause and change.

In the 13th century Saint Thomas Aquinas, a Christian friar, advanced this so-called cosmological view of universal beginnings, likewise perceiving God as the First Cause. Leibniz, in fact, was only proposing something similar, with his Contingency Argument, in the 17th century:

‘The sufficient reason [for the existence of the universe] which needs not further reason must be outside of this series of contingent things and is found in a substance which . . . is a necessary being bearing the reason for its existence within itself. . . .  This final reason for things is called God’ — Leibniz, The Principles of Nature and Grace

However, invoking God as the prime mover or first cause or noncontingent being — arbitrarily, on a priori rather than empirical grounds — does not inescapably make it so. Far from it. The common counterargument maintains that positing a god raises the corresponding question: if a god exists — has a presence — what was its cause? Assuming, that is, that anything — ‘nothing’ being the sole exception — must have a cause. So we are still left with the question famously posed by the theoretical physicist Stephen Hawking: ‘What is it that breathes fire into the equations and makes a universe for them to describe?’ To posit the existence of a god does not, as such, get around the ‘hard problem’: why there is a universe at all, not just why our universe is the way it is.



Science has not fared much better in this challenge. The British mathematician and philosopher Bertrand Russell ended up merely declaring, in 1948, ‘I should say that the universe is just there, and that’s all’ — a ‘brute fact’, as some have called it. Many scientists have embraced similar sentiments, concluding that ‘something’ was inevitable and that ‘nothingness’ would be impossible. Some go so far as to say that nothingness is unstable, hence again impossible. But these are difficult positions to support unequivocally, given that, like many scientific and philosophical predecessors and contemporaries, their proponents do not adequately explain why and how. This was, for example, the outlook of Baruch Spinoza, the 17th-century Dutch philosopher who maintained that the universe (with its innumerable initial conditions and subsequent properties) had to exist. Leaping forward to the 20th century, Albert Einstein, himself an admirer of Spinoza’s philosophy, seemed to concur.

Quantum mechanics poses an interesting illustration of the science debate, informing us that empty space is not really empty — not in any absolute sense, anyway. Even what we might consider the most perfect vacuum is actually filled by churning virtual particles — quantum fluctuations — that almost instantaneously flit in and out of existence. Some theoretical physicists have suggested that this so-called ‘quantum vacuum’ is as close to nothingness as we might get. But quantum fluctuations do not equate to nothingness; they are not some modern-day-science equivalent of the noncontingent Prime Mover discussed above. Rather, however fleeting and insubstantial they may be, virtual quantum particles are still something.
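As a brief aside that the post itself does not invoke, the popular heuristic physicists use for such fluctuations is the energy-time uncertainty relation,

\[
\Delta E \, \Delta t \gtrsim \frac{\hbar}{2},
\]

read loosely as saying that the vacuum can ‘borrow’ an energy \(\Delta E\) only for a correspondingly short time \(\Delta t\): the larger the fluctuation, the more fleeting it is. This is heuristic shorthand rather than a rigorous derivation, but it is the usual gloss behind the phrase ‘flit in and out of existence’.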

It is therefore reasonable to inquire into the necessary origins of these quantum fluctuations — an inquiry that requires us to return to an Aristotelian-like chain of causes upon causes, traceable back in time. The notion of a supposed quantum vacuum still doesn’t get us to whatever might have conjured something from nothing. Hence the hypothesis that there has always been something — that the quantum vacuum was the universe’s nursery — peels away as an unsupportable claim. Meanwhile, other scientific hypotheses, such as string theory, bid to take the place of the Prime Mover. At the heart of that theory is the hypothesis that the fundamental particles of physics are not really ‘points’ as such but rather differently vibrating energy ‘strings’ existing in many more than the familiar dimensions of space-time. Yet these strings, too, do not get us over the hump of something in place of nothing; strings are still ‘something’, whose origins (causes) would still need to be explained.

In addressing these questions, we are not talking about something emerging from nothing, as nothingness by definition would preclude the initial conditions required for the emergence of a universe. Also, ‘nothingness’ is not the mere absence (or opposite) of something; rather, one can regard ‘nothingness’ as theoretically having been just as possible as ‘something’. In light of such modern-day challenges in both science and philosophy, Ludwig Wittgenstein was at least partially right in saying, early in the 20th century (Tractatus Logico-Philosophicus, section 6.44, on what he calls ‘the mystical’), that the real mystery was ‘Not how the world is . . . but that it is’.



Monday 30 July 2018

The Anthropic Principle: Was the Universe Made for Us?

Diagram on the dimensionality of spacetime, by Max Tegmark
Posted by Keith Tidman

‘It is clear that the Earth does not move, and that it does not lie elsewhere than at the center [of the universe]’
— Aristotle (4th century BCE)

Almost two millennia after Aristotle, in the 16th century, Nicolaus Copernicus dared to differ from the revered ‘father of Western philosophy’. Copernicus rattled the world by arguing that the Earth is not at the center of the universe — a move that to many at the time seemed to knock humankind off its pedestal, reducing it from exceptionalism to mediocrity. The so-called ‘Copernican principle’ survived, of course, along with the profound disturbance it had evoked for the theologically minded.

Five centuries later, in the early 1970s, the astrophysicist Brandon Carter came up with a different model — the ‘anthropic principle’ — that has kept philosophers and scientists debating its cosmological and metaphysical significance. With some irony, Carter proposed the principle at a symposium to mark Copernicus’s 500th birthday. The anthropic principle points to what has been referred to as the ‘fine-tuning’ of the universe: a list of cosmological qualities (physical constants) whose extraordinarily precise values were essential to making intelligent life possible.

Yet, as Thomas Nagel, the contemporary American philosopher, suggested, even the physical constants known to be required for our universe and an intelligent carbon-based life form need to be properly understood, especially in context of the larger-scaled universe:
‘One doesn’t show that something doesn’t require explanation by pointing out that it is a condition of one’s existence.’
The anthropic principle — its adherence to simplicity, consistency, and elegance notwithstanding — did not of course place Earth back at the center of the universe. As Carter put it, ‘Although our situation is not necessarily central, it is inevitably privileged’. To widen the preceding idea, let’s pose two questions: Did the anthropic principle reestablish humankind’s special place? Was the universe made for us?

First, some definitions. There are several variants of the anthropic principle, as well as differences among definitions, with Carter originally proposing two: the ‘weak anthropic principle’ and the ‘strong anthropic principle’. Of the weak anthropic principle, Carter says:
‘… our location in the universe [he was referring to the age of the universe at which humankind entered the world stage, as well as to location within space] is necessarily privileged to the extent of being compatible with our existence as observers.’
Of the strong anthropic principle, he explained,
‘The universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage’.
Although Carter is credited with coining the term ‘anthropic principle’, others had turned to the subject before him. One in particular was the 19th-century German philosopher Arthur Schopenhauer, who presented a model of the world intriguingly similar to the weak anthropic principle. He argued that the world’s existence depended on numerous variables, like temperature and atmosphere, remaining within a very narrow range — presaging Carter’s fuller explanation. Here’s a snapshot of Schopenhauer’s thinking on the matter:
‘If any one of the actually appearing perturbations of [the planets’ course], instead of being gradually balanced by others, continued to increase, the world would soon reach its end’.
That said, some philosophers and scientists have criticized the weak variant as a logical tautology; that has not stopped others from discounting the criticism and favoring it. The strong variant, meanwhile, is considered problematic in its own way, as it is difficult to substantiate either philosophically or scientifically; it may be neither provable nor disprovable. At their core, however, both variants say that our universe is wired to permit an intelligent observer — whether carbon-based or of a different substrate — to appear.

So, what kinds of physical constants — also referred to as ‘cosmic coincidences’ or ‘initial conditions’ — does the anthropic principle point to as ‘fine-tuned’ for a universe like ours, and an intelligent species like ours, to exist? There are many, but let’s first take just one, to demonstrate the significance. If the force of gravitation were slightly weaker, then following the Big Bang matter would have been dispersed too fast for galaxies to form. If gravitation were slightly stronger — with the universe expanding even one millionth slower — then the universe would have expanded to its maximum and collapsed in a big crunch before intelligent life could have entered the scene.
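For readers who want the standard formalism behind that expansion-versus-collapse claim (it is not something the post itself invokes), cosmologists express it through the Friedmann equation, where the density of the universe relative to a critical value determines its fate:

\[
H^2 = \frac{8\pi G}{3}\rho - \frac{kc^2}{a^2}, \qquad \rho_{\mathrm{crit}} = \frac{3H^2}{8\pi G}, \qquad \Omega \equiv \frac{\rho}{\rho_{\mathrm{crit}}}.
\]

Here \(H\) is the expansion rate, \(a\) the scale factor, \(\rho\) the mass-energy density, and \(k\) the spatial curvature. In a matter-dominated universe without a cosmological constant, \(\Omega > 1\) means gravity eventually wins and the universe recollapses in a ‘big crunch’, while \(\Omega \le 1\) means it expands forever; the fine-tuning point is that \(\Omega\) in the early universe had to sit extraordinarily close to 1 for galaxies, and long-lived observers, to be possible.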

Other examples of constants balanced on a razor’s edge apply to the universe as a whole, to our galaxy, to our solar system, and to our planet. Examples of fine-tuning include the amount of dark matter and dark energy (minimally understood at this time) relative to all the observable lumpy things like galaxies; the ratio of matter to antimatter; mass density and space-energy density; the speed of light; galaxy size and shape; our distance from the Milky Way’s center; the sun’s mass and metal content; atmospheric transparency . . . and so forth. These are measured, not just modeled, phenomena.

The theoretical physicist Freeman Dyson poignantly pondered these and the many other ‘coincidences’ and ‘initial conditions’, hinting at an omnipresent cosmic consciousness:
‘As we look out into the universe and identify the many accidents of physics and astronomy that have worked together to our benefit, it is almost as if the universe must in some sense have known we were coming.’
Perhaps as interestingly, humankind is indeed embedded in the universe, able to contemplate itself as an intelligent species; reveal the features and evolution of the universe in which humankind resides as an observer; and ponder our species’ place and purpose in the universe, including our alternative futures.

The metaphysical implications of the anthropic principle are many. One points to agency and design by a supreme being. Some philosophers, like St. Thomas Aquinas (13th century) and later William Paley (18th century), have argued this case. However, some critics of this explanation have called it a ‘God of the gaps’ fallacy — pointing out what’s not yet explained and filling the holes in our knowledge with a supernatural being.

Alternatively, there is the hypothetical multiverse model. Here, there is a multitude of universes, each assumed to have its own unique initial conditions and physical laws. And even though not all universes within this model may be amenable to the evolution of advanced intelligent life, it’s assumed that a universe like ours had to be included among the infinite number. That, at least, begins to speak to the German philosopher Martin Heidegger’s question, ‘Why are there beings at all, instead of nothing?’