By Keith Tidman
For many years now, people have been sounding the clarion call of Artificial Intelligence, buying into its everyday promise as it scrapes the internet’s trove for usable information while focusing largely on single tasks. But the call of what’s being referred to as Artificial General Intelligence, also known as ‘strong AI’ or simply AGI, has fallen on less-attentive ears. Often, its potentially vast abilities, acting as a proxy for the human brain’s rich neural network, have been relegated by popular culture’s narrow vision to the realm of science fiction.
Yet, the more likely impacts of strong AI will manifest themselves in the form of major shifts in how we model reality across all aspects of civilization, from the natural sciences to the social sciences and the full breadth of the humanities, where ultimately very few if any domains of human intellectual and other activity will be left untouched. In some cases, adjustments to theories of knowledge will accrete gradually, like coral; in others, they will amount to the vast paradigm shifts by which the philosopher of science Thomas Kuhn termed such fundamental change. These sweeping, forward-leaping shifts in knowledge and understanding will serve in turn as the fertile seedbeds of what’s designated AGI superintelligence.
We can expect, in the coming years, a steady stream of eureka moments, as physicists, neuroscientists, biologists, chemists, computer scientists, philosophers of mind, and others working on aspects of strong AI’s development explore the frontiers of what’s possible. Even so, there’s still a way to go before the full vision is realized, and the precise timeline is the subject of earnest debate. This, despite the fact that Nobel prizes were awarded in 2024 for investigations into this field, including for the development of machine-learning technology using artificial neural networks. (Geoffrey Hinton, often referred to as the ‘godfather of AI’, and physicist John Hopfield were among the awardees.)
Deep questions and much learning remain, however, around what’s necessary for even approximating the complexities of the human mind and consciousness, ranging from thinking about thinking to the insatiability of wide-eyed curiosity. After all, unlike the relatively more brute-force-like tactics of today’s narrower, so-called ‘weak AI’, the robustness of Artificial General Intelligence at its pinnacle will allow it to do all sorts of things: truly think, understand, ideate, experience, solve problems in unheard-of ways, experiment, deconstruct and reconstruct, intuit, engage in what-if thought experimentation, critically analyze, and innovate and create on grand scales.
Increasingly, the preceding abilities will be the stuff of science fact, not science fiction. And eventually, through the ensuing possibility of AGI’s self-optimization — that is, absent intervention by biased human algorithm-builders — Artificial General Intelligence will be able to do all that, and more, much better than humans can. Self-optimization means the technology managing its own evolutionary journey. That tipping point in the dash toward superintelligence will likely leave irrepressibly curious, enterprising humans and strong AI itself to figure out how to accommodate and catalyze each other, for the best outcome.
Within the philosophy of science, epistemic humility names the posture of scientific observation rooted in the acceptance that knowledge of the world is always interpreted, structured, and filtered by the observer, and that, consequently, pronouncements must be built on a recognition of how hard the world is to grasp. The approach has implications for the heady sprint toward superintelligence. Epistemic humility concerns the limits on what we know or think we know (provisional knowledge); the degrees of certainty or uncertainty with which we know it; what we don’t know but later might with further investigation; and what’s deemed, at least for now, flat-out unknowable. In other words, don’t just assume; instead, rationally, empirically verify or falsify, and then verify or falsify again, with our minds open to new information and calls for changed models. Artificial General Intelligence will be a critical piece of that humbling puzzle.
Other links between AGI and things, events, and conditions in the world will include, in the longer term, consciousness-like abilities such as awareness, perception, sentience, identity, presence in time and space, visions of alternative futures, anchors to history, pondering, volition, imagination, adaptation, innovation, sense of agency, memory — and more. That is, to know that it itself purposely exists. Just as the whole range of human cognitive capabilities emerges from the neurophysiological activity of a person’s brain, so will those capabilities emerge from the inner network of Artificial General Intelligence, its nonbiological foundation notwithstanding. Certainly, the future commercial scaling-up of quantum computers, with their stunningly fast processing compared even with today’s supercomputers (quantum computing is projected to be many millions of times faster for certain classes of problems), will help fast-track AGI’s reach. The international race is on.
Critics warn, though, that the technology could lead to civilizational collapse and human extinction. Two years ago, one advocacy organization hyperbolically framed humanity’s challenge in the arena of Artificial Intelligence as equivalent to mitigating the risk posed by the trajectory of climate change, the prospect of future pandemics, and the world’s bristling nuclear arsenals. I suspect such apocalyptic anxieties, although admittedly palpable, will ultimately prove to be unhelpful miscues and distractions on the upcoming AGI stage. Ridding ourselves, more and more, of what today’s AI industry daintily refers to as ‘hallucinations’, or in everyday parlance errors, will prove a critical early step toward strong AI. ‘Red teaming’ AGI models in structured environments, as those models grow in capability and complexity, will test for flaws, harms, vulnerabilities, and misbehaviors, in order to continually inform remediation strategies.
Guardrails are, of course, necessary, but they must not unduly hinder progress. It won’t be enough for even thoughtful protagonists and antagonists of AGI to contest ideas. Rather, the intellectual capital invested in ideas needs to be wide-ranging and inclusive. Humanity will therefore be best served if it allows informed, clear-minded multidisciplinary teams of specialists — ethicists, physicists, legal scholars, anthropologists, philosophers, technologists, neuroscientists, historians, sociologists, psychologists, government policymakers — along with the public at large to share their respective expertise and opinions in contemplating prudent ways forward, and for what purposes. Even, perhaps, to consider the potential rights and responsibilities of such stellarly smart systems.
In those contexts, we might expect that future development of Artificial General Intelligence will help enrich our understanding of what it means for us to be us in such a world of superintelligent, creative, expert systems. It will irrepressibly bring us to a place where human learning and machine learning intersect in mutually force-multiplying ways. As the technology evolves, the real challenge will be, in the long run, to fathom the world-altering, pan-visionary promise of what AGI can know, understand, innovate, and do as part of our common enterprises.
3 comments:
The glass half full envisions all the benefits from AGI. The glass half empty envisions the destructive use of AGI. Using the United States as an example, Republican politicians claimed for their convenience that Iraq was responsible for 9/11, claim that climate change is not a result of human activity, and claim that Trump actually won the 2020 election. Republican politicians refused to convict Trump for attempting a coup against American democracy while admitting among themselves that that was exactly what happened. What Republican politicians have demonstrated is that no amount of evil is beyond rationalization.
We live in a world where evil and duplicity reign. It is inconceivable that humans and countries will not use AGI development for evil purposes. Agreements between nations to limit destructive use of AGI will mean nothing. Nations routinely conduct secret research and development contrary to international agreements. We are reminded daily that no system can protect against computer hacking. Likewise, no system can be devised that will insulate AGI development from use for evil purposes.
I find it incomprehensible how this article is quite open in painting the future for, and of, humanity in such spectacularly altering terms yet feels no reason to be concerned. Quantum computers ARE fast. So fast that we will not, as social biological beings, have any ability to keep abreast of the changes envisaged, never mind the changes not even conceivable. 'Human' will become 'other'. It is inevitable. So what will the old-style humans do/be? Consider our brethren from the animal world (never mind our treatment of ourselves by ourselves) and the example current AI is gleaning from its trawling through our files. Do you seriously think AGI will not note this?
Keith Tidman’s article, *What’s Next for Artificial Intelligence?*, offers a thorough and thought-provoking exploration of the evolution and implications of Artificial General Intelligence (AGI). Below is a brief commentary on the key points:
### Strengths:
1. **Insightful Analysis**:
Tidman deftly situates AGI within both its scientific and philosophical contexts, particularly by referencing concepts like Thomas Kuhn's paradigm shifts and epistemic humility. This enriches the discussion with depth and nuance.
2. **Balanced Perspective**:
The article avoids falling into either utopian or dystopian extremes. While acknowledging the transformative potential of AGI, it also highlights challenges, ethical dilemmas, and risks, such as "hallucinations" in AI systems and the necessity of robust guardrails.
3. **Interdisciplinary Approach**:
The call for collaboration among ethicists, scientists, policymakers, and the public underscores the complexity of AGI's implications and the need for diverse expertise. This inclusivity is vital for navigating AGI's societal impact.
4. **Future-Oriented Vision**:
Tidman envisions AGI as a force multiplier for human intellectual and creative capacities, emphasizing its potential to reshape our understanding of humanity and knowledge itself. This optimistic tone inspires engagement rather than fear.
---
### Areas for Reflection:
1. **Potential Overemphasis on Positives**:
While risks like extinction are mentioned, they are somewhat downplayed as "unhelpful miscues." A more balanced engagement with such scenarios could strengthen the discussion.
2. **Ethical and Existential Questions**:
The article touches on AGI's potential rights and responsibilities but leaves this underexplored. Expanding on these points would address profound moral questions about AGI’s role in society.
3. **Public Accessibility**:
Some concepts, like epistemic humility and AGI self-optimization, might be challenging for a general audience. Simplifying or explaining these terms further could broaden the article's reach.
---
### Conclusion:
Tidman’s article successfully frames AGI as both a transformative and daunting frontier, encouraging careful and inclusive deliberation. By blending philosophical insight with technical foresight, the piece invites readers to consider not only what AGI might achieve but also how humanity can responsibly shape its trajectory. It’s a compelling roadmap for navigating the uncharted territory of strong AI.