By Keith Tidman
For many years now people have been sounding the clarion call of Artificial Intelligence, buying into its everyday promise in ways we’ve grown accustomed to, as it scrapes the internet trove for usable information while focusing on largely single tasks. But the call of what’s being referred to as Artificial General Intelligence, also known as ‘strong AI’ or simply AGI, has fallen on less-attentive ears. Often, its potentially vast abilities, acting as a proxy for the human brain’s rich neural network, have been relegated by popular culture’s narrow vision to the realm of science fiction.
Yet, the more likely impacts of strong AI will manifest themselves in the form of major shifts in how we model reality across all aspects of civilization, from the natural sciences to the social sciences and the full breadth of the humanities, where ultimately very few if any domains of human intellectualism and other activity will be left untouched. In some cases, adjustments to theories of knowledge will accrete like coral — reflecting vast paradigm shifts, as such fundamental change was termed by the philosopher of science Thomas Kuhn. These sweeping, forward-leaping shifts in knowledge and understanding will serve in turn as the fertile seedbeds of what’s designated AGI superintelligence.
We can expect, in the coming years, a steady stream of eureka moments, as physicists, neuroscientists, biologists, chemists, computer scientists, philosophers of mind, and others working on aspects of strong AI’s development explore the frontiers of what’s possible. Even to this extent, there’s still a way to go in order to grasp the full vision, where the precise timeline is the subject of earnest debate. This, despite the fact that Nobel prizes were chalked up in 2024 for investigations into this field, including for the development of machine-learning technology using artificial neural networks. (Geoffrey Hinton, often referred to as the ‘godfather of AI’, and physicist John Hopfield were among these awardees.)
Deep questions and much learning remain, however, around what’s necessary for even approximating the complexities of the human mind and consciousness, ranging from thinking about thinking and the insatiability of wide-eyed curiosity. After all, unlike the relatively more brute-force-like tactics of today’s narrower, so-called ‘weak AI’, the robustness of Artificial General Intelligence at its pinnacle will allow it to do all sorts of things: truly think, understand, ideate, experience, solve problems in unheard-of ways, experiment, deconstruct and reconstruct, intuit, engage in what-if thought experimentation, critically analyze, and innovate and create on grand scales.
Increasingly, the preceding abilities will be the stuff of science fact, not science fiction. And eventually, through the ensuing possibility of AGI’s self-optimization — that is, absent intervention by biased human algorithm-builders — Artificial General Intelligence will be able to do all that, and more, much better than humans can. Self-optimization translates to the technology managing its own evolutionary journey. That tipping point in the dash toward superintelligence will likely become a matter for irrepressibly curious, enterprising humans and strong AI itself to figure out how to accommodate and catalyze each other, for the best outcome.
Within the philosophy of science, the posture of scientific observation that is rooted in the acceptance that knowledge of the world is always interpreted, structured, and filtered by the observer and that, consequently, pronouncements need to be built on the recognition of how hard it is to grasp the world, is called epistemic humility. The approach has implications in the heady sprint toward superintelligence. Epistemic humility refers to the limits on what we know or think we know (provisional knowledge); the degrees of certainty or uncertainty with which we know it; what we don’t know but later might with further investigation; and what’s deemed, at least for now, as flat-out unknowable. In other words, don’t just assume; instead, rationally, empirically verify or falsify, and then verify or falsify again, with our minds open to new information and calls for changed models. Artificial General Intelligence will be a critical piece of that humbling puzzle.
Other links between AGI and things, events, and conditions in the world will include, in the longer term, consciousness-like abilities such as awareness, perception, sentience, identity, presence in time and space, visions of alternative futures, anchors to history, pondering, volition, imagination, adaptation, innovation, sense of agency, memory — and more. To know that itself purposely exists. Just as the whole range of human cognitive capabilities emerge from the neurophysiological activity of a person’s brain, similarly they will emerge from the inner network of Artificial General Intelligence, its nonbiological foundation notwithstanding. Certainly, the future commercially scaling up of quantum computers, with their stunningly ultra-fast processing compared even with today’s supercomputers (quantum computing is projected to be many millions of times faster), will help fast-track AGI’s reach. The international race is on.
Critics warn, though, that the technology could lead to civilizational and human extinction. Two years ago, one advocacy organization hyperbolically framed humanity’s challenge in the arena of Artificial Intelligence as equivalent to mitigating the risk posed by the trajectory of climate change, the prospect of future pandemics, and the world’s bristling nuclear arsenals. I suspect such apocalyptic anxieties, although admittedly palpable, will ultimately prove to be unhelpful miscues and distractions on the upcoming AGI stage. Ridding ourselves more and more of what today’s AI industry daintily refers to as ‘hallucinations,’ or in everyday parlance errors, will prove a critical early step in moving along toward strong AI. ‘Red teaming’ AGI models in structured environments, while such models evolve in capability and complexity, will test for flaws, harms, vulnerabilities, and misbehaviors, in order to continually inform remediation strategies.
Guardrails are, of course, necessary, but they must not unduly hinder progress. It won’t be enough for even thoughtful protagonists and antagonists of AGI to contest ideas. Rather, the intellectual capital invested in ideas needs to be wide-ranging and inclusive. Humanity will therefore be best served if it allows informed, clear-minded multidisciplinary teams of specialists — ethicists, physicists, legal scholars, anthropologists, philosophers, technologists, neuroscientists, historians, sociologists, psychologists, government policymakers — along with the public at large to share their respective expertise and opinions in contemplating prudent ways forward, and for what purposes. Even, perhaps, to consider the potential rights and responsibilities of such stellarly smart systems.
In those contexts, we might expect that future development of Artificial General Intelligence will help enrich our understanding of what it means for us to be us in such a world of superintelligent, creative, expert systems. It will irrepressibly bring us to a place where human learning and machine learning transect in mutually force-multiplying ways. As the technology evolves, the real challenge will be, in the long run, to fathom the world-altering, pan-visionary promise of what AGI can know, understand, innovate, and do as part of our common enterprises.
No comments:
Post a Comment