
Monday, 30 December 2024

What’s Next for Artificial Intelligence?


By Keith Tidman

For many years now people have been sounding the clarion call of Artificial Intelligence, buying into its everyday promise in ways we’ve grown accustomed to, as it scrapes the internet trove for usable information while focusing largely on single tasks. But the call of what’s being referred to as Artificial General Intelligence, also known as ‘strong AI’ or simply AGI, has fallen on less-attentive ears. Often, its potentially vast abilities, acting as a proxy for the human brain’s rich neural network, have been relegated by popular culture’s narrow vision to the realm of science fiction.

Yet, the more likely impacts of strong AI will manifest themselves in the form of major shifts in how we model reality across all aspects of civilization, from the natural sciences to the social sciences and the full breadth of the humanities, where ultimately very few if any domains of human intellectual or other activity will be left untouched. In some cases, adjustments to theories of knowledge will accrete like coral; in others, they will amount to vast paradigm shifts, the term the philosopher of science Thomas Kuhn gave to such fundamental change. These sweeping, forward-leaping shifts in knowledge and understanding will serve in turn as the fertile seedbeds of what’s designated AGI superintelligence.

 

We can expect, in the coming years, a steady stream of eureka moments, as physicists, neuroscientists, biologists, chemists, computer scientists, philosophers of mind, and others working on aspects of strong AI’s development explore the frontiers of what’s possible. Even so, there is still a way to go before the full vision is within grasp, and the precise timeline is the subject of earnest debate. This, despite the fact that Nobel prizes were chalked up in 2024 for investigations into this field, including for the development of machine-learning technology using artificial neural networks. (Geoffrey Hinton, often referred to as the ‘godfather of AI’, and physicist John Hopfield were among these awardees.)


Deep questions and much learning remain, however, around what’s necessary for even approximating the complexities of the human mind and consciousness, ranging from thinking about thinking to the insatiability of wide-eyed curiosity. After all, unlike the relatively brute-force tactics of today’s narrower, so-called ‘weak AI’, the robustness of Artificial General Intelligence at its pinnacle will allow it to do all sorts of things: truly think, understand, ideate, experience, solve problems in unheard-of ways, experiment, deconstruct and reconstruct, intuit, engage in what-if thought experimentation, critically analyze, and innovate and create on grand scales.

 

Increasingly, the preceding abilities will be the stuff of science fact, not science fiction. And eventually, through the ensuing possibility of AGI’s self-optimization — that is, absent intervention by biased human algorithm-builders — Artificial General Intelligence will be able to do all that, and more, much better than humans can. Self-optimization translates to the technology managing its own evolutionary journey. At that tipping point in the dash toward superintelligence, it will likely fall to irrepressibly curious, enterprising humans and to strong AI itself to figure out how to accommodate and catalyze each other, for the best outcome.


Within the philosophy of science, ‘epistemic humility’ is the name given to a posture of scientific observation rooted in the acceptance that knowledge of the world is always interpreted, structured, and filtered by the observer, and that pronouncements must therefore be built on a recognition of how hard the world is to grasp. The approach has implications for the heady sprint toward superintelligence. Epistemic humility encompasses the limits on what we know or think we know (provisional knowledge); the degrees of certainty or uncertainty with which we know it; what we don’t know but later might, with further investigation; and what’s deemed, at least for now, flat-out unknowable. In other words, don’t just assume; instead, rationally and empirically verify or falsify, then verify or falsify again, with minds open to new information and to calls for changed models. Artificial General Intelligence will be a critical piece of that humbling puzzle.
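One compact way to picture that provisional, always-revisable character of knowledge (an illustration of mine, not a formula the essay invokes) is Bayes’ rule, which prescribes how confidence in a hypothesis H should be updated when new evidence E arrives:

P(H | E) = P(E | H) · P(H) / P(E)

Here P(H) is the prior degree of belief, P(E | H) is how well the hypothesis anticipates the evidence, and P(H | E) is the revised, still-provisional degree of belief, itself open to revision when the next observation comes in.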

 

Other links between AGI and things, events, and conditions in the world will include, in the longer term, consciousness-like abilities such as awareness, perception, sentience, identity, presence in time and space, visions of alternative futures, anchors to history, pondering, volition, imagination, adaptation, innovation, sense of agency, memory — and more. To know that it itself purposely exists. Just as the whole range of human cognitive capabilities emerges from the neurophysiological activity of a person’s brain, so these capabilities will emerge from the inner network of Artificial General Intelligence, its nonbiological foundation notwithstanding. Certainly, the future commercial scaling-up of quantum computers, with their stunningly fast processing compared even with today’s supercomputers (quantum computing is projected to be many millions of times faster for certain classes of problems), will help fast-track AGI’s reach. The international race is on.

 

Critics warn, though, that the technology could lead to civilizational collapse and human extinction. Two years ago, one advocacy organization hyperbolically framed humanity’s challenge in the arena of Artificial Intelligence as equivalent to mitigating the risks posed by the trajectory of climate change, the prospect of future pandemics, and the world’s bristling nuclear arsenals. I suspect such apocalyptic anxieties, although admittedly palpable, will ultimately prove to be unhelpful miscues and distractions on the upcoming AGI stage. Ridding ourselves more and more of what today’s AI industry daintily refers to as ‘hallucinations’, or, in everyday parlance, errors, will prove a critical early step in moving toward strong AI. ‘Red teaming’ AGI models in structured environments, as such models evolve in capability and complexity, will test for flaws, harms, vulnerabilities, and misbehaviors, in order to continually inform remediation strategies.


Guardrails are, of course, necessary, but they must not unduly hinder progress. It won’t be enough for even thoughtful protagonists and antagonists of AGI to contest ideas. Rather, the intellectual capital invested in ideas needs to be wide-ranging and inclusive. Humanity will therefore be best served if it allows informed, clear-minded multidisciplinary teams of specialists — ethicists, physicists, legal scholars, anthropologists, philosophers, technologists, neuroscientists, historians, sociologists, psychologists, government policymakers — along with the public at large to share their respective expertise and opinions in contemplating prudent ways forward, and for what purposes. Even, perhaps, to consider the potential rights and responsibilities of such stellarly smart systems.

 

In those contexts, we might expect that the future development of Artificial General Intelligence will help enrich our understanding of what it means for us to be us in such a world of superintelligent, creative, expert systems. It will irrepressibly bring us to a place where human learning and machine learning intersect in mutually force-multiplying ways. As the technology evolves, the real challenge will be, in the long run, to fathom the world-altering, pan-visionary promise of what AGI can know, understand, innovate, and do as part of our common enterprises.

Monday, 23 May 2022

Are There Limits to Human Knowledge?


By Keith Tidman

‘Any research that cannot be reduced to actual visual observation is excluded where the stars are concerned…. It is inconceivable that we should ever be able to study, by any means whatsoever, their chemical or mineralogical structure’.
A premature declaration of the end of knowledge, made by the French philosopher Auguste Comte in 1835.

People often take delight in saying dolphins are smart. Yet, does even the smartest dolphin in the ocean understand quantum theory? No. Will it ever understand the theory, no matter how hard it tries? Of course not. We have no difficulty accepting that dolphins have cognitive limitations, fixed by their brains’ biology. We do not anticipate dolphins even asking the right questions, let alone answering them.

Some people then conclude that for the same reason — built-in biological boundaries of our species’ brains — humans likewise have hard limits to knowledge. And that, therefore, although we acquired an understanding of quantum theory, which has eluded dolphins, we may not arrive at solutions to other riddles. Like the unification of quantum mechanics and the theory of relativity, both effective in their own domains. Or a definitive understanding of how and from where within the brain consciousness arises, and what a complete description of consciousness might look like.

The thinking isn’t that such unification of branches of physics is impossible or that consciousness doesn’t exist, but that supposedly we’ll never be able to fully explain either one, for want of natural cognitive capacity. It’s argued that because of our allegedly ill-equipped brains, some things will forever remain a mystery to us. Just as dolphins will never understand calculus or infinity or the dolphin genome, human brains are likewise closed off from categories of intractable concepts.

Or at least, so it has been said.

Some who hold this view have adopted the self-describing moniker ‘mysterians’. They assert that, as members of the animal kingdom, Homo sapiens is subject to the same kinds of insuperable cognitive walls, and that it is hubris, self-deception, and pretension to proclaim otherwise. Theirs is a needless resignation.

After all, the fact that early hominids did not yet understand the natural order of the universe does not mean that they were ill-equipped eventually to acquire such understanding, or that they suffered from so-called ‘cognitive closure’. Early humans were not fixed solely on survival, subsistence, and reproduction, their existence defined by a daily grind, over the millennia, to hold onto the status quo.

Instead, we were endowed from the start with a remarkable evolutionary path that got us to where we are today, and to where we will be in the future. With dexterously intelligent minds that enable us to wonder, discover, model, and refine our understanding of the world around us. To ponder our species’ position within the cosmic order. To contemplate our meaning, purpose, and destiny. And to continue this evolutionary path for however long our biological selves ensure our survival as opposed to extinction at our own hand or by external factors.

How is it, then, that we even come to know things? There are sundry methods, including (but not limited to) these: logical, which entails the laws (rules) of formal logic, as exemplified by the iconic syllogism in which the conclusion follows from the premises; semantic, which entails the denotative and connotative definitions and context-based meanings of words; systemic, which entails the use of symbols, words, and operations related to the universally agreed-upon rules of mathematics; and empirical, which entails evidence, information, and observation that come to us through our senses and through such tools as those described below, to confirm, fine-tune, or discard hypotheses.
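As a concrete instance of the first, logical method (my own illustration, in standard notation, not something drawn from the essay), the textbook syllogism runs:

Premise 1: All humans are mortal.           ∀x (Human(x) → Mortal(x))
Premise 2: Socrates is a human.             Human(socrates)
Conclusion: Therefore, Socrates is mortal.  ∴ Mortal(socrates)

The conclusion follows from the premises by rules of inference alone, with no appeal to observation, which is precisely what separates the logical route to knowledge from the empirical one.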

Sometimes the resulting understanding is truly paradigm-shifting; other times it’s progressive, incremental, and cumulative — contributed to by multiple people assembling elements from previous theories, not infrequently stretching over generations. Either way, belief follows — that is, until the cycle of reflection and reinvention begins again. Even as one theory is substituted for another, we remain buoyed by belief in the commonsensical fundamentals of attempting to understand the natural order of things. Theories and methodologies might both change; nonetheless, we stay faithful to the task, embracing the search for knowledge. Knowledge acquisition is thus fluid, persistently fed by new and better ideas that inform our models of reality.

We are aided in this intellectual quest by five baskets of ‘implements’: Physical devices like quantum computers, space-based telescopes, DNA sequencers, and particle accelerators. Tools for smart simulation, like artificial intelligence, augmented reality, big data, and machine learning. Symbolic representations, like natural languages (spoken and written), imagery, and mathematical modeling. The multiplicative collaboration of human minds, functioning like a hive of powerful biological parallel processors. And, lastly, the nexus among these implements.

This nexus among implements continually expands, at a quickening pace; we are, after all, consummate crafters of tools and collaborators. We might fairly presume that the nexus will indeed lead to an understanding of the ‘brass ring’ of knowledge, human consciousness. The cause-and-effect dynamic is cyclic: theoretical knowledge driving empirical knowledge driving theoretical knowledge — and so on indefinitely, part of the conjectural froth in which we ask and answer the tough questions. Such explanations of reality must take account, in balance, of both the natural world and metaphysical world, in their respective multiplicity of forms.

My conclusion is that, uniquely, the human species has boundless cognitive access rather than bounded cognitive closure. Such that even the long-sought ‘theory of everything’ will actually be just another mile marker on our intellectual journey to the next theory of everything, and the next one — all transient placeholders, extending ad infinitum.

There will be no end to curiosity, questions, and reflection; there will be no end to the paradigm-shifting effects of imagination, creativity, rationalism, and what-ifs; and there will be no end to answers, as human knowledge incessantly accrues.