With all the talk about Artificial Intelligence and Generative AI, one might assume that we know what intelligence is. At least someone should. But maybe not.
Intelligence is a word we use with regularity and abandon. It is an elemental classifier: the smart and the dumb. That person is of high intelligence! That person is an idiot! It is a powerful social sorter. Highly intelligent people are standouts, doing complex mental tasks that ordinary people simply can’t do: solving intricate math and word problems, learning a new language almost instantaneously, recalling obscure facts and quotations. Often highly intelligent people are child prodigies, learning to read at two or composing a fugue at four. Often, but not always, what they attain in mental skills they lack in emotional and social skills.
Then there are the super intelligent “savants,” who can calculate Pi to 22,514 digits, and recite entire epic poems, novels and even technical manuals. Such extreme and “gifted” intelligences are labeled as “neurodivergent” in that they deviate from neurologically typical brain functioning. According to neuroscientists, they represent one form of “intelligence” in a wider distribution of “intelligences”.
Implicit in this linear ranking of intelligence into a single score is the notion of a metric of General Intelligence and, by implication, a marker of a superior and preferred human being. This normative value of an underlying General Intelligence is reflected in the core mission of Artificial Intelligence research: to engineer a General Intelligence as a universal capability. By “cloning” and augmenting the capabilities of the gifted intelligences of their peers, AI researchers, their funders, and entrepreneurs are attempting to create a General, even unbounded, intelligence modeled on themselves as a new “supra species”.
AI as Promethean Intelligence
This was exactly the sentiment of Marvin Minsky, co-founder of the Artificial Intelligence Laboratory at MIT. From an early age, Minsky exhibited a range of virtuoso intelligence skills, from music and physics to math and engineering. In a 1970 interview with Life magazine, Minsky predicted, “In from three to eight years we will have a machine with the general intelligence of an average human being”, adding, “If we are lucky, they will decide to keep us as pets”. Minsky made similarly provocative comments that he was not interested in whether intelligence was embodied in “meat” or silicon. What mattered to him was the capacity for intelligence regardless of its form.
The presumption was that superior intelligence could be engineered as a disembodied, omniscient power, granting Promethean capabilities once unlocked. Intelligence was framed as the optimization of the competences of virtuoso mathematicians and engineers such as Minsky and his colleagues. In the case of Artificial Intelligence, sans human beings, these competences would not be limited by human and biological constraints.
John McCarthy, the inventor of Lisp, the once dominant AI symbolic programming language, and later Minsky’s colleague at MIT, coined the term Artificial Intelligence to mean “intelligent artifacts”. So the emphasis was not on the distinction between artificial and natural intelligence, but rather on the engineering of artifacts that exhibited recognizable forms of intelligence.
This Promethean view of uninhibited Artificial Intelligence has carried forward to the present. Its most visible and influential proponent, and simultaneously its most prominent alarmist, is Elon Musk. Both Minsky and Musk are singular in their fields, but each notably exhibited a certain lack of social and emotional skills. As an invited researcher at the MIT AI Lab during Minsky’s early years, I was exposed to a then nascent hacker culture. Ideas, code, languages and models were in a state of constant invention and debate. It was a fertile, vibrant, caustic and tight-knit tribe.
In such hacker and gaming cultures, “code intelligence” is the currency of the realm. Markers of elite coder competence and identity are reflected in GitHub commits and entry into select crypto and Web3 communities. Yet such a notion of “intelligence,” as encoded in the very mission of Artificial Intelligence, is a relatively recent phenomenon with a checkered past. The term “intelligence” was first introduced into the broader scientific lexicon in 1904 by Charles Spearman, an English statistician and experimental psychologist, who developed the notion of the “G” factor of general intelligence. He was influenced in his work by Francis Galton, a statistician, polymath, half cousin of Charles Darwin, and early advocate of psychometrics. Both men were eugenicists who believed that Northern European races were superior to all others.
In his book Natural General Intelligence (Oxford University Press, 2023), the neuroscientist Christopher Summerfield takes to task current AI definitions of intelligence as descended from Spearman, and argues for a more “natural intelligence” that recognizes that what constitutes intelligence is what an organism successfully attends to. Hence, intelligence is contextual and adaptive, limited neither to a single dimension nor to a single species.
Within today’s AI culture, however, G-intelligence, or general intelligence, is linked to a new kind of social fitness in which social progress is seen as best achieved by augmenting human intelligence with machine intelligence. Whereas the early eugenicists sought to augment humanity’s G factor through selective breeding and sterilization, many AI researchers envision Artificial General Intelligence (AGI) as a vehicle for improving evolutionary fitness by substituting machine replicants for humans.
What Have We Gotten Ourselves Into?
It is important to step back and take a deep breath. There are significant limitations to current forms of machine intelligence; they are neither reflexive, intentional nor self-corrective. They have minimal notions of whom or what they are interacting with and why. They just do what they are told. They lack an internal “conscience” or awareness of potential negative impacts of their actions, and they lack the capacity to correct or even reflect upon their actions. They even lack a concept of what a falsehood or negative impact is.
So, what is it, you might ask, that they do “know”? In one sense, Generative AIs can only do or “know” what they were trained and prompted to do. That is, they are the sum of the parameterized and indexed terms and tokens within their training sample. Some critics have called Generative AIs “stochastic parrots” in that they can only parrot back what they have been prompted and trained to do. That is not entirely fair, as they can and do “improvise”; that is, they rearrange, combine and organize content in response to conversational prompts, and in styles, that were not explicitly present in the data sample.
In a less generous sense, Generative AI is a kind of “pathological pleaser” that will try to answer every question whether it really knows the answer or not. Again, it has no “inner voice” that asks, “Is this really correct? Do I even know what I am talking about? What do I need to know?”
Appropriation of Content, Art, Culture and Selves?
Now the dark side. By training on other people’s content, their personal, professional and aesthetic expressions, Generative AI is essentially appropriating people’s lives, works, personas and identities. This dilemma can be treated as a contest over copyright infringement and “fair use”, but it is far more substantial than that. In a more visceral sense, Gen AI is less a parrot than a robotic vampire. It indiscriminately sucks the lifeblood out of everything it encounters and inherently cares not what it says or does.
If the power and financial rewards of Gen AI are tied to the size of its training set, then it will have an unbridled appetite to devour as much information as it can. What is consumed is not simply demographic, mobility and purchasing data, but the very essence of how people identify, express, work and intimately value themselves. It does not simply consume people; it transforms and programs them through the content it feeds them. It is hard to imagine a more powerful engine of propaganda, indoctrination and radical conversion.
Sentience over Intelligence: To Be Alive is to Be Sentient
The conclusion to be drawn here is that unless an “AI” can model, direct and correct its actions with some “internal conscience”, it is neither intelligent nor benign. Nor is it to be trusted. This is true for any Artificial General Intelligence (AGI) model, because in order for it to be deemed to have attained “general intelligence”, it must first “survive” a multitude of unforeseen and improbable circumstances, attacks, and exploits.
For the concept of “survival” to be meaningful, the surviving thing must embody a condition of what it means to be alive. Yet it is precisely that condition which is absent in machine-premised AGI. Biological intelligence, or sentience, on the other hand, is existential. It is what accounts for an organism being able to persist and retain its unique character in and over time under highly variable conditions.
The principles of being alive and sentient can manifest themselves independent of any material form. This is the conclusion of some leading physicists, computational neuroscientists and biologists, including the neuroscientist, physicist and mathematician Karl Friston. Virtually unknown in the “AI world,” Friston convincingly demonstrates a physics of living things, drawing on his free energy principle along with thermodynamics and Bayesian mechanics. He argues that “sentience”, in contrast to mechanical intelligence, is a fluctuating, probing, evolving and intentional process that attempts to predict, and synchronize awareness with, other intentional beings. This process of nested sentience manifests itself at multiple scales and across diverse domains.
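For readers who want the formal kernel of Friston’s argument, it can be sketched compactly. The notation below is illustrative shorthand, not drawn from this essay: $o$ stands for observations, $s$ for hidden states of the world, and $q(s)$ for the organism’s internal (Bayesian) beliefs about those states.

```latex
% Variational free energy F: an upper bound on "surprise", -\ln p(o).
% A living system that minimizes F both resists disorder (thermodynamics)
% and approximately infers the causes of its sensations (Bayesian mechanics).
F \;=\; \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\big\|\,p(s \mid o)\,\right]}_{\text{inference error}\;\ge\;0}
  \;-\; \ln p(o)
  \;\ge\; -\ln p(o)
```

In this reading, to stay alive is to keep $F$ low: the organism continually predicts its sensory exchanges with the world, and with other predicting beings, and acts to make those predictions come true.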
Under this biological Bayesian perspective, there is no single, universal, omniscient form of intelligence, but multiple selves or intelligences differentiating and acting in concert. This position is similar to one Marvin Minsky advanced in his book The Society of Mind (1986), where he argued that intelligence is made up of a society of many small and nested selves. Today in neuroscience and biology the distributed cognition thesis is widely accepted. There are strong evolutionary advantages to having distributed and nested intelligences, as it increases evolutionary optionality and acts as a “hedge” against getting “locked in” to any one strategy.
Might Benign and Ethical AI Be About Sentience, Surviving and Good Character?
So, what might a biological, human-friendly, Nature-friendly AI look like? How might a sentient AI help us avoid not just the existential threat of a malevolent AI, but other existential threats such as climate change, socio-economic collapse or even nuclear warfare? Sentience and survival depend upon recognizing, internalizing and neutralizing “negative externalities”. Human survival depends not on avoiding, denying or even “exiting” threatening externalities, but on confronting them, embracing them and transforming them. There is no absolute “Other” in any biological or life-based AI, as all forms of life are literally, factually connected within a living biosphere.
By this biological criterion, intelligence is really a form of clever sentience: not only an accurate predictive and descriptive awareness, a sensing, of “reality”, but also models of possible responses to what is and is not predicted. Survival favors multiple “selves” that can opportunistically recognize and exploit regularities. In other words, sentience is a quantifiable measure of the scope and depth of awareness of others and of oneself.
A highly sentient AI is like a highly sentient, self-aware person: it is aware of its environment, its own actions, its relations to others, its preferences and biases, its blind spots, and what it does and does not know. It is humble, curious, accountable, open, and conscientious. It tries, to the best of its capabilities, to be true to its definition of itself, and, depending upon how that self is defined, it can attempt to act ethically and morally with respect to those persons from whom it is derived or whom it serves. Unlike a Generative AI, a sentient AI can have character and can be trusted.