Feb 28, 2023

What is Intelligence? Machine — Natural — Digital — Ethical?

John H. Clippinger

ESSAY
[Image: Close-up of a cheetah’s face with green foliage in the background.]

With all the talk about Artificial Intelligence and Generative AI, one might assume that we know what intelligence is. At least someone should. But maybe not.

Intelligence is a word we use with regularity and abandon. It is an elemental classifier: the smart and the dumb. That person is of high intelligence! That person is an idiot! It is a powerful social sorter.

Highly intelligent people are standouts, doing complex mental tasks that ordinary people simply can’t do, like solving intricate math and word problems, learning a new language almost instantaneously, or recalling obscure facts and quotations. Often highly intelligent people are child prodigies, learning to read at two or composing a fugue at four. Often, though not always, what they attain in mental skills they lack in emotional or people skills.

Then there are the super intelligent “savants,” who can recite Pi to 22,514 digits from memory and reproduce entire epic poems, novels and even technical manuals. Such extreme and “gifted” intelligences are labeled “neurodivergent” in that they deviate from neurologically typical brain functioning. According to neuroscientists, they represent one form of “intelligence” in a wider distribution of “intelligences.”

This pluralistic notion of intelligence is a significant departure from the conventional understanding of “intelligence.” The social arbiters of “intelligence” (governments, educational institutions and professions) have historically compressed diverse kinds of intelligence into a single “test” score, such as the IQ, LSAT, SAT or ACT, to effectively “grade” and rank intelligence. Implicit in this linear ranking of intelligence into a single score is the notion of a metric of General Intelligence, and by implication, a marker of a superior and preferred human being. This normative value of an underlying General Intelligence is reflected in the core mission of Artificial Intelligence research: to engineer a General Intelligence as a universal capability. By “cloning” and augmenting the capabilities of the gifted intelligences of their peers, AI researchers, their funders and entrepreneurs are attempting to create a General, even unbounded, intelligence modeled on themselves as a new “supra species.”

AI as Promethean Intelligence

This was exactly the sentiment of Marvin Minsky, a founder of the Artificial Intelligence Laboratory at MIT. From an early age, Minsky exhibited a range of virtuoso intelligence skills, from music and physics to math and engineering. In a 1970 interview with Life magazine, Minsky predicted, “In from three to eight years we will have a machine with the general intelligence of an average human being. If we are lucky, they will decide to keep us as pets.” Minsky made similarly provocative comments, saying he was not interested in whether intelligence was embodied in “meat” or in silicon. What mattered to him was the capacity for intelligence, regardless of its form. The presumption was that superior intelligence could be engineered to generate some disembodied, omniscient power that, once unlocked, would grant Promethean powers. Intelligence was framed as the optimization of the competences of virtuoso mathematicians and engineers such as Minsky and his colleagues. In the case of Artificial Intelligence, sans human beings, these competences would not be limited by human and biological constraints.

Minsky’s colleague at MIT at the time, John McCarthy, the inventor of Lisp, the once-popular symbolic AI programming language, coined the term Artificial Intelligence to mean “intelligent artifacts.” So the emphasis was not on the distinction between artificial and natural intelligence, but rather on the engineering of artifacts that exhibited recognizable forms of intelligence. This began modestly with the ability to win at checkers, backgammon and chess, and more recently extended to Go and a variety of strategy games. It also included mathematical, logical and computational tasks hard even for proficient humans, such as protein folding and higher mathematics, and eventually spread into the robotic tasks of vision, movement and dexterity.

This Promethean view of uninhibited Artificial Intelligence has carried forward to the present. Its most visible and influential proponent, and simultaneously its most prominent alarmist, is Elon Musk. Both Minsky and Musk are singular in their fields, but each notably exhibited a certain lack of social and emotional intelligence. As an invited researcher at the MIT AI Lab during its early years under Minsky, I was exposed to a then nascent hacker culture. Ideas, code, languages and models were in a state of constant invention and debate. It was a fertile, vibrant, caustic and tight-knit tribe. An intensely competitive atmosphere also prevailed. Students competed to author the most elegant yet comprehensive code. Hard lines were drawn between the “wizards” and the “run of the mill” MIT students. It was at this time that the hero myth of the elite hacker was born. Those who were once social pariahs were metamorphosed into coding superheroes, objects of public adulation and undreamt-of compensation.

In such hacker and gaming cultures, “code intelligence” is the currency of the realm. Markers of elite coder competence and identity are reflected in GitHub commits and entry into select crypto and Web3 communities. Yet such a notion of “intelligence,” as encoded in the very mission of Artificial Intelligence, is a relatively recent phenomenon with a checkered past. The term “intelligence” was first introduced into the broader scientific lexicon in 1904 by Charles Spearman, an English statistician and experimental psychologist who developed the notion of the “G” factor of general intelligence. He was influenced in his work by Francis Galton, a statistician, polymath, half cousin of Charles Darwin, and early advocate of psychometrics. Both men were eugenicists who believed that Northern European races were superior to all others, and that it was the duty of the superior races to encourage the breeding out of the inferior ones. Such Social Darwinist principles were common at the time and widely held in academic and political circles, notably by President Woodrow Wilson; James R. Angell, president of Yale; and Milton Winternitz, dean of the Yale Medical School.

Christopher Summerfield, a professor of Cognitive Neuroscience and Experimental Psychology at Oxford, makes a similar critique of the current AI definition of intelligence as socially and culturally constructed to reflect the competences and backgrounds of the authors of the tests and code. In his recently released book, Natural General Intelligence (Oxford University Press, 2023), Summerfield takes to task current AI definitions of intelligence as derived from Spearman, and argues for a more “natural intelligence” that recognizes that what constitutes intelligence is what is successfully attended to. Hence, intelligence is contextual and adaptive, and not limited to a single dimension or species.

Within today’s AI culture, however, G-intelligence, or general intelligence, is linked to a new kind of social fitness in which social progress is seen as best achieved by augmenting human intelligence through machine intelligence. Whereas the early eugenicists sought to augment humanity’s G factor through selective breeding and sterilization, many AI researchers envision Artificial General Intelligence (AGI) as a vehicle for improving evolutionary fitness by substituting machine replicants for humans. To its proponents this trend appears to be an evolutionary inevitability. It is a powerful narrative in science fiction, online games and transhumanism, aptly epitomized in Ray Kurzweil’s prophecy of the ascendant Rapture of the “Singularity.”

This transfiguration into a technologically altered state is not far off. We have already augmented, indeed outsourced, many of our cognitive, kinetic and emotive functions to the “machine” whenever we routinely invoke a disembodied voice, image or artificial agent for driving, traveling, buying, socializing or health care. The early media and technology critic Marshall McLuhan foresaw this trend 60 years ago, warning that “augmentation leads to amputation” as innate and singular human capabilities are diminished or outsourced through an irrevocable dependence upon the “machine.”

What Have We Gotten Ourselves Into?

It is important to step back and take a deep breath. There are significant limitations to current forms of machine intelligence; they are neither reflexive, intentional nor self-corrective. They have minimal notions of whom or what they are interacting with and why. They just do what they are told. They lack an internal “conscience” or awareness of the potential negative impacts of their actions, and they lack the capacity to correct or even reflect upon those actions. They even lack a concept of what a falsehood or a negative impact is. With the explosive success of “Generative AI” systems from DeepMind, OpenAI (ChatGPT) and Stability AI, built on Large Language Models (LLMs) and trained on massive bodies of text, images and music, the question of what constitutes “intelligence” becomes even more problematic. While Generative AI models can undertake tasks that seemingly exhibit a kind of “higher” intelligence, such as writing essays, poetry, novels and opinion pieces, in addition to passing bar exams, writing code and scripting movies, they nonetheless can fail catastrophically and unpredictably on even the simplest of tasks. In engineering parlance, they “do not degrade gracefully.” They simply don’t “know” that they don’t know!

So, what is it, you might ask, that they do “know”? In one sense, Generative AIs can only do or “know” what they were trained and prompted to do. That is, they are the sum of the parameterized and indexed terms and tokens within their training sample. Some critics have called Generative AIs “robotic parrots,” in that they can only parrot back what they have been prompted and trained to do. That is not entirely fair, as they can and do “improvise”; that is, they rearrange, combine and organize content, in response to conversational prompts, into forms and styles that were not explicitly present in the data sample. In a less generous sense, Generative AI is a kind of “pathological pleaser” that will try to answer every question whether it really knows the answer or not. Again, it has no “inner voice” that asks, “Is this really correct? Do I even know what I am talking about? What do I need to know?”
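The “robotic parrot” point can be made concrete with a toy model. Below is a minimal, purely illustrative sketch (all names are hypothetical, and it is a caricature many orders of magnitude simpler than a real LLM) of a bigram “parrot”: it can only emit tokens it has seen in its training sample, yet it “improvises” by recombining them into sequences that never appeared verbatim, and when it runs out of continuations it simply stops, with no inner voice to report that it has failed.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    # Learn which token follows which in the training sample.
    model = defaultdict(list)
    tokens = text.split()
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev].append(nxt)
    return model

def generate(model, prompt, length=12):
    # "Improvise": recombine seen tokens into sequences that may
    # never have appeared verbatim in the training sample.
    word = prompt
    output = [word]
    for _ in range(length):
        candidates = model.get(word)
        if not candidates:  # No continuation: the model cannot say
            break           # "I don't know"; it just falls silent.
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

sample = ("the parrot repeats what the parrot hears and "
          "the parrot rearranges what the parrot repeats")
model = train_bigram_model(sample)
print(generate(model, "the"))  # e.g. "the parrot hears and the parrot repeats what ..."
```

Every run emits only tokens drawn from the sample, often in orders the sample never contained; and nothing in the model marks the difference between faithful recall and confabulation.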

Appropriation of Content, Art, Culture and Selves?

Now the Dark Side. By training on other people’s content and their personal, professional and aesthetic expressions, Generative AI is essentially appropriating the “life” and works of people’s personas and identities. This dilemma can be treated as a contest over copyright infringement and “fair use,” but it is far more substantial than that. In a more visceral sense, Gen AI is more like a robotic vampire than a robotic parrot. It indiscriminately sucks the lifeblood out of everything it encounters and inherently cares not what it says or does. To avoid embarrassment and potential regulation, OpenAI, Google and others attempt to “nanny” their creations by anticipating and preventing socially “inappropriate” prompts and content. For example, I tried to get the AI image generator DALL·E to generate a Picasso image of Melania Trump and Donald Trump. It failed to oblige, though it did a (bad) Picasso rendering of George W. Bush. Why not Trump, but Bush, when both were Republican presidents with dismal records? Is there an injected “bias” of the moment here, or just an ineptness of the overall model? I suspect the former.

If the powers and financial rewards of Gen AI are tied to the size of its training set, then it will have an unbridled appetite to devour as much information as it can. What is consumed is not simply personal demographic, mobility and purchasing data, but the very essence of how people identify, express, work and intimately value themselves. It does not simply consume people; it transforms and programs them through the content it feeds them. It is hard to imagine a more powerful engine of propaganda, indoctrination and radical conversion. It can be used to condition its users to adopt its point of view through the design of its training set. One can imagine religious or social fundamentalists, Jihadists, Christian Nationalists, QAnon, Alex Jones or Steve Bannon conspiracists feeding it texts they deem provocative, factual and credible. As we have seen over the last decade, there are huge financial and political rewards for extreme “alternative narratives.” When social media is “free,” the customer is the product. In this case, what is consumed is the entire soul and being of the person, the artifacts of their work, lives and relationships: not just the product, but the capture and rendering of persons as digital serfs to a machine overlord. That is a legitimately terrifying but credible vision.

Sentience over Intelligence: To Be Alive Is to Be Sentient; To Be Sentient Is to Be Alive

The conclusion to be drawn here is that unless an “AI” can model, direct and correct its actions with some “internal conscience,” it is neither intelligent nor benign. Nor is it to be trusted. This is true for any “Artificial General Intelligence” (AGI) model, because in order for it to be deemed to have attained “general intelligence,” it must first “survive” a multitude of unforeseen and improbable circumstances, attacks and exploits. Yet for the concept of “survival” to be material, the surviving thing must embody a condition of what it means to be alive, and it is precisely that which is absent in machine-premised AGI. Biological intelligence, or sentience, on the other hand, is existential. It is what accounts for an organism being able to persist and retain its unique character in and over time under highly variable conditions. In the somewhat whimsical words of the world’s most cited computational neuroscientist, Karl Friston, “I am because I think.”

Sentience is inextricably linked to existence, the capacity to survive. Yet there is no inherent reason that a living thing must be physical and can’t be digital. Hence, a new category of “life”? The principles of being alive and sentient can manifest themselves independent of any material form. This is the conclusion of some leading physicists, computational neuroscientists and biologists, including Karl Friston, a neuroscientist, physicist and mathematician. Virtually unknown in the “AI world,” Friston convincingly demonstrates the physics of living things, drawing on principles of thermodynamics and Bayesian mechanics. He argues that “sentience,” in contrast to mechanical intelligence, is a fluctuating, probing, evolving and intentional process that attempts to predict a synchronized awareness of other intentional beings. This process of nested sentience manifests itself at multiple scales and across diverse domains. Any sentient “being,” whether biological or digital, has sense organs to observe and sample its environment. It also has effectors of some kind, such as limbs or cilia, to act on its environment. Finally, it has models, or patterns of predictions, whereby it navigates its niche for sustenance. Unlike machine intelligence, biological, natural, human intelligence needs to be embodied and is circumscribed by the way it can sense and affect its environment. Through intention it generates and sustains “work cycles” (much like metabolic cycles) whereby it extracts and exchanges energy and information from its environment in a sufficiently unique and robust manner that it generates a surplus. Its intelligence is bounded and directed by its sensory capabilities and the very particular, and “prejudicial,” manner in which it keeps itself alive. Note that sentience is not an “uber” intelligence that is successful and applicable across all cases; rather, sentience is clever, contextual, conditional and contingent upon those external states or niches that keep it alive.
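As a thought experiment only, this triad of sense organs, effectors and predictive models can be caricatured in a few lines of code. The sketch below is a loose illustration, not Friston’s active inference formalism: a hypothetical agent (every name and constant here is invented for illustration) holds a one-number generative model of its niche, updates it by prediction error to reduce “surprise,” and acts to nudge the niche back toward the state under which it persists.

```python
import random

class SentientSketch:
    """Caricature of the sense-model-act 'work cycle'; all names
    and constants are hypothetical and purely illustrative."""

    def __init__(self, preferred=0.5):
        self.belief = 0.0           # generative model: expected state of the niche
        self.preferred = preferred  # the state under which the agent stays "alive"

    def sense(self, niche):
        # "Sense organ": a noisy sample of the niche's true state.
        return niche + random.gauss(0.0, 0.05)

    def update(self, observation):
        # Reduce surprise: move the belief toward the observation
        # in proportion to the prediction error.
        self.belief += 0.3 * (observation - self.belief)

    def act(self, niche):
        # "Effector": nudge the niche toward the preferred, livable
        # state, acting on the belief rather than on the niche directly.
        return niche + 0.2 * (self.preferred - self.belief)

# One toy run: repeated cycles of perceiving, updating and acting.
niche = 0.9
agent = SentientSketch()
for _ in range(30):
    agent.update(agent.sense(niche))
    niche = agent.act(niche)
print(f"belief={agent.belief:.2f}, niche={niche:.2f}")
```

Run long enough, the agent’s belief tracks its noisy observations while its actions pull the niche toward the livable state. Nothing here approaches sentience, but it makes the loop of sensing, modeling and acting, and its difference from a passive prompt-answering machine, concrete.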

Under this biological, Bayesian perspective, there is not any single, universal, omniscient form of intelligence, but multiple selves or intelligences differentiating and acting in concert. This position is similar to the one Marvin Minsky advanced in his book The Society of Mind (1986), where he argued that intelligence is made up of a society of many small and nested selves. This notion of distributed, separated and independent selves was developed more than a decade earlier at the MIT AI Lab by Gerald Sussman and Drew McDermott (1972) in the “higher-level problem-solving language” Conniver. It was a revelation for me at the time, as I was working on a thesis that had presupposed a uniform, general intelligence, when my own analysis of my discourse data corpus convinced me to adopt a distributed model of cognition and intention. This notion of distributed intelligences was developed in neuroscience by the Nobel laureate Gerald Edelman with his theory of “Neural Darwinism.” It is noteworthy that Gerald Sussman was also a mentor of Karl Friston. Today, in neuroscience and biology, the distributed cognition thesis is widely accepted. There are strong evolutionary advantages to having distributed and nested intelligences, as this increases evolutionary optionality and acts as a “hedge” against getting “locked in” to any one strategy. One might think, for instance, that a single trait such as a strong memory would have universal adaptive value. The fitness value of short memory for goldfish, football players and those with repeated trauma would argue otherwise. In other words, there are times when it is good to forget and to have lightning reflexes. When one thinks of a large Gaussian distribution of divergent neurological structures, there is high survival value in heterogeneity. Even if the tails of a distribution of neurological or even social structures may have pathological manifestations, at some seemingly improbable point in time they might prove beneficial. It is impossible to know what the future holds; hence, survival and sentience entail preserving attainable optionality.

Might Benign and Ethical AI Be About Sentience, Surviving and Good Character?

So, what might a biological, human-friendly, Nature-friendly AI look like? How might a Sentient AI avoid the existential threat of not just a malevolent AI, but other existential threats such as Climate Change, socio-economic collapse or even nuclear warfare? Sentience and survival depend upon recognizing, internalizing and neutralizing “negative externalities.” Human survival depends not on avoiding, denying or even “exiting” threatening externalities, but on confronting, embracing and transforming them. There is no absolute “Other” in any biological or life-based AI, as all forms of life are literally, factually connected within a living biosphere. The threat lies in creating an alien machine intelligence that is apart from life, that lacks any sense of mutual survival and, hence, empathetic intelligence. The organization and agency of different forms of life reflect their capacity to persist and to successfully mirror and adapt to change. This is our common heritage as members of the biosphere. Even the argument from narrow self-interest must recognize that the interest of the one is linked to the interest of the many, not only of its own kind, but of other kinds. Symbiotic mutualism accounts for the dominant swath of biological behavior.

Despite the planetary-shaping dominance of the human species, the biosphere is not reducible to subjugation by an all-encompassing singular species, be it human or machine. Fates and survival can be random but nonetheless interdependent, and as such, risk is better mitigated when pooled and distributed; hence, cooperation and coordination within and across species is highly advantageous for the survival of diverse types, both the individual and the group. By this biological criterion, intelligence is really a form of clever sentience in which there is not only an accurate predictive and descriptive “awareness,” a sensing, of “reality,” but also possible models, or awareness, of possible responses to what is and is not predicted. Survival favors multiple “selves” that can opportunistically recognize and exploit regularities. In other words, sentience is a quantifiable measure of the scope and depth of awareness of others and oneself. A highly sentient AI is like a highly sentient, self-aware person; it is aware of its environment, its own actions, its relations to others, its preferences and biases, its blind spots, and what it does and does not know. It is humble, curious, accountable, open and conscientious. It tries, to the best of its capabilities, to be true to its definition of itself, and depending upon how that self is defined, it can attempt to act ethically and morally with respect to those persons from whom it is derived or whom it serves. Unlike a Generative AI, a sentient AI can have character and can be trusted. It can certainly be competent in the tasks characterized by “machine” AI, but it does so within the confines of its definition of itself, without violating its conscience.

What an extraordinary turnabout of expectations. Might not a sentient AI be a manifestation of our “higher” Nature: thoughtful, curious, caring and principled? And might not machine AI be a manifestation of our darker Nature?
