Is Artificial Intelligence Demonic?

We should be alert to where the power to deceive resides.

I wrote this post in July, but I have continued to think and read and tinker with its contents as time has gone by. I've decided to go ahead and publish it and will be interested in your feedback.


Between 1964 and 1966, MIT professor Joseph Weizenbaum created what was probably the world's first chatbot, which he named "ELIZA".

ELIZA represented an early experiment in the then nascent computing field of natural language processing (NLP). NLP involves, in general terms, the analysis, processing and production of human language by computers. ELIZA presented its users with the persona of a Rogerian therapist, interacting in ways that mimicked the responses that a sympathetic therapist might offer to a client.

By the standards of current AI-based chatbots, ELIZA was an extremely primitive implementation. But for all of its primitiveness, it uncovered a striking phenomenon: once users encountered its ability to mimic human language, they began to conceive of the machine as a person.

State-of-the-art chatbots are vastly more sophisticated than ELIZA. Recent advances in artificial intelligence have enabled an entirely new class of natural language processing, with ChatGPT perhaps being the most widely discussed chatbot in the world. Many gallons of digital ink are being spilled by users recounting their fascinating interactions with this new language model. These experiences run the gamut from silly to profound and, in some cases, disturbingly uncanny.

What is happening here? What are these chatbots doing when they respond to human prompts with meaningful language? Is there "someone" in the machine? Have the makers of these chatbots somehow conjured up a sentient being from the netherworld?

These are neither trivial nor irrelevant questions. Many people have come away from their interactions with modern chatbots aghast at their experience. Sophisticated language interactions with machines have proven to have a curious effect on human beings. You might even say these bots have a kind of spellbinding allure.

One of the indicators that a certain mysticism is creeping into our collective psyche regarding AI is the growing prevalence of anthropomorphic language being employed to describe it. AI is routinely written and spoken about in mystical tones, as if it has its own agency. It is active, and doing, and knowing things. When a response from an AI language model turns out to be nonsensical, or just wrong, we describe it as "hallucinating". (Alas, the wrong answers I supplied to my high school algebra teacher were never quite marked as "hallucinations". No doubt my second-year algebra grade might have been improved had I convinced my teacher that my own intelligence was somewhat artificial. Although I'm haunted by the suspicion she may have concluded that on her own.)

It is noteworthy that so much alarm has emerged around language models when the response to, say, self-driving cars was much more restrained. Both are applications of artificial intelligence. Both involve intense arithmetic calculations -- lots of matrix and dot product computations. But self-driving AIs manifest themselves by controlling machines, while chatbots manifest themselves using words. Their affinity for words seems to make a lot of difference in how people perceive the "intelligence" part of "artificial intelligence".
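To make the arithmetic point concrete, here is a minimal sketch - the input values and layer shape are invented for illustration - of the kind of computation both sorts of system run on: a matrix multiply, which is nothing more than a bundle of dot products, followed by a simple nonlinearity.

```python
import numpy as np

# A minimal, illustrative "layer" of a neural network. Whether the system
# steers a car or predicts a word, the underlying step looks like this:
# dot products of inputs against learned weights, then a nonlinearity.
rng = np.random.default_rng(0)

inputs = np.array([0.2, -1.3, 0.7])   # stand-in for sensor readings or word features
weights = rng.normal(size=(4, 3))     # stand-in for learned parameters
bias = np.zeros(4)

activations = np.maximum(0, weights @ inputs + bias)  # matrix multiply + ReLU
print(activations)
```

Real systems stack thousands of such layers and billions of such weights, but the arithmetic is the same in kind.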

The mystical reaction to Weizenbaum's ELIZA system bothered its creator so much that in 1976 he wrote a book, Computer Power and Human Reason, to try to counter the delusional effect that his talking machine had on users. That reaction now even has a name: the "ELIZA effect", so called because it was first observed in response to Weizenbaum's system.

In the early days of natural language processing, the techniques employed involved codifying syntax rules and building comprehensive digital dictionaries usable by software, which would then process language through rule-based syntactic analysis. But eventually technologists realized that, given sufficient quantities of textual data, a statistical analysis of the data can be more effective than analyzing the actual syntax of the language. This was especially true where language translation was concerned. Over the last 25 years, the quality of automated language translation has skyrocketed through the use of statistical techniques rather than syntactic and word-meaning analysis. Notably, the quality of translation has grown even as the actual linguistic analysis performed by the computer has shrunk. If you have a large enough corpus of translated documents, you are better off combining statistical probabilities with text substitution than actually translating the language itself.
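A toy sketch can make the statistical idea concrete. The four-sentence "parallel corpus" and the Dice-coefficient scoring below are illustrative assumptions - nothing like a production translation system - but they show how plausible word translations can fall out of co-occurrence counts alone, with no grammar in sight.

```python
from collections import Counter

# A tiny invented parallel corpus (French/English pairs).
parallel = [
    ("le chat dort", "the cat sleeps"),
    ("le chien dort", "the dog sleeps"),
    ("le chat mange", "the cat eats"),
    ("un chat mange", "a cat eats"),
]

# Count words on each side, and how often source/target words co-occur.
src_counts, tgt_counts, cooccur = Counter(), Counter(), Counter()
for src, tgt in parallel:
    src_words, tgt_words = src.split(), tgt.split()
    src_counts.update(src_words)
    tgt_counts.update(tgt_words)
    cooccur.update((s, t) for s in src_words for t in tgt_words)

def translate_word(s):
    # Score each target word by a Dice coefficient over co-occurrence counts:
    # high when s and t appear together about as often as they appear at all.
    return max(tgt_counts, key=lambda t: 2 * cooccur[s, t] / (src_counts[s] + tgt_counts[t]))

print(" ".join(translate_word(w) for w in "le chat dort".split()))
# -> "the cat sleeps", recovered from counts alone, no syntax consulted
```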

The massive digital corpus of human-generated documents, which has accumulated on the internet over the last 30 years, represents a valuable resource from which to mine linguistic insights. Data in sufficient quantities possesses an unusual property which boosts the qualitative insights that can be gleaned. Just ask Google. Their researchers published a seminal paper in 2009 called The Unreasonable Effectiveness of Data, in which they discuss the statistical opportunities that emerge when data is available in copious amounts. They followed this paper eight years later with another one, curiously titled Attention Is All You Need. In the latter paper, they introduced a new architecture for language models called the transformer. Transformers have been game-changing in their effectiveness for natural language processing.

One of the longstanding challenges in computational linguistics has been to ferret out meaningful nuance and context from the wide diversity of forms found in linguistic expression. The transformer architecture introduced a statistical technique, which the authors call "attention", that is able to discern context and word associations within complex linguistic expressions. The result is that transformer models yield responses that are human-like to an uncanny degree. Many of the advances, and much of the excitement, surrounding language models over the last few years represent variations on the theme first introduced in the Google "attention" paper.
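For the curious, the core "attention" computation from that paper can be sketched in a few lines. This is a bare-bones rendering of scaled dot-product attention, without the learned projection matrices or multiple heads a real transformer uses; the input vectors here are random stand-ins for learned word representations.

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: compare every query against every key,
    # turn the scores into probabilities, and blend the values accordingly.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # for numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))      # three "words", each a 4-dimensional vector
print(attention(x, x, x).shape)  # (3, 4): each word now blended with its context
```

The entire trick is statistical: each word's representation becomes a weighted average of the others, with the weights computed from dot products. There is arithmetic here, but no comprehension.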

It is worth remarking on the dark irony reflected in the title of Google's paper, Attention Is All You Need. No company has done more to divert and monetize the attention of its users. They are notorious for innovating digital techniques that manipulate and dissipate the attention of human beings. So the fact that breakthroughs in artificial intelligence have been enabled by an algorithmic form of attention - the very thing they seek to deny their own users - is, well, drenched in irony.

Language models work by consuming huge quantities of text to create a sea of statistical numbers - called "weights" - that reflect, essentially, the probability that any one word, or fragment of a word (a "token"), gives rise to any particular subsequent word. If the body of text used to create these weights is sufficiently large, one can create an effective "next word predictor" which is able to generate text of uncanny relevance in response to prompts supplied by a human user.
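A drastically scaled-down sketch shows the principle. Real models compute billions of weights over word fragments; this toy uses a table of bigram counts over a one-line invented corpus, but the mechanics - count, then probabilistically predict the next word - are the same in kind.

```python
import random
from collections import Counter, defaultdict

random.seed(0)

# A one-line "training corpus"; real models consume a large slice of the internet.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which. These counts play the role of "weights".
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word):
    # Sample the next word in proportion to how often it followed this one.
    candidates = follows[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the cat slept on the mat and"
```

Nothing in that table "knows" anything; it simply encodes which words tended to follow which in the training text.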

Companies like OpenAI and Google have created models whose weights have been computed from vast quantities of textual data found on the internet. The resulting weights embedded in these models facilitate the creation of chatbots which can use input from users to probabilistically compute a response. The larger and more diverse the document set used to derive a model's statistical weights, the broader the subject matter and relevance of the responses computed by the model.

A key point that should not be missed is that language models are computational rather than cognitive. They are not beings, nor do they have understanding. They are merely statistical distillations of massive text corpora, and the resulting collection of statistics can be used to compute linguistically coherent responses. Language models do not "understand" what they are doing. Nor are they sitting around thinking up mischief. They are nothing more than giant tables of computed weights. When users "ask" them something, they are doing something akin to pressing the keys on a piano. The sound produced by a piano is dependent on the choices made by the person at the keyboard. In a similar way, the linguistic prompts supplied by the user provoke a chain reaction of calculations which yields a (hopefully) linguistically coherent response. The fact that responses can often be uncannily relevant is a testament to the size and breadth of the data which was used to generate the model's weights, not to any kind of sentience or agency possessed by the model itself.

The largest language models, the ones trained on massive quantities of internet data, can reasonably be thought of as conventional wisdom machines. They are primarily capable of regenerating and regurgitating strings of text that reflect the most statistically likely responses emerging from the combined inputs that informed their model. The more exposure one has to the responses these models produce, the more one perceives the flatness and limited variability of expression and linguistic cadence they supply. Indeed, as AI-generated content begins to populate the internet, researchers are finding that this lack of human variability of expression can actually lead to "model collapse", by which they mean that model responses eventually degrade into gibberish when deprived of the rich expressive language that originates with actual human beings. The bland invariability of expression produced by current language models is inadequate for "teaching" a model the statistics needed to properly wheeze out coherent sentences. For now, at least, models cannot subsist on their own bloviations. They are entirely dependent on the varying originality of human thought and expression.
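A crude simulation can illustrate the dynamic. Under the simplifying assumption that a "model" is just a word-frequency distribution, repeatedly retraining it on its own sampled output tends to lose the rarer words each round, and the vocabulary narrows generation by generation.

```python
import random
from collections import Counter

random.seed(0)
text = ("the quick brown fox jumps over the lazy dog while the small "
        "cat watches the bright moon rise over the quiet hill").split()

for generation in range(6):
    counts = Counter(text)
    print(f"generation {generation}: {len(counts)} distinct words remain")
    # "Retrain" on the model's own output: sample a same-sized corpus
    # from the current word distribution. Rare words tend to vanish,
    # and once gone they can never return.
    text = random.choices(list(counts), weights=list(counts.values()), k=len(text))
```

Real model collapse involves far more machinery than this, but the one-way loss of diversity is the essential point: statistical copies of statistical copies get blander, never richer.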

Thus the most fascinating AI responses, which are so often trumpeted by the bandwagon media, tend to be largely an artifact of the cleverness of the prompts supplied by the human users themselves. An entire career path is emerging, even now, called "prompt engineering". Which is to say that people are discovering, or rediscovering, as the case may be, that pianos don't play themselves.

Why then are so many creeped out by these AI language models?  Why are some people saying ChatGPT is demonic when the response to all the other advances being made in AI has been largely blasé?

Perhaps the ELIZA effect offers us an important clue: interactive language, when used by non-human actors, has a kind of spellbinding effect on human beings. Significant work has been ongoing in AI for years, but it was the breakthrough related to language that finally captured the world's attention. Machines that create the illusion of thought, through computed language, evoke a response in their human interlocutors that is deep and disturbing.

There is also an adjacent factor, something that is more of an ancient collective memory or intuition: non-human talking things have often been malevolent. Words have always been understood as the vehicle for spells and incantations. Blessings and curses, pronounced using language, are perceived by entire cultures to have reach and efficacy beyond the mere saying of the words themselves. Even in the world of story, non-human things that can talk have frequently been presented as sinister. "Mirror, mirror, on the wall", that familiar refrain, was addressed to a talking object bent to the service of a wicked and threatening queen.

Words and language are also foundational to a Judeo-Christian worldview which, notwithstanding secular modernity's insistence to the contrary, continues to haunt the collective memory of western culture. At the Tower of Babel, when God wanted to throw a monkey wrench into human cooperation and achievement, which was rapidly being turned to self-exaltation, he disrupted human language. And going back to the very beginning, there is perhaps the most ancient memory of all: that of a talking snake which, quite literally, unleashed hell into the human experience.

The Bible describes the world itself as having been spoken into existence. Jesus is declared to be "the Word", by whom, for whom, and through whom everything was created. In this way, the Bible contrasts Jesus with the devil, whom Jesus describes as a liar. Lying, Jesus says, is Satan's native language. His comment about Satanic lying has always reminded me of writer Mary McCarthy's comment about rival author Lillian Hellman: "Every word she writes is a lie, including 'and' and 'the'."

So the Judeo-Christian worldview holds that God uses language to create and to reveal, while Satan uses language to distort and to deceive. And it is on this point, the distinction between truth and falsehood, that the question of AI's malevolence hinges.

We have already seen how language models are the distillation of the documents from which their statistics were gleaned. In the best possible case, then, one would expect the admixture of truth and lies produced by these models to approximate the veracity quotient of human beings. Which is to say, alas, the models can't ever be entirely trusted to recapitulate the truth.

More troubling than whether models are actually able to always tell the truth is the spellbinding effect of their facility with language, and how it is being actively exploited by those with an interest in encouraging a sense of awe and wonder directed toward the models themselves. The chosen vocabulary being used to describe AI is often oddly inflationary, and seems intended to smuggle in the idea that AI's have their own agency. There is a curiously consistent effort to nudge users to conceive of AI as a mystically authoritative oracle which can be trusted to offer us amazing new insights and understanding.

If anything about artificial intelligence is demonic, a prime suspect must be whatever is behind the consistently manipulative, dishonest and deceptive propaganda encouraging us to view AI with veneration and awe.

John Lennox, emeritus professor of mathematics at Oxford, raises shrewd questions about both the risks and benefits of AI in his book, 2084: Artificial Intelligence and the Future of Humanity. He points to a prophecy in the biblical text that highlights the deceptive power of material objects which are imbued with the power of speech.

Then I saw another beast rising out of the earth... and by the signs that it is allowed to work... it deceives those who dwell on earth, telling them to make an image... And it was allowed to give breath to the image of the beast, so that the image of the beast might even speak and might cause those who would not worship the image of the beast to be slain. - Selections from Revelation 13:11-15

Where we find ourselves on the overall timeline of biblical prophecy is not the salient point here. I only mean to observe that we should not disregard the ancient warning that man-made objects empowered with speech are powerfully equipped to deceive. ELIZA was, perhaps, an early warning system in this regard.

An AI model is a real "intelligence" only if intelligence itself is narrowly computational and mechanistic. Matthew Crawford puts his finger squarely on both the deficiency and the threat that inhere in a conception of human intelligence reduced to something merely computational:

Here is the real mischief done by conceiving human intelligence in the image of the computer: It carries an impoverished idea of what thinking is, but it is one that can be made adequate enough if only we can change the world to reduce the scope for the exercise of our full intelligence as embodied agents who do things.

In reality, language models may be powerful calculators, but they are only calculators. All who perceive that human life is more than material and mechanistic must recognize how profoundly mistaken it is to encourage the notion that something so plainly mechanical can nevertheless possess motives or agency. The cultural battle over what it means to be human is red-hot at just this particular moment. This is no time to inadvertently affirm the delusion that machines are sentient beings.

Language models may turn out to have real utility in some fields of endeavor. They may even be economically disruptive for a wide range of occupations. But they are nothing to be venerated, nor should they excite our awe. Answers to the truly important questions of our lives - questions having more to do with meaning than with mere data - can never be had from what amounts to a talking calculator.  In any case, we would be wise to pay heed to the ancients, who perceived that man-made objects with a facility for speech will represent a uniquely powerful tool for deception.
