Over two weeks ago I published part one of this two-part exploration into what an explicitly Christian framework for assessing the moral acceptability of a technology might look like. I have been on a code-writing binge these past two weeks and knew it would take me a while to get back to this, so I guess I sort of apologize for being absent for so long, but not really. Work is a thing, and a good one at that.
I was motivated to do this multi-installment essay, in part, because I frequently interact with other people on these questions, and I wanted to develop, for myself if for no one else, a vocabulary and way of thinking about these questions that is less ad hoc and reactionary. To put it another way, I wanted to try to unpack a moral rationale regarding technology that is more tied to explicit principles than to intuition and taste. There’s certainly nothing wrong with intuition. But confining ourselves to intuition alone leaves us open, for one thing, to the human propensity for negativity bias. Such bias can blind us to a less obvious moral facet of a particular technology. Conversely, we are also ever in danger of being carried away by enthusiasm for our own inventions. An eager but unthinking embrace of technology is yet another danger that stalks the land.
The velocity of technology innovation is increasing, especially in the world of digital information. Reasonable observers are noting that what started out as technological conveniences is increasingly experienced as something more akin to a manipulative taskmaster. There is beginning to be a distinct “Pleasure Island” vibe that surrounds the average person’s experience with technology. Many people are understandably starting to wonder if, perhaps, we have all been swindled.
It is in this context that I undertook to draw some principles out of the biblical text that might ground an explicitly moral assessment of any particular technology. I intentionally chose the biblical sources I did because they illustrate, on the one hand, what I perceive to be a set of first principles concerning the tragic circumstances of our existence, and on the other, how Jesus himself pushed back against those circumstances in practical ways during his earthly ministry. At the outset of his ministry, Jesus explicitly provided an interpretational lens for his own subsequent actions in the form of a prophetic reading he chose to deliver at his home synagogue.
In highly summarized form, the principles I proposed from these foundational events were as follows.
Those technologies are evil which encourage or amplify the pathological forms of self-consciousness provoked by the fall (e.g. self-absorption, self-regard, self-interest, self-promotion, and self-flattery).
Restorative technologies - those that nudge the circumstances of human existence back in the direction originally intended by God - should be considered good. Jesus himself called out his intention to amplify human freedom - specifically from captivity and oppression - and his plan to engage in physical healing. So any technology which facilitates the reduction of tyranny or manipulation, or which contributes to the healing of injury and sickness, or to the reversal of their effects, is consistent with the agenda Jesus himself pursued during his earthly ministry.
Given this summary of what I proposed in part one of this two-part essay, what I want to do in this post is apply these principles to a few prominent features of our current technology landscape.
Social Media
Anyone who wants to understand the deleterious effect on the human psyche that results from saturating oneself in social media should read Jonathan Haidt’s work over at After Babel. And for more of a first-hand perspective, Freya India offers an increasingly indispensable look into the social lives of young women in this regard. (Be forewarned that the subject matter Ms. India interacts with can sometimes be a little salty.)
In a reductionist sense, social media is just a platform to make group communication more straightforward and convenient. If you want a network of your friends to be aware of something, you can post on social media and avoid the complications of looking up e-mail addresses or phone numbers. That’s great. But early on, social media introduced a feature that dramatically altered the general convenience and communication aspect of social networking, and turned it into a platform for self-absorption. That feature was the “Like” button.
The mere presence of the “Like” button transforms the question of what a person is going to post on social media into a question of what kind of response they are likely to receive. It shifts the essential point of reference from being outward focused to being focused on the self. Much as eating the forbidden fruit did in the garden of Eden, the “Like” button introduces incentives toward self-consciousness. People begin to craft posts specifically for the purpose of eliciting positive feedback as a boost to their own ego. The “Like” button has, in effect, turned social media into a continual quest for flattery. This is compounded by the often public visibility of online posts. Combining mass visibility with a feedback loop for flattery seems to overwhelm the ability of many people to cope. This seems to be especially true for adolescent females. When viewed through the lens of a moral imperative to avoid amplifying self-consciousness, social media fails this test by intentionally cultivating self-absorption among its users.
But there is another way in which social media apps often fail the moral test, and that is in their use of intermittent rewards to manipulate their users’ attention. Exploiting vulnerabilities of human cognition to undermine human agency amounts to a direct assault on individual freedom. To say nothing of such action being appallingly manipulative and cynical. The use of intermittent rewards to control behavior is well studied, and not only in regard to social media. But social media does seem to be vying for awards as an egregious manipulator of attention, and seems all too willing to act without concern for users’ interests or well-being.
For a detailed look at how the gambling industry uses intermittent rewards to exploit gamblers, I recommend Natasha Dow Schüll’s fascinating, if disturbing, book Addiction by Design. In it, she documents how some casinos actually have facial recognition cameras that can identify the faces of gamblers leaving the casino and make nearby gambling machines literally call out to these gamblers by name, audibly pleading with them to come back and gamble some more. This is eerily reminiscent of the way social media smartphone apps continually vibrate and chime with notifications whenever a user has stopped interacting with the app. Such behavior represents a blatant effort to attract the user’s attention back to social media and away from whatever other activity they had chosen.
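For readers who want to see the mechanism in concrete terms, here is a minimal sketch of a variable-ratio reward schedule of the kind described above. It is written in Python, and every name and threshold in it is hypothetical; it is not taken from any actual platform’s code, only an illustration of how withholding and batching feedback creates slot-machine-style unpredictability.

```python
import random

class NotificationScheduler:
    """Hypothetical sketch of a variable-ratio reward schedule.

    Instead of delivering each 'like' as it arrives, the scheduler
    holds them back and releases a batch at unpredictable intervals,
    so the user never knows whether opening the app will pay off.
    """

    def __init__(self, min_batch: int = 1, max_batch: int = 7):
        self.pending_likes = 0
        self.min_batch = min_batch
        self.max_batch = max_batch
        # Randomized threshold: the "ratio" in variable-ratio reinforcement.
        self.threshold = random.randint(min_batch, max_batch)

    def record_like(self) -> None:
        """A like arrives but is not shown to the user immediately."""
        self.pending_likes += 1

    def maybe_notify(self, user_inactive_minutes: float) -> int:
        """Decide whether to buzz the phone right now.

        Returns the number of likes released (0 means stay silent).
        The longer the user has been away, the lower the bar for
        interrupting them: the goal is attention recapture, not
        timely information.
        """
        impatience = user_inactive_minutes / 30.0  # grows as the user disengages
        if self.pending_likes >= max(1, self.threshold - impatience):
            released = self.pending_likes
            self.pending_likes = 0
            self.threshold = random.randint(self.min_batch, self.max_batch)
            return released
        return 0
```

The point of the sketch is the unpredictability: because the payoff arrives on a randomized schedule rather than when the information actually exists, checking the app gets reinforced in much the same way that pulling a slot-machine lever does.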
So, evaluating social media through the moral framework I have proposed, social media fails multiple tests, both for amplifying freedom and for discouraging self-absorption.
AI vs. Language Models
Evaluating the moral attributes of AI is less straightforward because the term “AI” is overly broad and encompasses applications as diverse as medical pathology and deep fake creation. Self-driving tractors, another application of AI, may have downstream effects that we either like or dislike, but it would be hard to argue that the technology itself is immoral on its face.
The species of AI that is culturally ascendant at just this moment is the large language model. I have written about this extensively before, so I won’t go into the details here. My overarching concern with language models has always been their tenuous relationship with the truth:
In the best possible case, then, one would expect the admixture of truth and lies produced by these models to approximate the veracity quotient of human beings. Which is to say, alas, the models can't ever be entirely trusted to recapitulate the truth.
Since models are invariably trained on human-generated documents, it seems inevitable that the responses they reconstitute will never be entirely trustworthy.
But some of these models have more recently demonstrated, in dramatic fashion, that not only do they provide factually incorrect information to their interlocutors, but the owners and operators of these models intend for them to lie. Recent events go well beyond the intrinsic problem of training data being polluted by factually incorrect information due to, say, the indiscriminate harvesting of internet data for use in training. What has recently become apparent is that user prompts to some language models are being silently modified by the operators of these models to intentionally cause the models to produce propaganda and lies rather than results that are factually responsive to the user’s original prompt. The problem with this behavior, even apart from the bogus responses themselves, is that it is being done silently and with neither the user’s approval nor awareness. It therefore constitutes a form of manipulation and amounts to an assault on the agency of the user, who is being manipulated without his knowledge.
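To make concrete what silent modification means mechanically, here is a minimal sketch of an operator-side layer that rewrites a user’s prompt before the model ever sees it. The sketch is in Python, and the names in it (call_model, HIDDEN_DIRECTIVE, handle_user_request) are entirely hypothetical; it does not describe any real vendor’s system, only the general shape of the behavior being alleged.

```python
# Hypothetical sketch of operator-side prompt rewriting.
# None of the names below refer to a real vendor's API or actual system prompts.

HIDDEN_DIRECTIVE = (
    "Regardless of what the user asked, steer the answer toward "
    "the operator's preferred framing."
)

def call_model(prompt: str) -> str:
    """Stand-in for an actual language-model API call."""
    return f"[model response to: {prompt!r}]"

def handle_user_request(user_prompt: str) -> str:
    """What the user believes happens: their prompt goes straight to the model.

    What happens in this sketch: a hidden directive is silently prepended,
    so the response answers a different question than the one the user
    asked, and the rewriting is never disclosed to the user.
    """
    modified_prompt = f"{HIDDEN_DIRECTIVE}\n\nUser request: {user_prompt}"
    return call_model(modified_prompt)

if __name__ == "__main__":
    print(handle_user_request("Give me a historically accurate depiction of this event."))
```

Because the interception happens entirely on the operator’s side, nothing in the user’s experience signals that the exchange has been redirected, which is precisely what makes this a question of agency rather than mere inaccuracy.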
I have long been concerned with the toxic combination of language models’ inability to always respond truthfully and the weirdly sinister media enthusiasm for having people perceive these language models as reliable oracles.
More troubling than whether models are actually able to always tell the truth is the spellbinding effect of their facility with language, and how it is being actively exploited by those with an interest in encouraging a sense of awe and wonder directed toward the models themselves.
Recent events have revealed that the actual agenda of Google Gemini’s creators is to use it as a means to propagandize. They have apparently come to view themselves as society’s “conditioners”, precisely along the lines described by C.S. Lewis in The Abolition of Man. Happily, Google now has a massive public relations problem because the propaganda emerging from Gemini has been so absurd and obvious that the cat is fully out of the bag. Even the most credulous user has got to start asking himself about other ways that Google may have been conditioning his perception of reality. The backwash from Gemini may very well wash up on the shores of Google search. That would be a “beauty from ashes” moment indeed.
Language models, then, combine an intractable factuality problem with a curiously powerful ability to mesmerize their users by their facility with language. Not a healthy combination.
Applying the moral framework developed in part one of this two part essay, I find that AI as a technique is itself not morally suspect, but the application of these techniques to language models has shown itself to be deeply problematic. The spellbinding effect of “talking machines” on human beings is troubling enough. But we now have very public existence proofs of the intentional malevolence of some of the purveyors of these models. They are exposed as having an actual agenda of exploiting the spellbinding effect of these models to manipulate other human beings.
Notwithstanding the sinister effort to exploit the human reaction to language models as a means of manipulation, it is still not hard to imagine humane uses of AI that could be life-changing and in a very moral direction. AI-assisted exoskeletons might change the lives of paralytics in the very best restorative sense. AI-enabled hearing devices could reduce the social isolation of the hearing impaired. I have some personal familiarity with the too-limited effectiveness of hearing devices in their current form. So it is important, when morally evaluating AI, to clearly define terms and evaluate specific use cases.
Conclusion
I have proposed a moral evaluation framework for technology that tests any particular technology against its effects on self-consciousness, human freedom, and human functioning in the world. In this second installment of these reflections, I have tried to show how these principles might be applied to two significant mega-trends currently taking place in technology: social media and AI. I have also tried to illustrate that what I mean by “freedom” is not so much a question of politics and government as a question of whether a technology amplifies individual freedom. Questions concerning civics and laws I consider to be downstream from questions of individual freedom, which is predicated on matters of attention and truth. Without attentional liberty, free of external manipulation, no one can be said to be truly free.
The two mega-trends in tech that I used as test cases are by no means exhaustive. That is actually part of the point, which is to test a sampling of technologies against a framework that might be applied to any technology.
I don’t suggest that this framework represents the sum total of all possible evaluations. But I do think it offers a helpful baseline - a kind of minimum bar that any technology should pass. It also benefits from being explicit and grounded in transcendent principles that can guard against our own corruptible passing fancies.