Technology and Christian Priorities
How can we think in explicitly Christian terms about our current technology environment? This is the text of a talk I was asked to give to an adult gathering at my church in October of 2022.
Any sufficiently advanced technology is indistinguishable from magic. - Arthur C. Clarke
If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. - God
In 2009 Peter Norvig and two other Google employees wrote a seminal paper on the qualitative impact of massive data. It was titled The Unreasonable Effectiveness of Data, and in it they made the observation (I paraphrase) that sufficient amounts of data alter not only the quantitative approaches to computation, but the qualitative opportunities as well. Put another way, having enough data fundamentally changes not only how you go about using it, but the very essence of what is possible at all.
The Google paper reflected a kind of primordial realization of the possibilities that inhere in data accumulated at sufficient scale.
Technology developments in the 21st century have been characterized by a precipitous decline in the cost of storing data, and a parallel increase in our capacity for moving vast quantities of data around. For most people, these changes manifest themselves in such things as the ability to stream multiple high-definition movies to our homes simultaneously, or the ability to store years of photographs and videos on our phones.
But for technology companies, these technological developments have created unprecedented opportunities for exploiting the inherent effectiveness of all this data for other things.
Everything Changed in the Late 1990's
In the early 1990's, Microsoft had become such a dominant juggernaut that it was difficult for anyone else to succeed in the technology business. Those of us who had historically made a living by selling software products to users had, for the most part, been run out of business as Microsoft came to dominate practically every area of consumer software. We used to joke that maybe Microsoft would condescend to let us all sell pencils on the street corner.
Microsoft owned the Windows franchise. Microsoft froze competitors out of the market by including important software applications "for free" as part of a license to Microsoft Windows. This behavior on Microsoft's part eventually attracted the attention of anti-trust regulators but, for many software entrepreneurs, that attention was too little and too late. Microsoft's behavior had, for most markets, destroyed the opportunity to engage in software sales as a business model. If all computers run on Windows, and Windows includes most of what you need for free, there isn't much opportunity to sell software to people who have already paid for Windows.
An entire generation of new technologists began looking for business models that were not susceptible to Microsoft's strategy of running everyone else out of business. In 1994 the internet was commercialized for the first time, and Microsoft was late off the starting blocks. This created a temporary opening to provide internet-related software that Microsoft did not have available. For a few years in the late 1990's it looked as if there was a commercial opportunity to sell internet software. But Microsoft repeated their strategy of bundling free capabilities with Windows to run Netscape out of business and by 1999 Netscape had been sold and began a long slow decline that ended in 2003 when it was disbanded by AOL.
All of this history matters because these events combined with the emergence of the internet to fundamentally alter the relational dynamics between technology providers and users. Microsoft's actions to inhibit the entire business model of selling software yielded fruit in a thousand different ways as software developers searched for alternative paths for monetizing their technology.
Prior to 1995, software companies sold software to users, and the economic incentives of technology companies aligned with the satisfaction of their users. Since users were themselves the source of revenue for a software company, those companies were oriented around serving the needs of those users. But in the late 1990's a new business model emerged involving the delivery of "free" internet services to users. The first services offered were things like search engines and email. Now we have social media, navigation, and streaming media. These sites offer free storage (e.g. Gmail and YouTube) for user-generated content and deliver that content over the internet for free.
But free isn't always free. And these free services come at a significant, though often unrecognized, cost. Delivering these services, especially at the scale they require, is enormously expensive. If users are no longer paying software license fees, then who is footing the bill for all this yummy goodness? The answer: advertisers.
Very early in Google's existence, its leaders began to realize that the insights to be gained from information collected on users could be monetized for far more than they could ever receive in software license fees. Steven Levy, in his book In the Plex, describes how Google realized that the contents of their server logs might be the key to their future. The eclectic technological pursuits of Google engineering during the 21st century become more explicable if they are understood through the lens of data collection. Chrome browsers, Android devices, Google Glass, and even self-driving vehicles may best be understood in the context of facilitating the growth of Google's massive stash of data about you and me.
Most companies who offer free software and services employ a business model built on monetizing the data they collect about and from their users. Mostly this means leveraging that data to target personalized advertising. Increasingly, though, the lines between advertising and the services themselves are being blurred.
The holy grail for advertisers is not merely to place ads in front of users. It is not even to place highly personalized ads in front of users, as attractive as that may be to advertisers. Those are merely means to an end. The end to which they aspire - their actual goal - is not ad delivery per se, but behavior modification.
It is a short step from crafting personalized ads to actually manipulating search results so as to influence a user's perception and his subsequent behavior. And if the behavior of even a small percentage of users can be manipulated and monitored by these services, then the economic benefit for both the service provider and advertisers is enormous.
But the shift from user-paid software to advertiser-paid online services has not been benign. It has necessitated a fundamental shift in the outlook and incentive structure motivating the choices of the service provider. Users are no longer "customers" in any historic sense. They are more like livestock who are being farmed by the service provider and sold to the actual customer -- advertisers. Or, to use a different metaphor: Instagram is the zookeeper and we are the animal attractions.
The lucrative nature of advertising, tracking, and behavior modification has financed an entire technology infrastructure to facilitate the symbiotic relationship between ad serving and surveillance.
Just to illustrate the thoroughgoing nature of this surveillance, in one weekend experiment I did on my personal smartphone, I found that the majority of internet connections my phone made were for purposes of either surveillance or ad serving. These surveillance/ad-related internet connections were consuming 40% of my phone's entire internet data use.
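For those curious how such an experiment might work, here is a minimal sketch of the basic idea: log your phone's outbound connections with a firewall or VPN-logging app, match the destinations against a published tracker blocklist, and total the bytes. The domains, byte counts, and blocklist below are invented for illustration; they are not my actual data.

```python
# Hypothetical sketch: estimate how much of a phone's data use goes to
# ads and tracking. Domains, byte counts, and the blocklist are invented;
# a real experiment would export a connection log from a logging app and
# use a published tracker blocklist with thousands of hosts.

TRACKER_DOMAINS = {
    "ads.example.net",
    "telemetry.example.com",
    "track.example.org",
}

# (destination host, bytes transferred), as a logging app might report it
connection_log = [
    ("ads.example.net", 120_000),
    ("news.example.com", 300_000),
    ("telemetry.example.com", 80_000),
    ("mail.example.com", 150_000),
    ("track.example.org", 90_000),
]

tracker_bytes = sum(n for host, n in connection_log if host in TRACKER_DOMAINS)
total_bytes = sum(n for _, n in connection_log)

print(f"Ad/tracking share of data use: {tracker_bytes / total_bytes:.0%}")
```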
Even unsophisticated users have begun to have the creeping sense that the goals of their service providers may not align with their own goals for themselves. Thus the implications of the shift from users being the customer to users being the product have started to seep into the collective consciousness. Users may not know what, exactly, is wrong. They sometimes just have an intuition that something vaguely sinister is going on with their phone and internet use.
The shift away from users being customers is perhaps most vividly observed in the role of technology companies in cancel culture. The notion that it might be the role of a software supplier to punish or discipline its users would have been unthinkable prior to the move to "free services". But cancel culture is just a form of "product management" when users themselves are products rather than customers. Digital livestock without proper opinions aren't very marketable, so online service providers don't think twice about purging the "herd" of undesirable traits. Twitter is the most notorious for this behavior, but Facebook follows closely behind. Just this past week there was a giant brouhaha when PayPal announced that it intended to "fine" any of its users who were spreading "misinformation" by confiscating $2500 of their users' money. They have since backtracked on this but consider the mental model they are operating from when they think of their users in this way. Nothing about this reflects the historical norms of "serving the customer". The entire relationship has been turned upside down.
So if users feel like they are being watched and managed, it is probably because they are.
What is Artificial Intelligence?
This change in the relationship between technology supplier and user shows up as a growing level of user angst regarding what technology companies may be cooking up next. Perhaps no area of technology raises as many questions and fears as the area of artificial intelligence.
At the end of the day, artificial intelligence is just applied statistics. But it is statistics at a scope and scale that has never been possible before in human history. Internet technologies for moving vast amounts of data have converged with rapid increases in storage and computational capacity to enable kinds of statistical analysis that were simply not feasible before. Computer scientists have taken lessons from biology about how neurons in the brain interact and translated those into software. This largely takes the mathematical form of layers of linear algebra and matrix calculations.
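To make "layers of linear algebra" concrete, here is a minimal sketch of how a small two-layer neural network computes its output. The sizes and random weights are purely illustrative (this is an untrained toy, not any real model): each layer is just a matrix multiplication followed by a simple nonlinearity.

```python
# A two-layer neural network's forward pass: matrix multiplication
# plus a simple nonlinearity, repeated layer by layer.
import numpy as np

rng = np.random.default_rng(0)

x = rng.normal(size=(1, 4))    # one input example with 4 features
W1 = rng.normal(size=(4, 8))   # weight matrix for layer 1
W2 = rng.normal(size=(8, 2))   # weight matrix for layer 2

h = np.maximum(0, x @ W1)      # layer 1: matrix multiply, then ReLU nonlinearity
y = h @ W2                     # layer 2: another matrix multiply
print(y)                       # the network's raw (untrained) output
```

Training consists of nudging the numbers in those weight matrices, over billions of examples, until the outputs become useful; the structure itself never stops being matrix arithmetic.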
The results of processing data in this way continue to surprise practitioners. By training statistical models with vast oceans of data, these models can be used to automate things we haven't been able to automate before. They can make highly accurate predictions about human behavior in very narrow contexts. They can create images of human faces that are shockingly realistic. They can respond to written questions with multi-thousand word "original" essays that are indistinguishable from ones written by actual human beings.
Computer scientists don't even always understand why these things work. If you google "explainable AI" you will find there are over 3 million results. One of the biggest legal, ethical, and technical challenges right now is to unpack how these models actually arrive at their conclusions.
The uncanny results being produced by AI are happening at a moment in history in which western culture has rejected a supernatural understanding of creation and confined itself to materialist assumptions about the circumstances of our existence. One premise of materialism is that human beings are reduced to being merely biological machines. Mechanistic assumptions regarding what it means to be human pave the way for superstitious understandings of artificial intelligence. If we human beings are only machines, then machines themselves can be human beings.
Indeed, just this past summer, Google engineer Blake Lemoine was fired for publishing an exchange he had with an AI-based Google chat bot which, in the engineer's view, suggested that the bot was sentient.
“When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt,” Lemoine wrote on Twitter late Monday. “Who am I to tell God where he can and can’t put souls?”
What we're actually seeing in chat bots like this are the results of statistics-driven chat content distilled from a massive body of text used to create the statistical model. But the sheer volume of data we can now acquire and use for this kind of statistical analysis can yield curious results. Because the data the chat bot is basing its responses on covers such a wide range of human expression, and because the statistical analysis is able to uncover subtle linguistic associations, you will increasingly see mathematically-produced textual results that have an uncanny resemblance to what a human being might actually say.
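You can see the principle in miniature with a toy sketch: tabulate which word tends to follow which in a body of text, then produce "new" text by sampling those statistics. The tiny corpus below is invented for illustration, and real chat bots use incomparably larger models and corpora, but the underlying idea - text emerging from word statistics - is the same.

```python
# Toy illustration of statistics-driven text generation.
import random
from collections import defaultdict

corpus = ("the soul is a mystery and the soul is a gift "
          "and a gift is a mystery to the mind").split()

# Count the words observed to follow each word in the corpus
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

# Generate by repeatedly sampling a statistically plausible next word
random.seed(1)
word, output = "the", ["the"]
for _ in range(10):
    if not followers[word]:    # dead end: no observed follower
        break
    word = random.choice(followers[word])
    output.append(word)
print(" ".join(output))
```

No understanding is involved anywhere in that loop; scale the statistics up far enough and the output starts to sound eerily human.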
There are many different kinds of AI models, and some of them hold great promise for contributing to human flourishing. Models for image-based medical diagnostics are being developed. Models for accelerating exploration in the sciences are being used. Models that produce realistic simulations for training and education are emerging. So AI can have compassionate and humane uses.
But there are AI models that can mislead and deceive. Models that can produce artificial video and audio. Models underpinning chat bots that can pretend to be human. Models that can curate information so that a user's perspective can also be carefully curated. These are the models most useful for behavior modification. And as we mentioned earlier, behavior modification is the holy grail of the advertising business model.
Connecting the Dots
So let's connect the dots about our technological landscape. There has been a shift in the fundamental relationship between technology providers and end users. For the largest technology companies in the world, users are no longer customers but products. This relational change alters the incentives of technology companies for how they treat their users.
Massive user communities, like those cultivated by Google and Facebook, offer unprecedented opportunities for collecting data that yields insight into human behavior and how to influence it. These massive data sets are fueling AI models which these companies can use to influence the behavior of their users.
AI is a dual use technology which can contribute to human flourishing, but can also be used to manipulate and to deceive. The biggest threat represented by artificial intelligence, in our current moment, is that it can be a powerful tool for deception at a very large scale. AI-generated video and audio can be used for blackmail or to influence voting and ultimately elections. Carefully crafted search results can influence a person's understanding of what's true about the world or world events. AI-generated e-mail or direct messages can sow conflict and misunderstanding, evade security, or masquerade as evidence of lawbreaking or regulatory non-compliance.
The Temptations of Technology
The abiding temptation since shortly after the fall has been for human beings to worship the works of their own hands. Prohibitions against this were written by God's own hand in the 10 commandments. The apostle Paul, in Romans 1, describes human failure, in part, as "worshiping and serving created things", which surely includes things created by God but also those things made by man.
So it would seem that one of the temptations of technology is the ancient temptation to worship the works of our own hands. I have wondered if the flailing elite reaction to Covid has been at least as much temper tantrum as fear. Covid revealed the puny limits of our current technology, and many in the west were insulted by that. The behavior of many bureaucrats has resembled nothing so much as wounded pride. Covid has delivered an unwelcome reminder that jettisoning the hierarchy of goods established by God, while subjecting all value judgments to "the science", has been a foolish trade. Our elites have behaved rather like someone who is horrified by the sudden realization that he has been swindled.
I touched on this idea in a recent essay I wrote for Touchstone magazine:
"Viruses are overwhelmingly complicated. The science is actually uncertain. But many people don’t want the science to be uncertain. As moderns, we conceive of ourselves as technological masters of our domain. “Mystery is a great embarrassment to the modern mind,” as Flannery O’Connor observed. But the recent pandemic has delivered a massive torpedo to our tanker of modern hubris, right below the water line.
In response, we have attempted to patch the hole left by the Covid torpedo by reconstituting the definition of “science.” Any distinction between “the science” and the pronouncements of bureaucrats has been thoroughly blurred. And more than that, the opinions of health bureaucrats are increasingly viewed by many as not only reflecting a vaguely defined utilitarian good, but as amounting to an actual moral good...Elevating the official Covid storyline to the apex of moral virtue has the comforting advantage of relieving us of the burden of sifting through all the complex and scary uncertainty. But it also reassures us that, though we increasingly view those who bear the image of God primarily as vectors for the transmission of disease, we nevertheless have things under control and our virtue is still intact."
Which is a good segue to the second and related temptation: the deadly sin of pride.
Not only are we tempted to worship what we make; we are also tempted toward the view that we should receive the credit for talents and abilities that are not of our own making. There is nothing we can do that is not grounded in God's providence. And yet we rush to seek credit for what we do with talents that did not originate with us and are entirely a gift from God. This amounts to a kind of stealing and false witness, and it will blind us to the truth. Pride is a seductively attractive pathway into becoming the kind of people described in Romans 1: "they did not honor him as God, nor did they give him thanks".
As far back as the tower of Babel - arguably the world's first technology venture - the motivation and intention was (no surprise) to "make a name for ourselves". (Genesis 11:4)
Excellence, or Self Promotion?
I have been writing code now for 40 years. I have not only had a front row seat at these events, I've had a role in the play. And I have observed some unhappy changes in the cast.
I began my career during the phase in which the incentives of technology suppliers and users were still aligned. This was prior to Microsoft's dominance and the resulting shift to advertising business models. This was back in the earliest days of the micro-computer revolution, and those of us involved in it were considered not to be cool, but to be weird. Alas, we probably were kind of weird. But by-and-large we were motivated by curiosity and by the technical challenge. We all knew that our friends and family didn't really understand what we were doing or why we spent our days doing it, but we wanted to build and create.
But somewhere along the way, probably shortly after 1994 when the internet became a commercial thing, there was a kind of prestige that began to attach itself to technologists. And we went from being in a place where everyone worked in anonymity to being in a spotlight where nerds started becoming famous.
And over the years I have observed a changing set of sensibilities in those who came seeking work in silicon valley. Many of these young people were at least as interested in fame as they were in engineering. The work itself increasingly became, for many of them, merely a means to an end. But, in many cases, sad to say, the work itself was not even enjoyable to them. I have interviewed many young candidates for employment who have freshly minted PhD's but who had long since lost interest in the actual subject matter of their studies. They had become motivated mostly by competitive concerns regarding their own standing relative to the standing of their peers. And so, somewhere along the way, the joy of discovery had been replaced by the dungeon of self-absorption.
Everyone is becoming Eustace Scrubb:
He always had this notebook with him and kept a record of his marks in it, for though he didn't care much about any subject for its own sake, he cared a great deal about marks - CS Lewis, Voyage of the Dawn Treader
What should we expect or be concerned about?
There are many hard and interesting technical challenges that could be solved, but many of the brightest minds have confined their focus to expanding the ability of technology companies to surveil their users.
We should have a clear understanding of the business models of technology companies and their incentives, and interpret their actions and their products accordingly. Google started out as a search engine, but upon realizing the power represented by massive data, began to leverage their search engine as a way to expand their ability to collect data. Their entire business model is monetizing data. If you'll think back, Google evolved from being only a search engine to being an email provider and a cell phone provider; they worked on Google Glass and self-driving vehicles and maps. We must understand that all of these pursuits were in service to the goal of acquiring more data. Any Google service or product that is offered must be viewed through this lens. And not just Google, of course, but all social media companies and many financial services companies.
Artificial intelligence is a dual-use technology but I do not have concerns about AI "taking over the world". My concerns regarding AI are twofold.
First, applied AI presents unprecedented opportunities for deception and manipulation. The ability to create realistic video means that AI will be used to deceive. And the far-flung reach of the internet suggests that deception can be done at scale. I think this is especially true where visual content is concerned.
Let's do a thought experiment. The hypothetical I'm about to describe is not an endorsement of Trump. I'm not concerned with whether you're a Trump voter or not. I myself would have preferred more mean Trump Tweets to the possibility of nuclear war that we have now, but to each his own. In this particular thought experiment, Trump is merely the vehicle for a point about artificial intelligence and deception.
During the last election, the entire nation was roiled by accusations that the Russians had compromising information on Trump, who had allegedly romped with prostitutes at a Russian hotel. We now know that this was an accusation made up out of thin air and with malice. But what if there had been fabricated AI video - sometimes called a "deep fake" - that was difficult to detect as a fake? How might that have affected the course of events?
I'm especially concerned about fabricated visual content because I think there is both biblical and scientific evidence to suggest that we are uniquely wired to believe and act on what we see. Jesus said in Matthew 6:22ff that "the eye is the lamp of the body". The phrase "seeing is believing" came about for a reason. The nature of our physical existence, and even the requirements of our own safety, necessitate that we accept the reality of what we see with our eyes. So I think that fabricated visual content is an especially powerful tool for deception.
The second concern I have regarding AI is how those who are hungry for power will seek to leverage the aura of complexity to empower themselves. This is a lesson that should be learned from both the pandemic and the fight over global warming. Pandemics and climates are exceedingly complex. Those who lust for power have learned that complex threats are useful to create fear in the hearts of entire populations of people. It ought to give us pause that, no matter what the threat, the solution on offer is always and only the centralization of power and money. Is there no threat - ever - that is best responded to by decentralization?
(Loosely coupled systems are more resilient and adaptive by their very nature.)
We are continually told, with a straight face, that we must "trust the science". I used the qualifier "with a straight face" because "science" is continuously in flux. If we have learned nothing from the recent pandemic, we have at least learned that the pronouncements of science last week are completely upended this week.
AI presents an attractive tool for those inclined to exploit complexity as a vehicle for expanding their own power. "Trust the science", which has taken it on the chin of late, may very well become "trust the machine" as fears are stoked in an attempt to stampede people into the faux safety of rule by experts.
A Biblically Informed Perspective on Complexity
The socio-technical complexity of our time has left many Christians deeply unsettled. The complexity can feel overwhelming, but I want to close by offering another snippet from my essay for Touchstone. In it, I suggest that complex circumstances and environments can be drained of their complexity when our principled commitments override our temptation to prioritize safety or convenience over the true and the good.
The Judeo-Christian understanding for millennia has, of course, been that God is both real and knowable, and that we live our lives in obligation to him. (In this regard, we might want to consider the possibility that the historically disproportionate success of Western culture has been an outgrowth of this reality-based Judeo-Christian perspective. The current mania for rewriting the history of the West is probably an effort, at least in part, to erase the memory of a Judeo-Christian worldview. But I digress.)
While the Judeo-Christian understanding does not necessarily offer answers to hard technical questions, it does set those questions within the larger context of God’s existence, and of our obligations to him. The Judeo-Christian worldview doesn’t tell us whether vaccines will “work,” but it does tell us something about the importance of human freedom. It teaches unambiguously of the need for us to restrain our fallen inclination to impose our will on others. It doesn’t tell us whether lockdowns “work,” but it does tell us that policies that force us to abandon our parents and grandparents, even in their dying moments, are assuredly evil. It doesn’t tell us whether masks “work,” but it does teach us that the human countenance matters, and that blithely ignoring such a consideration is incompatible with wisdom.
If embraced, the Judeo-Christian perspective should have introduced a simplifying dynamic to the recent crisis because it provides a definitive framework for placing Covid complexity within the larger context of human spiritual concerns.
Let me offer a personal illustration.
A few years ago, I came very close to dying. I found myself in a situation where the only way for me to survive more than another 24-48 hours was to submit myself to a harrowing surgical intervention that had never actually been attempted before. It offered a high probability of leaving me permanently disabled in multiple ways. And in the very best possible outcome, the recovery was going to be lengthy and hard.
The surgeon told my wife and me that he believed the surgical team needed a good night’s rest to tackle my case, and that he thought I could survive putting off the surgery until the next morning. In the event, the doctor was right. I survived until the next morning, and the surgery took 20-plus hours, so he was right about his needing a good night’s rest as well. As things transpired, the experimental surgery was a rip-roaring success, though the recovery was as miserable as predicted. (If one is absolutely determined to be a celebrity, I can authoritatively advise against taking the path of having an exotic medical condition.)
As my wife and I sat together that long night in the ICU before the surgery, we had no inkling of the future. We talked our way through that black night as we watched the hours tick by and anticipated the surgeon’s return in the morning. We both understood that these might be our final moments together in this life. We were wrestling with whether or not to go through with the surgery. Hindsight is 20/20 of course, but there were non-trivial risks of massive cognitive and physical disability. The complexity and uncertainty of that crisis seemed overwhelming, and making any confident choice was impossibly hard. I was reluctant to go through with the surgery, in part because of the likelihood of permanent disability.
It was in the wee hours of the morning, as I wrestled with the decision, that I had the epiphany which instantly made the path forward more obvious and clear. My epiphany was this: I had previously made a vow that I was bound to keep. Many years before that night, I had made a promise to my then bride to neither leave her nor forsake her. In those waning hours before the surgeon returned, I suddenly remembered my promise and realized that my decision had actually been made long before the seeming complexity of my immediate crisis arose. As far as it was within my power, I simply must not leave her or forsake her.
What I learned that night—or remembered, since I should have known it already—was this: principled commitments have a way of draining complex crises of their debilitating uncertainty and confusion. Our life in Christ is grounded in a set of principled commitments to him, and to his Church, which should have similarly defused the complexities and uncertainties of the recent pandemic.
A commitment to the truth of God’s existence and to our resulting obligations to him, even when such commitment entails incremental risks to our lives, is the only pathway that leads to wisdom. It is the only pathway that can lead to wisdom. Any perspective that fails to first account for our obligations to God and the hierarchy of goods that he has established, is unable to produce wisdom even in the best of times, much less during a complex pandemic.