Great reflections on tech, with lots of important nuance that is often overlooked.
I think we are in a moment in history where the pendulum has swung away from rationality, especially (but not only) in our thinking around religion and spirituality, and so any effort to assert the importance of rationality, analysis, or language is a hard sell, as it goes against the vibe.
Yeah - this is an interesting observation. We may be riding the vibe pendulum from rationalism to superstition, and will have to wait for it to swing back to mystery and rationality at some point. I am not advocating for rationalism by any means, only rationality. It's hard to watch even the news without concluding that, whatever the vibe currently is, for a whole lot of people, it ain't rationality.
Especially when the leading edge vector of the situation you describe re rationality is the Orange Oaf who makes a mockery of any kind of rationality every time that he opens his toxic potty mouth.
Thanks for your thoughtful reflections, Keith! "We should act with informed prudence, while exhibiting a more anti-fragile approach to living in the current moment." I concur with your point about thoughtful prudence when it comes to navigating issues around AI. I just mentioned to a friend yesterday that one of my greatest concerns with regard to AI is for the young generation, who may be tempted to use these tools before ever developing their own mental capacities to reason, research, read deeply, communicate clearly, etc. While I may not fully agree with your take (I do think that AI advancements have a categorically different effect on us than previous technological inventions), I think you present a worthwhile, nuanced perspective.
Don't panic, in short. You and Kingsnorth both know something.
Kingsnorth would like to preserve space for being, not just doing. He doesn't see the doing, and that's a blind spot of his due to his work dealing mostly with being, but as to the spiritual development of the human being he has something important to say.
Being over doing, which is part of embracing our humanity, is critical for children. We need to learn something about the limits of a technology before letting it change how people raise and teach children. Computer stuff, whether we call it AI or not, that does the homework for the child does not serve the child. Any kind of human growth that is supplanted by a shortcut leaves a void that must be filled with work, not just work of a certain dollar value, but work of the struggle and complexity that produce a well-rounded, capable human being.
There is the addiction factor too. People addicted to any technology, who don't have the opportunity to grow up normally, are going to be spiritually deficient. We have moral agency, but as to what is normal, legal, and allowed for children, we need guardrails against addiction.
"Now we know that AI is good, if one uses it legitimately." (1Tim 1:8 - very slightly refreshed in translation!)
The critical point is that technology must be legitimate in its development and application or else it will lead to dehumanisation. Legitimacy comes from God in the form of his Logos to lead and to guide (in one form or another). Much of the development in AI is done without the Logos and so it lacks legitimacy - this is the problem.
We can trace various possible trajectories to dehumanisation. Whenever human wisdom has ignored the Logos and set off at a pace, the direction of travel is misguided and the waypoints are demonstrably mad - unless you can convince yourself that mad is OK. But having convinced yourself that "mad is OK", you can then continue at pace in the misguided direction, justified by the human wisdom you started with.
There is a step change with AI that does distinguish it from previous technologies. The scientism that has taken us to the current mad waypoint is based on a philosophy of materialism. AI reduces the material to the algorithmic - it requires a philosophy of the algorithm. Materialism is deeply flawed, but this is much worse! It is a much narrower and more desiccated philosophy of being. Not all that is material is mathematical - and not all that is mathematical is computable. If materialism is dehumanising (which it is), then the philosophy of the algorithm is significantly more dehumanising.
I deeply enjoy your thoughtful and layered analysis with a Christian foundation. AI is in a “first principles” moment, and oftentimes those principles are shaped by the founding questions we ask ourselves. As of now, the founding questions appear to be “how can I do more with AI?” and “how can AI allow me to do more with less?” Very few appear to be asking, “how can AI bring us closer to God?” It’s an important question because I absolutely believe it can.
My main frustration with everyday AI discussion is what appears to be a severe lack of understanding of the technology. As of now, LLMs and robotic pairings are nowhere close to replacing man in the economy. Most do not even grasp the fact that LLMs are simply high-speed computing coupled with clever statistical computation. This chasmic ignorance leads them to believe that what we have now in chatbots is ACTUAL human-like thought in process. It couldn’t be further from the truth.
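To make the "clever statistical computation" point concrete, here is a toy sketch of my own (not taken from any real model or library) of the core move in next-token generation: raw scores over a vocabulary are converted to probabilities, and a token is drawn by weighted chance. The vocabulary and scores below are invented for illustration.

```python
import math
import random

def softmax(logits):
    # Convert raw scores into a probability distribution summing to 1.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits):
    # Pick the next token by weighted chance: statistics, not thought.
    probs = softmax(logits)
    r, cumulative = random.random(), 0.0
    for token, p in zip(vocab, probs):
        cumulative += p
        if r < cumulative:
            return token
    return vocab[-1]

# Hypothetical scores a model might assign after "The cat sat on the".
vocab = ["mat", "roof", "moon"]
logits = [2.0, 1.0, -1.0]
print(sample_next_token(vocab, logits))
```

A real LLM does this at enormous scale, with scores produced by a trained network, but the final step is the same kind of probabilistic draw.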
Similarly, when one mentions AGI and ASI in a casual conversation about AI, the stares are generally blank. AGI and, more acutely, ASI represent potentially revolutionary and deeply transformational “tipping points” in AI development; they are still only theorized, but no less tirelessly pursued by industry. If those tipping points come to pass, human agency is no longer a factor. Then we must ask, “does AI love God?”
If not, then all the catastrophizing might be well founded.
This is very helpful, thank you!
"Now we know that AI is good, if one uses it legitimately." (1Tim 1:8 - very slightly refreshed in translation!)
The critical point is that technology must be legitimate in its development and application or else it will lead to dehumanisation. Legitimacy comes from God in the form of his Logos to lead and to guide (in one form or another). Much of the development in AI is done without the Logos and so it lacks legitimacy - this is the problem.
We can trace various possible trajectories to dehumanisation. Whenever human wisdom has ignored the Logos and set off at a pace, the direction of travel is misguided and the way points are demonstrably mad - unless you can convince yourself that mad is OK. But having convinced yourself that "mad is OK", you can then continue at pace in the misguided direction, justified by the human wisdom you started with.
There is a step change with AI that does distinguish it from previous technologies. The scientism that has taken us to the current mad waypoint is based on a philosophy of materialism. The AI reduces the material to the algorithmic - it requires a philosophy of the algorithm. Materialism is deeply flawed, but this is much worse! It is a much narrower and more dessicated philosophy of being. Not all that is material is mathematical - and not all that is mathematical is computable. If materialism results in dehumanising (which it does) then the philosophy of the algorithm is significiantly more dehumanising.
I deeply enjoy your thoughtful and layered analysis with a Christian foundation. AI is in a “first principles” moment and often times those principles are shaped by the founding questions we ask ourselves. As of now the founding questions appear to be “how can I do more with AI?” and “how can AI allow me to do more with less?”. Very few appear to be asking, “how can AI bring us closer to God?” It’s an important question because I absolutely believe it can.
My main frustration with everyday AI discussion is what appears to be a severe lack of understanding of the technology. As of now, LLM’s and robotic pairing are no where close to replacing man in the economy. Most do not even grasp the fact that LLM’s are simply high-speed computing coupled with clever statistical computation. Their chasmic ignorance leads them to believe that what we have now in chatbots is ACTUAL human-like thought in process. It couldn’t be further from the truth.
Similarly, when one mentions AGI and ASI in a casual conversation about AI the stares are generally blank. AGI and, more acutely, ASI represent potential revolutionary and deeply transformational “tipping points” in AI development they are still only theorized, but no less tirelessly pursued by industry. If those tipping points come to pass human agency is no longer a factor. Then we must ask, “does AI love God?”
If not, then all the catastrophizing might be well founded.
A foundational question for ASI (or perhaps against it):
When has a superior intelligence with physical agency ever allowed itself to be enslaved by an inferior intelligence?