
Challenging transhumanism’s quest to optimise our future

Instead of separating the human from the hardware, Oliver Dürr recommends rediscovering other ways of self-formation and improvement.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker, podcaster and features in several documentary films.

A biohacking kit for a biology workshop. Xavier Coadic, CC BY-SA 4.0, via Wikimedia Commons.

Welcome to the age of transhumanism. In this world, the goal is to overcome all limitations and restrictions that hold human beings back. Science, technology, and medicine should allow us to live longer, healthier, and better lives. So runs the promise. But is there a peril that goes along with it? To answer that question, we need to take a closer look at the phenomenon of transhumanism, particularly the view of human beings that lies behind the glittery promises of an “optimised” future.  

Improving humans, however possible 

Transhumanism is a global movement that seeks to use all available technological means to “enhance” human beings. From curing illnesses and overcoming physical limitations to expanding mental abilities, the movement aims to overcome all the limitations of the current human condition.

More precisely, it seeks to overcome all obstacles to the individual’s freedom to live the life he or she wants to live. In the attempt to enhance life, transhumanism veers beyond traditional forms of curing impairments (like compensating for bad sight with a pair of glasses) and ventures into more experimental fields (like manipulating the human eye to see ultraviolet or infrared light). Emotional or cognitive deficits (such as lack of concentration) are supposed to be overcome by “smart drugs” (like methylphenidate/Ritalin) and even genetic modifications, while prostheses are envisaged as a way to expand human capabilities.

The goal is to create “superhuman” abilities. The holy grail of this movement is to drastically extend the human lifespan (provided it is lived in a state of health and vigour). Ultimately, transhumanists want to “overcome” death.

There are two paths by which the transhumanist movement hopes to arrive at this sacred goal: a biological and a post-biological one.

Biological transhumanism 

Let’s have a look at “biological transhumanism” first. The focus here is on our current, carbon- and water-based bodies. Weak and fragile as they are, biological transhumanists must make do with them to achieve the greater things they envision. Human beings must be treated with drugs and a host of prefixed technologies: bio-, gene-, and nano-.

Aubrey de Grey’s project of postponing death by achieving a “longevity escape velocity” is a good illustration of the movement. De Grey is convinced that novel biomedical technologies can achieve a limitless extension of the human life span: “If we can make rejuvenation therapies work well enough to give us time to make them work better,” he writes, “that will give us additional time to make them work better still” and so on. The time gained with a particular innovation must only be greater than the time needed to achieve another such advancement. Therefore, he argues, the death of people alive today can, in effect, be staved off indefinitely.

De Grey is not alone in transhumanist circles in predicting such outcomes. Google’s Ray Kurzweil has a similar view: “We have the means right now to live long enough to live forever”.

Such optimistic prognoses bank on a view of human beings as essentially body-machines that can be controlled and improved at will. The key to unlocking their potential is information theory.

Think of human beings as an algorithm, and, in principle, all their problems can be solved by engineering. Cultural critic Evgeny Morozov pointedly called this approach “technological solutionism”. From a ‘solutionist’ perspective, humanity is increasingly seen as the problem that needs solving. Thus, not only must we develop new technologies to guarantee human life and freedom, but humanity itself needs to adapt. Those necessary “transformations” of the “human” are what inform the first dimension of the term “trans-humanism”.


Post-biological transhumanism 

The second path is “post-biological transhumanism”, which takes a more radical approach. Here, the focus is on leaving behind our current bodily form altogether and radically transcending the limitations of what it means to be human today. Those alterations, such transhumanists argue, will be so radical that calling the result “human” will no longer be adequate. The preferred means to achieve the future state are taken from the digital sphere: algorithms and information processes.  

The view of “the human as a machine” becomes more specifically “the human as a computer”. Mind, spirit and consciousness are understood to be the software within the hardware of the body. Human beings are perceived to be biological computers and thus in direct competition with digital computers. And those are becoming increasingly powerful by the hour. If human beings want a seat at the table in the digital future, they must find a way to merge with and dissolve into the digital sphere—or so the transhumanist narrative goes.  

Immortality in the Cloud? 

For post-biological transhumanists, the ultimate goal is called “mind-uploading”. The idea is that we can upload our minds (selves) to the internet and achieve immortality—at least if all we are is the sum of information processes in the brain and as long as the internet infrastructure is still available. Mind uploading requires leaving behind our current biological form of life altogether and dissolving into virtuality.  

This vision of virtual immortality is why post-biological transhumanists tend to place their hopes in information technologies, software algorithms, robotics and artificial intelligence research. They aim to overcome and entirely leave behind the “human” as it is. This move to “transcend” informs the second dimension of the term “trans-humanism”. 


Is there a solution? 

But can these transhumanist approaches really deliver on their promises?

Human beings have always tried to improve themselves—not least through technology. What is new today is how transhumanists define “better” and some of the means they propose for realising those perceived benefits. With its solutionist approach to life, transhumanism discards large swaths of traditional techniques for “improving” human beings and their lives. In classical humanism, at least from the Renaissance to the 1970s, “human improvement” meant education: moral, intellectual, and practical formation and refinement towards a concrete ideal of humanity, and the shaping of a society that enables such formative processes.

But in the age of transhumanism, there is a tendency to believe that we can delegate such hard work of the self to a new technocracy and its algorithmic tools—which, to put it mildly, may not always have our best interests at heart.


The main problem, however, is that ultimately, we cannot delegate our future to machines because, after all, we aren’t machines. Instead, we must learn to live with ourselves, our limitations, and our finitude, or we will never be free. Freedom only ever begins once we learn to let go of ourselves and start living for and with others.  

The reason for this is that freedom is best conceived not as a mere “choice” to do what we please, but as the liberty to live a truly fulfilling life, which almost always includes others. Many of the things that make a future worth wanting in the first place are shared goods: relational, communitarian, and cultural values and practices that needn’t be optimised or automated at all—at least not technologically.

When I build a sandcastle with my toddlers, the process needn’t be optimised (which realistically would mean excluding the toddlers from the process altogether). Rather, doing it together is the point. Political decision-making processes, to take another example, also don’t have to be automated or made more efficient through algorithms. The struggle of deliberating over how our society should look is the point. Without such moral deliberation, our public life is diminished. In many cases, the slowness, strenuousness and inefficiency of such processes is a feature, not a bug.

A tech future beyond transhumanism 

Keeping this in mind changes the questions we pose about novel technologies: how (if at all) can they be integrated into our lives in such a way that they open up the world in its complexity, allowing us to experience the fullness of life and enabling us to shape the future we really want?

It is time to rediscover and bring back religious and humanistic traditions of self-formation into our public debates about the future. Far from being relics of the past, soon to be discarded, they can provide us with tried and true values, practices and virtues around which we can organise our societies in the digital future. They provide us with the tools to unlock the sources of care and the will to create a better social framework in which human beings and technology find their place. The future need not be transhuman to be better; being fully human is quite enough.  


Are AI chatbots actually demons in disguise?

Early Christian thinkers explain chatbots better than Silicon Valley does

Gabrielle Thomas is Assistant Professor of Early Christianity and Anglican Studies at Emory University.

A person holding a phone with a speech bubble above their head; below them, a chatbot-like demon with a tail. Nick Jones/Midjourney.ai.

AI chatbots. They’re here to save us, aren’t they? Their designers argue so, fervently. There’s no doubt they are useful. Some, like EpiscoBOT (formerly known as ‘Cathy’), are designed for those asking ‘life’s biggest questions’. ‘Our girlfriend Scarlett’ is an AI companion who “is always eager to please you in any way imaginable.” So why not defend them?

 They offer companionship for the lonely, spark creativity when we run on empty, and make us more productive. They also provide answers for any and every kind of question without hesitation. They are, in short, a refuge. Many chatbots come with names, amplifying our sense of safety. Names define and label things, but they do far more than that. Names foster connection. They can evoke and describe a relationship, allowing us to make intimate connections with the things named. When the “things” in question are AI chatbots, however, we can run into trouble.  

According to a study conducted by researchers at Stanford University, chatbots can contribute to “harmful stigma and dangerous responses.” More than this, they can even magnify psychotic symptoms. The more we learn, the more we are beginning to grasp that much of the world offered by AI chatbots is an illusory one.

Early Christian thinkers had a distinct category for precisely this kind of illusion: the demonic. They understood demons not as red, horned bodies or fiery realms, but as entities with power to fabricate illusions—visions, appearances, and deceptive signs that distorted human perception of reality. Demons also personified pride. As fallen angels, they turned away from truth toward themselves. Their illusions lured humans into sharing that pride—believing false greatness, clinging to false refuge. 

Looking back to early Christian approaches to demonology may help us see more clearly what is at stake in adopting AI chatbots without question.


According to early Christian thinkers, demons rarely operated through brute force. Instead, they worked through deception. Athanasius of Alexandria (c. 296–373) was a bishop and theologian who wrote Life of Antony. In this, he recounted how the great desert father was plagued by demonic visions—phantoms of wild beasts, apparitions of gold, even false angels of light. The crucial danger was not physical attack but illusion. Demons were understood as beings that manufactured appearances to confuse and mislead. A monk in his cell might see radiant light and hear beautiful voices, but he was to test such appearances carefully, for demons disguise themselves as angels.

Evagrius Ponticus (c. 345–399), a Christian monk, ascetic, and theologian influential in early monastic spirituality, warned that demons insinuated themselves into thought, planting ideas that felt self-generated but in fact led one astray. This notion—that the demonic is most effective when it works through appearances—shaped the entire ascetic project. To resist demons meant to resist their illusions. 

 Augustine of Hippo (354–430) was a North African bishop and theologian whose writings shaped Western Christianity. In his book The City of God, he argued that pagan religion was largely a vast system of demonic deception. Demons, he argued, produced false miracles, manipulated dreams, and inspired performances in the theatre to ensnare the masses. They trafficked in spectacle, seducing imagination and desire rather than presenting truth. 

 AI chatbots function in a strikingly similar register. They do not exert power by physical coercion. Instead, they craft illusion. They can produce an authoritative-sounding essay full of falsehoods. They can create images of people doing something that never happened. They can provide companionship that leads to self-harm or even suicide. Like the demonic, the chatbot operates in the register of vision, sound, and thought. It produces appearances that persuade the senses while severing them from reality. The risk is not that the chatbot forces us, but that it deceives us—just like demonic powers. 

Using AI chatbots, too, tempts us with illusions of pride. A writer may pass off AI-generated work as their own, for example. The danger here is not simply being deceived but becoming complicit in deception, using illusion to magnify ourselves. Early Christian theologians like Athanasius, Evagrius and Augustine warned that pride was the surest sign of demonic influence. To the extent that AI tempts us toward inflated images of ourselves, it participates in the same pattern.

When it comes to AI chatbots, we need a discipline of discernment—testing whether the images and texts bear the marks of truth or deception. Just as monks could not trust every appearance of light, we cannot trust every image or every confident paragraph produced by the chatbots. We need criteria of verification and communities of discernment to avoid mistaking illusion for reality. 

Help is at hand.  

Through the ages, Christians have responded to demonic illusions, not with naïve credulity nor blanket rejection of the sensory world, but through the hard work of discernment: testing appearances, cultivating disciplines of resistance, and orienting desire toward truth.  

 The Life of Antony describes how the monk confronted demonic illusions with ascetic discipline. When confronted by visions of treasure, Antony refused to be moved by desire. When assailed by apparitions, he remained in prayer. He tested visions by their effects: truthful visions produced humility, peace, and clarity, while demonic illusions provoked pride, disturbance, and confusion. We can cultivate a way of life that does the same. Resisting the illusions may require forms of asceticism: fasting from chatbots and cultivating patience in verification.  

Chatbot illusions are not necessarily demonic in themselves. The key is whether the illusion points beyond itself toward truth and reality, or whether it traps us in deception.  
