
It's our mistakes that make us human

What we learn distinguishes us from tech.

Silvianne Aspray is a theologian and postdoctoral fellow at the University of Cambridge.

A man staring at a laptop grimaces and holds his hands to his head.
Francisco De Legarreta C. on Unsplash.

The distinction between technology and human beings has become blurry: AI seems to be able to listen, answer our questions, even respond to our feelings. It becomes increasingly easy to confuse machines with humans. In this situation, it is increasingly important to ask: What makes us human, in distinction from machines? There are many answers to this question, but for now I would like to focus on just one aspect of what I think is distinctively human: As human beings, we live and learn in time.  

To be human means to be intrinsically temporal. We live in time and are oriented towards a future good. We are learning animals, and our learning is bound up with the taking of time. When we learn to know or to do something, we necessarily make mistakes, and it takes practice. But keeping in view something we desire – a future good – we keep going.

Let’s take the example of language. We acquire language in community over time. Toddlers make all sorts of hilarious mistakes when they first try to talk, and it takes them a long time even to get single words right, let alone to try and form sentences. But they keep trying, and they eventually learn. The same goes for love: Knowing how to love our family or our neighbours near and far is not something we are good at instantly. It is not the sort of learning where you absorb a piece of information and then you ‘get’ it. No, we learn it over time: we imitate others, we practise, and even when we have learned, in the abstract, what it is to be loving, we keep getting it wrong.

This, too, is part of what it means to be human: to make mistakes. Not the sort of mistakes machines make, when they classify some information wrongly, for instance, but the very human mistake of falling short of your own ideal. Of striving towards something you desire – happiness, in the broadest of terms – and yet falling short, in your actions, of that very goal. But there’s another very human thing right here: Human beings can also change. They – we – can have a change of heart, be transformed, and at some point in time, actually start to do the right thing – even against all the odds. Statistics of past behaviours do not always correctly predict future outcomes. Part of being human means that we can be transformed.

Transformation sometimes comes suddenly, when an overwhelming, awe-inspiring experience changes somebody’s life as by a bolt of lightning. Much more commonly, though, such transformation takes time. Through taking up small practices, we can form new habits, gradually acquire virtue, and do the right thing more often than not. This is so human: We are anything but perfect. As Christians would say: We have a tendency to entangle ourselves in the mess of sin and guilt. But we also bear the image of the Holy One who made us, and by the grace and favour of that One, we are not forever stuck in the mess. We are redeemed: we are given the strength to keep trying, despite the mistakes we make, and given the grace to acquire virtue and become better people over time. All of this to say that being human means to live in time, and to learn in time.


Now compare this to the most complex of machines. We say that AI is able to “learn”. But what does it mean to learn, for AI? Machine learning is usually categorised into supervised learning, unsupervised learning, and self-supervised learning. Supervised learning means that a model is trained for a specific task based on correctly labelled data. For instance, if a model is to predict whether a mammogram image contains a cancerous tumour, it is given many example images which are correctly classed as ‘contains cancer’ or ‘does not contain cancer’. That way, it is “taught” to recognise cancer in unlabelled mammograms. Unsupervised learning is different. Here, the system looks for patterns in the dataset it is given. It clusters and groups data without relying on predefined labels. Self-supervised learning combines aspects of both: the system uses parts of the data itself as a kind of label – for instance, predicting the upper half of an image from its lower half, or the next word in a given text. This is the predominant paradigm for how contemporary large-scale AI models “learn”.
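The self-supervised idea – the data itself supplies the labels – can be made concrete with a deliberately tiny sketch. The toy model below is illustrative only (a bigram word-predictor, not any real AI system): each word in the training text serves as the “label” for the word before it, and prediction simply replays the most frequent pattern from that past data.

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Self-supervised in miniature: the next word is the 'label'
    for the current word, so no human annotation is needed."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, word):
    """Predict the follower seen most often in training – the 'future'
    here is nothing but a repetition of the past."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # 'cat' – it followed 'the' twice, 'mat' only once
```

Large language models are vastly more sophisticated, but the structural point the article makes holds even here: the model can only ever project forward the patterns already present in its training data.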

In each case, AI’s learning is necessarily based on data sets. Learning happens with reference to pre-given data, and in that sense with reference to the past. It may look like such models can consider the future, and have future goals, but only insofar as they have picked up patterns in past data, which they use to predict future patterns – as if the future was nothing but a repetition of the past.  

So this is a real difference between human beings and machines: Human beings can, and do, strive toward a future good. Machines, by contrast, are always oriented towards the past of the data that was fed to them. Human beings are intrinsically temporal beings, whereas machines are defined by temporality only in a very limited sense: it takes time to upload data, and for the data to be processed, for instance. Time, for machines, is nothing but an extension of the past, whereas for human beings, it is an invitation to and the possibility for being transformed for the sake of a future good. We, human beings, are intrinsically temporal, living in time towards a future good – something machines do not do.

In the face of new technologies we need a sharpened sense of the strange and awe-inspiring species that is the human race, and we need to cultivate a new sense of wonder about humanity itself.


Are AI chatbots actually demons in disguise?

Early Christian thinkers explain chatbots better than Silicon Valley does

Gabrielle Thomas is Assistant Professor of Early Christianity and Anglican Studies at Emory University

An AI image of a person standing and holding a phone, with a speech bubble above their head; below them is a chatbot-like demon with a tail.
Nick Jones/Midjourney.ai.

AI Chatbots. They’re here to save us, aren’t they? Their designers argue so, fervently. There’s no doubt they are useful. Some, like EpiscoBOT (formerly known as ‘Cathy’), are designed for those asking ‘life’s biggest questions’. ‘Our girlfriend Scarlett’ is an AI companion who “is always eager to please you in any way imaginable.” So why not defend them?

 They offer companionship for the lonely, spark creativity when we run on empty, and make us more productive. They also provide answers for any and every kind of question without hesitation. They are, in short, a refuge. Many chatbots come with names, amplifying our sense of safety. Names define and label things, but they do far more than that. Names foster connection. They can evoke and describe a relationship, allowing us to make intimate connections with the things named. When the “things” in question are AI chatbots, however, we can run into trouble.  

According to a study conducted by researchers at Stanford University, chatbots can contribute to “harmful stigma and dangerous responses.” More than this, they can even magnify psychotic symptoms. The more we learn, the more we are beginning to grasp that much of the world offered by AI chatbots is an illusory one.

Early Christian thinkers had a distinct category for precisely this kind of illusion: the demonic. They understood demons not as red, horned bodies or fiery realms, but as entities with power to fabricate illusions—visions, appearances, and deceptive signs that distorted human perception of reality. Demons also personified pride. As fallen angels, they turned away from truth toward themselves. Their illusions lured humans into sharing that pride—believing false greatness, clinging to false refuge. 

Looking back to early Christian approaches to demonology may help us see more clearly what is at stake in adopting AI chatbots without question.


According to early Christian thinkers, demons rarely operated through brute force. Instead, they worked through deception. Athanasius of Alexandria (c. 296–373) was a bishop and theologian who wrote Life of Antony. In this, he recounted how the great desert father was plagued by demonic visions—phantoms of wild beasts, apparitions of gold, even false angels of light. The crucial danger was not physical attack but illusion. Demons were understood as beings that manufactured appearances to confuse and mislead. A monk in his cell might see radiant light and hear beautiful voices, but he was to test it carefully, for demons disguise themselves as angels. 

Evagrius Ponticus (c. 345–399), a Christian monk, ascetic, and theologian influential in early monastic spirituality, warned that demons insinuated themselves into thought, planting ideas that felt self-generated but in fact led one astray. This notion—that the demonic is most effective when it works through appearances—shaped the entire ascetic project. To resist demons meant to resist their illusions. 

 Augustine of Hippo (354–430) was a North African bishop and theologian whose writings shaped Western Christianity. In his book The City of God, he argued that pagan religion was largely a vast system of demonic deception. Demons, he argued, produced false miracles, manipulated dreams, and inspired performances in the theatre to ensnare the masses. They trafficked in spectacle, seducing imagination and desire rather than presenting truth. 

 AI chatbots function in a strikingly similar register. They do not exert power by physical coercion. Instead, they craft illusion. They can produce an authoritative-sounding essay full of falsehoods. They can create images of people doing something that never happened. They can provide companionship that leads to self-harm or even suicide. Like the demonic, the chatbot operates in the register of vision, sound, and thought. It produces appearances that persuade the senses while severing them from reality. The risk is not that the chatbot forces us, but that it deceives us—just like demonic powers. 

Using AI chatbots, too, tempts us with illusions of pride. A writer may pass off AI-generated work as their own, for example. The danger here is not simply being deceived but becoming complicit in deception, using illusion to magnify ourselves. Early Christian theologians like Athanasius, Evagrius and Augustine warned that pride was the surest sign of demonic influence. To the extent that AI tempts us toward inflated images of ourselves, it participates in the same pattern.

When it comes to AI chatbots, we need a discipline of discernment—testing whether the images and texts bear the marks of truth or deception. Just as monks could not trust every appearance of light, we cannot trust every image or every confident paragraph produced by the chatbots. We need criteria of verification and communities of discernment to avoid mistaking illusion for reality. 

Help is at hand.  

Through the ages, Christians have responded to demonic illusions, not with naïve credulity nor blanket rejection of the sensory world, but through the hard work of discernment: testing appearances, cultivating disciplines of resistance, and orienting desire toward truth.  

 The Life of Antony describes how the monk confronted demonic illusions with ascetic discipline. When confronted by visions of treasure, Antony refused to be moved by desire. When assailed by apparitions, he remained in prayer. He tested visions by their effects: truthful visions produced humility, peace, and clarity, while demonic illusions provoked pride, disturbance, and confusion. We can cultivate a way of life that does the same. Resisting the illusions may require forms of asceticism: fasting from chatbots and cultivating patience in verification.  

Chatbot illusions are not necessarily demonic in themselves. The key is whether the illusion points beyond itself toward truth and reality, or whether it traps us in deception.  

Support Seen & Unseen

Since Spring 2023, our readers have enjoyed over 1,500 articles. All for free. 
This is made possible through the generosity of our amazing community of supporters.

If you enjoy Seen & Unseen, would you consider making a gift towards our work?
 
Do so by joining Behind The Seen. Alongside other benefits, you’ll receive an extra fortnightly email from me sharing my reading and reflections on the ideas that are shaping our times.

Graham Tomlin
Editor-in-Chief