It's our mistakes that make us human

How we learn distinguishes us from tech.

Silvianne Aspray is a theologian and postdoctoral fellow at the University of Cambridge.

A man staring at a laptop grimaces and holds his hands to his head.
Francisco De Legarreta C. on Unsplash.

The distinction between technology and human beings has become blurry: AI seems to be able to listen, answer our questions, even respond to our feelings. It is becoming ever easier to confuse machines with humans, which makes it all the more important to ask: What makes us human, in distinction from machines? There are many answers to this question, but for now I would like to focus on just one aspect of what I think is distinctively human: As human beings, we live and learn in time.

To be human means to be intrinsically temporal. We live in time and are oriented towards a future good. We are learning animals, and our learning is bound up with taking time. When we learn to know or to do something, we necessarily make mistakes, and it takes practice. But keeping in view something we desire – a future good – we keep going.

Let’s take the example of language. We acquire language in community over time. Toddlers make all sorts of hilarious mistakes when they first try to talk, and it takes them a long time even to get single words right, let alone to form sentences. But they keep trying, and they eventually learn. The same goes for love: Knowing how to love our family or our neighbours near and far is not something we are good at instantly. It is not the sort of learning where you absorb a piece of information and then you ‘get’ it. No, we learn it over time, we imitate others, we practise, and even when we have learned, in the abstract, what it is to be loving, we keep getting it wrong.

This, too, is part of what it means to be human: to make mistakes. Not the sort of mistakes machines make when they classify some information wrongly, for instance, but the very human mistake of falling short of your own ideal. Of striving towards something you desire – happiness, in the broadest of terms – and yet falling short, in your actions, of that very goal. But there’s another very human thing right here: Human beings can also change. They – we – can have a change of heart, be transformed, and at some point in time actually start to do the right thing – even against all the odds. Statistics of past behaviours do not always correctly predict future outcomes. Part of being human means that we can be transformed.

Transformation sometimes comes suddenly, when an overwhelming, awe-inspiring experience changes somebody’s life as if by a bolt of lightning. Much more commonly, though, such transformation takes time. Through taking up small practices, we can form new habits, gradually acquire virtue, and do the right thing more often than not. This is so human: We are anything but perfect. As Christians would say: We have a tendency to entangle ourselves in the mess of sin and guilt. But we also bear the image of the Holy One who made us, and by the grace and favour of that One, we are not forever stuck in the mess. We are redeemed: given the strength to keep trying, despite the mistakes we make, and given the grace to acquire virtue and become better people over time. All of this is to say that being human means to live in time, and to learn in time.

Now compare this to the most complex of machines. We say that AI is able to “learn”. But what does it mean for AI to learn? Machine learning is usually categorised into supervised, unsupervised and self-supervised learning. Supervised learning means that a model is trained for a specific task based on correctly labelled data. For instance, if a model is to predict whether a mammogram image contains a cancerous tumour, it is given many example images which are correctly classed as ‘contains cancer’ or ‘does not contain cancer’. That way, it is “taught” to recognise cancer in unlabelled mammograms. Unsupervised learning is different. Here, the system looks for patterns in the dataset it is given: it clusters and groups data without relying on predefined labels. Self-supervised learning combines elements of both: Here, the system uses parts of the data itself as a kind of label – predicting, for instance, the upper half of an image from its lower half, or the next word in a given text. This is the predominant paradigm by which contemporary large-scale AI models “learn”.
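
For readers who want the contrast made concrete, here is a minimal, illustrative sketch in Python of the three set-ups just described. Everything in it – the toy numbers, the invented helper train_threshold_classifier, the one-pass clustering, the bigram word predictor – is an assumption made purely for illustration; real systems use vastly larger datasets and models, but the role that labels play in each paradigm is the same.

```python
# Toy sketch of the three learning set-ups described above.
# All data and helper names here are invented for illustration only.

from collections import Counter, defaultdict

# --- Supervised learning: learn from correctly labelled examples. ---
# Each sample is one made-up feature value, labelled 1 ('contains cancer')
# or 0 ('does not contain cancer').
labelled_data = [(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)]

def train_threshold_classifier(samples):
    """Place a decision threshold midway between the two class means."""
    neg = [x for x, y in samples if y == 0]
    pos = [x for x, y in samples if y == 1]
    return (sum(neg) / len(neg) + sum(pos) / len(pos)) / 2

threshold = train_threshold_classifier(labelled_data)
print("unlabelled sample 0.75 classed as:", int(0.75 > threshold))

# --- Unsupervised learning: find structure without any labels. ---
# Group values around two centres by nearest-centre assignment.
values = [0.10, 0.15, 0.85, 0.90]
centres = [min(values), max(values)]
clusters = defaultdict(list)
for v in values:
    clusters[min(centres, key=lambda c: abs(c - v))].append(v)
print("clusters found:", dict(clusters))

# --- Self-supervised learning: the data supplies its own labels. ---
# Predict the next word from the previous one, using past text as supervision.
text = "we live and learn in time and we learn in community".split()
next_word = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    next_word[prev][nxt] += 1
print("most likely word after 'learn':", next_word["learn"].most_common(1)[0][0])
```

Notice that in each case the “learning” amounts to summarising data that already exists; nothing in the sketch reaches beyond the past it was given – which is precisely the point developed below.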

In each case, AI’s learning is necessarily based on datasets. Learning happens with reference to pre-given data, and in that sense with reference to the past. It may look as though such models can consider the future and have future goals, but only insofar as they have picked up patterns in past data, which they use to predict future patterns – as if the future were nothing but a repetition of the past.

So this is a real difference between human beings and machines: Human beings can, and do, strive toward a future good. Machines, by contrast, are always oriented towards the past of the data that was fed to them. Human beings are intrinsically temporal beings, whereas machines are temporal only in a very limited sense: it takes time to upload data and for the data to be processed, for instance. Time, for machines, is nothing but an extension of the past, whereas for human beings it is an invitation to, and the possibility of, being transformed for the sake of a future good. We, human beings, are intrinsically temporal, living in time towards a future good – something machines do not do.

In the face of new technologies we need a sharpened sense of the strange and awe-inspiring species that is the human race, and we need to cultivate a new sense of wonder about humanity itself.


Tech has changed: it’s no longer natural or neutral

The first in a three-part series exploring the implications of technology.

James is Canon Missioner at Blackburn Cathedral. He researches technology and theology at Oxford University.

A caveman holding a hammer looks at a bench on which are a broken bicycle and a laptop.
Nick Jones/Midjourney.ai.

My son was born in February last year, and it seems that every day he is developing new skills, facial expressions and adorable quirks. Just the other day he was playing with some wooden blocks and, when they inevitably fell over, he let out the most adorable giggle. As you can guess, I immediately reached for my phone so that I could capture the moment. Moments like this happen all the time in the life of a modern parent – we want to share them with our spouse, family and friends, or just capture them for ourselves because they are something we treasure. And yet, in this series of articles I would like to consider this moment, and the thousands like it that take place in a technological society, and ask: is everything as benign as it seems?

There are two ideas that often come up whenever people talk about technology. The first is that technology is basically ‘neutral’ – that it only becomes good or bad depending on what you do with it. “Look at a hammer,” someone might say, “there is nothing intrinsically good or bad about this hammer; only the end result is good or bad, depending on whether I’m using it to hit nails or people!” On this reading of technology, the only important questions relate to the consequences of use.

If technology is neutral, then the primary concern for users, legislators and technologists is the consequences of technology, not the technology itself. The only way to ensure that the technology is used for good is to ensure, somehow, that more good people use it for good things than bad people use it for bad things. Often this idea presents itself as a conversation about competing freedoms: very few people (with some important exceptions – see this article from Ezra Klein) are debating whether there is something intrinsically problematic about the app formerly known as Twitter; most discussion revolves around how to maintain the freedom of good users while curtailing the freedom of bad users.

We assume that these tools of social interaction like Facebook and Instagram are, in and of themselves, perfectly benign. We are encouraged to think this by massive corporations with a vested interest in maintaining our use of their platforms, and at first glance they seem completely harmless: what could possibly be the problem with a website on which grandma can share photos of her cat? And while the dark underbelly of these platforms has violent real-world consequences – like the rise of antisemitism and anti-Muslim hatred – the solution is primarily imagined as a matter of dealing with ‘bad actors’ rather than as a response to anything intrinsically problematic in the platforms themselves.

The second idea is related but somewhat different: Advocates of modern technology will suggest that humanity has been using technology ever since there were humans, and that all this modern technology is therefore nothing to worry about. “Yes, modern technology looks scary,” someone might say, “but it’s really nothing to worry about – humans have been using tools since the Stone Age, don’t you know!” This view proposes that because hammers are technology, and all technology is the same, there is no difference between a hammer and the internet, or between the internet and a cyborg.

This second idea tends to be accompanied by an emphasis on the slow and steady evolution of technology, and by the observation that at every major technological advancement there have been naysayers decrying the latest innovation. (Even Plato was suspicious of writing when it was invented.) Taken as part of a very long view of human history, the technological innovations of the last 100 years seem a normal and natural part of the evolution of our species, which has always set itself apart from the rest of the animal kingdom by its use of technology.

Steve Jobs gave a good example of this in an interview about the development of the PC:

“I think one of the things that really separates us from the high primates is that we’re tool builders. I read a study that measured the efficiency of locomotion for various species on the planet. The condors used the least energy to move a kilometer. And humans came in with a rather unimpressive showing about a third of the way down the list… not too proud of a showing for the crown of creation… But then somebody at Scientific American had the insight to test the efficiency of locomotion for a man on a bicycle. And a human on a bicycle blew the condor away – completely off the top of the charts. 

And that’s what a computer is to me… It’s the most remarkable tool we’ve ever come up with… It’s the equivalent of a bicycle for our minds”  

Notice that Jobs here draws a straight-line comparison between the bicycle and the PC. As far as Jobs is concerned, there is no difference in kind between the two tools: one is more complex than the other, but otherwise they are just technologies that expand human capacity. “A bicycle for our minds” is a fascinating way to describe a computer, because it implies that nothing about our minds will be changed; they’ll just be a little bit faster.

And yet, despite the attempts of thought leaders like Jobs to convince us that modern technology is entirely benign, many of us are left with a natural suspicion that there is more going on. As a priest in the Church of England, I often have conversations with parishioners and members of the public who are looking for language, or a framework, to describe their instinctive recognition that something has changed – at some point, fairly recently – about the nature of the technology that we use, or the way that it influences our lives. They sense that modern technology is not simply the natural extension of the sorts of tools that humans have been using since the Stone Age, and that it is not neutral but has, in significant ways, already had an effect on us regardless of how we might use it. How do we respond to articulate and thoughtful people such as Steve Jobs who make a compelling case that modern technology is neutral and natural?

That moment with my son – he giggles, I take a photo – seems completely innocuous at first glance. But what resources are available if I did want to think more carefully about that moment, and the many like it that suffuse my daily life? Thankfully, there is a growing body of literature from philosophers and theologians who are thinking about the impact of modern technology on the human condition. In the next two articles I would like to introduce the work of Martin Heidegger and outline his criticism of modern technology, showing how he challenges the idea that technology is simply a natural extension of human capacity or a neutral tool.

Heidegger is a complex character in philosophy and in Western history. There is no getting around the fact that he was a supporter of the Nazi Party during the Second World War. His politics have been widely condemned, and rightly so; nevertheless, his analysis of the nature of modern technology continues to provide useful insights. His claim is that modern technology essentially and inevitably changes our relationship with the world in which we live, and even with ourselves. It is this claim, and Heidegger’s suggested solution, that I will unpack in the next two articles.