Essay
AI
Culture
Identity
8 min read

Roll on AI, you'll make us more human

I’m not necessarily stupidly optimistic about AI, but there’s a tentative case to be so.

Daniel is an advertising strategist turned vicar-in-training.

An AI-generated image of a man folding a paper plane in a relaxed lounger; around him, creative tools and screens giving status updates are visible.
Nick Jones/Midjourney.ai.

I still come across people who insist that there are simply things that AI (artificial intelligence) can’t and will never be able to do. Humans will always have an edge. They tend to be journalists or editors who will insist that ChatGPT’s got nothing on their persuasive intentionality and honed command of nuance, wit, and word play. Of course, machines can replace the humans at supermarket check-out tills but not them. What they do is far too complex and requires such emotional precision and incisive insight into the audience psyche. Okay then. I nod, rolling my eyes into the back of my head.  

At this point, it’s just naive to put a limiter on what AI can do. It’s not even been two years since ChatGPT was released into the wild and started this whole furore. It’s only been 18 months, and OpenAI have just launched GPT-4o, which can produce a whole persona that can listen, look, and talk back in such a natural and convincing voice that it may as well be a scene from the 2013 film, Her. A future where Joaquin Phoenix falls in love with the sultry AI voice of Scarlett Johansson doesn’t seem too far off. We have been terrible at predicting the speed at which generative AI has developed. AI video generation was one of the clearest examples of that in the last year. In 2023, we were lording it over the AI models for generating that surreal, nightmarish scene of Will Smith eating spaghetti. “Silly AI! Aren’t you cute,” we said. We swallowed our words earlier this year, when OpenAI came out with Sora, their video generation model, which spat out photorealistic film trailers that would feel at home on the screens of Cannes.

There might be limits, but that ‘might’ gets smaller every single month, and we’re probably better off presuming that there is no ‘might’. We’ll be in for fewer surprises if we live from the presumption that there will be AIs that make better newspaper editors, diagnostic radiologists, children’s book writers, and art directors than most, if not all, humans.

With the mass reproduction and generation capabilities of AI, we may recognise that we crave the human touch not because it’s better but because it’s human

I promised you a “stupidly optimistic” take on this. So far, I’ve given you nothing but a bleak dystopian future where the labour market collapses and humans are dispossessed of all our technical, editorial, and creative skills. Where’s the good news?

Well, the stupidly optimistic take is this: the dispossession of all our human faculties by AI will force us to embrace the truest and most fundamental core of what makes us valuable - nothing other than simply our humanity. The value of humanity goes up if we presume that everything can be done better by AI.  

In 1936, the German art critic Walter Benjamin prophesied the apocalyptic collapse of the art market in his essay The Work of Art in the Age of Mechanical Reproduction. It was a time when photographic reproduction of paintings was becoming mainstream and visitors to a gallery could buy a print of their favourite painting. He argued that mass reproduction would devalue the original painting by stripping away the aura of the work – its unique presence in space and cultural heritage; the je ne sais quoi of art that draws us to a place of encounter with it. Benjamin would gawp at a digital age in which masterpieces are reduced to default iPhone background screens, but he would also be surprised by the exponentially greater value the art market has placed on the original piece. The aura of the original is sought after all the more precisely because mechanical reproduction has become so cheap. Why? Because in a world of mass reproduction, we crave human authenticity and connection. With the mass reproduction and generation capabilities of AI, we may come to recognise that we crave the human touch not because it’s better but because it’s human. And for no other reason.

We continually place our identities in whatever talents we think make us uniquely worthwhile and value-creating for the world. 

What are we to make of the AI trials happening in the NHS which spot cancer at rates significantly higher than any human doctor? The Royal College of Radiologists insists that “There is no question that real-life clinical radiologists are essential and irreplaceable”. But really? Apart from checking the AI’s work, what’s the “essential” and “irreplaceable” part? Well, it’s the human part. Somebody must deliver the bad news to the patient, and that sure as hell shouldn’t be an AI. Even if an AI could emulate the trembling voice and calming tone of the most empathic consultant, it is the human-to-human interpersonal exchange that creates the space for grief, sorrow, and shock.

Think utopian with me for a moment. (I know, very counter-intuitive for us.) In a society where all our technical skills are superseded, the most valuable skills a human could possess might be the interpersonal ones. Empathy, compassion, intentionality, love even! The midwife who can hold the hand of a suffering first-time mother could be a more respected member of society than the editor of an edgy magazine or newspaper. As they should be! That’s a tantalising and stupidly optimistic vision of an AI future, but it’s a vision that aligns with what we know to be true about ourselves. In our personal and spiritual lives, we already recognise that the most valuable aspects of our lives are our human relationships and the state of our inner selves. People on their death beds reflect on what kind of person they’ve been and reach out for the hands of their loved ones – not for their Q4 2011 balance sheet. Our identities are shaped most deeply by our relationships and our character, and yet we continually place our identities in whatever talents we think make us uniquely worthwhile and value-creating for the world. It’s good to create value, it’s nice to be good at something, and it’s meaningful to leave a lasting impact, but it is delusional to think that those things make us valuable. Our dispossession by AI might be the dispelling of these delusions!

In a few decades, there may be nothing that humans can do better than AI, other than simply being human in the world

At least on a philosophical and spiritual level, being stripped of our human exceptionalism might be the most liberating experience for a society that has devalued and instrumentalised humanity into glorified calculators. Being dispossessed is the truest thing about all of us. We are being dispossessed daily by the slow march of time, and the truest thing about us is that we will, one day, be wholly dispossessed by death itself. That was Heidegger’s fundamental insight into the human condition, and this feeling of dispossession is the root of our anxiety and fear in the world. It might also be part of the anxiety and ick we feel towards AI. Being dispossessed of our creativity and technical ability is a kind of violence and death against ourselves, one we rage against. We can rage against it politically, socially, and economically, but there might be something helpful about resisting that rage from a psychological and spiritual point of view. Experiencing this dispossession might be the key to unlocking an authentic human existence in a world that we can’t control.

I believe in human creativity. I believe that what we make is valuable. I believe in the mesmerising aura of art, cinema, music, and every other beautiful thing that we get up to in the world. I believe in the unique connection between artist and audience and the power of blood, sweat, and tears. I believe in the beautiful and torturous self-violence of creativity to make something that will make my heart tremble and transport me to places never imagined. I believe in the intuitions of an editor to make the cut at precisely the right moment that suspends the tension and has me gripping the seat. I believe in the bedroom teenagers recording their first demos on GarageBand, or the gospel choir taking their congregations to heaven and back. Now, more than ever, I believe in these miracles.

But my belief is not anchored in any unique technical excellence, or some hubris about our exceptionalist mastery of craft. It is rooted in the profound humanity of it all, which radiates, however dimly, with the image of the divine. Writing poetry, humming a new melody, baking a cake, or even discovering a new mathematical conjecture can feel like “divine inspiration”, as the leading mathematician Thomas Fink asserts. Or, as the Romantic German theologian Schleiermacher so rhapsodically expressed it, it can feel like the soul being “ignited from an ethereal fire, and the magic thunder of a charmed speech” from above. This transcendent human experience is something that AI can’t usurp or supersede.

In a few decades, there may be nothing that humans can do better than AI, other than simply being human in the world. However, once we are stripped of everything, we won’t find ourselves naked in the dark – or at least, we don’t have to. We can stand before the world and God with the works of our hands – finite, flawed, and dispossessed – and yet inestimably valuable and worthwhile for the simple fact of our mere humanity.

 

*This article was something of a thought experiment. It’s far more natural to take a sandwich-board, bullhorn-wielding apocalyptic take on the rise of AI. The powers that be at Microsoft and OpenAI have their own ideological agendas, and it’s not unlikely that in this technological cycle we’ll live through a profoundly destabilising labour market. We are right to fear the consolidation of wealth in the hands of supreme tech feudal lords with their companies of AI employees who cost a fraction of real humans. Civilisational collapse! What I wanted to suggest here is that there might be a unique spiritual and philosophical opportunity afforded to us as we continue to experience the break-neck development of AI and its encroachment into everything we once held as uniquely human skills.*

 


Article
Character
Culture
Leading
Virtues
6 min read

What is Putin thinking? And how would you know?

The self-centeredness of modern culture is antithetical to strategic thinking.

Emerson writes on geopolitics. He is also a business executive and holds a doctorate in theology.

President Putin stands behind a lectern with a gold door and Russian flag behind him.
What is Putin thinking?

In a world of Google Maps for walking city streets, or of Waze for driving, it is difficult ever to become lost.

The AI algorithm provides us with the shortest route to our destination, adjusting whenever we make the wrong turn. We do not need to think for ourselves, technology instead showing the way forward.  

But there are times when it is possible to get lost. This happens less in a city with its clearly set-out streets, and more when taking a wrong turn in open expanses: hiking in the mountains, traversing farmers’ fields, or navigating at sea. In each of these situations, a miscalculation may lead to peril.

It is in these situations that we must carefully think through our steps, determining how to proceed, or whether to turn back. Often, these situations are ambiguous, the right way forward unclear.  

Much of life – perhaps more than we wish to acknowledge – is like this, more akin to a walk across an open field with multiple possible routes forward, than a technology-enabled walk through a city.  

When making important decisions, our grasp of a given situation, of others’ intentions and motives, and of the networks facilitating and constraining action is less sure than we may initially think.

This acknowledgement of uncertainty is no reason for delay, but rather a basis for careful deliberation in determining what to do, and how to proceed. It is necessary if we are to pursue what we believe is right, in a manner that may produce positive results.   

In a recent interview with the BBC Newscast podcast, University of Durham Chancellor Dr Fiona Hill – who previously served as White House National Security Council Senior Director for European and Russian Affairs, and currently as Co-Lead of the UK’s Strategic Defence Review – provides listeners with a powerful reminder on how to proceed within ambiguous situations, especially in navigating the choppy seas, or rocky terrains, of human relationships.  

Strategic empathy requires self-restraint when natural impulses urge a person to make rapid conclusions about the reality of a given situation – the default human tendency. 


Dr Hill uses the term “strategic empathy” to consider how the political West might proceed in its relationship with Russia, and specifically with Vladimir Putin.  

Strategic empathy is a serious commitment to understanding how another person thinks, considering their worldview, their key sources of information (in other words, their main three or four advisors, who have a person’s ear), and other emotional considerations that underpin decision-making.  

It is much more than just putting oneself in another’s shoes, as is often said about empathy. The approach is one of realism, suspending judgment based on self-protective or self-aggrandising illusions in favour of what is actually the case.

In the case of Putin, Dr Hill helpfully reminds listeners that his worldview is drastically different from that of Westerners, and that significant intellectual effort (and specifically, intellectual humility in setting aside one’s own default frames of reference) is necessary to consider decisions from Putin’s perspective, and so make the right decisions from ours.

Technology is here an assistant but not a cure-all. Whereas AI might – based on a gathering of all possible publicly available information written by and about a particular person – help to predict a person’s next move, this prediction is imperfect at best.  

There are underlying factors – perhaps a deeply ingrained sense of historical grievance and resentment in the case of Putin – that shape another’s actions and that can scarcely be picked up through initial conversation. These factors may not make sense from our perspective, or be logical, but they exist and must be taken seriously.

This empathy is strategic, because effective strategy is the “How?” of any mission. Whereas a person’s or organisation’s mission, vocation or purpose (all words that can be used relatively interchangeably) is the “Why?” of a pursuit, strategy is the “How?” which itself consists of the questions “Who?” “What?” “When?” and “Where?”  

To understand how to act strategically requires a prior effective assessment of reality. This requires going beyond what others say, our initial perception of a situation, any haughty beliefs that we simply know what is happening, or even the assessments of supposedly well-connected and expert contacts.  

Dr Hill’s strategic empathy is an appeal to listeners to ask questions – digging as much as possible – to arrive at an assessment that approximates reality to the greatest degree possible. This exercise might be aided by AI, but it is at its heart a human endeavour.

Strategic empathy requires self-restraint when natural impulses urge a person to make rapid conclusions about the reality of a given situation – the default human tendency. The persistent asking of questions is difficult – requiring mental, emotional and intellectual endurance. 

There is considerable wisdom in Dr Hill’s reflections on strategic empathy, which extend well beyond the fields of intelligence, geopolitics or defence. The idea of strategic empathy helps show us that in much of modern culture – which glorifies the self, individuals putting their wants, needs and desires before those of others – developing strategy is very difficult.  

The key then, when deliberating on potential right courses of action in ambiguous situations, is to not begin believing that the right way is clear. It rarely is.

Why is this the case? When popular culture favours phrases such as “You do you,” the you becomes a barrier to asking questions with the degree of detachment from the situation necessary for understanding how another thinks. People are encouraged to focus on themselves at the expense of others, and so fail to understand others’ worldviews and ways of operating.

Simply put, the self-centeredness of modern culture is antithetical to strategy. It impedes deliberation, which involves patience in the gradual formation of purpose for action. It wages war against the considered politics or statesmanship that many want to see return. In place of this is crisis or catastrophe, in which self-focus leads to clashes with others that could otherwise be avoided or worked through carefully.   

The Biblical story of the serpent in the garden is another vantage point for the idea of strategic empathy. Soon after Adam and Eve eat the apple in the garden and become “like gods, knowing good and evil,” God searches for them and asks “Where are you?” 

It is right after they try to become the judges of good and evil – “like gods” – that Adam and Eve find themselves lost; hence God’s “Where are you?”

Put differently, when people are convinced they are right but do not ask questions, they make mistakes, likely suffer unnecessarily because of them, and then become anchorless – the “Where are you?”

This applies to countries as much as it does to people: the more they moralise, seeking to become the judges of good and evil in a complex geopolitical landscape, the more they drift from their sense of purpose.

The key then, when deliberating on potential right courses of action in ambiguous situations, is to not begin believing that the right way is clear. It rarely is. A belief in evident rightness often leads to error, whereas the ability to suspend such judgment helps reveal – often gradually – the right path forward.   

The strategic empathy approach requires both assertiveness – in asking good questions and maintaining persistence in doing so – and self-restraint in the face of believing that the right answer is clear.  

The glue between assertiveness on the one hand and restraint on the other is faith, which helps a person to move forward in a trusting manner, but without exerting oneself so much that one becomes the centre of the situation.

So, while Google Maps, Waze or other technologies might be at our disposal in our travels, both real and metaphorical, these technologies only get us so far.  

The right way forward is seldom initially clear when navigating ambiguous situations, the frequency and stakes of which increase as we embark boldly – with faith – on the adventure of life.  

Dr Hill’s strategic empathy – asking questions, listening carefully, suspending one’s sense of self, seriously considering diverging worldviews, and adjusting as necessary – helps us to achieve the understanding and direction we need.

Indeed, this approach is fundamental to a more effective and resilient political West. It is necessary for sounder deliberation, better strategy and statesmanship, in an increasingly ambiguous world.