
There’s more than one way to lose our humanity

How we treat immigrants and how AI might treat humans weigh on the mind of George Pitcher.

George is a visiting fellow at the London School of Economics and an Anglican priest.

The Bibby Stockholm accommodation barge in Portland Harbour.
Ashley Smith, CC BY-SA 4.0, via Wikimedia Commons.

“The greatness of humanity,” said Mahatma Gandhi, “is not in being human, but in being humane.” At first glance, this is something of a truism. But actually Gandhi neatly elides the two meanings of humanity in this tight little phrase. 

Humanity means both the created order that we know as the human race and its capacity for self-sacrificial love and compassion. In the Christian tradition, we celebrate at Christmas what we call the incarnation – the divine sharing of the human experience in the birth of the Christ child.  

Our God shares our humanity and in doing so, shows his humanity in the form of a universal and unconditional love for his people. So, it’s an act both for humanity and of humanity. 

This Christmas, there are two very public issues in which humanity has gone missing in both senses. And it’s as well to acknowledge them as we approach the feast. That’s in part a confessional act; where we identify a loss of humanity, in both its definitions, we can resolve to do something about it. Christmas is a good time to do that. 

The first is our loss of humanity in the framing of legislation to end illegal immigration to the UK. The second is the absence of humanity in the development of artificial intelligence. The former is about political acts that are inhumane and the latter goes to the nature of what it is to be human. 


There is a cynical political line that the principal intention of the government’s Safety of Rwanda (Asylum and Immigration) Bill, voted through the House of Commons this week, is humane, in that it’s aimed at stopping the loss of life among migrants exploited by criminal gangs. But it commodifies human beings, turning them into cargo to be exported elsewhere. That may not be a crime – the law has yet to be tested – but it is at least an offence against humanity. 

Where humanity, meaning what it is to be human, is sapped, hope withers into despair. When a human being is treated as so much freight, not only does their value objectively diminish, but so does their sense of self-worth. The suicide of an asylum seeker on the detention barge Bibby Stockholm in Portland Harbour is a consequence of depreciated humanity. Not that we can expect to hear any official contrition for that. 

To paraphrase Gandhi, when we cease to be humane we lose our humanity. And we have literally lost a human to our inhumanity, hanged in a floating communal bathroom. It’s enough to make us look away from the crib, shamed rather than affirmed in our humanity. 

That’s inhumanity in the sense of being inhumane. Turning now to humanity in the sense of what it means to be human, we’re faced with the prospect of artificial intelligence which not only replicates but replaces human thought and function.  


The rumoured cause of Sam Altman's ousting as CEO of OpenAI last month (before his hasty reinstatement just five days later) was his involvement in a shadowy project called Q-star, a GPT-5 technology said to push dangerously into the territory of human intelligence. 

But AI’s central liability is that it lacks humanity. It is literally inhuman, rather than inhumane. We should take no comfort in that because that’s exactly where its peril lies. Consciousness is a defining factor of humanity. AI doesn’t have it and that’s what makes it so dangerous. 

To “think” infinitely faster across unlimited data and imitate the best of human creativity, all without knowing that it's doing so, is a daunting capability. It points to a future in which humanity becomes subservient to its technology – and that's indeed dystopian. 

But we risk missing a point when our technology meets our theology. It's often said that AI has the potential to take on God-like qualities. This relates to the prospect of its supposed omniscience – another way of putting it is that it has the potential to be all-knowing. 

The trouble with that argument is that it takes no account of the divine quality of being all-loving too, which in its inhumanity AI cannot hope to replicate. In the Christmastide incarnation, God (as Emmanuel, or “God with us”) comes to serve, not to be served. If you’ll excuse the pun, you won’t find that mission on a computer server. 

Furthermore, to be truly God-like, AI would need to allow itself to suffer and to die on humanity's behalf, albeit defeating that death in a salvific way. Sorry, but that isn't going to happen. We must be careful with AI precisely because it's inhuman, not because it's too human. 

Part of what we celebrate at Christmas is our humanity and, in doing so, we may re-locate it. We need to do that if we are to treat refugees with humanity and to re-affirm that humanity’s intelligence is anything but artificial. Merry Christmas. 


Why end of life agony is not a good reason to allow death on demand

Assisted dying and the unintended consequences of compassion.

Graham is the Director of the Centre for Cultural Witness and a former Bishop of Kensington.

An open hand holds a pill.
Towfiqu Barbhuiya on Unsplash.

Those advocating assisted dying really have only one strong argument on their side – the argument from compassion. People who have seen relatives dying in extreme pain and discomfort understandably want to avoid that scenario. Surely the best way is to allow assisted dying as an early way out, so that such people can avoid the agony that such a death involves?  

Now it’s a powerful argument. To be honest I can’t say what I would feel if I faced such a death, or if I had to watch a loved one go through such an ordeal. All the same, there are good reasons to hold back from legalising assisted dying even in the face of distress at the prospect of enduring or having to watch a painful and agonising death.  

In any legislation, you have to bear in mind unintended consequences. A law may benefit one particular group, but have knock-on effects for another group, or wider social implications that are profoundly harmful. Few laws benefit everyone, so lawmakers have to make difficult decisions balancing the rights and benefits of different groups of people. 

It feels odd to be citing percentages and numbers when faced with something so elemental and personal as death and suffering, but it is estimated that around two per cent of us will die in extreme pain and discomfort. Add in the 'safeguards' this bill proposes (a person must be suffering from a terminal disease with fewer than six months to live, be capable of making such a decision, and have two doctors and a judge approve it) and the number of people this directly affects becomes really quite small. Much as we all sympathise and feel the force of stories of agonising suffering - and of course, every individual matters - to put it bluntly, is it right to risk the knock-on effects on other groups in society, and to make such a fundamental shift in our moral landscape, for the sake of the small number of us who will face this dreadful prospect? The personal stories of those who have endured extreme pain as they approached death, or of those who have had to watch loved ones do so, are heart-rending - yet are they enough on their own to sanction a change to the law? 

Much has been made of the subtle pressure put upon elderly or disabled people to end it all, to stop being a burden on others. I have argued elsewhere on Seen and Unseen that numerous elderly people will feel a moral obligation to safeguard the family inheritance by choosing an early death rather than spending the family fortune on end-of-life care or turning their children into carers for their elderly parents. Individual choice for those who face end-of-life pain unintentionally lands an unenviable and unfair choice on many more vulnerable people in our society. Giles Fraser describes the indirect pressure well: 

“You can say “think of the children” with the tiniest inflection of the voice, make the subtlest of reference to money worries. We communicate with each other, often most powerfully, through almost imperceptible gestures of body language and facial expression. No legal safeguard on earth can detect such subliminal messaging.” 

There is also plenty of testimony that suggests that even with constant pain, life is still worth living. Michelle Anna-Moffatt writes movingly of her brush with assisted suicide and why she pulled back from it, despite living with constant pain.  


Despite the safeguards mentioned above, the move towards death on the NHS is bound to lead to a slippery slope – extending the right to die to wider groups with less obvious needs. As I wrote in The Times recently, given the grounds on which the case for change is being made – the priority of individual choice – there are no logical grounds for denying the right to die to anyone who chooses that option, regardless of their reasons. If a teenager going through a bout of depression, or a homeless person who cannot see a way out of their situation, chooses to end it all, and their choice is absolute, on what grounds could we stop them? Once our ethics rest on this ground, the slippery slope is not just likely, it is inevitable.  

Then there is the radical shift to our moral landscape. A disabled campaigner argues that asking someone to help her to die “is no different for me than asking my caregiver to help me on the toilet, or to give me a shower, or a drink, or to help me to eat.” Sorry - but it is different, and we know it. Once we have blurred the line between a carer offering someone a drink to relieve thirst and effectively killing them, a moral line has been crossed that should make us shudder.  

In Canada, many doctors refuse, or don't have time, to administer the fatal dose, so companies have sprung up offering ‘medical professionals’ to come round with the syringe to finish you off. In other words, companies make money out of killing people. It is the commodification of death. When we have got to that point, you know we have wandered from the path somewhere.  

You would have to be stony-hearted indeed not to feel the force of the argument to avoid pain-filled deaths. Yet is a change to benefit such people worth the radical shift of moral value, the knock-on effects on vulnerable people who will come under pressure to die before their time, the move towards death on demand?  

Surely there are better ways to approach this? Doctors can decide to cease treatment to enable a natural death to take its course, or increase painkillers in ways that may hasten death - that is humane and falls on the right side of the line, as it is done primarily to relieve pain, not to kill. Christian faith does not argue that life is to be preserved at any cost – our belief in martyrdom gives the lie to that. More importantly, a renewed effort to invest in palliative care and improved anaesthetics will surely reduce such deaths in the longer term. These approaches are surely much wiser, and far less harmful to the large numbers of vulnerable people in our society, than the drastic step of legalising killing on the NHS.