
There’s more than one way to lose our humanity

How we treat immigrants and how AI might treat humans weigh on the mind of George Pitcher.

George is a visiting fellow at the London School of Economics and an Anglican priest.

The Bibby Stockholm accommodation barge in Portland Harbour. Ashley Smith, CC BY-SA 4.0, via Wikimedia Commons.

“The greatness of humanity,” said Mahatma Gandhi, “is not in being human, but in being humane.” At first glance, this is something of a truism. But actually Gandhi neatly elides the two meanings of humanity in this tight little phrase. 

Humanity means both the created order that we know as the human race and its capacity for self-sacrificial love and compassion. In the Christian tradition, we celebrate at Christmas what we call the incarnation – the divine sharing of the human experience in the birth of the Christ child.  

Our God shares our humanity and in doing so, shows his humanity in the form of a universal and unconditional love for his people. So, it’s an act both for humanity and of humanity. 

This Christmas, there are two very public issues in which humanity has gone missing in both senses. And it’s as well to acknowledge them as we approach the feast. That’s in part a confessional act; where we identify a loss of humanity, in both its definitions, we can resolve to do something about it. Christmas is a good time to do that. 

The first is our loss of humanity in the framing of legislation to end illegal immigration to the UK. The second is the absence of humanity in the development of artificial intelligence. The former is about political acts that are inhumane and the latter goes to the nature of what it is to be human. 

There is a cynical political line that the principal intention of the government’s Safety of Rwanda (Asylum and Immigration) Bill, voted through the House of Commons this week, is humane, in that it’s aimed at stopping the loss of life among migrants exploited by criminal gangs. But it commodifies human beings, turning them into cargo to be exported elsewhere. That may not be a crime – the law has yet to be tested – but it is at least an offence against humanity. 

Where humanity, meaning what it is to be human, is sapped, hope withers into despair. When a human being is treated as so much freight, not only does their value diminish objectively, but so does their sense of self-worth. The suicide of an asylum seeker on the detention barge Bibby Stockholm in Portland Harbour is a consequence of depreciated humanity. Not that we can expect to hear any official contrition for that. 

To paraphrase Gandhi, when we cease to be humane we lose our humanity. And we have literally lost a human to our inhumanity, hanged in a floating communal bathroom. It’s enough to make us look away from the crib, shamed rather than affirmed in our humanity. 

That’s inhumanity in the sense of being inhumane. Turning now to humanity in the sense of what it means to be human, we’re faced with the prospect of artificial intelligence which not only replicates but replaces human thought and function.  

The rumoured cause of the ousting of CEO Sam Altman from OpenAI last month (before his hasty reinstatement just five days later) was a breakthrough on a shadowy project called Q-star, a technology said to push dangerously into the territory of human intelligence. 

But AI’s central liability is that it lacks humanity. It is literally inhuman, rather than inhumane. We should take no comfort in that because that’s exactly where its peril lies. Consciousness is a defining factor of humanity. AI doesn’t have it and that’s what makes it so dangerous. 

To “think” infinitely faster across unlimited data and imitate the best of human creativity, all without knowing that it’s doing so, is a daunting prospect. It points towards a future in which humanity becomes subservient to its technology – and that’s indeed dystopian. 

But we risk missing a point when our technology meets our theology. It’s often said that AI has the potential to take on God-like qualities. This relates to the prospect of its supposed omniscience – that it could be all-knowing and, in that sense, all-powerful. 

The trouble with that argument is that it takes no account of the divine quality of being all-loving too, which in its inhumanity AI cannot hope to replicate. In the Christmastide incarnation, God (as Emmanuel, or “God with us”) comes to serve, not to be served. If you’ll excuse the pun, you won’t find that mission on a computer server. 

Furthermore, to be truly God-like, AI would need to allow itself to suffer and to die on humanity’s behalf, albeit defeating that death in a salvific way. Sorry, but that isn’t going to happen. We must be careful with AI precisely because it’s inhuman, not because it’s too human. 

Part of what we celebrate at Christmas is our humanity and, in doing so, we may re-locate it. We need to do that if we are to treat refugees with humanity and to re-affirm that humanity’s intelligence is anything but artificial. Merry Christmas. 


Who holds the keys of death? The logic of assisted dying

The ethical principle of double effect.

Tom is a physician completing a theology doctorate. 

Hal Gatewood on Unsplash.

Healthcare hinges on the principle of double effect. This ethical principle makes the vital distinction between intent and effect: a single intended action can produce multiple effects, some intended and some merely foreseen. In taking a patient’s blood, for example, my intent is to acquire information to aid treatment. An additional effect of this process is that, almost inevitably, the patient will experience pain, albeit minor. This principle of single intent and multiple effects applies throughout the care of human bodies, wherever that care involves physical interference, from prescribing medications to surgical procedures. And, in some instances, identifying and treating symptoms (such as terminal breathlessness) involves the use of medications that, as an unintended effect, result in death. 

In the case of assisted dying, the distinction is important. The intent of assisted dying is to end pain and suffering by ending life. The ending of life is the treatment used to relieve pain and suffering. The intent is not to isolate and treat particular symptoms associated with a condition. The intent is to bring the condition itself to an end—which requires bringing the patient’s life to an end. This is not to make any judgment whatsoever about whether such a course is “right” or “wrong”, but rather to draw out the simple observation that this course involves an unprecedented change in medical practice. Assisted dying involves the categorical adoption of ending life as a possible treatment for a condition. 

This is not quite the same as the slippery slope argument; it is about the logic of assisted dying. The point I am making is this: once ending life is introduced as a treatment, the key ethical step has already been taken. Applying that treatment in other instances of “suffering” (be they mental illness or ageing, for example) does not involve any new ethical steps. It simply involves the further application of a principle that has already been adopted. Despite the considered safeguards of the bill, therefore, the moral-ethical arguments against applying this treatment more widely will, at best, stand on shaky ground. For who could be so bold as to insist on what constitutes “suffering” for an individual?  

Should the bill hold out the keys of death in this way? I can only think of One who is strong enough to wield those…