Article
AI
Culture
5 min read

What AI needs to learn about dying and why it will save it

Those programming truthfulness can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand and I am sure it will improve. But before this episode it was already clear that currently available chat-bots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this? 

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach causes chat-bots to generate both reasonable, well-supported statements and images and unsupported, fantastical (delusory and factually incorrect) ones, without giving the human user any guidance in telling which is which. The LLMs, as developed to date, have not been programmed in such a way as to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document will have to have, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report included an example of this: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, so as to use the unique cellular structure of a given piece of hardwood as a data substrate that is near impossible to duplicate.

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. 

It is clear that a major issue in the future use of AI by humans will be that of trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process behind a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct? 

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they all were trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.  

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

In order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. An item which strikes me as particularly noteworthy is the connection between suffering, loss, and the earning of trust, and its relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than a message which costs the messenger nothing. They have already staked something on their message. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one's own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality. 

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing, or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality. 

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 

Article
Assisted dying
Care
Culture
Death & life
8 min read

The deceptive appeal of assisted dying changes medical practice

In Canada the moral ethos of medicine has shifted dramatically.

Ewan is a physician practising in Toronto, Canada. 

A doctor consults a tablet against the backdrop of a Canadian flag.

Once again, the UK parliament is set to debate the question of legalizing euthanasia (a traditional term for physician-assisted death). Political conditions appear to be conducive to the legalization of this technological approach to managing death. The case for assisted death appears deceptively simple—it’s about compassion, respect, empowerment, freedom from suffering. Who can oppose such positive goals? Yet, writing from Canada, I can only warn of the ways in which the embrace of physician-assisted death will fundamentally change the practice of medicine. Reflecting on the last 10 years of our experience, two themes stick out to me—pressure, and self-deception. 

I still remember quite distinctly the day that it dawned on me that the moral ethos of medicine in Canada was shifting dramatically. Traditionally, respect for the sacredness of the patient’s life and a corresponding absolute prohibition on deliberately causing the death of a patient were widely seen as essential hallmarks of a virtuous physician. Suddenly, in a 180-degree ethical turn, a willingness to intentionally cause the death of a patient was now seen as the hallmark of a patient-centered doctor. A willingness to cause the patient’s death was a sign of compassion and even purported self-sacrifice, in that one would put the patient’s desires and values ahead of one's own. Those of us who continued to insist on the wrongness of deliberately causing death would now be seen as moral outliers, barriers to the well-being and dignity of our patients. We were tolerated to some extent, and mainly out of a sense of collegiality. But we were also a source of slight embarrassment. Nobody really wanted to debate the question with us; the question was settled without debate. 

Yet there was no denying the way that pressure was brought to bear, in ways subtle and overt, to participate in the new assisted death regime. We humans are unavoidably moral creatures, and when we come to believe that something is good, we see ourselves and others as having an obligation to support it. We have a hard time accepting those who refuse to join us. Such was the case with assisted death. With the loudest and most strident voices in the Canadian medical profession embracing assisted death as a high and unquestioned moral good, refusal to participate in assisted death could not be fully tolerated.  

We deceive ourselves if we think that doctors have fully accepted that euthanasia is ethical when only very few are actually willing to administer it. 

Regulators in Ontario and Nova Scotia (two Canadian provinces) stipulated that physicians who were unwilling to perform the death procedure must make an effective referral to a willing “provider”. Although the Supreme Court, in its decision to strike down the criminal prohibition against physician-assisted death, made it clear that no particular physician was under any obligation to provide the procedure, the regulators chose to enforce participation by way of this effective referral requirement. After all, this was the only way to normalize this new practice. Doctors don't ordinarily refuse to refer their patients for medically necessary procedures; if assisted death was understood to be a medically necessary good, then an unwillingness to make such a referral could not be tolerated.  

And this form of pressure brings us to the pattern of deception. First, it is deceptive to suggest that an effective referral to a willing provider confers no moral culpability on the referring physician for the death of the patient. Those of us who objected to referring the patient were told that, like Pilate, we could wash our hands of the patient’s death by passing them along to someone else who had the courage to do the deed. Yet the same regulators clearly prohibited referral for female genital mutilation. They therefore seemed to understand the moral responsibility attached to an effective referral. Such glaring inconsistency about the moral significance of a referral suggests that when they claimed that a referral avoided culpability for death by euthanasia, they were deceiving themselves and us. 

The very need for a referral system signifies another self-deception. Doctors normally make referrals only when an assessment or procedure lies outside their technical expertise. In the case of assisted death, every physician has the requisite technical expertise to cause death. There is nothing at all complicated or difficult or specialized about assessing euthanasia eligibility criteria or the sequential administration of toxic doses of midazolam, propofol, rocuronium, and lidocaine. The fact that the vast majority of physicians are unwilling to perform this procedure entails that moral objection to participation in assisted death remains widespread in the medical profession. The referral mechanism is for physicians who are “uncomfortable” in performing the procedure; they can send the patient to someone else more comfortable. But to be comfortable in this case is to be “morally comfortable”, not “technically comfortable”. We deceive ourselves if we think that doctors have fully accepted that euthanasia is ethical when only very few are actually willing to administer it. 

We deceived ourselves into thinking that assisted death is a medical therapy for a medical problem, when in fact it is an existential therapy for a spiritual problem.

There is also self-deception with respect to the cause of death. In Canada, when a patient dies by doctor-assisted death, the person completing the death certificate is required to record the cause of death as the reason that the patient requested euthanasia, not the act of euthanasia per se. This must lead to all sorts of moments of absurdity for physicians completing death certificates—do patients really die from advanced osteoarthritis (one of the many reasons patients have sought and obtained euthanasia)? I suspect that this practice is intended to shield those who perform euthanasia from any long-term legal liability should the law be reversed. But if medicine, medical progress, and medical safety are predicated on an honest acknowledgment of causes of death, then this form of self-deception should not be countenanced. We need to be honest with ourselves about why our patients die. 

There has also been self-deception about whether physician-assisted death is a form of suicide. Some proponents of assisted death contend that assisted death is not an act of deliberate self-killing, but rather merely a choice over the manner and timing of one's death. It's not clear why one would try to distort language this way and deny that “physician-assisted suicide” is suicide, except perhaps to assuage conscience and minimize stigma. Perhaps we all know that suicide is never really a form of self-respect. To sustain our moral and social affirmation of physician-assisted death, we have to deny what this practice actually represents. 

There has been self-deception about the possibility of putting limits around the practice of assisted death. Early on, advocates insisted that euthanasia would be available only to those for whom death was reasonably foreseeable (to use the Canadian legal parlance). But once death comes to be viewed as a therapeutic option, the therapeutic possibilities become nearly limitless. Death was soon viewed as a therapy for severe disability or for health-related consequences of poverty and loneliness (though often poverty and loneliness are the consequence of the health issues). Soon we were talking about death as a therapy for mental illness. If beauty is in the eye of the beholder, then so is grievous and irremediable suffering. Death inevitably becomes a therapeutic option for any form of suffering. Efforts to limit the practice to certain populations (e.g. those with disabilities) are inevitably seen as paternalistic and discriminatory. 

There has been self-deception about the reasons justifying legalization of assisted death. Before legalization, advocates decry the uncontrolled physical suffering associated with the dying process and claim that prohibiting assisted death dehumanizes patients and leaves them in agony. Once legalized, it rapidly becomes clear that this therapy is not for physical suffering but rather for existential suffering: the loss of autonomy, the sense of being a burden, the despair of seeing any point in going on with life. The desire for death reflects a crisis of meaning. We deceived ourselves into thinking that assisted death is a medical therapy for a medical problem, when in fact it is an existential therapy for a spiritual problem. 

We have also deceived ourselves by claiming to know whether some patients are better off dead, when in fact we have no idea what it's like to be dead. The utilitarian calculus underpinning the logic of assisted death relies on the presumption that we know what it is like before we die in comparison to what it is like after we die. In general, the unstated assumption is that there is nothing after death. This is perhaps why the practice is generally promoted by atheists and opposed by theists. But in my experience, it is very rare for people to address this question explicitly. They prefer to let the question of existence beyond death lie dormant, untouched. To think that physicians qua physicians have any expertise or authority on the question of what it’s like to be dead, or that such medicine can at all comport with a scientific evidence-based approach to medical decision-making, is a profound self-deception. 

Finally, we deceive ourselves when we pretend that ending people’s lives at their voluntary request is all about respecting personal autonomy. People seek death when they can see no other way forward with life—they are subject to the constraints of their circumstances, finances, support networks, and even internal spiritual resources. We are not nearly so autonomous as we wish to think. And in the end, the patient does not choose whether to die; the doctor chooses whether the patient should die. The patient requests, the doctor decides. Recent news stories have made clear the challenges for practitioners of euthanasia in picking and choosing who should die among their patients. In Canada, you can have death, but only if your doctor agrees that your life is not worth living. However much these doctors might purport to act from compassion, one cannot help but see a connection to Nazi physicians labelling the unwanted as “lebensunwertes Leben”—life unworthy of life. In adopting assisted death, we cannot avoid dehumanizing ourselves. Death with dignity is a deception. 

These many acts of self-deception in relation to physician-assisted death should not surprise us, for the practice is intrinsically self-deceptive. It claims to be motivated by the value of the patient; it claims to promote the dignity of the patient; it claims to respect the autonomy of the patient. In fact, it directly contravenes all three of those goods. 

It degrades the value of the patient by accepting that it doesn't matter whether or not the patient exists.  

It denies the dignity of the patient by treating the patient as a mere means to an end—the sufferer is ended in order to end the suffering. 

It destroys the autonomy of the patient. The patient might autonomously express a desire for death, but the act of rendering someone dead does not enhance their autonomy; it obliterates it. 

Yet the need for self-deception represents the fatal weakness of this practice. In time, truth will win over falsehood, light over darkness, wisdom over folly. So let us ever cling to the truth, and faithfully continue to speak the truth in love to the dying and the living. Truth overcomes pressure. The truth will set us free.