
It's our mistakes that make us human

What we learn distinguishes us from tech.

Silvianne Aspray is a theologian and postdoctoral fellow at the University of Cambridge.

Image: a man staring at a laptop grimaces and holds his hands to his head (Francisco De Legarreta C. on Unsplash).

The distinction between technology and human beings has become blurry: AI seems able to listen, to answer our questions, even to respond to our feelings. It becomes ever easier to confuse machines with humans. In this situation it is all the more important to ask: What makes us human, as distinct from machines? There are many answers to this question, but for now I would like to focus on just one aspect of what I think is distinctively human: As human beings, we live and learn in time.  

To be human means to be intrinsically temporal. We live in time and are oriented towards a future good. We are learning animals, and our learning is bound up with the taking of time. When we learn to know or to do something, we necessarily make mistakes, and it takes practice. But keeping in view something we desire – a future good – we keep going.  

Let’s take the example of language. We acquire language in community over time. Toddlers make all sorts of hilarious mistakes when they first try to talk, and it takes them a long time even to get single words right, let alone to form sentences. But they keep trying, and they eventually learn. The same goes for love: Knowing how to love our family or our neighbours near and far is not something we are good at instantly. It is not the sort of learning where you absorb a piece of information and then you ‘get’ it. No, we learn it over time, we imitate others, we practise, and even when we have learned, in the abstract, what it is to be loving, we keep getting it wrong. 

This, too, is part of what it means to be human: to make mistakes. Not the sort of mistakes machines make, when they classify some information wrongly, for instance, but the very human mistake of falling short of your own ideal. Of striving towards something you desire – happiness, in the broadest of terms – and yet falling short, in your actions, of that very goal. But there’s another very human thing right here: Human beings can also change. They – we – can have a change of heart, be transformed, and at some point in time actually start to do the right thing – even against all the odds. Statistics of past behaviour do not always correctly predict future outcomes. Part of being human means that we can be transformed.  

Transformation sometimes comes suddenly, when an overwhelming, awe-inspiring experience changes somebody’s life as by a bolt of lightning. Much more commonly, though, such transformation takes time. Through taking up small practices, we can form new habits, gradually acquire virtue, and do the right thing more often than not. This is so human: We are anything but perfect. As Christians would say: We have a tendency to entangle ourselves in the mess of sin and guilt. But we also bear the image of the Holy One who made us, and by the grace and favour of that One, we are not forever stuck in the mess. We are redeemed: we are given the strength to keep trying, despite the mistakes we make, and given the grace to acquire virtue and become better people over time. All of this to say that being human means to live in time, and to learn in time. 


Now compare this to the most complex of machines. We say that AI is able to “learn”. But what does it mean to learn, for AI? Machine learning is usually categorised into supervised, unsupervised and self-supervised learning. Supervised learning means that a model is trained for a specific task on correctly labelled data. For instance, if a model is to predict whether a mammogram image contains a cancerous tumour, it is given many example images which are correctly classed as ‘contains cancer’ or ‘does not contain cancer’. That way, it is “taught” to recognise cancer in unlabelled mammograms. Unsupervised learning is different. Here, the system looks for patterns in the dataset it is given. It clusters and groups data without relying on predefined labels. Self-supervised learning combines elements of both: The system uses part of the data itself as a kind of label – for instance, predicting the upper half of an image from its lower half, or the next word in a given text. This is the predominant paradigm by which contemporary large-scale AI models “learn”.  
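For readers curious about what these three paradigms look like in practice, they can be sketched in miniature. The following is a purely illustrative toy in Python, not a real machine-learning system; all of the data, thresholds and function names are invented for the example:

```python
# Toy sketches of the three learning paradigms described above.
from collections import Counter, defaultdict

# --- Supervised: learn a rule from labelled examples ------------------------
# Stand-in for the mammogram case: each "image" is a single number, and the
# label says whether it belongs to the positive class.
def train_supervised(examples):
    """examples: list of (value, label) pairs; returns a decision threshold."""
    positives = [v for v, lbl in examples if lbl]
    negatives = [v for v, lbl in examples if not lbl]
    # Put the threshold halfway between the two classes' averages.
    return (sum(positives) / len(positives) + sum(negatives) / len(negatives)) / 2

threshold = train_supervised([(1, False), (2, False), (8, True), (9, True)])
predict = lambda v: v > threshold  # classify an unlabelled "image"

# --- Unsupervised: find structure without any labels ------------------------
# One assignment step of 1-D clustering: group points around two centres.
def cluster(points, c1, c2):
    groups = defaultdict(list)
    for p in points:
        groups[c1 if abs(p - c1) < abs(p - c2) else c2].append(p)
    return dict(groups)

# --- Self-supervised: the data supplies its own labels ----------------------
# Predict the next word from the previous one, with the text itself acting
# as both input and label (a miniature of how language models are trained).
def train_bigrams(text):
    words = text.split()
    table = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        table[prev][nxt] += 1
    return table

table = train_bigrams("we live in time and we learn in time")
next_word = table["in"].most_common(1)[0][0]  # word most often following "in"
```

In every branch of the sketch, note what the essay goes on to argue: the "learning" consists entirely of summarising data that already exists, and prediction is the projection of those past patterns forward.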

In each case, AI’s learning is necessarily based on datasets. Learning happens with reference to pre-given data, and in that sense with reference to the past. It may look as though such models can consider the future and have future goals, but only insofar as they have picked up patterns in past data, which they use to predict future patterns – as if the future were nothing but a repetition of the past.  

So this is a real difference between human beings and machines: Human beings can, and do, strive toward a future good. Machines, by contrast, are always oriented towards the past of the data that was fed to them. Human beings are intrinsically temporal beings, whereas machines are defined by temporality only in a very limited sense: it takes time to upload data and for the data to be processed, for instance. Time, for machines, is nothing but an extension of the past, whereas for human beings it is an invitation to, and the possibility of, being transformed for the sake of a future good. We, human beings, are intrinsically temporal, living in time towards a future good – something machines do not do.  

In the face of new technologies we need to sharpen our sense of the strange and awe-inspiring species that is the human race, and cultivate a new sense of wonder about humanity itself.  


Will clinicians and carers objecting to assisted death be treated as nuisances?

The risk and mental cost of forcing someone to act against their conscience.
Image: a tired-looking doctor sits at a desk dealing with paperwork (Francisco Venâncio on Unsplash).

After a formal introduction to the House of Commons next Wednesday, MPs will debate a draft Bill to change UK legislation on Assisted Dying. Previously, a draft Bill was introduced in the Scottish Parliament in March 2024, and is currently at committee stage. Meanwhile, in the House of Lords, a Private Member’s Bill was introduced by Lord Falconer in July and currently awaits its second reading. These draft Bills, though likely to be dropped and superseded by the Commons Bill in the fullness of time, give an early indication of what provision might be made for clinicians and other healthcare workers who wish to recuse themselves from carrying out a patient’s end-of-life wishes on grounds of Conscientious Objection.  

There are various reasons why someone might want to conscientiously object. The most commonly cited are faith or religious commitments. This is not to say that all people of faith are against a change in the law – there are some high-profile religious advocates for the legalisation of Assisted Dying, including both Rabbi Dr Jonathan Romain and Lord Carey, the former Archbishop of Canterbury. Even so, there will be many adherents to various faith traditions who find themselves unable to take part in hastening the end of someone’s life because they feel it conflicts with their views on God and what it means to be human. 

However, there are also Conscientious Objectors who are not religious, or not formally so. Some people, perhaps many, simply feel unsure of the rights and wrongs of the matter. The coming debates will no doubt feature discussion of how changing the law for those who are terminally ill in the Netherlands and Canada has led to subsequent changes in the law to include those who are not terminally, but instead chronically, ill. The widening of the eligibility criteria has reached a point where, in the Netherlands, around one in every twenty deaths is now by euthanasia. This troubling statistic includes many who are neurodivergent, who suffer from depression, or who are disabled. It is reasonable that, even if a Conscientious Objector does not adhere to a particular religion, they should be allowed to object if they feel uneasy about the social message that Assisted Dying seems to send to vulnerable people.  


Conscientious Objection clauses can themselves send a social message. A response to the Scottish Bill produced by the Law Society of Scotland notes concern over the wording of the Conscientious Objection clause, as it appears to be more prescriptive in the draft Bill than in previous Acts such as the Abortion Act of 1967. In the case of any legal proceedings that arise from a clinician’s refusal to cooperate, the current wording places the burden of proof onto the Conscientious Objector, stating (at 18.2):  

In any legal proceedings the burden of proof of conscientious objection is to rest on the person claiming to rely on it.  

The Bill provides no indication of what is admissible as ‘proof’. Evidence of membership of a Church, Synagogue, Mosque or similar might be the obvious starting point. But where does that leave those described above, who object on grounds of personal conscience alone? How does one meaningfully evidence an inner sense of unease?  

The wording of the Private Member’s Bill, currently awaiting its second reading in the House of Lords, provides even less clarity, stating only (at 5.0): 

A person is not under any duty (whether by contract or arising from any statutory or other legal requirement) to participate in anything authorised by this Act to which that person has a conscientious objection. 

Whilst this indicates that there is no duty to participate in assisting someone to end their life, there remains a wider duty of care that healthcare professionals cannot ignore. Thus, a general feature in the interpretation of such conscience clauses in medicine is that the conscientious objector is under an obligation to refer the case to a professional who does not share the same objection. This can be seen in practice by looking at abortion law, where ideas around conscientious objection are more developed and have been tried in the courts. In the case of an abortion, a clinician can refuse to take part in the procedure, but they must still find an alternative clinician who is willing to perform their role, and they must still carry out ancillary care and related administrative tasks.  

Placing such obligations onto clinicians could be seen as diminishing rather than respecting their objection. Dr Mehmet Ciftci, a researcher at the McDonald Centre for Theology, Ethics and Public Life at the University of Oxford, comments:  

You will often find that legislation that provides a right to conscientious objection is interpreted by judges these days in a way that seems to treat conscientious objectors as nuisances who are just preventing the efficient delivery of services. They are forced to refer patients on to those who will perform whatever procedure they are objecting to, which involves a certain cooperation or facilitation with the act. 

This touches everyone, even those who (if the Bill becomes law) will still choose to conscientiously object. The human conscience is a very real phenomenon, which means that facilitating an act that feels morally wrong can give rise to feelings of guilt or shame, even for someone who has not been a direct participant.  

Psychologists observe that when feelings of guilt are not addressed, if they are treated dismissively or internalised, this can significantly erode self-confidence and increase the likelihood of depressive symptoms. But even before modern psychology could speak to the effects of guilt, biblical writers already had much to say on the painful consequences of living with a troubled conscience. In the Psalms, more than one ancient poet pours out their heart to God, saying that living with guilt has caused their bones to feel weak, or their heart to feel heavy, or their world to feel desolate and lonely.   

If the Conscientious Objection clauses of the new Bill being proposed on Wednesday are not significantly more robust than those in the draft Bills proposed thus far, then perhaps that is something to which we should all conscientiously object? There is much to discuss about the potential rights and wrongs of legalising Assisted Dying, but there is much to discuss about the rights and wrongs of forcing people to act against their consciences too.