What AI needs to learn about dying – and why it will save it

Those programming truthfulness into AI can learn a lot from mortality.

Andrew Steane has been Professor of Physics at the University of Oxford since 2002. He is the author of Faithful to Science: The Role of Science in Religion.

An angel of death lays a hand on a humanoid robot that has died amid a data centre.
A digital memento mori.
Nick Jones/midjourney.ai

Google got itself into some unusual hot water recently when its Gemini generative AI software started putting out images that were not just implausible but downright unethical. The CEO Sundar Pichai has taken the situation in hand, and I am sure it will improve. But before this episode it was already clear that currently available chatbots, while impressive, are capable of generating misleading or fantastical responses, and in fact they do this a lot. How to manage this?

Let’s use the initials ‘AI’ for artificial intelligence, leaving it open whether or not the term is entirely appropriate for the transformer and large language model (LLM) methods currently available. The problem is that the LLM approach leads chatbots to generate both reasonable, well-supported statements and images and unsupported, fantastical (delusory and factually incorrect) ones, without giving the human user any guidance in telling which is which. The LLMs, as developed to date, have not been programmed to pay attention to this issue. They are subject to the age-old problem of computer programming: garbage in, garbage out.

If, as a society, we advocate for greater attention to truthfulness in the outputs of AI, then software companies and programmers will try to bring it about. It might involve, for example, greater investment in electronic authentication methods. An image or document would have to carry, embedded in its digital code, extra information serving to authenticate it by some agreed and hard-to-forge method. The 2002 science fiction film Minority Report includes an example: the name of a person accused of a ‘pre-crime’ (in the terminology of the film) is inscribed on a wooden ball, using the unique cellular structure of a given piece of hardwood as a form of data substrate that is near impossible to duplicate.
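
To make the idea concrete, here is a minimal sketch of content authentication in Python, using a keyed hash from the standard library. It is a toy illustration only: the shared key and the sign/verify helpers are hypothetical, and real-world provenance schemes use public-key signatures so that anyone can check a document without holding a secret.

import hmac
import hashlib

SECRET_KEY = b"shared-secret"  # hypothetical key; real schemes use public/private key pairs

def sign(content: bytes) -> bytes:
    # Produce a hard-to-forge tag bound to the exact bytes of the content.
    return hmac.new(SECRET_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, tag: bytes) -> bool:
    # Recompute the tag; any alteration to the content breaks the match.
    return hmac.compare_digest(sign(content), tag)

image = b"...raw image bytes..."   # stand-in for a real file
tag = sign(image)                  # published alongside the image
print(verify(image, tag))          # True: content is exactly as signed
print(verify(image + b"x", tag))   # False: content has been altered

The point is simply that authentication is a property of the exact digital content: change anything, and the check fails.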

It is clear that a major issue in the future use of AI by humans will be trust and reasonable belief. On what basis will we be able to trust what AI asserts? If we are unable to check the reasoning process behind a result claimed to be rational, how will we be able to tell that it was in fact well-reasoned? If we only have an AI-generated output as evidence of something having happened in the past, how will we know whether it is factually correct?

Among the strategies that suggest themselves is the use of several independent AIs. If they are indeed independent and all propose the same answer to some matter of reasoning or of fact, then there is a prima facie case for increasing our degree of trust in the output. This will give rise to the meta-question: how can we tell that a given set of AIs are in fact independent? Perhaps they all were trained on a common faulty data set. Or perhaps they were able to communicate with each other and thus influence each other.  
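
As a rough sketch of what this strategy might look like in code (the model callables and the agreement threshold here are hypothetical stand-ins, not any real chatbot API):

from collections import Counter

def consensus_answer(models, prompt, threshold=0.8):
    # Put the same question to several supposedly independent models and
    # accept an answer only when a clear majority agree on it.
    answers = [model(prompt) for model in models]
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= threshold:
        return best   # prima facie grounds for increased trust
    return None       # disagreement: flag for human review

# Hypothetical stand-ins for independently trained models.
models = [lambda p: "Paris", lambda p: "Paris", lambda p: "Lyon"]
print(consensus_answer(models, "What is the capital of France?", threshold=0.6))  # Paris

Notice what the code cannot do: nothing in it can tell us whether the models really are independent, rather than trained on a common faulty data set or influenced by one another.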

The questions we face with AI thus come close to some of those we face when dealing with one another as humans. We know humans in general are capable of both ignorance and deliberate deception. We manage this by building up degrees of trust based on whether or not people show behaviours that suggest they are trustworthy. This also involves the ability to recognize unique individuals over time, so that a case for trustworthiness can be built up over a sequence of observations. We also need to get a sense of one another's character in more general ways, so that we can tell if someone is showing a change in behaviour that might signal a change in their degree of trustworthiness. 

Issues of trust and of reasonable belief are very much grist to the mill of theology. The existing theological literature may have much that can be drawn upon to help us in this area. Particularly noteworthy is the connection between suffering, loss and the earning of trust, and its relation to mortality. In brief, a person you can trust is one who has ventured something of themselves on their pronouncements, such that they have something to lose if they prove to be untrustworthy. In a similar vein, a message which is costly to the messenger may be more valuable than one which costs the messenger nothing. The messenger has already staked something on it. This implies they are working all the harder to exert their influence on you, for good or ill. (You will need to know them in other ways in order to determine which of good or ill is their intention.)

Mortality brings this issue of cost to a point of considerable sharpness. A person willing to die on behalf of what they claim certainly invests a lot in their contribution. They earn attention. It is not a guarantee of rationality or factual correctness, but it is a demonstration of commitment to a message. It signals a sense of importance attached to whatever has demanded this ultimate cost. Death becomes a form of bearing witness.  

A thought-provoking implication of the above is that in order to earn our trust, an AI too will have to be able to suffer and, perhaps, to die. 

In the case of human life, even if making a specific claim does not itself lead directly to one’s own death, the very fact that we die lends added weight to all the choices we make and all the actions we take. For, together, they are our message and our contribution to the world, and they cannot be endlessly taken back and replaced. Death will curtail our opportunity to add anything else or to qualify what we said before. The things we said and did show what we cared about, whether we intended them to or not. This effect of death on the weightiness of our messages to one another might be called the weight of mortality.

In order for this kind of weight to become attached to the claims an AI may make, the coming death has to be clearly seen and understood beforehand by the AI, and the timescale must not be so long that the AI’s death is merely some nebulous idea in the far future. Also, although there may be some hope of new life beyond death, it must not be a sure thing; or it must be such that it would be compromised if the AI were to knowingly lie, or fail to make an effort to be truthful. Only thus can the pronouncements of an AI earn the weight of mortality.

For as long as AI is not imbued with mortality and the ability to understand the implications of its own death, it will remain a useful tool as opposed to a valued partner. The AI you can trust is the AI reconciled to its own mortality. 

Cartoon villains: who's the real baddie?

What kind of villain do we want?

James Cary is a writer of situation comedy for BBC TV (Miranda, Bluestone 42) and Radio (Think the Unthinkable, Hut 33).

 A cartoon chase sees a car driven by a cow escaping from a car of baddies under a giant poster of their villainous boss.
Jazz Cow vs. Dr Popp.

“Nobody thinks they’re the bad guy”. That’s a phrase I often use when helping people write situation comedies. It’s always useful to have a strong antagonist who gets in the way of our hero. But the villains tend not to consider themselves to be evil. In fact, they are offended at the suggestion. 

The Batman universe has taken the interesting villain to new levels. The Penguin, Gotham’s latest production, is a brand-new TV series on HBO. Colin Farrell plays a highly nuanced anti-hero, exploring The Penguin’s “awkwardness, and his strength, and his villainy, yes, his propensity for violence”. Farrell told Comicbook.com he was attracted to the role because “there's also a heartbroken man inside there you know, which just makes it really tasty.” Audiences are often invited to have sympathy for the devil. Should we be worried about the blurring of the lines between good and evil?

I’ve been asking myself this question as I’ve been writing a new animation which involves a villain called Dr Popp who is trying to take over a city. But what kind of villain do we want in 2024? 

Jack Nicholson’s portrayal of The Joker back in 1989 feels like pop-culture ancient history. His Joker was an embittered agent of chaos without many redeeming qualities but mercifully lacked the nihilism of later versions. It was an old-fashioned story of cops and robbers which has its own simplistic charm. But have those days gone forever, having been shot in the head and dropped off a bridge into a river? 

The problem is it is so easy to humanise evil. You just give it a human face. The arch-villains of the twentieth century – the Nazi members of the SS – are rather sweet when portrayed by comedians Mitchell and Webb. A nervous member of the SS Unit (Mitchell) waiting for an attack from the Russians looks at the skull on his cap and asks his fellow comrade-in-arms (Webb): “Hans, are we the baddies?” 

Any student of World War Two will know that it’s never as simple as good versus evil. Many terrible things were done by people who felt justified in their behaviour. Moreover, ‘the goodies’ also felt compelled to do morally dubious things – like the bombing of civilians in cities – in order to defeat ‘the baddies’. After all, they started it. The truth is always far more complicated than the war films suggest.

Ten years ago, I was researching real-life baddies for my sitcom Bluestone 42, about a bomb disposal team in Afghanistan. At times, I had to think like the Taliban who, in their own minds, were entirely justified in leaving bombs by the side of the road, to be triggered by British soldiers or Afghan children. They were pretty relaxed about the outcome. It’s hard to sympathise with this way of thinking, but it made sense to them.

My internet search history from that time probably put me on some sort of Home Office watchlist. Maybe a small dossier was started on me. More recently, that dossier would have become thicker as I’ve moved sideways from sitcom into murder mysteries, having recently worked on Death in Paradise and Shakespeare and Hathaway. To work on shows like these, you need to be thinking of good reasons for good people to commit murder. Someone would need a very strong motive to commit a murder on an idyllic Caribbean island where the local detective has a 100 per cent resolution rate. You also need to research ingenious methods for murdering people in a way that escapes detection. I’m surprised I’ve not yet had a knock on my door, or that enquiries haven’t been made of the neighbours, asking them to call a number if they see anything suspicious.

But what about cartoon villains, where nothing is real? The bold colours and larger-than-life characters might suggest that there is more clarity about goodies and baddies. But there isn’t. Evil villains – that is, villains who realise they are evil – are extremely rare. Skeletor from He-Man and the Masters of the Universe comes to mind. This kind of demonic baddie can be entertaining when given wit and charm, like Hades in the Disney movie Hercules, who had some brilliant one-liners and was superbly brought to life by the voice of James Woods. Overall, however, purely evil characters are hard to write.

Cartoon villains need proper motivation. This is either a character flaw or a backstory. In The Lion King, Scar is consumed with envy that his brother is king – and a good one at that. In The Incredibles, Syndrome is playing out his sense of injustice that he was not allowed to be Mr Incredible’s sidekick, Incrediboy. In The Simpsons, Mr Burns is essentially Mr Potter from It’s a Wonderful Life. He’s a Scrooge-type figure who doesn’t care about love and respect. He just wants to own the town. 

The cartoon villain I’ve been thinking about is for a new animation project I’ve been working on called Jazz Cow. The eponymous hero is a saxophone-playing cow and a reluctant Bogart-style leader of a bohemian band of misfits. They are trying to resist the advance of the all-consuming algorithm created by Dr Popp, the villain. But what’s his motivation?

Dr Popp is the very worst kind of villain: he has great power and he wants to help. In his own mind, he’s completely clear about his mission. He’s trying to make the world better, easier, safer, cheaper, more efficient and convenient. Why would anyone want to refuse his technology, reject his software and keep away from his algorithm? 

This is why Dr Popp has to silence Jazz Cow, literally, by stealing his saxophone. He simply cannot allow Jazz Cow to delight audiences at Connie Snott’s with live improvised music. There’s no need for this music! Dr Popp has all the music you could possibly need, want or imagine. Why improvise when we have artificial intelligence? 

Dr Popp is a cartoon villain for today, when relativism is still alive and well, and ‘Good’ and ‘Evil’ are concepts or points of view rather than absolutes. There is, however, good and evil in Jazz Cow. But the evil doesn’t come from Dr Popp. It comes from the user or consumer. That would be us.

‘The Algorithm’ is always learning and always trying to give us our hearts’ desire. And that’s the problem: our hearts frequently desire that which they cannot – and should not – have. Dr Popp’s algorithm is like a mirror held up to our faces. In it, we see the real baddie: ourselves. Not even Jazz Cow can save us from that. But what this horn-playing cow can do is to make the world a more humane place. 


For more information about Jazz Cow, and information on how you can make the show happen, take a look at our Kickstarter – and don’t worry. Jazz Cow would approve, as it’s the creative’s way of sticking IT to the man.