Explainer | AI | Belief | Creed | 5 min read

Whether it's AI or us, it's OK to be ignorant

Our search for answers begins by recognising that we don’t have them.

Simon Walters is Curate at Holy Trinity Huddersfield.

A street sticker displays multiple lines reading 'and then?' Photo: Stephen Harlan on Unsplash.

When was the last time you admitted you didn’t know something? I don’t say it as much as I ought to. I’ve certainly felt the consequences of admitting ignorance – of being ridiculed for being entirely unaware of a pop culture reference, of being found out for not paying as close attention to what my partner was saying as she expected. In a hyper-connected age, when the wealth of human knowledge is at our fingertips, ignorance can hardly be viewed as a virtue.

A recent study on the development of artificial intelligence holds out more hope for the value of admitting our ignorance than we might previously have imagined. Despite widespread hype and fearmongering about the perils of AI, our current models are developed in much the same way that an animal is trained. An AI system such as ChatGPT might have access to unimaginable amounts of information, but it requires training by humans on which information is valuable, whether it has properly understood the request it has received, and whether its answer is correct. The idea is that human feedback helps the AI to hone its model: it keeps whatever method led to positive feedback for correct answers and changes whatever method led to negative feedback for incorrect answers. It really isn’t that far away from how animals are trained.

However, a problem has emerged. AI systems have become adept at giving coherent, convincing-sounding answers that are entirely incorrect. How has this happened?

In digging into the training method for AI, the researchers found that the humans training the AI flagged answers of “I don’t know” as unsatisfactory. On one level this makes sense. The whole purpose of these systems is to provide answers, after all. But rather than prompting the AI to go back and rethink its data, this instead led it to develop increasingly convincing answers that were not true at all, to the point where the human supervisors didn’t flag sufficiently convincing answers as wrong because they themselves didn’t realise that they were wrong. The result is that “the more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.”

Uncovering some of what is going on in AI systems dispels both the fervent hype that artificial intelligence might be our saviour, and the deep fear that it might be our societal downfall. This is a tool; it is good at some tasks, and less good at others. And, like all tools, it does not have an intrinsic morality. Whether it is used for good or ill depends on the approach of the humans that use it. 

But this study also uncovers our strained relationship with ignorance. Problems arise in the answers given by systems like ChatGPT because a convincing answer is valued more than admitting ignorance, even if the convincing answer is not at all correct. Because the AI has been trained to avoid admitting it doesn’t know something, all of its answers are less reliable, even the ones that are actually correct.  

This is not a problem limited to artificial intelligence. I had a friend who seemed incapable of admitting that he didn’t know something. Whenever someone corrected him, he would make it sound as though the correct answer was what he had meant all along, rather than whatever he had actually said. I don’t know how aware he was that he did this, but the result was that I didn’t particularly trust anything he said to be correct. Paradoxically, had he admitted his ignorance more readily, I would have believed him to be less ignorant.

It is strange that admitting ignorance is so avoided. After all, it is in many ways our default state. No one faults a baby or a child for not knowing things. If anything, we expect ignorance to be a fuel for curiosity. Our search for answers begins in the recognition that we don’t have them. And in an age where approximately 500 hours of video is uploaded to YouTube every minute, the sum of what we don’t know must by necessity be vastly greater than all that we do know. What any one of us can know is only a small fraction of all there is to know. 

One of the gifts of Christian theology is an ability to recognise what it is that makes us human. One of these things is the fact that any created thing is, by definition, limited. God alone can be described by the ‘omnis’. He is omnipotent, omnipresent, and omniscient. There is no limit to his power, and presence, and knowledge. The distinction between creator and creation means that created things have limits to their power, presence, and knowledge. We cannot do whatever we want. We cannot be everywhere at the same time. And we cannot know everything there is to be known.

Projecting infinite knowledge is essentially claiming to be God. Admitting our ignorance is therefore merely recognising our nature as created beings, acknowledging to one another that we are not God and therefore cannot know everything. But, crucially, admitting we do not know everything is not the same as saying that we do not know anything. Our God-given nature is one of discovery and learning. I sometimes like to imagine God’s delight in our discovery of some previously unknown facet of his creation, as he gets to share with us in all that he has made. Perhaps what really matters is what we do with our ignorance. Will we simply remain satisfied not to know, or will it turn us outwards to delight in the new things that lie behind every corner?

For the developers of ChatGPT and the like, there is also a reminder here that we ought not to expect AI to take on the attributes of God. AI used well in the hands of humans may yet do extraordinary things for us, but it will not truly be able to do everything, be everywhere, or know everything. Perhaps if it were trained to say ‘I don’t know’ a little more, we might all learn a little more about the nature of the world God has made.

Review | Ageing | AI | Culture | Film & TV | 5 min read

Foundation shows you can’t ‘Ctrl+V’ a soul

A sci-fi classic unearths transhumanism’s flaws

Giles Gough is a writer and creative who hosts the God in Film podcast.

A woman confronts a man whose clone stands behind her. Image: Apple TV.

One of the reasons that science fiction has had enduring popularity as a genre is its ability to illustrate thought experiments. The way it can attempt to answer questions that can’t even be asked in any other kind of fiction is what gives it power as a form of storytelling. One question that keeps coming up is: what if you could live forever, through technology?  

One person who attempted to answer this question was Isaac Asimov, one of the early giants of the sci-fi genre. Born in 1920, Asimov arrived into a world that was rapidly changing, and yet his imagination was still able to outpace it. Much of what he is known for is his depiction of robots, with ‘Asimov’s laws of robotics’ influencing the portrayal of androids in Star Trek: The Next Generation. However, direct adaptations of Asimov’s own work were few and far between. Robin Williams’ Bicentennial Man (1999) and Will Smith’s I, Robot (2004) were the best of the bunch. That is, until Apple TV began adapting Asimov’s Foundation.

Asimov’s Foundation books were written across the span of fifty years. The premise of the stories is that in a distant future, a galactic empire is beginning to fail and cannot be saved. The mathematician Hari Seldon develops the theory of psychohistory, which uses statistical laws to predict the behaviour of large populations. In the wake of the empire’s fall, Seldon predicts a dark age lasting 30,000 years before a second empire arises. Seldon devises a plan to reduce this dark age to just one thousand years by preserving a ‘foundation’ of knowledge. The novels describe some of the dramatic events that frustrate, or result from, Seldon’s Plan. One of the features of the story that the Apple TV adaptation of Foundation focuses on is attempted immortality.

Foundation gives us three depictions of ‘immortality’. Firstly, Seldon orchestrates having his consciousness eventually uploaded into the Prime Radiant, a super-computer, in order to allow him to shepherd his plans beyond the limits of his own human lifespan. Secondly, his protégée, Gaal Dornick, is put into a cryo-sleep throughout the first season that lets her move into the future without ageing. Finally, the characters of Dawn, Day and Dusk attempt immortality through cloning. The tyrannical emperor Cleon decides that the only person fit to succeed him is…himself. So, he creates a revolving triumvirate of his own clones: Brother Day, a Cleon in his prime; Brother Dusk, an ageing Cleon who serves to advise Day; and Brother Dawn, a young Cleon being trained to succeed Brother Day. This ‘genetic dynasty’ has been ruling with an iron fist for 400 years by the start of the series.

These interpretations of immortality grant each character the ability to shape and curate history in a way that no one human could ever achieve. But as there’s no drama without conflict, Foundation shows us the downsides of this kind of immortality. Firstly, Gaal’s version: being frozen in cryo-sleep for decades might literally extend her life, but from Gaal’s perspective it is no longer than it would have been otherwise. Whilst she does get to see history play out, she loses connections with people like her family and her lover Raych. She is unable to build the life she would have planned for herself.

Seldon’s version of immortality is the one flirted with by tech bros and transhumanists like Peter Thiel. The idea of a computer that has the processing power to replicate a human brain turns up in numerous stories, but it’s another false immortality. Firstly, the original Hari Seldon still dies, and the ‘digital version’ eventually stored in the Prime Radiant is merely a copy. We might not think much of copying and pasting a document or file on our computer, but it doesn’t quite work the same for human beings. A copy is not the same as the original. You can’t ‘Ctrl+V’ a soul. In addition to this, we find out at one point that, due to a mistake, Hari’s digital self has been trapped in darkness, fully conscious but with no rest, no distractions and no way of communicating with the outside world for 148 years. This naturally drags Hari into an interminable madness.

Lastly, the Empire run by the clones Dawn, Day and Dusk suffers much the same problem as the other two. It’s not a real immortality, as each clone eventually dies. But in many ways, it’s even worse than death. No-one mourns your absence because there’s an identical copy of you still walking about. This trope is troubling, because a protagonist dying and being returned via cloning is often presented as a ‘resurrection’. It has been used as a story arc in the X-Men comics and in Peter Capaldi’s era of Doctor Who, with very little outcry from their respective fandoms, possibly because the thought that the producers have canonically killed the main character and replaced them with an exact copy is simply too uncomfortable to consider. In Foundation itself, the clones are judged by their fidelity to the original (a cold and petty despot) and any deviation is met with a death sentence. Whilst clones may be one way to rule a sci-fi galactic empire, it’s possibly their inability to adapt to changing circumstances that contributes to the fall of civilisation.

The great irony in all of these interpretations is that you are only immortal to those observing you, and an immortality that relies on perspective is not really an immortality at all.

It seems that hard science fiction and ancient Greek myths can, at times, overlap in their focus. Viewed in one light, Asimov’s Foundation series can be seen as one long story of Prometheus, who steals fire from the gods to give it as a gift to mankind, only to be punished by Zeus for his actions. Asimov appears to be telling us that mankind can’t accurately predict the future and can’t live forever. So, despite being a staunch atheist, one of the great minds of science fiction might be suggesting that immortality belongs squarely in the realm of the divine.
