
Whether it's AI or us, it's OK to be ignorant

Our search for answers begins by recognising that we don’t have them.

Simon Walters is Curate at Holy Trinity Huddersfield.

A street sticker displays multiple lines reading 'and then?'
Stephen Harlan on Unsplash.

When was the last time you admitted you didn’t know something? I don’t say it as much as I ought to. I’ve certainly felt the consequences of admitting ignorance – of being ridiculed for being entirely unaware of a pop culture reference, of being caught not paying as close attention to what my partner was saying as she expected. In a hyper-connected age, when the wealth of human knowledge is at our fingertips, ignorance can hardly be viewed as a virtue. 

A recent study on the development of artificial intelligence holds out more hope for the value of admitting our ignorance than we might have previously imagined. Despite widespread hype and fearmongering about the perils of AI, our current models are developed in much the same way that an animal is trained. An AI system such as ChatGPT might have access to unimaginable amounts of information, but it requires training by humans on which information is valuable, whether it has properly understood the request it has received, and whether its answer is correct. The idea is that human feedback helps the AI hone its model – positive feedback for correct answers, negative feedback for incorrect ones – so that it keeps whatever method led to positive feedback and changes whatever method led to negative feedback. It really isn’t that far from how animals are trained. 

However, a problem has emerged. AI systems have become adept at giving coherent, convincing-sounding answers that are entirely incorrect. How has this happened? 

In digging into the training method, the researchers found that the humans training the AI flagged answers of “I don’t know” as unsatisfactory. On one level this makes sense: the whole purpose of these systems is to provide answers, after all. But rather than prompting the AI to return and rethink its data, this instead led it to develop increasingly convincing answers that were not true at all, to the point where the human supervisors stopped flagging sufficiently convincing answers as wrong because they themselves didn’t realise that they were wrong. The result is that “the more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.” 
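The feedback loop described above can be caricatured in a few lines of Python. This is a deliberately toy sketch – not how any real system such as ChatGPT is actually trained – in which the three strategies, the rater behaviour, and the reward values are all invented for illustration. The point it demonstrates is simply this: if honest ignorance is always penalised and a bluff is only sometimes caught, confident bluffing accumulates more reward than saying “I don’t know”.

```python
import random

# Toy illustration (invented strategies and rewards, not a real training method):
# an "assistant" can answer correctly, admit ignorance, or bluff confidently.
# Raters penalise "I don't know" and only catch a bluff some of the time.

random.seed(0)

ACTIONS = ["correct_answer", "i_dont_know", "confident_bluff"]
scores = {action: 0.0 for action in ACTIONS}  # running reward per strategy

def rater_feedback(action: str) -> float:
    """Simulated human feedback with imperfect fact-checking."""
    if action == "i_dont_know":
        return -1.0                      # always flagged as unsatisfactory
    if action == "correct_answer":
        return +1.0                      # genuinely right, rewarded
    # A confident bluff is only caught ~30% of the time.
    return -1.0 if random.random() < 0.3 else +1.0

for _ in range(1000):
    for action in ACTIONS:
        scores[action] += rater_feedback(action)

# "I don't know" is always punished, so a policy shaped by these scores
# prefers confident answers -- right or wrong.
print(sorted(scores, key=scores.get, reverse=True))
# -> ['correct_answer', 'confident_bluff', 'i_dont_know']
```

The ordering is the whole story: because admitting ignorance scores worst, the trained behaviour drifts towards confident answers even when they are wrong – exactly the “well-packaged, plausible nonsense” the researchers describe.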

Uncovering some of what is going on in AI systems dispels both the fervent hype that artificial intelligence might be our saviour, and the deep fear that it might be our societal downfall. This is a tool; it is good at some tasks, and less good at others. And, like all tools, it does not have an intrinsic morality. Whether it is used for good or ill depends on the approach of the humans that use it. 

But this study also uncovers our strained relationship with ignorance. Problems arise in the answers given by systems like ChatGPT because a convincing answer is valued more than admitting ignorance, even if the convincing answer is not at all correct. Because the AI has been trained to avoid admitting it doesn’t know something, all of its answers are less reliable, even the ones that are actually correct.  

This is not a problem limited to artificial intelligence. I had a friend who seemed incapable of admitting that he didn’t know something, and whenever he was corrected by someone else, he would make it sound as though the correct answer was what he had said all along. I don’t know how aware he was that he did this, but the result was that I didn’t particularly trust anything he said to be correct. Paradoxically, had he admitted his ignorance more readily, I would have believed him to be less ignorant. 

It is strange that admitting ignorance is so avoided. After all, it is in many ways our default state. No one faults a baby or a child for not knowing things. If anything, we expect ignorance to be a fuel for curiosity. Our search for answers begins in the recognition that we don’t have them. And in an age when approximately 500 hours of video are uploaded to YouTube every minute, the sum of what we don’t know must by necessity be vastly greater than all that we do know; what any one of us can know is only a small fraction of all there is to know. 

One of the gifts of Christian theology is an ability to recognise what it is that makes us human. One of these things is the fact that any created thing is, by definition, limited. God alone can be described by the ‘omnis’. He is omnipotent, omnipresent, and omniscient. There is no limit to his power, presence, and knowledge. The distinction between creator and creation means that created things have limits to their power, presence, and knowledge. We cannot do whatever we want. We cannot be everywhere at the same time. And we cannot know everything there is to be known.  

Projecting infinite knowledge is essentially claiming to be God. Admitting our ignorance is therefore merely recognising our nature as created beings, acknowledging to one another that we are not God and therefore cannot know everything. But, crucially, admitting we do not know everything is not the same as saying that we do not know anything. Our God-given nature is one of discovery and learning. I sometimes like to imagine God’s delight in our discovery of some previously unknown facet of his creation, as he gets to share with us in all that he has made. Perhaps what really matters is what we do with our ignorance. Will we simply remain satisfied not to know, or will it turn us outwards to delight in the new things that lie behind every corner? 

For the developers of ChatGPT and the like, there is also a reminder here that we ought not to expect AI to take on the attributes of God. AI used well in the hands of humans may yet do extraordinary things for us, but it will not truly be able to do everything, be everywhere, or know everything. Perhaps if it were trained to say ‘I don’t know’ a little more, we might all learn a little more about the nature of the world God has made. 


Art, AI and apocalypse: Michael Takeo Magruder addresses our fears and questions

The digital artist talks about the possibilities and challenges of artificial intelligence.

Jonathan is Team Rector for Wickford and Runwell. He is co-author of The Secret Chord, and writes on the arts.

A darkened art gallery displays images and screens on three walls.
Takeo.org.

In the current fractured debate about the future development of Artificial Intelligence (AI) systems, artists are among those informing our understanding of the issues through their creative use of technologies. British-American visual artist Michael Takeo Magruder is one such artist, and his current exhibition Un/familiar Terrain{s} infuses leading-edge AI systems with traditional artistic practices to reimagine the world anew. In so doing, the exhibition pushes visitors to question the organic nature of their own memories and the unsettling notions of automatic processing, misattribution, and reconstruction. 

The exhibition uses personal footage of specific places of renowned natural beauty, captured on first-generation AI-enabled smartphones. Every single frame of the source material has then been revised, reworked, and rebuilt into digital prints and algorithmic videos which recast these captured moments as uncanny encounters. In this exhibition at Washington DC’s Henry Luce III Center for the Arts & Religion, the invisible work of the AI allows people to experience more than there ever was, expanding both time and space. 

Magruder has been using Information Age technologies and systems to examine our networked, media-rich world for over 25 years. A residency in the Department of Theology and Religious Studies at King’s College London resulted in De/coding the Apocalypse, an exhibition exploring contemporary creative visions inspired by and based on the Book of Revelation. Imaginary Cities explored the British Library’s digital collection of historic urban maps to create provocative fictional cityscapes for the Information Age. 

JE: You are a visual artist who works with emerging media including real-time data, digital archives, VR environments, mobile devices, and AI processes. What is it about the possibilities and challenges of emerging media that captures your artistic imagination? 

MTM: As a first-generation digital native, computer technologies – and the evolving range of potentials they offer – have deeply informed my life and art. Computational media not only opens different avenues for artistic expression but provides a novel means to recontextualise traditional artforms and histories of practice; its ephemeral nature is a particular draw. However, this also creates new challenges, especially in areas concerning preservation and access. I sometimes wonder if my art will still exist for future generations to experience in full, or if it will simply fade alongside the technologies that I’ve used in its production. 

JE: To what extent does Un/familiar Terrain{s} build on past exhibitions like Imaginary Landscapes and Imaginary Cities, and to what extent does it break new ground for you? 

MTM: Un/familiar Terrain{s} certainly arises from and expands on the artistic concepts of those past projects. The main difference is that each artwork in Un/familiar Terrain{s} is generated from a small sample of personal data (a scenic moment that I’ve captured intentionally), not digital materials gleaned from large public archives and online collections.      

JE: Do you find that working with images of the natural world (as is the case with this exhibition) as opposed to images of human-made environments (as you did with 'Imaginary Cities') leads to different approaches or inspiration on your part? 

MTM: My projects that explore constructed environments often reference principles of Modernist architecture and design whereas my pieces in Un/familiar Terrain{s} explicitly seek to dialogue with the long history of Western landscape art. The AI systems that I have used in their creation are leading edge but conversely, their conceptual references extend back to long before the onset of what we consider ‘modern’ art.  

JE: I've heard many artists criticise digital art in terms of degrading the principal tools and techniques of artists throughout history and those arguments would be made even more vigorously in relation to AI. In this exhibition you're enabling a conversation about the painterly effects you can create as a digital artist and those that can be achieved through AI, yet without leading us to one side or other of that argument. Is your vision essentially one of wanting to see the possibilities in whatever tools, techniques or technologies we have to hand? 

MTM: Absolutely. For me that’s one of the fundamental purposes of art. AI is unquestionably the most disruptive (and potentially problematic) technology affecting creative communities at present, but it’s just the most recent historical example. I imagine similar criticisms arose during the proliferation of devices like the printing press and the first photographic cameras. Such inventions clearly did not ‘degrade’ art, but they indisputably shifted its trajectory. 

JE: While your work is not expressly religious, you have engaged with theological themes and institutions as with Un/familiar Terrain{s}, which is on show at Wesley Theological Seminary in Washington DC. What do you think it is about your work and the ways you use and explore emerging media that enables such a dialogue to take place?  

MTM: I feel that many of the social and ethical questions raised by the emergence of transformative digital technologies are quite similar (and sometimes identical) to ones that have been traditionally posed by theologians. With that in mind, although the fields are quite different in many ways, at present there are some strange and compelling intersections. 

JE: From your experience, what can theological or religious institutions learn from a more engaged involvement with emerging media, particularly AI? 

MTM: Like artists, perhaps theologians can use emerging (and disruptive) media to not only expand possibilities for their work, but more importantly, to refocus their efforts towards areas that these technologies cannot presently (and will likely never) address. 

JE: Apocalyptic scenarios are often invoked in response to developments such as AI, the refugee crisis, populist political movements or the climate emergency. In De/coding the Apocalypse, you worked with emerging media to explore contemporary creative visions inspired by and based on the Book of Revelation. From that experience, what advice would you give to emerging artists wanting to engage with or invoke apocalyptic imagery? How might emerging artists live in the shadow of apocalypse or what have you noticed about our contemporary fear of modern apocalypses? 

MTM: Throughout history, visions of apocalypse have been consistently rooted in humanity’s prevailing fears. In the Digital Age these sit alongside our growing concerns about technologies that afford increasingly greater potential to create or destroy. Of course, artists should continue to reveal the deeply problematic (and potentially apocalyptic) aspects of new technologies, but they should also highlight their positive aspects to encourage the creation of “a new heaven and a new earth” that can be a better place for all. 


Un/familiar Terrain{s}, 30 May – 18 September 2024, The Dadian Gallery, Henry Luce III Center for the Arts & Religion.