Essay
AI
Culture
13 min read

Machines and their ghosts

What impacts has artificial intelligence had on society, past, present and future? Simon Cross explores just where have our machines got us.

Simon Cross researches ethical aspects of technology and advises on the Church’s of England's policy and legislative activity in these areas.

A complex of linear and metal parts in a machine-like sculpture.
Machine complexity, in sculptural form.
Ruth Hartnup, CC BY 2.0, via Wikimedia Commons.

But Humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine. 

E. M. Forster

Human cosmology has changed over the millennia. Not only from the heliocentric to the relativistic but also from organic to mechanistic. Our success in deconstructing nature and exploiting those discoveries to construct ever more capable machines now persuades many that the soul is illusory and the universe made only of physical objects reconfigurable in new and novel ways according to particular mathematical relationships. And yet. And yet the debate about our latest machines, about intelligence, and about the mysterious ghost of human consciousness – let alone soul - continues unresolved across the ages.  

The ghost in the AI machines of the past

The journey from Charles Babbage’s unfinished analytical engines to Elon Musk’s complete business empire of rockets, robot-cars and social media rants is familiar to many. Karel Čapek drew on the Slavonic Orthodox word for servitude or serfdom when he baptised the word robot in his 1920 play, R.U.R., or Rossum’s Universal Robots. Čapek’s machines eventually gained a soul but only in the final act of the play. While the term artificial intelligence (AI) is attributed to a gathering at Dartmouth College in New Hampshire, it was Alan Turing who successfully conceptualised how to fabricate robots like those of Čapek’s imagination. Turing neatly sidestepped the pesky question of whether such ‘universal Turing machines’ need human-like consciousness (let alone a soul) in a famous 1950 thought experiment posterity simply calls the Turing Test.  

The invention of finely controlled micro-processors and their ever tighter transcription onto silicon chips enabled the architecture of increasingly complex algorithmic mathematical operation. After which came operating systems with simple and accessible user interfaces and programmes exploiting a prolific increase in speed and memory. So too the invention by Tim Berners-Lee of an internet with open protocols that, via Mosaic and its browser progeny, has become the operational backbone of the world wide web. All are tales already familiar or easily told using a now ubiquitous search engine. 

A main feature of the past twenty years has been the network effect. This has concentrated power in a handful of companies, initially the FAANGs (Facebook, Apple, Amazon, Netflix and Google) but now too their Chinese counterparts Tencent and ByteDance (owners of TikTok). A European counterpart is conspicuously absent. 

The Ghost in the AI machines of the present

More recently still, advances in types of machine learning and the invention of a new suite of tools called 'transformers' has given rise to AI that increasingly resembles its human creators in one task or another even if the furore over Brad Lemoyne and Googles’ LaMDA (Language Model for Dialogue Applications) proves the relationship between intelligence, artifice, and consciousness remains deeply contested.  

The metaphysical nature of artificial consciousness notwithstanding, it is also worth reflecting, however, on what these machines may be doing to our souls – metaphorical or otherwise. Where have our machines got us?

Two features define the technological landscape of today: data and prediction. Exactly how those ingredients combine depends on the machine in view. 

Satellite around earth
AI helps interpretate atmospheric data into weather forecasts. While below, the Internet itself now accounts for around 2% of carbon emissions. IMAGE CREDIT: ESA–J. Huart, CC BY-SA IGO 3.0

Some of our machines are focussed on the external world. Data gathering, its interpretation and use for prediction underpin a whole suite of tasks from geophysical remote sensing to weather forecasting and predicting real-time energy demand; to medical image interpretation for diagnosis; to monitoring and managing replacement life cycles of critical infrastructure. Not forgetting that the internet itself now accounts for around 2% of annual global emissions.

But many of our machines are focussed on the internal: the mental and psychological world of human being. In the machines of entertainment and social media, data and prediction serve a mundane but vital goal of securing our attention to facilitate advertising. Every user of the web is simultaneously subject and object, exposed to adverts and tailored content (though how tailored it really is, is moot according to some recent research from Mozilla showing that user controls have little effect on which videos YouTube’s influential AI recommends). We are concurrently enmeshed in a secondary and highly sophisticated real-time bidding market that captures trades and parses data about us every time we connect to the web. Shoshanna Zuboff calls it surveillance capitalism.  

Ever find it tough to stop doomscrolling or to put your own portable machine down for very long? That’s partly because constant experimentation identifies the best type of presentation, not just content, to captivate you most personally. But when it comes to corralling attention, data, prediction, and seductive design aren’t the only options. Friction makes signing up easy but quitting difficult by design, while dark patterns add subliminal twists like ambiguously labelled toggles and countdown clocks that nudge us toward actions that favour the product or service provider. Herbert Simon calls it all the attention economy. 

Yet human souls being what they are, anger, argument and scandal are good for business. 

Social media companies are, for reasons buried in the history of American legislation, free from any regulatory responsibility for the content they carry. Yet human souls being what they are, anger, argument and scandal are good for business.  Clickbait arose because algorithms tuning us to surrender our attention neither know nor care how they succeed, which often means a drift towards more extreme content with every run of the autoplay function that is set to on by default and by design.  

Our design and use of these machines thus reflects the state of our collective souls.

The large data sets many of these machines feed off contain societal structures and values implicitly. This only becomes clear when careless labelling and/or processing at the statistical scale perpetuate rather than correct for biases and unjust social structures embedded in the data. Some of our machines inadvertently crystalize inequity, perpetuating harms to society by cementing social and financial exclusion, or through racially biased facial recognition, or predictive policing algorithms

Our design and use of these machines thus reflects the state of our collective souls, sometimes for good but sometimes for evil. 

Legislation to address such varied challenges and mitigate some of the harms is now in train in Europe and the UK, and also promised in America. But there is much ground to make up. And the tragic suicide of teenager Molly Russell shows how ineffective protection, especially from the machinery of social media, is for the children of today with unpredictable consequences for society’s future.  

Damaged souls indeed. 

Much has also been made of an imminent Web3 and associated metaverse. On the evidence to date, however, this is more akin to a virtual goldrush in which virtual land and activity thereon can be monetised with the largest profits promised to the first generation of settlers. Claims are staked using NFTs (non-fungible tokens) bought with crypto currencies and deposited on the blockchain. Molly White shows just how soulless much of this new, and alarmingly wild, west really is.  

Investing tens of billions of dollars per year in the metaverse or a single product like Alexa might signal the scale of rewards just around the now virtual corner. But history may equally decide this is an era of malinvestment by a global 1% awash with cheap, quantitatively eased capital and, if not ‘#FOMO’, at least insufficient institutional memory of financial bubbles of yore. Yet even ‘Big Tech’s biggest corporate behemoths are now enduring the chill winds of a tech unicorn winter almost as intense as the one afflicting crypto land.  

Machines with Souls? A ghostly forecast of what lies ahead

Forster’s The Machine Stops envisages a dystopian future where society is unable to maintain the machinery on which it has become dependent. His intuition that the new airships of his own day portended a key infrastructure of the future illustrates the hazards of future-casting. Some nascent technologies fail to live up to the hype (ahem…blockchain and driverless cars, anyone?) and artificial general intelligence (AGI) seems forever destined to be just a few more years “perhaps a decade”  away, although Elon Musk has yet to accept Gary Marcuse’s bet on that timeline. 

So let me venture two more modest but still speculative predictions; one positive and one problematic.  

Positively, the years ahead promise much increase in human augmentation of many kinds. A range of health and medical benefits are now in view, from efficiency gains in healthcare provision and design of medication at molecular level to bespoke pharmacological prescription based on individualised biological markers. Expect more wearable tech to supplement smartwatches.  

Some anticipate an overarching machine of almost Forsteresque proportions via the internet of things (IoT) although political and economic battles over device interoperability and security will, I think, garner increasing public attention and debate in due course.  

Augmented reality will substantially improve safety, , and will shift many enhancements from screen to full field of view with additional benefits for road users and pedestrians alike.  

Increasingly sophisticated geospatial sensing and data processing will enhance our understanding of the climate and biosphere emergencies and how successful various remedial steps prove. New technologies may radically reprice the costs of decarbonisation and unlock energy solutions that remain, as Babbage’s first difference engine was in his own day, the stuff of contemporary dreams. 

 This may be the first industrial revolution to be a net eliminator of jobs, although whether that promises to be good news is moot because navigating the consequences would be deeply challenging both socially and politically. Most of all, I anticipate a proliferation of new technologies and machines over the next few decades that will bolster and complete the reuse and recycle portions of a genuinely circular economy, together with an increasing emphasis on finite planetary budgets.  

We are on the cusp of a new and novel post-McLuhan era.

Now the problematic development. Top of the list is our newest and hottest ability: to mimetically recreate the surface view of reality using language itself. There are, it seems to me, profound risks posed by the very latest tools of natural language processing like Google’s LaMDA, Microsoft’s ChatGPT and Meta’s Galactica and Cicero.  

The Web to date has been an epistemological wonder. Knowledge has, of course, always been socially embedded. Wikipedia provides an enormous open-access repository of socially agreed knowledge. The discussion pages associated with any article can be hotbeds of debate but the active role of human editors in moderating and agreeing what counts as factual knowledge is both intrinsic and essential to the role that Wikipedia plays in informing and maintaining a flourishing society.  

Marshall McLuhan famously asserted that “the medium is the message”. But now we are on the cusp of a new and novel post-McLuhan era where the machine literally and autonomously manufactures the words and messages it then also mediates, doing both at super-human speed. This new generative AI machinery for reconfiguring words and images carries many consequences some of which are difficult to predict and some of which may be profoundly negative. Just read these headlines. From CNN: These artists found out their work was used to train AI.Now they’re furious. And, from Forbes: Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots.

Beyond dreams of electric sheep – AI hallucinates

Babbage's Difference Engine no. 1 was conceived to save the government money by preventing the mistakes that almost always crept into tables calculated or copied by hand. But these ultra-modern machines don’t just calculate or copy, they probabilistically infer - which does not necessarily lead to the best explanation. In fact, it does not always lead to possible explanation. Large language models (LLMs) like LaMDA, ChatGPT and Galactica ‘hallucinate’, transitioning seamlessly (though unpredictably, from our perspective) from predicting words and strings in ways that match the actual world, to predicting words and strings that portray an unreal world.  

Why does such hallucination happen? The crucial distinction is that human knowledge is consciously and not just socially embedded. But our new machines do not reason the way we do; cannot reason the way we do. As Erik Larson argues persuasively in The Myth of Artificial Intelligence, abductive reasoning of the kind Charles Sanders Pierce outlines, and inference to best explanation, are not yet in the realm of the suite of techniques gathered anywhere under the rubric of the ‘AI’ these machines practise. 

The consequences can be amusing, but experimentation also shows how difficult these models are to defend against deliberate manipulation by so-called ‘prompt injection’ and the online world is packed to the rafters with bad actors, whether individual or state, enthusiastic to get their hands on a machine that will opaquely mix real-world information with hallucination and then use it to quickly produce and instantly distribute misinformation at the touch of a button. Imagine, for example, an AI generated paper that includes a real scientist but cites and then summarises a paper she never actually wrote. Or imagine an AI that presents a stylistically convincing case for the benefits of consuming ground glass because it ‘knows’ about dietary silica. You don’t need to. Its already here: Meta Galactica AI Model Suspended After Problems.

Powerful and captivating machines are being let loose with no regulatory guardrails.

I worry that we are about to envelope ourselves in an epistemic fog; a veritable pea souper in which navigation becomes permanently difficult and increasingly dangerous. I hope I’m wrong, but ChatGPT hit a million users within a week of being introduced and these powerful and captivating machines are being let loose with no regulatory guardrails to stop their creators or help their users from straying into dangerous territory; no independent oversight; and little to no precautionary principle being exercised by the creators and masters of these mimetic machines. 

Perhaps it sounds dramatic but I believe this new generative form of AI is going to transform digitally entangled societies like ours profoundly.  

A final prediction, therefore. A prediction about how such societies, increasingly dependent on the kinds of machine envisaged by Forster or Čapek, will have to adapt and adjust if we are to avoid machine mediated myopia

Seeing through the fog

Besides the aforementioned and urgently needed regulatory guardrails, I foresee two other responses that will help societies cope with this rapidly enveloping epistemic fog. First stronger tools for transparency and verification. Secondly, better education for digital literacy and digital habits that protect and enhance a healthy soul. 

First, then, transparency and verification. The EU’s new AI Bill will require companies to notify users whenever they interact with an artificial agent. Between the technology of deepfakes and game playing bots like Meta’s Cicero, we have already surpassed the Turing test in increasingly broad areas of human machine interaction. But I anticipate a further shift in emphasis from ‘explainability’ - how any algorithm works per se - toward transparency – how it impacts and influences both individual users and society emergently. We need more publicly accessible evaluation of the holistic if unintended effects of our machines even now. That need is only going to grow.  

The fundamental question of transparency “who, or what is really in view here?” is going to take centre stage. 

One consequence may well be an increasingly fraught battle between, on the one hand, commercial intellectual property (IP) rights, and, on the other, individual rights and the common good. With the notable exception of sites like Wikipedia society has so far struggled painfully and inconsistently with the challenges of effective content moderation – especially where values rather than empirical facts are concerned. Until now, and to pick just one example; Facebook’s secretive behaviour and cherry picked transparency metrics have wilfully kept both customers and regulators in the dark. The idea that we can mechanise or automate by outsourcing intrinsically value-laden problems to algorithms, however mimetic the surface results, is patently utopian. Continuing to withhold evidence of biases and harms from generative deepfakery using AI can only invite a steeper descent towards dystopia. And as generative AI combines with increasingly convincing deepfake technology to fool every human sense the fundamental question of transparency “who, or what is really in view here?” is going to take centre stage with increasing importance.  

A veracity FAQ

Veracity will take on increasing scope as well as importance. Soon not just the ‘facts’ of a matter but equally basic questions like “who (or what?) is saying this?”, “why is this being said?” and “what are the consequences (holistically) of saying this?” will become central to deciding “is this true?” We are now in a situation where truth and fiction can be opaquely intermixed by machines autonomously at a pace and a scale, but also at a quality, that will overwhelm any fact-checking of the kind we deploy now. Proving our identity - including the basic fact that we are human, and protecting ourselves not merely from susceptibly to fakes but being faked will become increasingly important and will therefore become central tasks of the next web.   

Clearly there is a role for government here; a need for clear regulation, strong inspection and enforcement mechanisms, and an effective precautionary principle that ensures new techniques and new machines are only let loose in ways that have proven demonstrably safe. There will a role too for (new?) trustworthy bodies and institutions as fact-checkers and as repositories of verified content. New institutions as well as new technologies like https://datatrusts.uk/ are a helpful early response. 

Lastly, new demands and new digital habits will be needed by each one of us. The ancients associated a healthy soul with good habits but we are still at a formative stage of learning – and teaching one another – even healthy digital etiquette, let alone the digital habits and behaviours to keep humans safe and able to thrive as fully rounded souls navigating a world created for us by powerfully mimetic but deceptively soulless machinery. 

It won’t be easy. As Forster and others perceptively show, the machinery of modern life invites our souls towards decadence. Self-control is not in vogue. But the ancients have long associated the good life with cultivating character; with generosity, moderation, and self-less-ness as the only route to becoming truly whole. 

Article
AI
Belief
Digital
8 min read

When tech holds us captive, here’s how to find liberation

The last of a three-part series exploring the implications of technology.

James is Canon Missioner at Blackburn Cathedral. He researches technology and theology at Oxford University.

A person wearing a heavy backpack of tech, connected by tangled cables to other technology behind him, walks towards a simpler space.

In my previous article, I outlined Heidegger’s suspicions about the technological age in which we live. We noticed that Heidegger saw a ‘way of being’ which lay underneath all the tech that fills our lives and that as members of a technological society, we have been shaped, or you could say ‘discipled’, to live in a certain way. This ‘way of being’, the essence of technology, is to see everything in the world primarily as a collection of tools and resources to be extracted as and when they are needed.  

In contrast with the technology optimists that we looked at in the first article, Heidegger wants us to see that modern technology is not made up of neutral tools to be used for good or bad, nor is it simply a natural extension of human activity that we have been doing since the stone age. Modern technology has shaped a technological society and the members of that society, so that we position everything in the natural world (including ourselves and our neighbours) into resources to be mined. 

So, if that is Heidegger’s diagnosis of modern technology – which he dubbed Gestell, what can we do about it? What is Heidegger’s solution to the problem that he identified? Is there a way to live free of the Gestell of modern technology? However, before we get to that question we first have to ask if it’s actually possible to do anything. Because, if Heidegger’s view of modern technology is correct and our thinking and being in the world have been so shaped by the essence of technology, we might be stuck within a way of thinking that has shaped us with no way to change.  

A method or technique is simply a technology of self-transformation and therefore keeps us entrapped within the technological essence. 

There are two reasons why we might not be able to fix the problem of technology that Heidegger has revealed to us.  

Firstly, the problem of being trapped within a system that has formed our thinking: How could we think our way out of this technological age if we have already been shaped by that age’s way of being in the world? If the technological system is as totalising and has so powerfully shaped the minds of people within the society in the way that Heidegger suggests it would seem almost impossible to think beyond or around the system and therefore break out of it.  

Secondly, there is the problem of using technological thinking to solve the problem of technological thinking. This second point is a natural extension of the first: within a technological society it will feel most natural to devise a series of techniques or methods that could be used to set people free from the technological age but, because they are techniques, they would do nothing more than reinforce the problems of technological thinking. Or to put it another way, we need a new way of thinking and being in the world that does not lead to just another method. A method or technique is simply a technology of self-transformation and therefore keeps us entrapped within the technological essence. Self-help books are the most obvious example of this. As Brian Brock says, “What must at all costs be avoided is trying to meet the problems raised by technological thinking using yet another technological or formalist decision-making method. The problem of technology lies in its addiction to methods of thinking and perceiving.”

Heidegger’s solution to the problem of Gestell is to invite members of a technological society to live open-handedly rather than grasping at the natural world. 

Heidegger’s proposed solution to Gestell lies in another German word: Gelassenheit. If Gestell was about ‘positionality’, or ‘enframing’ then Gelassenheit refers to ‘releasement’, ‘tranquility’ or ‘letting things be for themselves’.  

Heidegger develops the term Gelassenheit by inviting his readers to reject the desire that they find within themselves to force the natural world to conform to their needs. Secondly, and similarly, to invite the world around to present itself to the person rather than for the person. Heidegger’s solution to the problem of Gestell is to invite members of a technological society to live open-handedly rather than grasping at the natural world. The solution he offers is at the level of desire rather than activity. This is Heidegger’s only option given the diagnosis, if he were to offer a step-by-step solution to the problem of Gestell or a set of activities he would only be enframing the problem of enframing: one cannot use techniques to solve the problems of a technological age.  

As Christopher Merwin says, “Heidegger’s account of releasement is neither a wholly active not, a wholly passive disposition… Heidegger is not a Neo-Luddite, and he does not think we can or should entirely abandon technology. Gelassenheit is not meant to overcome technology, but to place in check the tendency of technology to render everything into an object for use and production… Gelassenheit releases us from the danger of technology and opens us to alternative ways of relating to reality.”

Social media turns the human beings who use it into the content that it sells, we have become the resource that the machine is mining. 

As a Christian and a priest in the Church of England, there is a lot about Heidegger’s analysis of our technological age that I find very compelling. I instinctively resonate with his existential description of the essence of modern technology as Gestell. When I observe my own habits, and when I listen to the stories of my parishioners, I see example after example of the technology in our lives training our sensibilities to treat the natural world as nothing more than a resource to be plundered for our needs and pleasures.  

I think Heidegger’s concept of Gestell gives a real insight into why we are so far failing to curb our use of fossil fuels despite the near universal consensus that it would be a good and right thing to do. As a society, we have become conditioned to see nature as nothing more than a source of fuel to be harnessed. Our societal addiction to hydrocarbons begins with the assumption that oil is there for our use. It is only the Gestell mindset of a technological age which would make that assumption: oil isn’t there to be for itself but is instead positioned within the inventory as a useful and therefore valuable commodity to be harvested and deployed.  

Beyond the natural resources of the creation within which we live, I see Heidegger’s analysis of Gestell at work in the attitudes of people to one another. It is becoming increasingly hard not to treat other human beings as nothing more than resources to be used or discarded depending on whether they fulfil their purpose or not. The ‘intention’ of the social media algorithm (obviously, this is an anthropomorphism: algorithms don’t have intentions) is to turn each of its users into content creators. We are encouraged to post, like, and share and we often fail to notice that the content we are ‘creating’ is ourselves. Social media turns the human beings who use it into the content that it sells, we have become the resource that the machine is mining. And while social media provides a stark example of human beings becoming little more than resources to be harvested, the effects of this technological mindset are not restricted to the virtual environment. When I fail to notice to person across the counter from me in the coffee shop, or the Uber driver, or the sales assistant, I am slipping into the Gestell mindset which characterises the problem of technology. 

While I think Heidegger articulates the problem of technology more clearly and insightfully than almost anyone else in the modern era, I think his solution would benefit from deeper reflection on the Christian tradition. 

Here we find a person through whom our minds can be transformed, who can set us free from the patterns of thinking of this world, who can reshape our desires. 

Firstly, within the Christian tradition, there has long been the recognition of competing forces of discipleship. In the Christian worldview there is no neutral space of existence, our attitudes and desires are always being trained by one thing or another. In his letter to the church in Rome, Paul puts it like this: “Do not be conformed to this world, but be transformed by the renewing of your minds, so that you may discern what is the good and acceptable and perfect will of God.”  Paul tells us that ‘the world’, or in our case ‘the essence of modern technology’, is constantly pulling our thinking into conformity with it. But Paul goes on to point us to something that Heidegger cannot, the voice from outside the system. In the face of a totalising and all-compassing technological society which en-fames everything as a resource waiting to be used, Heidegger’s encouragement is Gelassenheit, to, by force of will, release yourself and the world from the drive to Gestell. Heidegger has no other hope than the willpower of the individual to liberate themselves from the system because he has no other site of hope, nothing outside the system. Paul on the other hand points us to God. A source of transformation and life that is not conformed to the world and is not dependent on the world for existence but nevertheless, by an act of grace, has chosen to reveal himself within the world for the sake of the world. Here we find a person through whom our minds can be transformed, who can set us free from the patterns of thinking of this world, who can reshape our desires. This is the gift of prayer, a space to be and to allow God and the world to be. For many Christians, the experience of prayer is that through sheer inactivity and silence, they are (slowly, sometimes imperceptibly) transformed.  

However, Heidegger alerted us to a significant difficulty in finding our way out of the technological mindset. Am I suggesting that we turn God into a method for transforming our minds so that we might escape the pitfalls of modern technological thinking? I hope not. While it is certainly possible to attempt to turn prayer into a technique for getting God to give you what you want, that is not what I’m suggesting here. I’m aiming instead of the sort of prayer that Mother Teresa famously described when she was once asked in an interview, "What do you say when you pray?" She replied, "Nothing, I just listen." The reporter then asked, "Well then, what does God say to you?" To which she answered: "Nothing much, He listens too."