Essay
AI
Culture
13 min read

Machines and their ghosts

What impacts has artificial intelligence had on society, past, present and future? Simon Cross explores just where have our machines got us.

Simon Cross researches ethical aspects of technology and advises on the Church’s of England's policy and legislative activity in these areas.

A complex of linear and metal parts in a machine-like sculpture.
Machine complexity, in sculptural form.
Ruth Hartnup, CC BY 2.0, via Wikimedia Commons.

But Humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine. 

E. M. Forster

Human cosmology has changed over the millennia. Not only from the heliocentric to the relativistic but also from organic to mechanistic. Our success in deconstructing nature and exploiting those discoveries to construct ever more capable machines now persuades many that the soul is illusory and the universe made only of physical objects reconfigurable in new and novel ways according to particular mathematical relationships. And yet. And yet the debate about our latest machines, about intelligence, and about the mysterious ghost of human consciousness – let alone soul - continues unresolved across the ages.  

The ghost in the AI machines of the past

The journey from Charles Babbage’s unfinished analytical engines to Elon Musk’s complete business empire of rockets, robot-cars and social media rants is familiar to many. Karel Čapek drew on the Slavonic Orthodox word for servitude or serfdom when he baptised the word robot in his 1920 play, R.U.R., or Rossum’s Universal Robots. Čapek’s machines eventually gained a soul but only in the final act of the play. While the term artificial intelligence (AI) is attributed to a gathering at Dartmouth College in New Hampshire, it was Alan Turing who successfully conceptualised how to fabricate robots like those of Čapek’s imagination. Turing neatly sidestepped the pesky question of whether such ‘universal Turing machines’ need human-like consciousness (let alone a soul) in a famous 1950 thought experiment posterity simply calls the Turing Test.  

The invention of finely controlled micro-processors and their ever tighter transcription onto silicon chips enabled the architecture of increasingly complex algorithmic mathematical operation. After which came operating systems with simple and accessible user interfaces and programmes exploiting a prolific increase in speed and memory. So too the invention by Tim Berners-Lee of an internet with open protocols that, via Mosaic and its browser progeny, has become the operational backbone of the world wide web. All are tales already familiar or easily told using a now ubiquitous search engine. 

A main feature of the past twenty years has been the network effect. This has concentrated power in a handful of companies, initially the FAANGs (Facebook, Apple, Amazon, Netflix and Google) but now too their Chinese counterparts Tencent and ByteDance (owners of TikTok). A European counterpart is conspicuously absent. 

The Ghost in the AI machines of the present

More recently still, advances in types of machine learning and the invention of a new suite of tools called 'transformers' has given rise to AI that increasingly resembles its human creators in one task or another even if the furore over Brad Lemoyne and Googles’ LaMDA (Language Model for Dialogue Applications) proves the relationship between intelligence, artifice, and consciousness remains deeply contested.  

The metaphysical nature of artificial consciousness notwithstanding, it is also worth reflecting, however, on what these machines may be doing to our souls – metaphorical or otherwise. Where have our machines got us?

Two features define the technological landscape of today: data and prediction. Exactly how those ingredients combine depends on the machine in view. 

Satellite around earth
AI helps interpretate atmospheric data into weather forecasts. While below, the Internet itself now accounts for around 2% of carbon emissions. IMAGE CREDIT: ESA–J. Huart, CC BY-SA IGO 3.0

Some of our machines are focussed on the external world. Data gathering, its interpretation and use for prediction underpin a whole suite of tasks from geophysical remote sensing to weather forecasting and predicting real-time energy demand; to medical image interpretation for diagnosis; to monitoring and managing replacement life cycles of critical infrastructure. Not forgetting that the internet itself now accounts for around 2% of annual global emissions.

But many of our machines are focussed on the internal: the mental and psychological world of human being. In the machines of entertainment and social media, data and prediction serve a mundane but vital goal of securing our attention to facilitate advertising. Every user of the web is simultaneously subject and object, exposed to adverts and tailored content (though how tailored it really is, is moot according to some recent research from Mozilla showing that user controls have little effect on which videos YouTube’s influential AI recommends). We are concurrently enmeshed in a secondary and highly sophisticated real-time bidding market that captures trades and parses data about us every time we connect to the web. Shoshanna Zuboff calls it surveillance capitalism.  

Ever find it tough to stop doomscrolling or to put your own portable machine down for very long? That’s partly because constant experimentation identifies the best type of presentation, not just content, to captivate you most personally. But when it comes to corralling attention, data, prediction, and seductive design aren’t the only options. Friction makes signing up easy but quitting difficult by design, while dark patterns add subliminal twists like ambiguously labelled toggles and countdown clocks that nudge us toward actions that favour the product or service provider. Herbert Simon calls it all the attention economy. 

Yet human souls being what they are, anger, argument and scandal are good for business. 

Social media companies are, for reasons buried in the history of American legislation, free from any regulatory responsibility for the content they carry. Yet human souls being what they are, anger, argument and scandal are good for business.  Clickbait arose because algorithms tuning us to surrender our attention neither know nor care how they succeed, which often means a drift towards more extreme content with every run of the autoplay function that is set to on by default and by design.  

Our design and use of these machines thus reflects the state of our collective souls.

The large data sets many of these machines feed off contain societal structures and values implicitly. This only becomes clear when careless labelling and/or processing at the statistical scale perpetuate rather than correct for biases and unjust social structures embedded in the data. Some of our machines inadvertently crystalize inequity, perpetuating harms to society by cementing social and financial exclusion, or through racially biased facial recognition, or predictive policing algorithms

Our design and use of these machines thus reflects the state of our collective souls, sometimes for good but sometimes for evil. 

Legislation to address such varied challenges and mitigate some of the harms is now in train in Europe and the UK, and also promised in America. But there is much ground to make up. And the tragic suicide of teenager Molly Russell shows how ineffective protection, especially from the machinery of social media, is for the children of today with unpredictable consequences for society’s future.  

Damaged souls indeed. 

Much has also been made of an imminent Web3 and associated metaverse. On the evidence to date, however, this is more akin to a virtual goldrush in which virtual land and activity thereon can be monetised with the largest profits promised to the first generation of settlers. Claims are staked using NFTs (non-fungible tokens) bought with crypto currencies and deposited on the blockchain. Molly White shows just how soulless much of this new, and alarmingly wild, west really is.  

Investing tens of billions of dollars per year in the metaverse or a single product like Alexa might signal the scale of rewards just around the now virtual corner. But history may equally decide this is an era of malinvestment by a global 1% awash with cheap, quantitatively eased capital and, if not ‘#FOMO’, at least insufficient institutional memory of financial bubbles of yore. Yet even ‘Big Tech’s biggest corporate behemoths are now enduring the chill winds of a tech unicorn winter almost as intense as the one afflicting crypto land.  

Machines with Souls? A ghostly forecast of what lies ahead

Forster’s The Machine Stops envisages a dystopian future where society is unable to maintain the machinery on which it has become dependent. His intuition that the new airships of his own day portended a key infrastructure of the future illustrates the hazards of future-casting. Some nascent technologies fail to live up to the hype (ahem…blockchain and driverless cars, anyone?) and artificial general intelligence (AGI) seems forever destined to be just a few more years “perhaps a decade”  away, although Elon Musk has yet to accept Gary Marcuse’s bet on that timeline. 

So let me venture two more modest but still speculative predictions; one positive and one problematic.  

Positively, the years ahead promise much increase in human augmentation of many kinds. A range of health and medical benefits are now in view, from efficiency gains in healthcare provision and design of medication at molecular level to bespoke pharmacological prescription based on individualised biological markers. Expect more wearable tech to supplement smartwatches.  

Some anticipate an overarching machine of almost Forsteresque proportions via the internet of things (IoT) although political and economic battles over device interoperability and security will, I think, garner increasing public attention and debate in due course.  

Augmented reality will substantially improve safety, , and will shift many enhancements from screen to full field of view with additional benefits for road users and pedestrians alike.  

Increasingly sophisticated geospatial sensing and data processing will enhance our understanding of the climate and biosphere emergencies and how successful various remedial steps prove. New technologies may radically reprice the costs of decarbonisation and unlock energy solutions that remain, as Babbage’s first difference engine was in his own day, the stuff of contemporary dreams. 

 This may be the first industrial revolution to be a net eliminator of jobs, although whether that promises to be good news is moot because navigating the consequences would be deeply challenging both socially and politically. Most of all, I anticipate a proliferation of new technologies and machines over the next few decades that will bolster and complete the reuse and recycle portions of a genuinely circular economy, together with an increasing emphasis on finite planetary budgets.  

We are on the cusp of a new and novel post-McLuhan era.

Now the problematic development. Top of the list is our newest and hottest ability: to mimetically recreate the surface view of reality using language itself. There are, it seems to me, profound risks posed by the very latest tools of natural language processing like Google’s LaMDA, Microsoft’s ChatGPT and Meta’s Galactica and Cicero.  

The Web to date has been an epistemological wonder. Knowledge has, of course, always been socially embedded. Wikipedia provides an enormous open-access repository of socially agreed knowledge. The discussion pages associated with any article can be hotbeds of debate but the active role of human editors in moderating and agreeing what counts as factual knowledge is both intrinsic and essential to the role that Wikipedia plays in informing and maintaining a flourishing society.  

Marshall McLuhan famously asserted that “the medium is the message”. But now we are on the cusp of a new and novel post-McLuhan era where the machine literally and autonomously manufactures the words and messages it then also mediates, doing both at super-human speed. This new generative AI machinery for reconfiguring words and images carries many consequences some of which are difficult to predict and some of which may be profoundly negative. Just read these headlines. From CNN: These artists found out their work was used to train AI.Now they’re furious. And, from Forbes: Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots.

Beyond dreams of electric sheep – AI hallucinates

Babbage's Difference Engine no. 1 was conceived to save the government money by preventing the mistakes that almost always crept into tables calculated or copied by hand. But these ultra-modern machines don’t just calculate or copy, they probabilistically infer - which does not necessarily lead to the best explanation. In fact, it does not always lead to possible explanation. Large language models (LLMs) like LaMDA, ChatGPT and Galactica ‘hallucinate’, transitioning seamlessly (though unpredictably, from our perspective) from predicting words and strings in ways that match the actual world, to predicting words and strings that portray an unreal world.  

Why does such hallucination happen? The crucial distinction is that human knowledge is consciously and not just socially embedded. But our new machines do not reason the way we do; cannot reason the way we do. As Erik Larson argues persuasively in The Myth of Artificial Intelligence, abductive reasoning of the kind Charles Sanders Pierce outlines, and inference to best explanation, are not yet in the realm of the suite of techniques gathered anywhere under the rubric of the ‘AI’ these machines practise. 

The consequences can be amusing, but experimentation also shows how difficult these models are to defend against deliberate manipulation by so-called ‘prompt injection’ and the online world is packed to the rafters with bad actors, whether individual or state, enthusiastic to get their hands on a machine that will opaquely mix real-world information with hallucination and then use it to quickly produce and instantly distribute misinformation at the touch of a button. Imagine, for example, an AI generated paper that includes a real scientist but cites and then summarises a paper she never actually wrote. Or imagine an AI that presents a stylistically convincing case for the benefits of consuming ground glass because it ‘knows’ about dietary silica. You don’t need to. Its already here: Meta Galactica AI Model Suspended After Problems.

Powerful and captivating machines are being let loose with no regulatory guardrails.

I worry that we are about to envelope ourselves in an epistemic fog; a veritable pea souper in which navigation becomes permanently difficult and increasingly dangerous. I hope I’m wrong, but ChatGPT hit a million users within a week of being introduced and these powerful and captivating machines are being let loose with no regulatory guardrails to stop their creators or help their users from straying into dangerous territory; no independent oversight; and little to no precautionary principle being exercised by the creators and masters of these mimetic machines. 

Perhaps it sounds dramatic but I believe this new generative form of AI is going to transform digitally entangled societies like ours profoundly.  

A final prediction, therefore. A prediction about how such societies, increasingly dependent on the kinds of machine envisaged by Forster or Čapek, will have to adapt and adjust if we are to avoid machine mediated myopia

Seeing through the fog

Besides the aforementioned and urgently needed regulatory guardrails, I foresee two other responses that will help societies cope with this rapidly enveloping epistemic fog. First stronger tools for transparency and verification. Secondly, better education for digital literacy and digital habits that protect and enhance a healthy soul. 

First, then, transparency and verification. The EU’s new AI Bill will require companies to notify users whenever they interact with an artificial agent. Between the technology of deepfakes and game playing bots like Meta’s Cicero, we have already surpassed the Turing test in increasingly broad areas of human machine interaction. But I anticipate a further shift in emphasis from ‘explainability’ - how any algorithm works per se - toward transparency – how it impacts and influences both individual users and society emergently. We need more publicly accessible evaluation of the holistic if unintended effects of our machines even now. That need is only going to grow.  

The fundamental question of transparency “who, or what is really in view here?” is going to take centre stage. 

One consequence may well be an increasingly fraught battle between, on the one hand, commercial intellectual property (IP) rights, and, on the other, individual rights and the common good. With the notable exception of sites like Wikipedia society has so far struggled painfully and inconsistently with the challenges of effective content moderation – especially where values rather than empirical facts are concerned. Until now, and to pick just one example; Facebook’s secretive behaviour and cherry picked transparency metrics have wilfully kept both customers and regulators in the dark. The idea that we can mechanise or automate by outsourcing intrinsically value-laden problems to algorithms, however mimetic the surface results, is patently utopian. Continuing to withhold evidence of biases and harms from generative deepfakery using AI can only invite a steeper descent towards dystopia. And as generative AI combines with increasingly convincing deepfake technology to fool every human sense the fundamental question of transparency “who, or what is really in view here?” is going to take centre stage with increasing importance.  

A veracity FAQ

Veracity will take on increasing scope as well as importance. Soon not just the ‘facts’ of a matter but equally basic questions like “who (or what?) is saying this?”, “why is this being said?” and “what are the consequences (holistically) of saying this?” will become central to deciding “is this true?” We are now in a situation where truth and fiction can be opaquely intermixed by machines autonomously at a pace and a scale, but also at a quality, that will overwhelm any fact-checking of the kind we deploy now. Proving our identity - including the basic fact that we are human, and protecting ourselves not merely from susceptibly to fakes but being faked will become increasingly important and will therefore become central tasks of the next web.   

Clearly there is a role for government here; a need for clear regulation, strong inspection and enforcement mechanisms, and an effective precautionary principle that ensures new techniques and new machines are only let loose in ways that have proven demonstrably safe. There will a role too for (new?) trustworthy bodies and institutions as fact-checkers and as repositories of verified content. New institutions as well as new technologies like https://datatrusts.uk/ are a helpful early response. 

Lastly, new demands and new digital habits will be needed by each one of us. The ancients associated a healthy soul with good habits but we are still at a formative stage of learning – and teaching one another – even healthy digital etiquette, let alone the digital habits and behaviours to keep humans safe and able to thrive as fully rounded souls navigating a world created for us by powerfully mimetic but deceptively soulless machinery. 

It won’t be easy. As Forster and others perceptively show, the machinery of modern life invites our souls towards decadence. Self-control is not in vogue. But the ancients have long associated the good life with cultivating character; with generosity, moderation, and self-less-ness as the only route to becoming truly whole. 

Article
Culture
Film & TV
Monsters
Weirdness
Zombies
5 min read

Zombies: a philosopher's guide to the purpose-driven undead

Don’t dismiss zombiecore as lowbrow.

Ryan is the author of A Guidebook to Monsters: Philosophy, Religion, and the Paranormal.

A regency woman dabs her mouth with a bloody hankerchief.
Lilly James in Pride and Prejudice and Zombies.
Lionsgate.

Writing from his new book, A Guidebook to Monsters, Ryan Stark delves into humanity’s fascination for all things monsterous. In the second of a two-part series, he asks what and where zombies remind us of, and why they caught the eyes of C.S. Lewis and Salvador Dali 

 

On how Frankenstein’s monster came to life nobody knows for sure, but he is more urbane than zombies tend to be. Nor do Jewish golems and Frosty the Snowman count as zombiecore. The latter sings too much, and both are wrongly formulated. Frosty comes from snow, obviously, and the golems—from mere loam, not what the Renaissance playwrights call “gilded loam,” that is, already pre-assembled bodies, which is a zombie requirement. Tolkien’s orcs function likewise as golem-esque monsters, cast from miry clay and then enlivened by the grim magic of Mordor. We do not, for instance, discover scenes with orc children. 

And neither is Pinocchio a zombie, nor Pris from Blade Runner, but dolls, automatons, and C3POs border upon the land of zombies insofar as they all carry a non-human tint. Zombies, however, carry something else as well, a history of personhood, and so in their present form appear as macabre parodies of the human condition writ large. They are gruesome undead doppelgangers, reminding us of who we are not and perhaps—too—of where we are not. Hell is a place prepared for the Devil and his angels, Christ tells us in the book of Matthew. And maybe, subsequently, for zombies. 

Kolchak, in an episode of Kolchak: The Night Stalker aptly titled “The Zombie,” correctly discerns the grim scenario at hand: “He, sir, is from Hell itself!”  

C.S. Lewis pursues a similar line of thinking in The Problem of Pain: “You will remember that in the parable, the saved go to a place prepared for them, while the damned go to a place never made for men at all. To enter Heaven is to become more human than you ever succeeded in being on earth; to enter Hell is to be banished from humanity. What is cast (or casts itself) into Hell is not a man: it is ‘remains.’” Lewis makes an intriguing point, which has as its crescendo the now-famous line about the doors of Hell: “I willingly believe that the damned are, in one sense, successful, rebels to the end; that the doors of Hell are locked on the inside by zombies.” I added that last part about zombies. 

I make this point—in part—to correct those in the cognoscenti who dismiss zombies as a subject too lowbrow for serious consideration.

Not everyone believes in Hell, of course, yet most concede that some people behave worse than others, which also helps our cause. Indeed, part of zombiecore’s wisdom is to show that bad people often produce more horror than the zombies themselves. Such is the character of Legendre Murder, a case in point from the film White Zombie. Not fortunate in name, Mr. Murder runs a dark satanic mill populated by hordes of zombie workers, which is the film’s heavy-handed critique of sociopathic industrialization. The truth to be gleaned, here, is that zombies did not invent the multinational corporation; rather, they fell prey to it. 

We might think, too, of Herman Melville’s dehumanized characters from Bartleby the Scrivener: Nippers, Turkey, Ginger Nut, and the other functionaries whose nicknames themselves indicate the functions. From an economic standpoint, their value becomes a matter of utility, not essence, which is Melville’s reproach of the despairingly corporate drive to objectify personhood—of which zombies are an example beyond the pale. They might as well be fleshy mannequins, in fact, and as such provide the perfect foil for the human being properly conceived. 

Here, then, is why we do not blame zombies for eating brains, nor do we hold them accountable for wearing white pants after Labor Day, as some inevitably do. They cannot help it—in ethics and in fashion. Perhaps especially in fashion. The best we can hope for in the realm of zombie couture is Solomon Grundy, the quasi-zombie supervillain who holds up his frayed pants with a frayed rope, a fashion victory to be sure, however small it might be, though “zombie fashion” is a misnomer in the final analysis. They wear clothes, but not for the same reasons we do. 

The point holds true for Salvador Dali’s zombies as well, most of whom find themselves in nice dresses. I make this point—in part—to correct those in the cognoscenti who dismiss zombies as a subject too lowbrow for serious consideration. Not so. Exhibit A: the avant-garde Dali, darling of the highbrow, or at least still of the middlebrow, now that his paintings appear on t-shirts and coffee mugs. Burning giraffe. Mirage. Woman with Head of Roses. All zombies, too ramshackle and emaciated to live, never mind the missing head on the last one, and yet there they are posed for the leering eye, not unlike those heroin-chic supermodels from Vogue magazine in the late 1990s. Necrophilia never looked so stylish. 

The zombie’s gloomy predicament bears a striking resemblance to that of the Danaids in the classical underworld, those sisters condemned to fill a sieve with water for all eternity...

But never let it be said that zombies are lazy. They are tired, to be sure. Their ragged countenances tell us this, but they are not indolent. Zombies live purpose-driven undead lives. They want to eat brains, or any human flesh, depending on the mythos, and their calendars are organized accordingly. No naps. No swimming lessons. Just brains.  

But we quickly discern that no amount of flesh will satisfy. There is always one more hapless minimart clerk to ambush, one more sorority girl in bunny slippers to chase down the corridor. In this way, the zombie’s gloomy predicament bears a striking resemblance to that of the Danaids in the classical underworld, those sisters condemned to fill a sieve with water for all eternity, an emblem of the perverse appetite unchecked, which has at its core the irony of insatiable hunger. And as the pleasure becomes less and less, the craving becomes more and more. The law of diminishing returns. So, it is with all vices. The love of money demands more money, and the love of brains, more brains. 

And so, in conclusion, a prayer. God bless the obsessive-compulsive internet shoppers, the warehouse workers on unnecessarily tight schedules, and the machine-like managers of the big data algorithms. God bless the students who sedate themselves in order to survive their own educations, taking standardized test after standardized test. And God bless the Emily Griersons of the world, who keep their petrified-boyfriend corpses near them in the bedroom, an emblem of what happens when one tries too mightily to hold on to the past. And God help us, too, when we see in our own reflections a zombie-like affectation, the abyss who stares back at us and falsely claims that we are not the righteousness of God, as Paul says we are in 2 Corinthians. And, finally, Godspeed to Gussie Fink-Nottle from the P.G. Wodehouse sagas: “Many an experienced undertaker would have been deceived by his appearance, and started embalming on sight.”  

  

From A Guidebook to Monsters, Ryan J. Stark.  Used by permission of Wipf and Stock Publishers.