
Machines and their ghosts

What impact has artificial intelligence had on society, past, present and future? Simon Cross explores just where our machines have got us.

Simon Cross researches ethical aspects of technology and advises on the Church of England's policy and legislative activity in these areas.

Machine complexity, in sculptural form. Ruth Hartnup, CC BY 2.0, via Wikimedia Commons.

But Humanity, in its desire for comfort, had over-reached itself. It had exploited the riches of nature too far. Quietly and complacently, it was sinking into decadence, and progress had come to mean the progress of the Machine. 

E. M. Forster, The Machine Stops

Human cosmology has changed over the millennia: not only from the heliocentric to the relativistic but also from the organic to the mechanistic. Our success in deconstructing nature, and in exploiting those discoveries to construct ever more capable machines, now persuades many that the soul is illusory and the universe made only of physical objects, reconfigurable in novel ways according to particular mathematical relationships. And yet. And yet the debate about our latest machines, about intelligence, and about the mysterious ghost of human consciousness – let alone soul – continues unresolved across the ages.

The ghost in the AI machines of the past

The journey from Charles Babbage’s unfinished analytical engines to Elon Musk’s complete business empire of rockets, robot-cars and social media rants is familiar to many. Karel Čapek drew on the Slavonic word for servitude or serfdom when he baptised the word robot in his 1920 play R.U.R., or Rossum’s Universal Robots. Čapek’s machines eventually gained a soul, but only in the final act of the play. And while the term artificial intelligence (AI) is attributed to a 1956 gathering at Dartmouth College in New Hampshire, it was Alan Turing who successfully conceptualised how to fabricate robots like those of Čapek’s imagination. Turing neatly sidestepped the pesky question of whether such ‘universal Turing machines’ need human-like consciousness (let alone a soul) in a famous 1950 thought experiment posterity simply calls the Turing Test.

The invention of finely controlled microprocessors, and their ever tighter transcription onto silicon chips, enabled the architecture of increasingly complex algorithmic mathematical operations. After these came operating systems with simple and accessible user interfaces, and programmes exploiting a prolific increase in speed and memory. So too the invention by Tim Berners-Lee of the world wide web, whose open protocols, via Mosaic and its browser progeny, have made the internet the operational backbone of modern life. All are tales already familiar or easily told using a now ubiquitous search engine.

A main feature of the past twenty years has been the network effect. This has concentrated power in a handful of companies: initially the FAANGs (Facebook, Apple, Amazon, Netflix and Google), but now also their Chinese counterparts Tencent and ByteDance (owner of TikTok). A European counterpart is conspicuously absent.

The ghost in the AI machines of the present

More recently still, advances in machine learning and the invention of a new suite of tools called 'transformers' have given rise to AI that increasingly resembles its human creators in one task or another, even if the furore over Blake Lemoine and Google’s LaMDA (Language Model for Dialogue Applications) shows that the relationship between intelligence, artifice and consciousness remains deeply contested.

The metaphysical nature of artificial consciousness notwithstanding, it is worth reflecting on what these machines may be doing to our souls – metaphorical or otherwise. Where have our machines got us?

Two features define the technological landscape of today: data and prediction. Exactly how those ingredients combine depends on the machine in view. 

AI helps interpret atmospheric data into weather forecasts, while the internet itself now accounts for around 2% of carbon emissions. IMAGE CREDIT: ESA–J. Huart, CC BY-SA IGO 3.0

Some of our machines are focussed on the external world. Data gathering, its interpretation and its use for prediction underpin a whole suite of tasks: from geophysical remote sensing, weather forecasting and predicting real-time energy demand, to medical image interpretation for diagnosis, to monitoring and managing replacement life cycles of critical infrastructure. Not forgetting that the internet itself now accounts for around 2% of annual global emissions.

But many of our machines are focussed on the internal: the mental and psychological world of human beings. In the machines of entertainment and social media, data and prediction serve a mundane but vital goal: securing our attention to facilitate advertising. Every user of the web is simultaneously subject and object, exposed to adverts and tailored content (though how tailored it really is remains moot: recent research from Mozilla shows that user controls have little effect on which videos YouTube’s influential AI recommends). We are concurrently enmeshed in a secondary and highly sophisticated real-time bidding market that captures trades and parses data about us every time we connect to the web. Shoshana Zuboff calls it surveillance capitalism.

Ever find it tough to stop doomscrolling or to put your own portable machine down for very long? That’s partly because constant experimentation identifies the type of presentation, not just the content, that most captivates you personally. But when it comes to corralling attention, data, prediction and seductive design aren’t the only options. Asymmetric friction makes signing up easy but quitting difficult by design, while dark patterns add subliminal twists – ambiguously labelled toggles, countdown clocks – that nudge us toward actions favouring the product or service provider. Herbert Simon called it all the attention economy.

Yet human souls being what they are, anger, argument and scandal are good for business. 

Social media companies are, for reasons buried in the history of American legislation, free from any regulatory responsibility for the content they carry. Yet human souls being what they are, anger, argument and scandal are good for business. Clickbait arose because the algorithms tuned to make us surrender our attention neither know nor care how they succeed – which often means a drift towards ever more extreme content with every run of autoplay, a function set to on by default and by design.

Our design and use of these machines thus reflects the state of our collective souls.

The large data sets many of these machines feed off implicitly contain societal structures and values. This only becomes clear when careless labelling and/or processing at the statistical scale perpetuates rather than corrects for the biases and unjust social structures embedded in the data. Some of our machines inadvertently crystallise inequity, perpetuating harms to society by cementing social and financial exclusion, through racially biased facial recognition, or through predictive policing algorithms.

Our design and use of these machines thus reflects the state of our collective souls, sometimes for good but sometimes for evil. 

Legislation to address such varied challenges and mitigate some of the harms is now in train in Europe and the UK, and is also promised in America. But there is much ground to make up. And the tragic suicide of teenager Molly Russell shows how ineffective protection from the machinery of social media remains for the children of today, with unpredictable consequences for society’s future.

Damaged souls indeed. 

Much has also been made of an imminent Web3 and an associated metaverse. On the evidence to date, however, this is more akin to a virtual goldrush in which virtual land, and activity thereon, can be monetised, with the largest profits promised to the first generation of settlers. Claims are staked using NFTs (non-fungible tokens) bought with cryptocurrencies and deposited on the blockchain. Molly White shows just how soulless much of this new, and alarmingly wild, west really is.

Investing tens of billions of dollars per year in the metaverse or a single product like Alexa might signal the scale of rewards just around the now-virtual corner. But history may equally decide this is an era of malinvestment by a global 1% awash with cheap, quantitatively eased capital and, if not ‘#FOMO’, at least insufficient institutional memory of financial bubbles of yore. Yet even Big Tech’s biggest corporate behemoths are now enduring the chill winds of a tech unicorn winter almost as intense as the one afflicting crypto land.

Machines with Souls? A ghostly forecast of what lies ahead

Forster’s The Machine Stops envisages a dystopian future in which society is unable to maintain the machinery on which it has become dependent. His intuition that the new airships of his own day portended a key infrastructure of the future illustrates the hazards of future-casting. Some nascent technologies fail to live up to the hype (ahem… blockchain and driverless cars, anyone?) and artificial general intelligence (AGI) seems forever destined to be just a few more years, “perhaps a decade”, away – although Elon Musk has yet to accept Gary Marcus’s bet on that timeline.

So let me venture two more modest but still speculative predictions: one positive and one problematic.

Positively, the years ahead promise a great increase in human augmentation of many kinds. A range of health and medical benefits is now in view, from efficiency gains in healthcare provision and the design of medication at the molecular level to bespoke pharmacological prescription based on individualised biological markers. Expect more wearable tech to supplement smartwatches.

Some anticipate an overarching machine of almost Forsteresque proportions via the internet of things (IoT), although political and economic battles over device interoperability and security will, I think, garner increasing public attention and debate in due course.

Augmented reality will substantially improve safety, and will shift many enhancements from screen to full field of view, with additional benefits for road users and pedestrians alike.

Increasingly sophisticated geospatial sensing and data processing will enhance our understanding of the climate and biosphere emergencies and how successful various remedial steps prove. New technologies may radically reprice the costs of decarbonisation and unlock energy solutions that remain, as Babbage’s first difference engine was in his own day, the stuff of contemporary dreams. 

This may be the first industrial revolution to be a net eliminator of jobs, although whether that is good news is moot: navigating the consequences would be deeply challenging both socially and politically. Most of all, I anticipate a proliferation of new technologies and machines over the next few decades that will bolster and complete the reuse and recycle portions of a genuinely circular economy, together with an increasing emphasis on finite planetary budgets.

We are on the cusp of a new and novel post-McLuhan era.

Now for the problematic development. Top of the list is our newest and hottest ability: to mimetically recreate the surface view of reality using language itself. There are, it seems to me, profound risks posed by the very latest tools of natural language processing, such as Google’s LaMDA, OpenAI’s ChatGPT and Meta’s Galactica and Cicero.

The Web to date has been an epistemological wonder. Knowledge has, of course, always been socially embedded, and Wikipedia provides an enormous open-access repository of socially agreed knowledge. The discussion pages associated with any article can be hotbeds of debate, but the active role of human editors in moderating and agreeing what counts as factual knowledge is both intrinsic and essential to the role Wikipedia plays in informing and maintaining a flourishing society.

Marshall McLuhan famously asserted that “the medium is the message”. But now we are on the cusp of a new and novel post-McLuhan era in which the machine literally and autonomously manufactures the words and messages it then also mediates, doing both at super-human speed. This new generative AI machinery for reconfiguring words and images carries many consequences, some of which are difficult to predict and some of which may be profoundly negative. Just read these headlines. From CNN: “These artists found out their work was used to train AI. Now they’re furious”. And from Forbes: “Armed With ChatGPT, Cybercriminals Build Malware And Plot Fake Girl Bots”.

Beyond dreams of electric sheep – AI hallucinates

Babbage’s Difference Engine No. 1 was conceived to save the government money by preventing the mistakes that almost always crept into tables calculated or copied by hand. But these ultra-modern machines don’t just calculate or copy; they probabilistically infer – which does not necessarily lead to the best explanation. In fact, it does not always lead to a possible explanation. Large language models (LLMs) like LaMDA, ChatGPT and Galactica ‘hallucinate’, transitioning seamlessly (though, from our perspective, unpredictably) from predicting words and strings in ways that match the actual world to predicting words and strings that portray an unreal world.

Why does such hallucination happen? The crucial distinction is that human knowledge is consciously and not just socially embedded. But our new machines do not reason the way we do; they cannot reason the way we do. As Erik Larson argues persuasively in The Myth of Artificial Intelligence, abductive reasoning of the kind Charles Sanders Peirce outlined – inference to the best explanation – is not yet within the suite of techniques gathered under the rubric of the ‘AI’ these machines practise.

The consequences can be amusing. But experimentation also shows how difficult these models are to defend against deliberate manipulation by so-called ‘prompt injection’, and the online world is packed to the rafters with bad actors, whether individual or state, enthusiastic to get their hands on a machine that will opaquely mix real-world information with hallucination and then quickly produce and instantly distribute misinformation at the touch of a button. Imagine, for example, an AI-generated paper that includes a real scientist but cites and then summarises a paper she never actually wrote. Or imagine an AI that presents a stylistically convincing case for the benefits of consuming ground glass because it ‘knows’ about dietary silica. You don’t need to imagine. It’s already here: “Meta Galactica AI Model Suspended After Problems”.

Powerful and captivating machines are being let loose with no regulatory guardrails.

I worry that we are about to envelop ourselves in an epistemic fog; a veritable pea-souper in which navigation becomes permanently difficult and increasingly dangerous. I hope I’m wrong, but ChatGPT hit a million users within a week of its introduction, and these powerful and captivating machines are being let loose with no regulatory guardrails to stop their creators from straying into dangerous territory or to help their users avoid it; no independent oversight; and little to no precautionary principle exercised by the creators and masters of these mimetic machines.

Perhaps it sounds dramatic, but I believe this new generative form of AI is going to transform digitally entangled societies like ours profoundly.

A final prediction, therefore: a prediction about how such societies, increasingly dependent on the kinds of machine envisaged by Forster or Čapek, will have to adapt and adjust if we are to avoid machine-mediated myopia.

Seeing through the fog

Besides the aforementioned and urgently needed regulatory guardrails, I foresee two other responses that will help societies cope with this rapidly enveloping epistemic fog. First, stronger tools for transparency and verification. Second, better education for digital literacy and for digital habits that protect and enhance a healthy soul.

First, then, transparency and verification. The EU’s draft AI Act will require companies to notify users whenever they interact with an artificial agent. Between the technology of deepfakes and game-playing bots like Meta’s Cicero, we have already surpassed the Turing test in increasingly broad areas of human–machine interaction. But I anticipate a further shift in emphasis from ‘explainability’ – how any algorithm works per se – toward transparency – how it impacts and influences both individual users and, emergently, society. We need more publicly accessible evaluation of the holistic if unintended effects of our machines even now. That need is only going to grow.

The fundamental question of transparency – “who, or what, is really in view here?” – is going to take centre stage.

One consequence may well be an increasingly fraught battle between commercial intellectual property (IP) rights on the one hand and individual rights and the common good on the other. With the notable exception of sites like Wikipedia, society has so far struggled painfully and inconsistently with the challenges of effective content moderation – especially where values rather than empirical facts are concerned. To pick just one example: Facebook’s secretive behaviour and cherry-picked transparency metrics have wilfully kept both customers and regulators in the dark. The idea that we can mechanise or automate intrinsically value-laden problems by outsourcing them to algorithms, however mimetic the surface results, is patently utopian. Continuing to withhold evidence of biases and harms from generative deepfakery using AI can only invite a steeper descent towards dystopia. And as generative AI combines with increasingly convincing deepfake technology to fool every human sense, the fundamental question of transparency – “who, or what, is really in view here?” – is going to take centre stage with increasing importance.

A veracity FAQ

Veracity will take on increasing scope as well as importance. Soon not just the ‘facts’ of a matter but equally basic questions like “who (or what) is saying this?”, “why is this being said?” and “what are the consequences, holistically, of saying this?” will become central to deciding “is this true?” We are now in a situation where truth and fiction can be opaquely intermixed by machines, autonomously, at a pace and a scale – but also at a quality – that will overwhelm any fact-checking of the kind we deploy now. Proving our identity, including the basic fact that we are human, and protecting ourselves not merely from susceptibility to fakes but from being faked, will become increasingly important and will therefore become central tasks of the next web.

Clearly there is a role for government here: a need for clear regulation, strong inspection and enforcement mechanisms, and an effective precautionary principle that ensures new techniques and new machines are only let loose in ways that have been demonstrated to be safe. There will be a role too for (new?) trustworthy bodies and institutions as fact-checkers and as repositories of verified content. New institutions, as well as new technologies like https://datatrusts.uk/, are a helpful early response.

Lastly, new demands will be made of each one of us, and new digital habits will be needed. The ancients associated a healthy soul with good habits, but we are still at a formative stage of learning – and teaching one another – even healthy digital etiquette, let alone the digital habits and behaviours that keep humans safe and able to thrive as fully rounded souls navigating a world created for us by powerfully mimetic but deceptively soulless machinery.

It won’t be easy. As Forster and others perceptively show, the machinery of modern life invites our souls towards decadence. Self-control is not in vogue. But the ancients have long associated the good life with cultivating character: with generosity, moderation and selflessness as the only route to becoming truly whole.


The deceptive appeal of assisted dying changes medical practice

In Canada the moral ethos of medicine has shifted dramatically.

Ewan is a physician practising in Toronto, Canada. 

A doctor consults a tablet against the backdrop of a Canadian flag.

Once again, the UK parliament is set to debate the question of legalizing euthanasia (a traditional term for physician-assisted death). Political conditions appear to be conducive to the legalization of this technological approach to managing death. The case for assisted death appears deceptively simple—it’s about compassion, respect, empowerment, freedom from suffering. Who can oppose such positive goals? Yet, writing from Canada, I can only warn of the ways in which the embrace of physician-assisted death will fundamentally change the practice of medicine. Reflecting on the last 10 years of our experience, two themes stand out to me—pressure and self-deception.

I still remember quite distinctly the day it dawned on me that the moral ethos of medicine in Canada was shifting dramatically. Traditionally, respect for the sacredness of the patient’s life, and a corresponding absolute prohibition on deliberately causing the death of a patient, were widely seen as essential hallmarks of a virtuous physician. Suddenly, in a 180-degree ethical turn, a willingness to intentionally cause the death of a patient was seen as the hallmark of a patient-centered doctor. A willingness to cause the patient’s death was a sign of compassion and even purported self-sacrifice, in that one would put the patient’s desires and values ahead of one’s own. Those of us who continued to insist on the wrongness of deliberately causing death would now be seen as moral outliers, barriers to the well-being and dignity of our patients. We were tolerated to some extent, and mainly out of a sense of collegiality. But we were also a source of slight embarrassment. Nobody really wanted to debate the question with us; the question was settled without debate.

Yet there was no denying the way that pressure was brought to bear, in ways subtle and overt, to participate in the new assisted death regime. We humans are unavoidably moral creatures, and when we come to believe that something is good, we see ourselves and others as having an obligation to support it. We have a hard time accepting those who refuse to join us. Such was the case with assisted death. With the loudest and most strident voices in the Canadian medical profession embracing assisted death as a high and unquestioned moral good, refusal to participate in assisted death could not be fully tolerated.  

We deceive ourselves if we think that doctors have fully accepted that euthanasia is ethical when only very few are actually willing to administer it. 

Regulators in Ontario and Nova Scotia (two Canadian provinces) stipulated that physicians who were unwilling to perform the death procedure must make an effective referral to a willing “provider”. Although the Supreme Court, in its decision striking down the criminal prohibition against physician-assisted death, made it clear that no particular physician was under any obligation to provide the procedure, the regulators chose to enforce participation by way of this effective referral requirement. After all, this was the only way to normalize the new practice. Doctors don’t ordinarily refuse to refer their patients for medically necessary procedures; if assisted death was understood to be a medically necessary good, then an unwillingness to make such a referral could not be tolerated.

And this form of pressure brings us to the pattern of deception. First, it is deceptive to suggest that an effective referral to a willing provider confers no moral culpability on the referring physician for the death of the patient. Those of us who objected to referring the patient were told that, like Pilate, we could wash our hands of the patient’s death by passing them along to someone else who had the courage to do the deed. Yet the same regulators clearly prohibited referral for female genital mutilation. They therefore seemed to understand the moral responsibility attached to an effective referral. Such glaring inconsistency about the moral significance of a referral suggests that when they claimed that a referral avoided culpability for death by euthanasia, they were deceiving themselves and us.

The very need for a referral system signifies another self-deception. Doctors normally make referrals only when an assessment or procedure lies outside their technical expertise. In the case of assisted death, every physician has the requisite technical expertise to cause death. There is nothing at all complicated or difficult or specialized about assessing euthanasia eligibility criteria or the sequential administration of toxic doses of midazolam, propofol, rocuronium, and lidocaine. The fact that the vast majority of physicians are unwilling to perform this procedure entails that moral objection to participation in assisted death remains widespread in the medical profession. The referral mechanism is for physicians who are “uncomfortable” in performing the procedure; they can send the patient to someone else more comfortable. But to be comfortable in this case is to be “morally comfortable”, not “technically comfortable”. We deceive ourselves if we think that doctors have fully accepted that euthanasia is ethical when only very few are actually willing to administer it. 

We deceived ourselves into thinking that assisted death is a medical therapy for a medical problem, when in fact it is an existential therapy for a spiritual problem.

There is also self-deception with respect to the cause of death. In Canada, when a patient dies by doctor-assisted death, the person completing the death certificate is required to record the cause of death as the reason the patient requested euthanasia, not the act of euthanasia per se. This must lead to all sorts of moments of absurdity for physicians completing death certificates—do patients really die from advanced osteoarthritis (one of the many reasons patients have sought and obtained euthanasia)? I suspect that this practice is intended to shield those who perform euthanasia from any long-term legal liability should the law be reversed. But if medicine, medical progress, and medical safety are predicated on an honest acknowledgment of causes of death, then this form of self-deception should not be countenanced. We need to be honest with ourselves about why our patients die.

There has also been self-deception about whether physician-assisted death is a form of suicide. Some proponents of assisted death contend that assisted death is not an act of deliberate self-killing, but rather merely a choice over the manner and timing of one's death. It's not clear why one would try to distort language this way and deny that “physician-assisted suicide” is suicide, except perhaps to assuage conscience and minimize stigma. Perhaps we all know that suicide is never really a form of self-respect. To sustain our moral and social affirmation of physician-assisted death, we have to deny what this practice actually represents. 

There has been self-deception about the possibility of putting limits around the practice of assisted death. Early on, advocates insisted that euthanasia would be available only to those for whom death was reasonably foreseeable (to use the Canadian legal parlance). But once death comes to be viewed as a therapeutic option, the therapeutic possibilities become nearly limitless. Death was soon viewed as a therapy for severe disability or for health-related consequences of poverty and loneliness (though often poverty and loneliness are the consequence of the health issues). Soon we were talking about death as a therapy for mental illness. If beauty is in the eye of the beholder, then so is grievous and irremediable suffering. Death inevitably becomes a therapeutic option for any form of suffering. Efforts to limit the practice to certain populations (e.g. those with disabilities) are inevitably seen as paternalistic and discriminatory.

There has been self-deception about the reasons justifying legalization of assisted death. Before legalization, advocates decry the uncontrolled physical suffering associated with the dying process and claim that prohibiting assisted death dehumanizes patients and leaves them in agony. Once legalized, it rapidly becomes clear that this therapy is not for physical suffering but rather for existential suffering: the loss of autonomy, the sense of being a burden, the despair of seeing any point in going on with life. The desire for death reflects a crisis of meaning. We deceived ourselves into thinking that assisted death is a medical therapy for a medical problem, when in fact it is an existential therapy for a spiritual problem. 

We have also deceived ourselves by claiming to know whether some patients are better off dead, when in fact we have no idea what it’s like to be dead. The utilitarian calculus underpinning the logic of assisted death relies on the presumption that we can compare what it is like before we die with what it is like after we die. In general, the unstated assumption is that there is nothing after death. This is perhaps why the practice is generally promoted by atheists and opposed by theists. But in my experience, it is very rare for people to address this question explicitly. They prefer to let the question of existence beyond death lie dormant, untouched. To think that physicians qua physicians have any expertise or authority on the question of what it’s like to be dead, or that such medicine can at all comport with a scientific, evidence-based approach to medical decision-making, is a profound self-deception.

Finally, we deceive ourselves when we pretend that ending people’s lives at their voluntary request is all about respecting personal autonomy. People seek death when they can see no other way forward with life—they are subject to the constraints of their circumstances, finances, support networks, and even internal spiritual resources. We are not nearly so autonomous as we wish to think. And in the end, the patient does not choose whether to die; the doctor chooses whether the patient should die. The patient requests; the doctor decides. Recent news stories have made clear the challenge practitioners of euthanasia face in picking and choosing which of their patients should die. In Canada, you can have death, but only if your doctor agrees that your life is not worth living. However much these doctors might purport to act from compassion, one cannot help but see a connection to the Nazi physicians who labelled the unwanted as “lebensunwertes Leben”—life unworthy of life. In adopting assisted death, we cannot avoid dehumanizing ourselves. Death with dignity is a deception.

These many acts of self-deception in relation to physician-assisted death should not surprise us, for the practice is intrinsically self-deceptive. It claims to be motivated by the value of the patient; it claims to promote the dignity of the patient; it claims to respect the autonomy of the patient. In fact, it directly contravenes all three of those goods. 

It degrades the value of the patient by accepting that it doesn't matter whether or not the patient exists.  

It denies the dignity of the patient by treating the patient as a mere means to an end—the sufferer is ended in order to end the suffering. 

It destroys the autonomy of the patient. The patient might autonomously express a desire for death, but the act of rendering someone dead does not enhance their autonomy; it obliterates it.

Yet the need for self-deception represents the fatal weakness of this practice. In time, truth will win over falsehood, light over darkness, wisdom over folly. So let us ever cling to the truth, and faithfully continue to speak the truth in love to the dying and the living. Truth overcomes pressure. The truth will set us free.