
Challenging transhumanism’s quest to optimise our future

Instead of separating the human from the hardware, Oliver Dürr recommends rediscovering other ways of self-formation and improvement.

Oliver Dürr is a theologian who explores the impact of technology on humanity and the contours of a hopeful vision for the future. He is an author, speaker and podcaster, and features in several documentary films.

A biohacking kit for a biology workshop. Xavier Coadic, CC BY-SA 4.0, via Wikimedia Commons

Welcome to the age of transhumanism. In this world, the goal is to overcome all limitations and restrictions that hold human beings back. Science, technology, and medicine should allow us to live longer, healthier, and better lives. So runs the promise. But is there a peril that goes along with it? To answer that question, we need to take a closer look at the phenomenon of transhumanism, particularly the view of human beings that lies behind the glittery promises of an “optimised” future.  

Improving humans, however possible 

Transhumanism is a global movement that seeks to use all available technological means to “enhance” human beings. From curing illnesses and overcoming physical limitations to expanding mental abilities, the movement aims to overcome every constraint of the current human condition.

More precisely, it seeks to overcome all obstacles to the individual’s freedom to live the life he or she wants to live. In the attempt to enhance life, transhumanism veers beyond traditional forms of curing impairments (like compensating for bad sight with a pair of glasses) and ventures into more experimental fields (like manipulating the human eye to see ultraviolet or infrared light). Emotional or cognitive deficits (such as a lack of concentration) are to be overcome with “smart drugs” (like methylphenidate/Ritalin) or even genetic modification, while prostheses are envisioned as expanding human capabilities.

The goal is to create “superhuman” abilities. The holy grail of this movement is drastically extending the human lifespan (provided it is lived in health and vigour). Ultimately, transhumanists want to “overcome” death.

The transhumanist movement pursues two paths towards this sacred goal: a biological and a post-biological one.

Biological transhumanism 

Let’s look at “biological transhumanism” first. The focus here is on our current, carbon- and water-based bodies. Weak and fragile as these bodies are, biological transhumanists must make do with them to achieve the greater things they envision. Human beings are to be treated with drugs and a host of prefixed technologies: bio-, gene- and nano-.

Aubrey de Grey’s project of postponing death by achieving “longevity escape velocity” is a good illustration of the movement. De Grey is convinced that novel biomedical technologies can achieve a limitless extension of the human lifespan: “If we can make rejuvenation therapies work well enough to give us time to make them work better,” he writes, “that will give us additional time to make them work better still”, and so on. The time gained from each innovation need only exceed the time required to achieve the next. Therefore, he argues, the death of people alive today can, in effect, be staved off indefinitely.

De Grey is not alone in transhumanist circles in predicting such outcomes. Google’s Ray Kurzweil takes a similar view: “We have the means right now to live long enough to live forever”.

Such optimistic prognoses bank on a view of the human being as essentially a body-machine that can be controlled and improved at will. The key to unlocking its potential is information theory.

Think of human beings as an algorithm, and, in principle, all their problems can be solved by engineering. The cultural critic Evgeny Morozov pointedly called this approach “technological solutionism”. From a ‘solutionist’ perspective, humanity is increasingly seen as the problem that needs solving. Thus, not only must we develop new technologies to guarantee human life and freedom, but humanity itself must adapt. Those necessary “transformations” of the “human” are what inform the first dimension of the term “trans-humanism”.


Post-biological transhumanism 

The second path is “post-biological transhumanism”, which takes a more radical approach. Here, the focus is on leaving behind our current bodily form altogether and radically transcending the limitations of what it means to be human today. Those alterations, such transhumanists argue, will be so radical that calling the result “human” will no longer be adequate. The preferred means to achieve this future state are taken from the digital sphere: algorithms and information processes.

The view of “the human as a machine” becomes more specifically “the human as a computer”. Mind, spirit and consciousness are understood to be the software within the hardware of the body. Human beings are perceived to be biological computers and thus in direct competition with digital computers. And those are becoming increasingly powerful by the hour. If human beings want a seat at the table in the digital future, they must find a way to merge with and dissolve into the digital sphere—or so the transhumanist narrative goes.  

Immortality in the Cloud? 

For post-biological transhumanists, the ultimate goal is called “mind-uploading”. The idea is that we can upload our minds (selves) to the internet and achieve immortality—at least if all we are is the sum of information processes in the brain and as long as the internet infrastructure is still available. Mind uploading requires leaving behind our current biological form of life altogether and dissolving into virtuality.  

This vision of virtual immortality is why post-biological transhumanists tend to place their hopes in information technologies, software algorithms, robotics and artificial intelligence research. They aim to overcome and entirely leave behind the “human” as it is. This move to “transcend” informs the second dimension of the term “trans-humanism”. 


Is there a solution? 

But can these transhumanist approaches really deliver on their promises?

Human beings have always tried to improve themselves—not least through technology. What is new today is how transhumanists define “better” and some of the means by which they seek to realise those perceived benefits. With its solutionist approach to life, transhumanism discards large swathes of traditional techniques for “improving” human beings and their lives. In classical humanism, at least from the Renaissance to the 1970s, “human improvement” meant education: moral, intellectual and practical formation and refinement towards a concrete ideal of humanity, and the shaping of a society that enables such formative processes.

But in the age of transhumanism, there is a tendency to believe that we can delegate such hard work of the self to a new technocracy and its algorithmic tools—which, to put it mildly, may not always have our best interests at heart.


The main problem, however, is that ultimately, we cannot delegate our future to machines because, after all, we aren’t machines. Instead, we must learn to live with ourselves, our limitations, and our finitude, or we will never be free. Freedom only ever begins once we learn to let go of ourselves and start living for and with others.  

The reason for this is that freedom is best conceived not as a mere “choice” to do what we please, but as the liberty to live a truly fulfilling life, which almost always includes others. Many of the things that make a future worth wanting in the first place are shared goods: relational, communitarian and cultural values and practices that needn’t be optimised or automated at all—at least not technologically.

When I build a sandcastle with my toddlers, the process needn’t be optimised (which, realistically, would mean excluding the toddlers from it altogether). Rather, doing it together is the point. Political decision-making, to take another example, doesn’t have to be automated or made more efficient through algorithms. The struggle of deliberating over how our society should look is the point. Without such moral deliberation, our public life is diminished. In many cases, the slowness, strenuousness and inefficiency of such processes is a feature, not a bug.

A tech future beyond transhumanism 

Bearing this in mind changes the questions we pose in light of novel technologies: how (if at all) can they be integrated into our lives in such a way that they open up the world in its complexity, allowing us to experience the fullness of life and enabling us to shape the future we really want?

It is time to rediscover and bring back religious and humanistic traditions of self-formation into our public debates about the future. Far from being relics of the past, soon to be discarded, they can provide us with tried and true values, practices and virtues around which we can organise our societies in the digital future. They provide us with the tools to unlock the sources of care and the will to create a better social framework in which human beings and technology find their place. The future need not be transhuman to be better; being fully human is quite enough.  


Cartoon villains: who's the real baddie?

What kind of villain do we want?

James Cary is a writer of situation comedy for BBC TV (Miranda, Bluestone 42) and Radio (Think the Unthinkable, Hut 33).

Jazz Cow vs. Dr Popp.

“Nobody thinks they’re the bad guy”. That’s a phrase I often use when helping people write situation comedies. It’s always useful to have a strong antagonist who gets in the way of our hero. But the villains tend not to consider themselves to be evil. In fact, they are offended at the suggestion. 

The Batman universe has taken the interesting villain to new levels. The latest production set in Gotham is The Penguin, a brand-new TV series on HBO. Colin Farrell plays a highly nuanced anti-hero, exploring The Penguin’s “awkwardness, and his strength, and his villainy, yes, his propensity for violence”. Farrell told Comicbook.com he was attracted to the role because “there's also a heartbroken man inside there you know, which just makes it really tasty.” Audiences are often invited to have sympathy for the devil. Should we be worried about the blurring of the lines between good and evil?

I’ve been asking myself this question as I’ve been writing a new animation which involves a villain called Dr Popp who is trying to take over a city. But what kind of villain do we want in 2024? 

Jack Nicholson’s portrayal of The Joker back in 1989 feels like pop-culture ancient history. His Joker was an embittered agent of chaos without many redeeming qualities, but he mercifully lacked the nihilism of later versions. It was an old-fashioned story of cops and robbers which had its own simplistic charm. But have those days gone forever, having been shot in the head and dropped off a bridge into a river?

The problem is that it is so easy to humanise evil. You just give it a human face. The arch-villains of the twentieth century – the Nazi members of the SS – are rather sweet when portrayed by the comedians Mitchell and Webb. A nervous member of an SS unit (Mitchell), waiting for an attack from the Russians, looks at the skull on his cap and asks his comrade-in-arms (Webb): “Hans, are we the baddies?”

Any student of World War Two will know that it’s never as simple as good versus evil. Many terrible things were done by people who felt justified in their behaviour. Moreover, ‘the goodies’ also felt compelled to do morally dubious things – like the bombing of civilians in cities – in order to defeat ‘the baddies’. After all, they started it. The truth is always far more complicated than the war films suggest.


Ten years ago, I was researching real-life baddies for my sitcom Bluestone 42, about a bomb disposal team in Afghanistan. At times, I had to think like the Taliban, who, in their own minds, were entirely justified in leaving bombs by the side of the road to be triggered by British soldiers or Afghan children. They were pretty relaxed about the outcome. It’s hard to sympathise with this way of thinking, but it made sense to them.

My internet search history from that time probably put me on some sort of Home Office watchlist. Maybe a small dossier was started on me. More recently, that dossier would have become thicker as I’ve moved sideways from sitcom into murder mysteries, having recently worked on Death in Paradise and Shakespeare and Hathaway. To work on shows like these, you need to be thinking of good reasons for good people to commit murder. Someone would need a very strong motive to commit a murder on an idyllic Caribbean island where the local detective has a 100 per cent resolution rate. You also need to research ingenious methods for murdering people in a way that escapes detection. I’m surprised I’ve not yet had a knock on my door, or that enquiries haven’t been made to the neighbours, asking them to call a number if they see anything suspicious.

But what about cartoon villains, where nothing is real? The bold colours and larger-than-life characters might suggest that there is more clarity about goodies and baddies. But there isn’t. Evil villains – that is, villains who realise they are evil – are extremely rare. Skeletor from He-Man and the Masters of the Universe comes to mind. This kind of demonic baddie can be entertaining when given wit and charm, like Hades in the Disney film Hercules, who had some brilliant one-liners and was superbly brought to life by the voice of James Woods. Overall, however, purely evil characters are hard to write.

Cartoon villains need proper motivation. This is either a character flaw or a backstory. In The Lion King, Scar is consumed with envy that his brother is king – and a good one at that. In The Incredibles, Syndrome is playing out his sense of injustice that he was not allowed to be Mr Incredible’s sidekick, Incrediboy. In The Simpsons, Mr Burns is essentially Mr Potter from It’s a Wonderful Life. He’s a Scrooge-type figure who doesn’t care about love and respect. He just wants to own the town. 

The cartoon villain I’ve been thinking about is for a new animation project I’ve been working on called Jazz Cow. The eponymous hero is a saxophone-playing cow and reluctant Bogart-style leader of a bohemian band of misfits. They are trying to resist the advance of the all-consuming algorithm created by Dr Popp, the villain. But what’s his motivation?

Dr Popp is the very worst kind of villain: he has great power and he wants to help. In his own mind, he’s completely clear about his mission. He’s trying to make the world better, easier, safer, cheaper, more efficient and convenient. Why would anyone want to refuse his technology, reject his software and keep away from his algorithm? 

This is why Dr Popp has to silence Jazz Cow, literally, by stealing his saxophone. He simply cannot allow Jazz Cow to delight audiences at Connie Snott’s with live improvised music. There’s no need for this music! Dr Popp has all the music you could possibly need, want or imagine. Why improvise when we have artificial intelligence? 

Dr Popp is a cartoon villain for today, when relativism is still alive and well and ‘Good’ and ‘Evil’ are concepts or points of view rather than absolutes. There is, however, good and evil in Jazz Cow. But the evil doesn’t come from Dr Popp. It comes from the user or consumer. That would be us.

‘The Algorithm’ is always learning and always trying to give us our hearts’ desire. And that’s the problem: our hearts frequently desire that which they cannot – and should not – have. Dr Popp’s algorithm is like a mirror held up to our faces. In it, we see the real baddie: ourselves. Not even Jazz Cow can save us from that. But what this horn-playing cow can do is to make the world a more humane place. 

  

For more information about Jazz Cow, and information on how you can make the show happen, take a look at our Kickstarter – and don’t worry. Jazz Cow would approve, as it’s the creative’s way of sticking IT to the man.