
The awe and outrage of Musk's toxic ingenuity

Walter Isaacson’s Elon Musk is a biographical rollercoaster, reckons Krish Kandiah: one marked by magnificent moments and moral crossroads.

Krish is a social entrepreneur partnering across civil society, faith communities, government and philanthropy. He founded The Sanctuary Foundation.

Elon Musk, wearing a dark suit, stands on a stage next to a white surgical robot.
Elon Musk at a demonstration of the Neuralink technology in 2020.

There is something both inspiring and unnerving about Elon Musk. He is a game-changing pioneer and innovator in so many industries pivotal to our future: rockets, electric cars, solar panels, batteries, satellite internet, and Artificial Intelligence. But he is also no stranger to scandal, controversy and allegations. In his latest biography, author Walter Isaacson explores the toxicity as well as the ingenuity that has come to be associated with the richest man on the planet. As he reveals Musk’s series of successes, and what has been sacrificed to acquire them, I found myself going on an emotional journey: from compassion to awe to outrage.

Compassion: a man familiar with misery 

In the opening chapter of his book, Isaacson draws attention to the trauma in Elon’s childhood. Perhaps unsurprisingly, Elon was socially awkward at school. When he once pushed back at a boy who had bumped into him, he was beaten up so badly that his face was unrecognisable. Musk recalls how his father reacted when he returned from hospital: “I had to stand for hours. He yelled at me and called me an idiot and told me that I was just worthless.” There are a number of similar stories from Musk’s seemingly brutal childhood. Errol Musk, Elon’s father, features heavily in a series of shocking revelations, including that he slept with his own stepdaughter, fathering two children with her. The background of Musk’s chaotic childhood, his experience of domestic abuse, and his series of fractured relationships provides a context for some of the strange, indeed outrageous, things catalogued in the book.

Having worked for many years with children in the care system and children with refugee experience, I understand a little about the impact of trauma and how it can change the brain in profound ways. There is a great deal of evidence showing how adverse childhood experiences can have a long-lasting impact on decision-making, impulse control, relationship building, mental health management and emotional regulation. While many turn to alcohol, drugs or self-harm as coping mechanisms, others, perhaps like Musk, channel the pain into ambitions and achievements.

I found myself feeling profoundly sorry for Musk. No child should have to experience such prolonged cruelty both at school and at home. All of us need to know that we are loved and valued, independent of anything we have done or anything that has been done to us.  

Awe: a man of magnificent moments 

Musk’s ideas have revolutionised so many industries. The automotive industry’s move to electric power owes a lot to the innovation of Tesla. His SpaceX programme is currently changing the way we think about space travel. His company was the first to create self-landing reusable rockets; the first privately owned company to develop a liquid-propellant rocket that reached orbit; the first to launch, orbit, and recover a spacecraft; the first to send a spacecraft to the International Space Station; and the first to send astronauts there. He is also trying to revolutionise Artificial Intelligence (AI) through his company xAI - a direct competitor to OpenAI, even though he was one of their early backers.

Musk has a complex relationship with AI: he is not only one of the lead innovators in the field but also the most prominent of the 33,000 signatories of a letter calling for a pause on ‘Giant AI Experiments’ until there is, in Musk’s words, “a regulatory body established for overseeing AI to make sure that it does not present a danger to the public.”

AI, alongside each of the other major interest areas in Musk’s work, is way beyond any dreams I ever had of a futuristic world. Musk has managed not only to imagine the unimaginable, but to find a way to get there with impressive speed, scale and sustainability values. The more I read about the innovations involved in each step of each project, the more impressed I am with the genius behind them.  

Outrage: a man without a moral compass? 

Despite Walter Isaacson’s clear respect for all Musk is achieving, he paints a warts-and-all picture of his book’s subject. We see a man who is ruthless in his hirings and firings, and who has often treated staff and colleagues badly. In 2018, he famously called a rescue diver who was helping to save teenage boys from a flooded cave in Thailand a ‘paedo’, in what seemed to be a reaction to the diver snubbing his offer of a minisub.

In light of these sorts of outbursts, and his apparent desire to save the world from looming environmental disaster, it is no wonder that some people have accused Musk of having a messiah complex. Yet if he does, it is a very different mindset from that of the true messiah. He appears to me to be morally, emotionally and financially the polar opposite of the Jesus whose willingness to sacrifice himself on behalf of those in need was central to his claim to be sent from God. From the way Isaacson describes Musk, I see him more as a man on a mission to save himself than to save those around him.

The future: a man at a crossroads

Isaacson closes his book with the following analysis:  

“But would a restrained Musk accomplish as much as Musk unbound? Is being unfiltered and untethered integral to who he is? Could you get the rockets to orbit or the transition to electric vehicles without accepting all aspects of him, hinged and unhinged? Sometimes great innovators are risk-seeking man-children who resist potty training. They are reckless, cringeworthy, sometimes even toxic. They can also be crazy. Crazy enough to think they can change the world.” 

I find this a disconcerting epilogue to the book. It suggests that we can pardon toxicity in the name of innovation, that the ends always justify the means, that morality and decency can take second place to advancement and wealth. If this stance were applied to, say, the development of AI, Musk’s own fear of it becoming a danger to the public might well, sadly, be realised.

While factors such as grand ambition, contribution to society, early years trauma, and mental health struggles may help explain why a person is toxic, toxicity itself can never be excused. No amount of wealth can undo the harm toxic masculinity does to those around us. No amount of charitable giving can buy a person a generous spirit or a moral compass. No amount of environmental awards can create the sort of future we really want – a world where people treat one another with the respect they need and deserve.

Elon Musk’s biography is unusual because he is still mid-journey. Who knows what else he may go on to achieve or fail at, to create or destroy? Will his AI revolution be a force for good, helping to create a better future for those who need it most, or will it become the behemoth of the doomsayers? What will future editions add to his biography? Is being ‘untethered’ really integral to who Musk is, or can he change? The visionary in me would love to imagine a redemption and transformation story for Musk that can unleash a compassionate generosity that could even overshadow his creative genius. The sceptic in me fears he may end up doing more harm than good. 


Whether it's AI or us, it's OK to be ignorant

Our search for answers begins by recognising that we don’t have them.

Simon Walters is Curate at Holy Trinity Huddersfield.

A street sticker displays multiple lines reading 'and then?'
Stephen Harlan on Unsplash.

When was the last time you admitted you didn’t know something? I don’t say it as much as I ought to. I’ve certainly felt the consequences of admitting ignorance – of being ridiculed for being entirely unaware of a pop culture reference, or of being found out for not paying as close attention to what my partner was saying as she expected. In a hyper-connected age, when the wealth of human knowledge is at our fingertips, ignorance can hardly be viewed as a virtue.

A recent study on the development of artificial intelligence holds out more hope for the value of admitting our ignorance than we might have previously imagined. Despite widespread hype and fearmongering about the perils of AI, our current models are developed in a way not unlike how an animal is trained. An AI system such as ChatGPT might have access to unimaginable amounts of information, but it requires training by humans on what information is valuable or not, whether it has appropriately understood the request it has received, and whether its answer is correct. The idea is that human feedback helps the AI to hone its model – positive feedback for correct answers, negative feedback for incorrect answers – so that it keeps whatever method led to positive feedback and changes whatever method led to negative feedback. It really isn’t that far away from how animals are trained.

However, a problem has emerged. AI systems have become adept at giving coherent and convincing-sounding answers that are entirely incorrect. How has this happened?


In digging into the training method for AI, the researchers found that the humans training the AI flagged answers of “I don’t know” as unsatisfactory. On one level this makes sense. The whole purpose of these systems is to provide answers, after all. But rather than prompting the AI to go back and rethink its data, this instead led it to develop increasingly convincing answers that were simply not true, to the point where the human supervisors didn’t flag sufficiently convincing answers as wrong because they themselves didn’t realise that they were wrong. The result is that “the more difficult the question and the more advanced model you use, the more likely you are to get well-packaged, plausible nonsense as your answer.”
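To make that dynamic concrete, here is a minimal toy sketch in Python – my own illustration, not the study’s actual method or code. The names and probabilities (reviewer_spots_errors, chance_of_knowing) are invented for the example; it simply shows why a feedback scheme that penalises “I don’t know” ends up rewarding confident bluffing.

```python
import random

def reviewer_feedback(answer_is_fluent: bool, answer_is_correct: bool,
                      reviewer_spots_errors: float = 0.3) -> int:
    """Return +1 if the human reviewer accepts the answer, -1 if they flag it."""
    if not answer_is_fluent:
        # "I don't know" is marked unsatisfactory by the trainers.
        return -1
    if answer_is_correct:
        return +1
    # A wrong but convincing answer is only flagged if the reviewer notices the error.
    return -1 if random.random() < reviewer_spots_errors else +1

def average_reward(strategy: str, trials: int = 10_000,
                   chance_of_knowing: float = 0.4) -> float:
    """Average feedback earned by a model that either admits ignorance or always bluffs."""
    total = 0
    for _ in range(trials):
        knows_answer = random.random() < chance_of_knowing
        if strategy == "admit_ignorance" and not knows_answer:
            total += reviewer_feedback(answer_is_fluent=False, answer_is_correct=False)
        else:
            # Always produce a fluent answer; it is only correct when the model really knows.
            total += reviewer_feedback(answer_is_fluent=True, answer_is_correct=knows_answer)
    return total / trials

if __name__ == "__main__":
    random.seed(0)
    print("admit ignorance:", average_reward("admit_ignorance"))
    print("always bluff:   ", average_reward("always_bluff"))
    # Bluffing earns more reward on average, so training drifts towards
    # "well-packaged, plausible nonsense" rather than honest uncertainty.
```

Under these toy assumptions, the honest strategy is punished every time the model lacks the answer, while the bluffing strategy is punished only when the reviewer catches the mistake – which is exactly the trap the researchers describe.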

Uncovering some of what is going on in AI systems dispels both the fervent hype that artificial intelligence might be our saviour, and the deep fear that it might be our societal downfall. This is a tool; it is good at some tasks, and less good at others. And, like all tools, it does not have an intrinsic morality. Whether it is used for good or ill depends on the approach of the humans that use it. 

But this study also uncovers our strained relationship with ignorance. Problems arise in the answers given by systems like ChatGPT because a convincing answer is valued more than admitting ignorance, even if the convincing answer is not at all correct. Because the AI has been trained to avoid admitting it doesn’t know something, all of its answers are less reliable, even the ones that are actually correct.  

This is not a problem limited to artificial intelligence. I had a friend who seemed incapable of admitting that he didn’t know something, and whenever he was corrected by someone else, he would make it sound as though his first answer had really been the correct one all along, rather than what he had actually said. I don’t know how aware he was that he did this, but the result was that I didn’t particularly trust whatever he said to be correct. Paradoxically, had he admitted his ignorance more readily, I would have believed him to be less ignorant.

It is strange that admitting ignorance is so avoided. After all, it is in many ways our default state. No one faults a baby or a child for not knowing things. If anything, we expect ignorance to be a fuel for curiosity. Our search for answers begins in the recognition that we don’t have them. And in an age where approximately 500 hours of video is uploaded to YouTube every minute, the sum of what we don’t know must by necessity be vastly greater than all that we do know. What any one of us can know is only a small fraction of all there is to know. 


One of the gifts of Christian theology is an ability to recognise what it is that makes us human. One of these things is the fact that any created thing is, by definition, limited. God alone can be described by the ‘omnis’: he is omnipotent, omnipresent, and omniscient. There is no limit to his power, presence, and knowledge. The distinction between creator and creation means that created things have limits to their power, presence, and knowledge. We cannot do whatever we want. We cannot be everywhere at the same time. And we cannot know everything there is to be known.

Projecting infinite knowledge is essentially claiming to be God. Admitting our ignorance is therefore merely recognising our nature as created beings, acknowledging to one another that we are not God and therefore cannot know everything. But, crucially, admitting we do not know everything is not the same as saying that we do not know anything. Our God-given nature is one of discovery and learning. I sometimes like to imagine God’s delight in our discovery of some previously unknown facet of his creation, as he gets to share with us in all that he has made. Perhaps what really matters is what we do with our ignorance. Will we simply remain satisfied not to know, or will it turn us outwards to delight in the new things that lie behind every corner?

For the developers of ChatGPT and the like, there is also a reminder here that we ought not to expect AI to take on the attributes of God. AI used well in the hands of humans may yet do extraordinary things for us, but it will not truly be able to do everything, be everywhere, or know everything. Perhaps if it were trained to say ‘I don’t know’ a little more, we might all learn a little more about the nature of the world God has made.