
Shootout: what penalties say about life

Football is a global language, and the penalty shootout is the end of a Shakespearian tragedy.

Simon is Bishop of Tonbridge in the Diocese of Rochester. He writes regularly on social, cultural and political issues.

England v Colombia: 2018 World Cup shootout.

It is hard to pity entitled, overpaid footballers.  Until, that is, it comes down to penalties after extra time.  Even when you do not care who wins, the drama of the penalty shootout is so intense and all-consuming that every heart rate quickens. 

Is there a more exquisite form of sporting torture? 

Sport is laden with cliché, and the refrain ‘it’s come down to the lottery of penalties’ is an established part of the lexicon. But is it really just a spin of the roulette wheel? Is it true that you cannot prepare effectively for the cauldron of the stadium?

Not according to Geir Jordet. The Norwegian Professor of Psychology and Football is on a mission to convince the world that there is plenty you can do to get ready, and that those who don’t prepare are more likely to fail.

There is skill involved in taking a penalty, an ability that can be honed with practice. Individual players can be trained to take their time (but not too long), to establish a routine that helps them take control of the situation, to take careful breaths, and to focus. They can be helped to block out the trash talk of opponents, especially goalkeepers, who subtly try to get under their skin in the seconds leading up to a penalty. Extensive research can be carried out by data-rich backroom staff to help with preparation. And behind all this is the recognition that taking a penalty is a team effort, not an individual one.

This latter observation feels especially counterintuitive. There is nothing lonelier than the sight of one man or woman taking the long walk from the centre circle to the penalty spot. But teams can support one another with words of encouragement and touch: not just in the grasping of each other’s shoulders in the centre circle, but in reaching out to those who score and those who miss. One reason Geir Jordet advises that the manager should choose penalty takers, rather than look for volunteers, is that the manager can then take full responsibility for the outcome. It is hard to believe there are still times when a manager looks around at players after extra time, hoping to see in their eyes who is up to the task. These duties should be sorted out in advance, with back-up plans for when players are injured or substituted.

Jordet, in his stimulating book Pressure: Lessons from the Psychology of the Penalty Shootout, says that anxiety is normal and should be embraced. Greater openness around mental wellbeing is allowing the modern professional to admit this. Erling Haaland, one of the world’s most accomplished goal scorers, has shared the fear he regularly feels about taking a penalty; it is hard to imagine a player from the 1970s saying the same.

Missing a penalty is inevitable in a shootout; a miss is the only way it can conclude. And statistics show that the world’s greatest players, like Lionel Messi, are not notably better at converting penalties than others. On average, the best players have around an eighty percent success rate (which, significantly, means one missed penalty out of five in a shootout). As in other professions, the best results are achieved by creating systems and cultures that can adapt quickly and honestly to errors and learn from them without humiliating those who fail.

Reading the book cast my mind back to the archetypal shootout between David and Goliath. Deciding war between opposing tribes based on an individual contest was quite common in the ancient world – effectively moving to the penalty shootout before the game, to save the effort. Perhaps David should have lost it, and not just because of his size. Beforehand, he had a serious bust-up with his side and those who did not see him as a team player. Then Goliath trash-talked him, as Emi Martínez is famed for doing with Aston Villa and Argentina. And finally, he ran up to take his shot very quickly, without much reflection. But then again, Geir Jordet would be the first to point out that preparing badly for a contest does not mean you can’t win it – just that you are less likely to.

Football is a global language, and the penalty shootout is like the cataclysmic end to a Shakespearian tragedy. English fans are long-suffering audiences of this trauma – from Italia 90 to Wembley 2021, via the 1996 Euros, when football was coming home until a last-minute wrong turning. But many other nations have underachieved at penalties, like Holland and Spain and, more recently, France. Roberto Baggio of Italy missed the decisive penalty in the first World Cup Final to go to penalties, in 1994. He says of it:

‘I failed that time. Period. And it affected me for years. It was the worst moment of my career. I still dream about it.’

The personal stakes are as high as, if not higher than, the nation’s.

We are left with the feeling that hugely divergent outcomes can emerge from the smallest and most random of causes. The human tendency is then to rationalise the outcome in ways that make it seem inevitable. Geir Jordet is alert to this in football, but in other walks of life we continue to build wobbly cases on shallow evidence as a way of warding off anxiety, or the fear that others will think we are clueless if we admit to the existence of chance. Most people are right less than eighty percent of the time; something we might hold in mind when the next England players make that solitary walk to the penalty spot.


Are we letting a monster or saviour into the classroom?

Examining Sal Khan’s confidence in artificial intelligence.

Krish is a social entrepreneur partnering across civil society, faith communities, government and philanthropy. He founded The Sanctuary Foundation.

Sal Khan, left, at an AI summit.
White House via Wikimedia Commons.

I've watched enough dystopian movies to know that there are lots of reasons to be nervous about the rise of the machines. Whether it’s the Terminator universe, where the internet becomes sentient and creates autonomous robots to eradicate humanity, Neo battling an artificial intelligence that enslaves humans in The Matrix, or Will Smith fending off helper robots bent on taking over the planet in I, Robot, technological advances often fuel an array of nightmare scenarios. To make matters worse, science fiction has an uncanny knack for becoming science fact – think of how shows like Star Trek accurately foretold mobile phones, wearable tech and virtual assistants. The line between imagined catastrophe and reality may be thinner than we would like to admit.

Perhaps we shouldn’t be surprised, then, that our creative industries are sending out dire warnings about the impact of the latest breakthrough technology: artificial intelligence (AI). Like Mary Shelley’s Frankenstein in the middle of the industrial revolution, and Godzilla at the dawn of the nuclear era, dystopian fiction is par for the course of scientific advancement. It all stems, I believe, from our deep human response to the unknown – the fear instinct. But I have recently come across a surprising new voice of reassurance in Sal Khan’s book Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing).

Khan’s book comes recommended by Bill Gates, a reliably voracious reader and one of the founding fathers of the global information technology revolution. But Khan also has excellent credentials of his own. From tutoring his cousin online using a simple drawing programme called Yahoo Doodle, he began creating YouTube videos and soon amassed over 450 million views. This led to his creation of the now world-renowned Khan Academy, which has revolutionised online education. By 2023, it had more than 155 million registered users, with students spending billions of hours learning on the platform.

It seems to me that AI has the potential to upend the Khan Academy business model, yet Khan does not take the opportunity to discredit AI or even to highlight its dangers in a bid to reinforce the advantages of his existing products. Nor does he buy into the doom and fearmongering about the impact of digital technologies on young minds, as Jonathan Haidt does in his recent bestselling book The Anxious Generation. Instead, he writes a hopeful and imaginative book on AI’s potential for further transforming education for good.

Khan’s perspective comes amidst great fear in educational circles that generative AI will mean the end of education. Students can currently ask ChatGPT to generate an essay outline, suggest copy, check grammar and accuracy, offer improvements, translations and fact-checks, as well as write a conclusion, edit for word count, add footnote references and more. Indeed, entire books for sale on Amazon have allegedly been written solely by AI. Teachers and lecturers are understandably concerned about the potential for plagiarism. If teachers are no longer able to discern what a student has written for themselves and what a computer has generated, the assessment process becomes meaningless.

Teachers are concerned that AI could undermine their expertise, much as satellite navigation diminished the skills of London black cab drivers. Drivers spent years mastering 'The Knowledge', an arduous and demanding process requiring exceptional memory and recall, only for this once-essential qualification to be rendered almost obsolete. New drivers now need little more than a GPS and an Uber account to compete, a shift that highlights how quickly hard-earned skills can become irrelevant in the face of technological advances. Many teachers fear a similar fate as AI continues to encroach on their domain.

Khan offers an important alternative view. He sees the possibility that AI could, for example, help coach students on essay writing. By reading work, marking it and suggesting improvements, AI could not only save the teacher valuable time but help students take their work to an even higher level.  

Khan offers a similarly hopeful alternative to those who blame digital technology advances for the crisis in young people’s mental health. What if AI could offer coping mechanisms, coaching and tailored advice that help improve the mental health of students? His vision for the Khan Academy virtual assistant ‘Khanmigo’ reminded me of Baymax from Disney’s Big Hero 6 – the large inflatable, huggable robot with a calm, compassionate and loyal personality, highly committed to every aspect of his user’s wellbeing.

Amid voices that demonise AI, Khan’s is a useful antidote; however, I wonder if he has gone too far. While AI may not be the evil monster that will destroy us, neither is it the perfect saviour that will solve all society’s ills. Understatement is not Khan’s strong point. Sometimes he becomes so carried away in his excitement that I feel his book begins to sound like an infomercial for his own current and future products.

I wish that Khan had taken a slightly different tack – no less inspiring about the potential of AI, but also recognising its limits. After all, education is as much about transformation as it is about information. It should lead to character formation as much as skill acquisition. When we emphasise these aspects of moral and perhaps even spiritual mentorship, we can see that education remains irreplaceably human.

AI has huge potential both to help and to hinder us in our educative responsibilities to the next generation, and so questions remain: not whether AI will change our world, but how. We need to ask not just what benefits it could bring, but whom it could benefit most.