
The Cold War, the Internet and America’s nones

How does a culture lose religion so rapidly? Stephen Bullivant investigates the American phenomenon of ‘nonversion’.

Stephen Bullivant is a professor of theology and sociology at St Mary’s University, UK, and the University of Notre Dame, Australia.


Those even passingly familiar with American religious trends over the past couple of decades will have read or heard the phrase ‘rise of the nones’ countless times. And with good reason. While in Britain the proportion of people telling pollsters they have ‘no religion’ grew rapidly over the twentieth century – it was already 43% when the British Social Attitudes survey began in 1983; it tipped 50% a little over a decade later – in America the figure stayed stubbornly low. According to Gallup polls, only 5% of American adults identified with no religion in 1975. Twenty years later, in 1995, it was still 5%.

But then, seemingly very suddenly, things started to change. Beginning in the late nineties, then rapidly accelerating in the early 2000s, each new survey showed the nones getting bigger and bigger. Depending on which survey one looks at, nones now account for somewhere between a quarter and a third of American adults. Even at the lower end, that amounts to some 60 million people. And they’re still growing.

This raises a natural question: Why now? Or rather, what was it about the nineties and early 2000s that pushed or pulled large swathes of Americans out of thinking of themselves as religious? Various ways of measuring American religiosity all indicate that something significant must have happened around then. But what?

A prior, deeper puzzle

That, at least, is the obvious way of approaching things. And to be fair, it has much to recommend it: something, or rather a combination of somethings, certainly did happen to American religion in those critical years. But this in itself raises a prior, deeper puzzle: why hadn’t the numbers of American nones already risen before the late nineties or early noughties? In all manner of other, quasi-comparable countries – Britain, Canada, Australia, France – the nones started growing steadily from the 1960s onwards. Yet while the sixties had all manner of other disruptive and destabilizing effects on American culture, society, politics, and religion, the proportion of nones grew only a little bit, then stopped.

At the risk of gross oversimplification, if one were to look for a sufficiently big ‘something’ within American society, mostly absent from those other countries, which could plausibly have kept non-religiosity artificially low in these decades, then there is an obvious candidate: the Cold War. Or more specifically, the precise and peculiarly religious way in which it was framed in the USA. 

A final, all-out battle

We Brits were as up to our necks in the Cold War as anyone. But only in America, I think, was the Cold War ever popularly framed as a “final, all-out battle between communistic atheism and Christianity”, to quote Republican Senator Joseph McCarthy. Remember too that it was only in the mid-1950s that Congress adopted “In God We Trust” as America’s official motto, and “under God” was added to the Pledge of Allegiance. During the Pledge debates in Congress, the Democrat Louis C. Rabaut summed up a common view on both sides of the aisle:

“You may argue from dawn to dusk about differing political, economic, and social systems but the fundamental issue which is the unbridgeable gap between America and Communist Russia is a belief in almighty God.”

This wasn’t just an issue with wide bipartisan and popular support; it was thoroughly ecumenical too. While McCarthy and Rabaut were Catholics, it was a Presbyterian president, Eisenhower, who signed the “under God” bill into law. As Eisenhower himself put it during his 1952 election campaign:

“What is our battle against communism if it is not a fight between anti-God and a belief in the Almighty?”

Embellishing the city on a hill

It was also during the Cold War that presidents began likening America to the biblical “city built on a hill” – all the better positioned, one presumes, to scour the horizon for incoming Soviet missiles. Kennedy was the first US president to use it. Reagan, adding his own embellishment of “shining,” would make it his, and many of his countrymen’s, own. Taken together, all this helped lay down a deep, implicit association between being un-religious and being un-American. Atheism itself bore the brunt of this, but it more generally ruled out associated ideas and identities – including thinking of oneself as having “no religion” – as live options for the great majority of Americans.

Riven fault lines

Meanwhile, the cultural fault lines that began so obviously opening up in the late sixties – gender equality, sexual liberation – kept on widening, with new generations socialized into ever more liberal baselines. This created a growing values gap between traditional Christian views and the wider mainstream culture, on topics that were very personal to, and thus felt very deeply by, people on all sides. This meant that, while churches tended to be most visible on the ‘conservative side’ of various battlegrounds, they were also often deeply riven by internal versions of the same debates. Not surprisingly, church attendance, at least within Catholic and mainline churches, started falling steadily in the seventies and (except where immigration has helped fill the pews) has never really stopped.

The Internet of ideas and identities

On this basic account – and there is much that could be, and elsewhere has been, added to it – the thawing of the Cold War is obviously significant. Note that it is the Millennial generation, only the oldest of whom are able to remember the Cold War (and even then mostly from holiday reruns of Red Dawn and Rocky IV), who were at the vanguard of the rise of the nones. They were also the first generation of true digital natives, opening many of them up to a much wider range of ideas and identities than hitherto available. This has been especially effective at chipping away at the walls of some of America’s stronger religious subcultures. My ex-Mormon interviewees, especially, cite “the wonderful thing called the internet” as being “the game-changer”.

Serious discussion and South Park

The Millennials started coming of age, and indeed responding to pollsters’ surveys, in the early 2000s. This was also around the time when, arguably for the first time since the hugely popular writer and speaker Robert “The Great Agnostic” Ingersoll a century before, unbelief was being seriously discussed everywhere from primetime talk shows to episodes of South Park. The bestselling books of the New Atheists – principally Sam Harris, Richard Dawkins, Daniel Dennett, and Christopher Hitchens – evidently hit upon some long pent-up demand. They were also, significantly here, able to position atheism, and ‘no religion’ more generally, as a panacea for a world awash with religion. Harris, for example, makes much of how he started writing The End of Faith on September 12th, 2001. Dawkins made no secret of his wanting to run adverts with an image of the Twin Towers and the tagline “Imagine no religion…”.

Cultural space opens

Whatever one makes of such arguments, similar rhetorical moves would have had far less intuitive appeal to earlier American generations, who had learned to duck and cover from atheists’ H-bombs: the stuff of Americans’ nightmares was now those with too much religion, rather than not enough. While the long-term impact of the not-so-New Atheism is hard to judge – many nones are keen to distance themselves from what they see as its “dogmatism” and “extremism”, even while agreeing with much of it – it certainly helped open up ‘cultural space’ for being both American and non-religious that the Cold War had (outside of various enclaves, such as college towns and certain big cities) largely sealed shut. As we have seen, it is a space that a good quarter of American adults are now quite comfortable placing themselves within.

So yes, new somethings indeed happened in the final years of the twentieth century and the first years of the twenty-first: and these helped drive the uptick of nones. But they happened at just the time when the none-inhibiting effects of a much earlier something had more or less worn off, especially among the emerging generations most affected by the new somethings. It is this combination of factors – akin to pulling one foot off the brake as you slam the other down on the accelerator – that explains quite why the nones rose so suddenly and (seemingly) out of nowhere.

 


Assisted dying's language points to all our futures

Translating ‘lethal injection’ from Dutch releases the strange power of words.

In the coming weeks and months, MPs at Westminster will debate a draft bill which proposes a change in the law with regard to assisted dying in the UK. They will scrutinise every word of that bill. Language matters.

Reading the coverage, with a particular interest in how such changes to the law have been operationalised in other countries, I was struck to discover that the term in Dutch for dying by means of a fatal injection of drugs is “de verlossende injectie.” This, when put through the rather clunky hands of Google Translate, comes out literally as either “the redeeming injection” or “the releasing injection.” Of course, the more common term in English is “lethal injection”, which at first glance seems to carry neither of the possible Dutch meanings. But read on, and you will find out (as I did) that sometimes our words mean much more than we realise.

Writing for Seen & Unseen readers, I explained a quirk of the brain that tricked them into thinking that the word car meant bicycle. Such is the mysterious world of neuroplasticity, but such also is the mysterious world of spoken language, where certain combinations of orally produced ‘sounds’ are designated to be ‘words’ which are assumed to be indicators of ‘meaning’. Such meanings are slippery things.  

This slipperiness has long been a preoccupation for philosophers of language. How do words come to indicate or delineate particular things? How come words can change their meanings? How is it that, if a friend tells you that they got hammered on Friday night, you instinctively know it had nothing to do with street violence or DIY? Why is it that in the eighteenth century it was a compliment to be called ‘silly’, but now it is an insult?  

Some words are so pregnant with possible meaning, they almost cease to have a meaning. What does “God” mean when you hear someone shout “Oh my God!”? Probably nothing at all, or very little. It is just a sound, surely? And yet no other sound has ever succeeded in fully replacing it. We are using the term “God”, as theologian Rowan Williams points out in his book The Edge of Words, as a “one-word folk poem” to refer to whatever we feel is out of our control.     


This idea of an injection being verlossende seems to me to be the opposite. I find myself hearing it in four different (and not mutually exclusive) ways, each to do with taking control of this very uncertain question of dying. The first, releasing, sounds to me like an echo of the neo-platonic ideas that still infuse public consciousness about what it means to be dead. As we slimily carve our pumpkins for Halloween and the children clamour to cut eyeholes into perfectly good bedsheets, we see a demonstration of society’s latent belief that humans are made up of body and soul, and that at death the soul somehow leaves the body and floats into some unknown realm (or else remains, disembodied yet haunting). If we translate verlossende as releasing then we capture that idea – that of the soul, which longs to be at peace, trapped inside suffering, mortal flesh. 

Google’s second suggestion for verlossende was redeeming. This could be heard theologically. Christians believe in eternal life, that the death of this earthly body is only the start of something new – a life where there will be no crying or pain, and people will live forever in the glorious presence of God. In the Bible, the apostle Paul encourages those who follow Christ to trust that they have been marked with a ‘seal’, meaning that they are like goods which have been purchased for a price, and that God will ‘redeem’ this purchase at the appointed time. Death, therefore, is not a fearful entering into the unknown, but a faithful entering into God’s promises.

Both of these first two interpretations look at death, in some sense, ‘from the other side’ – evaluating the end of someone’s life in terms of speculation over what will happen next. But there is the view from this ‘side’ also. We do not need to speculate about what death means for some of those who experience acute suffering due to terminal illness, and who wish to hasten the end of their lives because of it. They too might want to speak of a releasing injection or a redeeming injection – given that both terms hint at the metaphor of life as a prison sentence. To be in prison is to have one’s rights and freedoms severely limited or entirely taken away. It is not uncommon to hear a sufferer refer to incapacitating illness as being ‘like a prison sentence’, and one can empathise with the desire to have the release date set, back within the sufferer’s control.  

This is the strange power and pregnancy of words – verlossende is able to carry all these meanings or none of them. Until I began researching this article, I had always assumed that the English term, lethal injection, simply meant an injection of some substance that is deadly. This is how the term is commonly understood, and therefore, in a sense, this is its meaning. Yet, when I came to consider the possible origins of the word, I realised its likely etymology is from the Greek word lēthē, meaning ‘forgetfulness’. In the Middle Ages, if something was lethal it caused not just death, but spiritual death, placing one beyond the prospect of everlasting life. By contrast, something could be fatal, meaning only that it brought one to one’s destiny or fate.

With this in mind, as we try to speak clearly in the assisted dying debate, the term fatal injection might be a more precise way to describe this pathway to death that is in want of a name. After all, whether you believe in an afterlife or not, dying is everybody’s fate, and I can see that choosing to take control of one’s fate is, for anyone, an act of faith with regard to what comes next.

  

This article was part-inspired by Theo Boer’s original article Euthanasia of young psychiatric patients cannot be carried out carefully enough, in the Dutch newspaper Nederlands Dagblad. Theo is a professor of health ethics at the Protestant Theological University, Utrecht.

An English translation of the original article follows, reproduced by permission.

 

 

Euthanasia of young psychiatric patients cannot be carried out carefully enough 

Theo Boer 

How is it possible to determine that patients who have suffered from psychiatric disorders for five or ten years, and who are between the ages of 17 and 30, have ‘exhausted their treatment options’, wonders Theo Boer. Granting such requests also conflicts with perhaps the most important task of psychiatrists: ‘offering hope’.


A letter was recently leaked in which leading psychiatrists ask the Public Prosecution Service to investigate the course of events surrounding the euthanasia of young psychiatric patients.

One death mentioned by name concerns seventeen-year-old Milou Verhoof, who received the redeeming injection from psychiatrist Menno Oosterhoff at the end of 2023. It will not have escaped many people's attention how much publicity the topic has received in the past year or so. Together with a colleague and a patient (who later also received euthanasia), Oosterhoff wrote the book Let me go.  

The tenor was: it is good that euthanasia is possible for this group of patients, the taboo must be removed, their suffering is often terrible, they have already had to undergo countless ‘therapies’ without effect – can it, at some point, be enough?

Or would we rather have these patients end their lives in a gruesome way? And who really thinks that psychiatrists make hasty decisions when they decide to comply with a euthanasia request?  

To be clear: we are talking about something completely different from what has for years been called ‘traditional euthanasia’: euthanasia for physically ill patients with a life expectancy of weeks or months. Given the excellent palliative care that has become available, such euthanasia is, as of 2024, actually becoming less and less necessary.

Panic  

No, the patients we are talking about now are panicky, anxious, confused, depressed, lonely, often unemployed, poorly housed, without prospects. But they are not physically ill and therefore do not have the 'comfort' of an impending natural death.  

I have heard several of them say: if only I were terminal, then euthanasia would not be necessary. That attention is now being paid to this group of patients, with whom our hurried and solution-oriented society scarcely knows how to deal, is a gain. At the same time, I am happy with the leaked letter. You can criticise the letter writers’ procedural approach (‘why not an ethical discussion instead of a legal one?’), the lack of collegiality, the perhaps underhanded action (‘why did you go straight to the Public Prosecution Service?’). But in my opinion, crooked stick though it may be, the letter writers definitely hit the mark with it. Firstly: how is it possible to determine that patients who have suffered from psychiatric disorders for five or ten years, and who are between the ages of 17 and 30, have ‘exhausted their treatment options’ (a criterion from the Euthanasia Act)?

Review Committee  

Nobody disputes that their suffering is unbearable. At the same time, I know from my time on a Regional Euthanasia Review Committee that suffering becomes unbearable when all hope is gone.

A psychiatrist who gives euthanasia to a young adult is also undeniably sending the signal that, like his patient, he has given up all hope of improvement. That is risky in itself, because even patients who have suffered for years sometimes recover and, moreover, our brains are not fully developed until we are 25. But it also conflicts with perhaps the most important task of psychiatrists: offering hope. In their training, the risk of transference and counter-transference is consistently pointed out: a patient takes his therapist with him into despair, and the psychiatrist then transfers those feelings to this and other patients: ‘this kind of suffering is untreatable and cannot be lived with’.

In the recent NPO television documentary A Good Death we see an embrace between a psychiatrist and her emotional patient. In doing so, this psychiatrist offers a unique form of involvement. But does she provide sufficient resistance to the cynicism, despair and negative vision of the future that are also widespread outside psychiatry?

Sensible decisions?  

That brings me to a second objection: is it sufficiently recognised how much a psychiatric illness can affect someone’s ability to make sensible decisions? The hallmark of many psychiatric illnesses is a deep desire to die and an inability to put that desire into perspective. As a result, many are unable to think in terms of a ‘possibly successful therapy’.

Boudewijn Chabot 

The main character in Boudewijn Chabot’s book Zelf beschikt, Netty Boomsma, responds to Chabot’s suggestion that there might be a life after depression: ‘Yes, but then it won’t be me anymore.’ She wants to go down with her depression. I know variations on this: people with a death wish who remark of a possible therapy: ‘I hope it is not effective, because then I will have to go through it all again.’

Another hurdle

If a second psychiatrist is consulted and, for example, suggests trying one or two more therapies, many patients see this as yet another hurdle on the road to euthanasia. They do not see it as a serious opportunity to be able to cope with life again. There are no easy answers here. Nor is pillorying anyone appropriate. But let euthanasia remain complicated here, and let us continue to look for hope.

 

Reproduced by kind permission