Why the Demoniac Stayed in his Comfortable Corner of Hell


Detail from The Drunkard (1912) by Marc Chagall. Courtesy Wikipedia

John Kaag | Aeon Ideas

I am not what one might call a religious man. I went to church, and then to confirmation class, under duress. My mother, whom I secretly regarded as more powerful than God, insisted that I go. So I went. Her insistence, however, had the unintended consequence of introducing me to a pastor whom I came to despise. So I eventually quit.

There were many problems with this pastor, but the one that bothered me the most was his refusal to explain a story from the New Testament that I found especially hard to believe: the story of the demoniac.

This story from Mark 5:1-20 relates how Jesus and the disciples go to the region of the Gerasenes and there encounter a man who is possessed by evil spirits. This demoniac – a self-imposed outcast from society – lived at the outskirts of town and ‘night and day among the tombs and in the hills he would cry out and cut himself with stones’. The grossest part of the story, however, isn’t the self-mutilation. It’s the demoniac’s insane refusal to accept help. When Jesus approached him, the demoniac threw himself to the ground and wailed: ‘What do you want with me? … In God’s name, don’t torture me!’ When you’re possessed by evil spirits, the worst thing in the world is to be healed. In short, the demoniac tells Jesus to bugger off, to leave him and his sharp little stones in his comfortable corner of hell.

When I first read about the demoniac, I was admittedly scared, but I eventually convinced myself that the parable was a manipulative attempt to persuade unbelievers such as me to find religion. And I wasn’t buying it. But when I entered university, went into philosophy, and began to cultivate an agnosticism that one might call atheism, I discovered that many a philosopher had been drawn to this scary story. So I took a second look.

The Danish philosopher Søren Kierkegaard, who spent years analysing the psychological and ethical dimensions of the demoniac, tells us that being demonic is more common than we might like to admit. He points out that when Jesus heals the possessed man, the spirits are exorcised en masse, flying out together as ‘the Legion’ – a vast army of evil forces. There are more than enough little demons to go around, and this explains why they come to roost in some rather mundane places. In Kierkegaard’s words: ‘One may hear the drunkard say: “Let me be the filth that I am.”’ Or, leave me alone with my bottle and let me ruin my life, thank you very much. I heard this first from my father, and then from an increasing number of close friends, and most recently from a voice that occasionally keeps me up at night when everyone else is asleep.

Those who are the most pointedly afflicted are often precisely those who are least able to recognise their affliction, or to save themselves. And those with the resources to rescue themselves are usually already saved. As Kierkegaard suggests, the virtue of sobriety makes perfect sense to one who is already sober. Eating well is second nature to the one who is already healthy; saving money is a no-brainer for one who is already rich; truth-telling is the good habit of one who is already honest. But for those in the grips of crisis or sin, getting out usually doesn’t make much sense.

Sharp stones can take a variety of forms.

In The Concept of Anxiety (1844), Kierkegaard tells us that the ‘essential nature of [the demoniac] is anxiety about the good’. I’ve been ‘anxious’ about many things – about exams, about spiders, about going to sleep – but Kierkegaard explains that the feeling I have about these nasty things isn’t anxiety at all. It’s fear. Anxiety, on the other hand, has no particular object. It is the sense of uneasiness that one has at the edge of a cliff, or climbing a ladder, or thinking about the prospects of a completely open future – it isn’t fear per se, but the feeling that we get when faced with possibility. It’s the unsettling feeling of freedom. Yes, freedom, that most precious of modern watchwords, is deeply unsettling.

What does this have to do with our demoniac? Everything. Kierkegaard explains that the demoniac reflects ‘an unfreedom that wants to close itself off’; when confronted with the possibility of being healed, he wants nothing to do with it. The free life that Jesus offers is, for the demoniac, pure torture. I’ve often thought that this is the fate of the characters in Jean-Paul Sartre’s play No Exit (1944): they are always free to leave, but leaving seems beyond impossible.

Yet Jesus manages to save the demoniac. And I wanted my pastor to tell me how. At the time, I chalked up most of the miracles from the Bible to exaggeration, or interpretation, or poetic licence. But the healing of the demoniac – unlike the bread and fish and resurrection – seemed really quite fantastic. So how did Jesus do it? I didn’t get a particularly good answer from my pastor, so I left the Church. And never came back.

Today, I still want to know.

I’m not here to explain the salvation of the demoniac. I’m here only to observe, as carefully as I can, that this demonic situation is a problem. Indeed, I suspect it is the problem for many, many readers. The demoniac reflects what theologians call the ‘religious paradox’, namely that it is impossible for fallen human beings – such craven creatures – to bootstrap themselves to heaven. Any redemptive resources at our disposal are probably exactly as botched as we are.

There are many ways to distract ourselves from this paradox – and we are very good at manufacturing them: movies and alcohol and Facebook and all the fixations and obsessions of modern life. But at the end of the day, these are pitifully little comfort.

So this year, as New Year’s Day recedes from memory and the winter darkness remains, I am making a resolution: I will try not to take all the usual escapes. Instead, I will try to simply sit with the plight of the demoniac, to ‘stew in it’ as my mother used to say, for a minute or two more. In his essay ‘Self-will’ (1919), the German author Hermann Hesse put it thus: ‘If you and you … are in pain, if you are sick in body or soul, if you are afraid and have a foreboding of danger – why not, if only to amuse yourselves … try to put the question in another way? Why not ask whether the source of your pain might not be you yourselves?’ I will not reach for my familiar demonic stones, blood-spattered yet comforting. I will ask why I need them in the first place. When I do this, and attempt to come to terms with the demoniac’s underlying suffering, I might notice that it is not unique to me.

When I do, when I let go of the things that I think are going to ease my suffering, I might have the chance to notice that I am not alone in my anxiety. And maybe this is recompense enough. Maybe this is freedom and the best that I can hope for.


This article was originally published at Aeon and has been republished under Creative Commons.

Modern Technology is akin to the Metaphysics of Vedanta


Akhandadhi Das | Aeon Ideas

You might think that digital technologies, often considered a product of ‘the West’, would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend the quantum physics of the 20th century.

The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural nets and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.

Vedanta offers a model to integrate subjective consciousness and the information-processing systems of our body and brains. Its theory separates the brain and the senses from the mind. But it also distinguishes the mind from the function of consciousness, which it defines as the ability to experience mental output. We’re familiar with this notion from our digital devices. A camera, microphone or other sensors linked to a computer gather information about the world, and convert the various forms of physical energy – light waves, air pressure-waves and so forth – into digital data, just as our bodily senses do. The central processing unit processes this data and produces relevant outputs. The same is true of our brain. In both contexts, there seems to be little scope for subjective experience to play a role within these mechanisms.

While computers can handle all sorts of processing without our help, we furnish them with a screen as an interface between the machine and ourselves. Similarly, Vedanta postulates that the conscious entity – something it terms the atma – is the observer of the output of the mind. The atma possesses, and is said to be composed of, the fundamental property of consciousness. The concept is explored in many of the meditative practices of Eastern traditions.

You might think of the atma like this. Imagine you’re watching a film in the cinema. It’s a thriller, and you’re anxious about the lead character, trapped in a room. Suddenly, the door in the movie crashes open and there stands… You jump, as if startled. But what is the real threat to you, other than maybe spilling your popcorn? By suspending an awareness of your body in the cinema, and identifying with the character on the screen, you allow your emotional state to be manipulated. Vedanta suggests that the atma, the conscious self, identifies with the physical world in a similar fashion.

This idea can also be explored in the all-consuming realm of VR. On entering a game, we might be asked to choose our character or avatar – originally a Sanskrit word, aptly enough, meaning ‘one who descends from a higher dimension’. In older texts, the term often refers to divine incarnations. However, the etymology suits the gamer, as he or she chooses to descend from ‘normal’ reality and enter the VR world. Having specified our avatar’s gender, bodily features, attributes and skills, next we learn how to control its limbs and tools. Soon, our awareness diverts from our physical self to the VR capabilities of the avatar.

In Vedanta psychology, this is akin to the atma adopting the psychological persona-self it calls the ahankara, or the ‘pseudo-ego’. Instead of a detached conscious observer, we choose to define ourselves in terms of our social connections and the physical characteristics of the body. Thus, I come to believe in myself with reference to my gender, race, size, age and so forth, along with the roles and responsibilities of family, work and community. Conditioned by such identification, I indulge in the relevant emotions – some happy, some challenging or distressing – produced by the circumstances I witness myself undergoing.

Within a VR game, our avatar represents a pale imitation of our actual self and its entanglements. In our interactions with the avatar-selves of others, we might reveal little about our true personality or feelings, and know correspondingly little about others’. Indeed, encounters among avatars – particularly when competitive or combative – are often vitriolic, seemingly unrestrained by concern for the feelings of the people behind the avatars. Connections made through online gaming aren’t a substitute for other relationships. Rather, as researchers at Johns Hopkins University have noted, gamers with strong real-world social lives are less likely to fall prey to gaming addiction and depression.

These observations mirror the Vedantic claim that our ability to form meaningful relationships is diminished by absorption in the ahankara, the pseudo-ego. The more I regard myself as a physical entity requiring various forms of sensual gratification, the more likely I am to objectify those who can satisfy my desires, and to forge relationships based on mutual selfishness. But Vedanta suggests that love should emanate from the deepest part of the self, not its assumed persona. Love, it claims, is soul-to-soul experience. Interactions with others on the basis of the ahankara offer only a parody of affection.

As the atma, we remain the same subjective self throughout the whole of our life. Our body, mentality and personality change dramatically – but throughout it all, we know ourselves to be the constant observer. However, seeing everything shift and give way around us, we suspect that we’re also subject to change, ageing and heading for annihilation. Yoga, as systematised by Patanjali – an author or authors, like ‘Homer’, who lived in the 2nd century BCE – is intended to be a practical method for freeing the atma from relentless mental tribulation, and to be properly situated in the reality of pure consciousness.

In VR, we’re often called upon to do battle with evil forces, confronting jeopardy and virtual mortality along the way. Despite our efforts, the inevitable almost always happens: our avatar is killed. Game over. Gamers, especially pathological gamers, are known to become deeply attached to their avatars, and can suffer distress when their avatars are harmed. Fortunately, we’re usually offered another chance: Do you want to play again? Sure enough, we do. Perhaps we create a new avatar, someone more adept, based on the lessons learned last time around. This mirrors the Vedantic concept of reincarnation, specifically in its form of metempsychosis: the transmigration of the conscious self into a new physical vehicle.

Some commentators interpret Vedanta as suggesting that there is no real world, and that all that exists is conscious awareness. However, a broader take on Vedantic texts is more akin to VR. The VR world is wholly data, but it becomes ‘real’ when that information manifests itself to our senses as imagery and sounds on the screen or through a headset. Similarly, for Vedanta, it is the external world’s transitory manifestation as observable objects that makes it less ‘real’ than the perpetual, unchanging nature of the consciousness that observes it.

To the sages of old, immersing ourselves in the ephemeral world means allowing the atma to succumb to an illusion: the illusion that our consciousness is somehow part of an external scene, and must suffer or enjoy along with it. It’s amusing to think what Patanjali and the Vedantic fathers would make of VR: an illusion within an illusion, perhaps, but one that might help us to grasp the potency of their message.


This article was originally published at Aeon and has been republished under Creative Commons.


How Al-Farabi drew on Plato to argue for censorship in Islam


Andrew Shiva / Wikipedia

Rashmee Roshan Lall | Aeon Ideas

You might not be familiar with the name Al-Farabi, a 10th-century thinker from Baghdad, but you know his work, or at least its results. Al-Farabi was, by all accounts, a man of steadfast Sufi persuasion and unvaryingly simple tastes. As a labourer in a Damascus vineyard before settling in Baghdad, he favoured a frugal diet of lambs’ hearts and water mixed with sweet basil juice. But in his political philosophy, Al-Farabi drew on a rich variety of Hellenic ideas, notably from Plato and Aristotle, adapting and extending them in order to respond to the flux of his times.

The situation in the mighty Abbasid empire in which Al-Farabi lived demanded a delicate balancing of conservatism with radical adaptation. Against the backdrop of growing dysfunction as the empire became a shrunken version of itself, Al-Farabi formulated a political philosophy conducive to civic virtue, justice, human happiness and social order.

But his real legacy might be the philosophical rationale that Al-Farabi provided for controlling creative expression in the Muslim world. In so doing, he completed the aniconic (or antirepresentational) project begun in the late seventh century by a caliph of the Umayyads, the first Muslim dynasty. Caliph Abd al-Malik did it with nonfigurative images on coins and calligraphic inscriptions on the Dome of the Rock in Jerusalem, the first monument of the new Muslim faith. This heralded Islamic art’s break from the Greco-Roman representative tradition. A few centuries later, Al-Farabi took the notion of creative control to new heights by arguing for restrictions on representation through the word. He did it using solidly Platonic concepts, and can justifiably be said to have helped concretise the way Islam understands and responds to creative expression.

Word portrayals of Islam and its prophet can be deemed sacrilegious just as much as representational art. The consequences of Al-Farabi’s rationalisation of representational taboos are apparent in our times. In 1989, Iran’s Ayatollah Khomeini issued a fatwa sentencing Salman Rushdie to death for writing The Satanic Verses (1988). The book outraged Muslims for its fictionalised account of Prophet Muhammad’s life. In 2001, the Taliban blew up the sixth-century Bamiyan Buddhas in Afghanistan. In 2005, controversy erupted over the publication by the Danish newspaper Jyllands-Posten of cartoons depicting the Prophet. The cartoons continued to ignite fury in some way or other for at least a decade. There were protests across the Middle East, attacks on Western embassies after several European papers reprinted the cartoons, and in 2008 Osama bin Laden issued an incendiary warning to Europe of ‘grave punishment’ for its ‘new Crusade’ against Islam. In 2015, the offices of Charlie Hebdo, a satirical magazine in Paris that habitually offended Muslim sensibilities, were attacked by gunmen, killing 12. The magazine had featured Michel Houellebecq’s novel Submission (2015), a futuristic vision of France under Islamic rule.

In a sense, the destruction of the Bamiyan Buddhas was no different from the Rushdie fatwa, which was like the Danish cartoons fallout and the violence wreaked on Charlie Hebdo’s editorial staff. All are linked by the desire to control representation, be it through imagery or the word.

Control of the word was something that Al-Farabi appeared to judge necessary if Islam’s biggest project – the multiethnic commonwealth that was the Abbasid empire – was to be preserved. Figural representation was pretty much settled as an issue for Muslims when Al-Farabi would have been pondering some of his key theories. Within 30 years of the Prophet’s death in 632, art and creative expression took two parallel paths depending on the context for which they were intended. There was art for the secular space, such as the palaces and bathhouses of the Umayyads (661-750). And there was the art considered appropriate for religious spaces – mosques and shrines such as the Dome of the Rock (completed in 691). Caliph Abd al-Malik had already engaged in what has been called a ‘polemic of images’ on coinage with his Byzantine counterpart, Emperor Justinian II. Ultimately, Abd al-Malik issued coins inscribed with the phrases ‘ruler of the orthodox’ and ‘representative [caliph] of Allah’ rather than his portrait. And the Dome of the Rock had script rather than representations of living creatures as a decoration. The lack of image had become an image. In fact, the word was now the image. That is why calligraphy became the greatest of Muslim art forms. The importance of the written word – its absorption and its meaning – was also exemplified by the Abbasids’ investment in the Greek-to-Arabic translation movement from the eighth to the 10th centuries.

Consequently, in Al-Farabi’s time, what was most important for Muslims was to control representation through the word. Christian iconophiles made their case for devotional images with the argument that words have the same representative power as paintings. Words are like icons, declared the iconophile Christian priest Theodore Abu Qurrah, who lived in dar-al Islam and wrote in Arabic in the ninth century. And images, he said, are the writing of the illiterate.

Al-Farabi was concerned about the power – for good or ill – of writings at a time when the Abbasid empire was in decline. He held creative individuals responsible for what they produced. Abbasid caliphs increasingly faced a crisis of authority, both moral and political. This led Al-Farabi – one of the Arab world’s most original thinkers – to extrapolate from topical temporal matters the key issues confronting Islam and its expanding and diverse dominions.

Al-Farabi fashioned a political philosophy that naturalised Plato’s imaginary ideal state for the world to which he belonged. He tackled the obvious issue of leadership, reminding Muslim readers of the need for a philosopher-king, a ‘virtuous ruler’ to preside over a ‘virtuous city’, which would be run on the principles of ‘virtuous religion’.

Like Plato, Al-Farabi suggested creative expression should support the ideal ruler, thus shoring up the virtuous city and the status quo. Just as Plato in the Republic demanded that poets in the ideal state tell stories of unvarying good, especially about the gods, Al-Farabi’s treatises mention ‘praiseworthy’ poems, melodies and songs for the virtuous city. Al-Farabi commended as ‘most venerable’ for the virtuous city the sorts of writing ‘used in the service of the supreme ruler and the virtuous king.’

It is this idea of writers following the approved narrative that most clearly joins Al-Farabi’s political philosophy to that of the man he called Plato the ‘Divine’. When Al-Farabi seized on Plato’s argument for ‘a censorship of the writers’ as a social good for Muslim society, he was making a case for managing the narrative by controlling the word. It would be important to the next phase of Islamic image-building.

Some of Al-Farabi’s ideas might have influenced other prominent Muslim thinkers, including the Persian polymath Ibn Sina, or Avicenna, (c980-1037) and the Persian theologian Al-Ghazali (c1058-1111). Certainly, his rationalisation for controlling creative writing enabled a further move to deny legitimacy to new interpretation.


This article was originally published at Aeon and has been republished under Creative Commons.

What Einstein Meant by ‘God Does Not Play Dice’

Einstein with his second wife Elsa, 1921. Wikipedia.

Jim Baggott | Aeon Ideas

‘The theory produces a good deal but hardly brings us closer to the secret of the Old One,’ wrote Albert Einstein in December 1926. ‘I am at all events convinced that He does not play dice.’

Einstein was responding to a letter from the German physicist Max Born. The heart of the new theory of quantum mechanics, Born had argued, beats randomly and uncertainly, as though suffering from arrhythmia. Whereas physics before the quantum had always been about doing this and getting that, the new quantum mechanics appeared to say that when we do this, we get that only with a certain probability. And in some circumstances we might get the other.

Einstein was having none of it, and his insistence that God does not play dice with the Universe has echoed down the decades, as familiar and yet as elusive in its meaning as E = mc². What did Einstein mean by it? And how did Einstein conceive of God?

Hermann and Pauline Einstein were nonobservant Ashkenazi Jews. Despite his parents’ secularism, the nine-year-old Albert discovered and embraced Judaism with some considerable passion, and for a time he was a dutiful, observant Jew. Following Jewish custom, his parents would invite a poor scholar to share a meal with them each week, and from the impoverished medical student Max Talmud (later Talmey) the young and impressionable Einstein learned about mathematics and science. He consumed all 21 volumes of Aaron Bernstein’s joyful Popular Books on Natural Science (1880). Talmud then steered him in the direction of Immanuel Kant’s Critique of Pure Reason (1781), from which he migrated to the philosophy of David Hume. From Hume, it was a relatively short step to the Austrian physicist Ernst Mach, whose stridently empiricist, seeing-is-believing brand of philosophy demanded a complete rejection of metaphysics, including notions of absolute space and time, and the existence of atoms.

But this intellectual journey had mercilessly exposed the conflict between science and scripture. The now 12-year-old Einstein rebelled. He developed a deep aversion to the dogma of organised religion that would last for his lifetime, an aversion that extended to all forms of authoritarianism, including any kind of dogmatic atheism.

This youthful, heavy diet of empiricist philosophy would serve Einstein well some 14 years later. Mach’s rejection of absolute space and time helped to shape Einstein’s special theory of relativity (including the iconic equation E = mc²), which he formulated in 1905 while working as a ‘technical expert, third class’ at the Swiss Patent Office in Bern. Ten years later, Einstein would complete the transformation of our understanding of space and time with the formulation of his general theory of relativity, in which the force of gravity is replaced by curved spacetime. But as he grew older (and wiser), he came to reject Mach’s aggressive empiricism, and once declared that ‘Mach was as good at mechanics as he was wretched at philosophy.’

Over time, Einstein evolved a much more realist position. He preferred to accept the content of a scientific theory realistically, as a contingently ‘true’ representation of an objective physical reality. And, although he wanted no part of religion, the belief in God that he had carried with him from his brief flirtation with Judaism became the foundation on which he constructed his philosophy. When asked about the basis for his realist stance, he explained: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, at least to some extent, to human reason.’

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

Earlier in 1926, the Austrian physicist Erwin Schrödinger had radically transformed the theory by formulating it in terms of rather obscure ‘wavefunctions’. Schrödinger himself preferred to interpret these realistically, as descriptive of ‘matter waves’. But a consensus was growing, strongly promoted by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, that the new quantum representation shouldn’t be taken too literally.

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wavefunction represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wavefunction is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’. He could not accept that his God would allow the ‘lawful harmony’ to unravel so completely at the atomic scale, bringing lawless indeterminism and uncertainty, with effects that can’t be entirely and unambiguously predicted from their causes.

The stage was thus set for one of the most remarkable debates in the entire history of science, as Bohr and Einstein went head-to-head on the interpretation of quantum mechanics. It was a clash of two philosophies, two conflicting sets of metaphysical preconceptions about the nature of reality and what we might expect from a scientific representation of this. The debate began in 1927, and although the protagonists are no longer with us, the debate is still very much alive.

And unresolved.

I don’t think Einstein would have been particularly surprised by this. In February 1954, just 14 months before he died, he wrote in a letter to the American physicist David Bohm: ‘If God created the world, his primary concern was certainly not to make its understanding easy for us.’


This article was originally published at Aeon and has been republished under Creative Commons.

Being ‘interesting’ Is Not an Objective Feature of the World


Colour-composite image of the Carina Nebula. Courtesy ESO

Lorraine L Besser | Aeon Ideas

Most of us know and value pleasant experiences. We savour the taste of a freshly picked strawberry. We laugh more than an event warrants, just because laughing feels good. We might argue about the degree to which such pleasant experiences are valuable, and the extent to which they ought to shape our lives, but we can’t deny their value.

So pleasant experiences are necessarily valuable, but are there also valuable experiences that are not necessarily pleasant? It seems there are. Often, we have experiences that captivate us, that we cherish even though they are not entirely pleasant. We read a novel that leads us to feel both horror and awe. We binge-watch a TV show that explores the shocking course of moral corruption of someone who could be your neighbour, friend, even your spouse. The experience is both painful and horrifying, but we can’t turn it off.

These experiences seem intuitively valuable in the same way that pleasant experiences are intuitively valuable. But they are not valuable because they are pleasant – rather, they are valuable by virtue of being interesting.

What does it mean for an experience to be interesting? First, to say that something is interesting is to describe what the experience feels like to the person undergoing it. This is the phenomenological quality of the experience. When we study the phenomenology of something, we examine what it feels like, from the inside, to experience that thing. For instance, most of us would describe eating our favourite foods as a pleasurable experience: the food itself isn’t pleasurable, but the experience of eating it is. Similarly, when we talk about something being beautiful or awe-inspiring, we aren’t describing the thing itself, but rather our experience of it. We see the sunset and feel moved by it; the beauty is something we experience. Likewise the awe it inspires is a feature of our experiential reaction to it. The interesting is just like this. It is a feature of our experiential reaction, of our engagement.

We don’t always use the word ‘interesting’ in this way. In ordinary language, we often describe the objects of experience as interesting. We talk about interesting books, interesting people, and so on. When we say that a book is interesting, we more likely mean that the experience of reading the book is interesting. It just doesn’t make sense to describe a book as objectively interesting, independently of people experiencing it as interesting. How could a book be interesting without being read? And if a book is objectively interesting, shouldn’t we all find it interesting? We don’t all find the same things to be interesting. It is a common experience for something to be interesting to one person, yet not another. So while we might describe objects as interesting, we should recognise that this is a loose, and shorthand, way to describe what’s really interesting – our experience of them.

Another way in which we use the word ‘interesting’ is in the context of describing what a person is interested in: John is interested in Second World War novels, for example. This usage also differs from what I’m describing as the ‘interesting’. It describes a particular fit between one’s interests and the objects of one’s experiences. But notice that fitting with your interests, and being interested in something, is actually a different experience to finding something interesting. We’ve all been interested in things that turn out to be boring, and we’ve all found experiences interesting when we had no prior interest in them. The interesting is thus not an objective feature of an object, nor an experience that necessarily aligns or follows from your interests. It is rather a feature of our experiences.

To say that something is interesting is also to describe a particular kind of synthesis that arises within the experience. Whenever we engage in an activity, we bring to that experience some combination of expectations, likes/desires, beliefs, curiosity, and so forth. This package contributes to the activity delivering a particular subjective experience. There is a synthesis, specific to the individual’s engagement, that determines what her experience feels like – its phenomenological quality. It is within this synthesis that a person finds an experience interesting, or not. There is no one synthesis that makes an experience interesting. Sometimes, a clash of expectations and reality makes something interesting, sometimes someone’s curiosity allows one to notice features that make an activity interesting, and so on. Because the interesting lies within a synthesis between the individual and an activity, one individual can find something interesting (say, reading philosophy) that another person doesn’t.

The synthesis is complex, unique to the subject and the experience – and, in the end, unspecifiable. This is why we tend to overlook the interesting as a valuable feature of our experiences. Pleasure, by contrast, is a fairly uniform feature of experience. We know exactly what others are talking about when they talk about pleasurable experiences, and can relate to that experience in a personal way – even if it is something that we have not experienced as pleasurable. Our reactions to the experiences that others find interesting are often different. John finds reading Second World War novels to be an ongoing source of interest, yet Julia can’t imagine a more boring way of spending her time, and can’t understand how anyone would find them interesting. In such scenarios, we are more likely to discredit the value of John’s experience than to try to understand and appreciate it. Because the interesting is by nature a more complicated, harder-to-reach, harder-to-describe feature than others, we rarely stop to think about just what the interesting is.

While wrapping our head around the interesting might be challenging, it is important to acknowledge the value intrinsic to interesting experiences. Recognising it as valuable validates those who choose to pursue the interesting, and also opens up a new dimension of value that can enrich our lives. Most of us know there is more to life than pleasure, yet it is all too easy to choose our experiences for the sake of pleasure. For many of us, though, interesting experiences are more rewarding than pleasurable experiences, insofar as their intrinsic value is a product of multifaceted aspects of our engagement. Interesting experiences spark the mind in a way that stimulates and lingers. They can also be easy to come by – sometimes just a sense of curiosity is needed to make an activity interesting. Look around, feel the pull, and cherish the interesting.

Lorraine L Besser

This article was originally published at Aeon and has been republished under Creative Commons.

Why Atheists are Not as Rational as Some Like to Think


Richard Dawkins, author, evolutionary biologist and emeritus fellow of New College, University of Oxford, is one of the world’s most prominent atheists.
Fronteiras do Pensamento/Wikipedia, CC BY-SA

By Lois Lee, University of Kent

Many atheists think that their atheism is the product of rational thinking. They use arguments such as “I don’t believe in God, I believe in science” to explain that evidence and logic, rather than supernatural belief and dogma, underpin their thinking. But just because you believe in evidence-based, scientific research – which is subject to strict checks and procedures – doesn’t mean that your mind works in the same way.

When you ask atheists about why they became atheists (as I do for a living), they often point to eureka moments when they came to realise that religion simply doesn’t make sense.

Oddly perhaps, many religious people actually take a similar view of atheism. This comes out when theologians and other theists speculate that it must be rather sad to be an atheist, lacking (as they think atheists do) so much of the philosophical, ethical, mythical and aesthetic fulfilments that religious people have access to – stuck in a cold world of rationality only.

The Science of Atheism

The problem that any rational thinker needs to tackle, though, is that the science increasingly shows that atheists are no more rational than theists. Indeed, atheists are just as susceptible as the next person to “group-think” and other non-rational forms of cognition. For example, religious and nonreligious people alike can end up following charismatic individuals without questioning them. And our minds often prefer righteousness over truth, as the social psychologist Jonathan Haidt has explored.

Even atheist beliefs themselves have much less to do with rational inquiry than atheists often think. We now know, for example, that nonreligious children of religious parents cast off their beliefs for reasons that have little to do with intellectual reasoning. The latest cognitive research shows that the decisive factor is learning from what parents do rather than from what they say. So if a parent says that they’re Christian, but they’ve fallen out of the habit of doing the things they say should matter – such as praying or going to church – their kids simply don’t buy the idea that religion makes sense.

This is perfectly rational in a sense, but children aren’t processing this on a cognitive level. Throughout our evolutionary history, humans have often lacked the time to scrutinise and weigh up the evidence – needing to make quick assessments. That means that children to some extent just absorb the crucial information, which in this case is that religious belief doesn’t appear to matter in the way that parents are saying it does.

Children’s choices often aren’t based on rational thinking.
Anna Nahabed/Shutterstock

Even older children and adolescents who actually ponder the topic of religion may not be approaching it as independently as they think. Emerging research is demonstrating that atheist parents (and others) pass on their beliefs to their children in a similar way to religious parents – through sharing their culture as much as their arguments.

Some parents take the view that their children should choose their beliefs for themselves, but what they then do is pass on certain ways of thinking about religion, like the idea that religion is a matter of choice rather than divine truth. It’s not surprising that almost all of these children – 95% – end up “choosing” to be atheist.

Science versus Beliefs

But are atheists more likely to embrace science than religious people? Many belief systems can be more or less closely integrated with scientific knowledge. Some belief systems are openly critical of science, and think it has far too much sway over our lives, while other belief systems are hugely concerned to learn about and respond to scientific knowledge.

But this difference doesn’t neatly map onto whether you are religious or not. Some Protestant traditions, for example, see rationality or scientific thinking as central to their religious lives. Meanwhile, a new generation of postmodern atheists highlight the limits of human knowledge, and see scientific knowledge as hugely limited, problematic even, especially when it comes to existential and ethical questions. These atheists might, for example, follow thinkers like Charles Baudelaire in the view that true knowledge is only found in artistic expression.

Science can give us existential fulfilment, too.
Vladimir Pustovit/Flickr, CC BY-SA

And while many atheists do like to think of themselves as pro science, science and technology itself can sometimes be the basis of religious thinking or beliefs, or something very much like it. For example, the rise of the transhumanist movement, which centres on the belief that humans can and should transcend their current natural state and limitations through the use of technology, is an example of how technological innovation is driving the emergence of new movements that have much in common with religiosity.

Even for those atheists sceptical of transhumanism, the role of science isn’t only about rationality – it can provide the philosophical, ethical, mythical and aesthetic fulfilments that religious beliefs do for others. The science of the biological world, for example, is much more than a topic of intellectual curiosity – for some atheists, it provides meaning and comfort in much the same way that belief in God can for theists. Psychologists show that belief in science increases in the face of stress and existential anxiety, just as religious beliefs intensify for theists in these situations.

Clearly, the idea that being atheist is down to rationality alone is starting to look distinctly irrational. But the good news for all concerned is that rationality is overrated. Human ingenuity rests on a lot more than rational thinking. As Haidt says of “the righteous mind”, we are actually “designed to ‘do’ morality” – even if we’re not doing it in the rational way we think we are. The ability to make quick decisions, follow our passions and act on intuition are also important human qualities and crucial for our success.

It is helpful that we have invented something that, unlike our minds, is rational and evidence-based: science. When we need proper evidence, science can very often provide it – as long as the topic is testable. Importantly, the scientific evidence does not tend to support the view that atheism is about rational thought and theism is about existential fulfilments. The truth is that humans are not like science – none of us get by without irrational action, nor without sources of existential meaning and comfort. Fortunately, though, nobody has to.

Lois Lee, Research Fellow, Department of Religious Studies, University of Kent

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Religion is About Emotion Regulation, and It’s Very Good at It


Stephen T Asma | Aeon Ideas

Religion does not help us to explain nature. It did what it could in pre-scientific times, but that job was properly unseated by science. Most religious laypeople and even clergy agree: Pope John Paul II declared in 1996 that evolution is a fact and Catholics should get over it. No doubt some extreme anti-scientific thinking lives on in such places as Ken Ham’s Creation Museum in Kentucky, but it has become a fringe position. Most mainstream religious people accept a version of Galileo’s division of labour: ‘The intention of the Holy Ghost is to teach us how one goes to heaven, not how heaven goes.’

Maybe, then, the heart of religion is not its ability to explain nature, but its moral power? Sigmund Freud, who referred to himself as a ‘godless Jew’, saw religion as delusional, but helpfully so. He argued that we humans are naturally awful creatures – aggressive, narcissistic wolves. Left to our own devices, we would rape, pillage and burn our way through life. Thankfully, we have the civilising influence of religion to steer us toward charity, compassion and cooperation by a system of carrots and sticks, otherwise known as heaven and hell.

The French sociologist Émile Durkheim, on the other hand, argued in The Elementary Forms of the Religious Life (1912) that the heart of religion was not its belief system or even its moral code, but its ability to generate collective effervescence: intense, shared experiences that unify individuals into cooperative social groups. Religion, Durkheim argued, is a kind of social glue, a view confirmed by recent interdisciplinary research.

While Freud and Durkheim were right about the important functions of religion, its true value lies in its therapeutic power, particularly its power to manage our emotions. How we feel is as important to our survival as how we think. Our species comes equipped with adaptive emotions, such as fear, rage, lust and so on: religion was (and is) the cultural system that dials these feelings and behaviours up or down. We see this clearly if we look at mainstream religion, rather than the deleterious forms of extremism. Mainstream religion reduces anxiety, stress and depression. It provides existential meaning and hope. It focuses aggression and fear against enemies. It domesticates lust, and it strengthens filial connections. Through story, it trains feelings of empathy and compassion for others. And it provides consolation for suffering.

Emotional therapy is the animating heart of religion. Social bonding happens not only when we agree to worship the same totems, but when we feel affection for each other. An affective community of mutual care emerges when groups share rituals, liturgy, song, dance, eating, grieving, comforting, tales of saints and heroes, hardships such as fasting and sacrifice. Theological beliefs are bloodless abstractions by comparison.

Emotional management is important because life is hard. The Buddha said: ‘All life is suffering’ and most of us past a certain age can only agree. Religion evolved to handle what I call the ‘vulnerability problem’. When we’re sick, we go to the doctor, not the priest. But when our child dies, or we lose our home in a fire, or we’re diagnosed with Stage-4 cancer, then religion is helpful because it provides some relief and some strength. It also gives us something to do, when there’s nothing we can do.

Consider how religion helps people after a death. Social mammals who have suffered separation distress are restored to health by touch, collective meals and grooming. Human grieving customs involve these same soothing prosocial mechanisms. We comfort-touch and embrace a person who has lost a loved one. Our bodies give ancient comfort directly to the grieving body. We provide the bereaved with food and drink, and we break bread with them (think of the Jewish tradition of shiva, or the visitation tradition of wakes in many cultures). We share stories about the loved one, and help the bereaved reframe their pain in larger optimistic narratives. Even music, in the form of consoling melodies and collective singing, helps to express shared sorrow and also transforms it from an unbearable and lonely experience to a bearable communal one. Social involvement from the community after a death can act as an antidepressant, boosting adaptive emotional changes in the bereaved.

Religion also helps to manage sorrow with something I’ll call ‘existential shaping’ or more precisely ‘existential debt’. It is common for Westerners to think of themselves as individuals first and as members of a community second, but our ideology of the lone protagonist fulfilling an individual destiny is more fiction than fact. Losing someone reminds us of our dependence on others and our deep vulnerability, and at such moments religion turns us toward the web of relations rather than away from it. Long after your parents have died, for example, religion helps you memorialise them and acknowledge your existential debt to them. Formalising the memory of the dead person, through funerary rites, or tomb-sweeping (Qingming) festivals in Asia, or the Day of the Dead in Mexico, or annual honorary masses in Catholicism, is important because it keeps reminding us, even through the sorrow, of the meaningful influence of these deceased loved ones. This is not a self-deception about the unreality of death, but an artful way of learning to live with it. The grief becomes transformed in the sincere acknowledgment of the value of the loved one, and religious rituals help people to set aside time and mental space for that acknowledgment.

An emotion such as grief has many ingredients. The physiological arousal of grief is accompanied by cognitive evaluations: ‘I will never see my friend again’; ‘I could have done something to prevent this’; ‘She was the love of my life’; and so on. Religions try to give the bereaved an alternative appraisal that reframes their tragedy as something more than just misery. Emotional appraisals are proactive, according to the psychologists Phoebe Ellsworth at the University of Michigan and Klaus Scherer at the University of Geneva, going beyond the immediate disaster to envision the possible solutions or responses. This is called ‘secondary appraisal’. After the primary appraisal (‘This is very sad’), the secondary appraisal assesses our ability to deal with the situation: ‘This is too much for me’ – or, positively: ‘I will survive this.’ Part of our ability to cope with suffering is our sense of power or agency: more power generally means better coping ability. If I acknowledge my own limitations when faced with unavoidable loss, but I feel that a powerful ally, God, is part of my agency or power, then I can be more resilient.

Because religious actions are often accompanied by magical thinking or supernatural beliefs, Christopher Hitchens argued in God Is Not Great (2007) that religion is ‘false consolation’. Many critics of religion echo his condemnation. But there is no such thing as false consolation. Hitchens and fellow critics are making a category mistake, like saying: ‘The colour green is sleepy.’ Consolation or comfort is a feeling, and it can be weak or strong, but it can’t be false or true. You can be false in your judgment of why you’re feeling better, but feeling better is neither true nor false. True and false applies only if we’re evaluating whether our propositions correspond with reality. And no doubt many factual claims of religion are false in that way – the world was not created in six days.

Religion is real consolation in the same way that music is real consolation. No one thinks that the pleasure of Mozart’s opera The Magic Flute is ‘false pleasure’ because singing flutes don’t really exist. It doesn’t need to correspond to reality. It’s true that some religious devotees, unlike music devotees, pin their consolation to additional metaphysical claims, but why should we trust them to know how religion works? Such believers do not recognise that their unthinking religious rituals and social activities are the true sources of their therapeutic healing. Meanwhile, Hitchens and other critics confuse the factual disappointments of religion with the value of religion generally, and thereby miss the heart of it.

Why We Need Religion: An Agnostic Celebration of Spiritual Emotions’ by Stephen Asma © 2018 is published by Oxford University Press.

Stephen T Asma

This article was originally published at Aeon and has been republished under Creative Commons.

The Varieties of Religious Experience


The Varieties of Religious Experience: A Study in Human Nature is a book by the Harvard University psychologist and philosopher William James (1842–1910), the first educator to offer a psychology course in the United States. He was one of the leading thinkers of the late nineteenth century and is believed by many to be one of the most influential philosophers the United States has ever produced, while others have labelled him the “Father of American psychology”.

Varieties comprises his edited Gifford Lectures on natural theology, which were delivered at the University of Edinburgh in Scotland in 1901 and 1902. The lectures concerned the nature of religion and the neglect of science in the academic study of religion.

Soon after its publication, Varieties entered the Western canon of psychology and philosophy and has remained in print for over a century.

James later developed his philosophy of pragmatism. There are many overlapping ideas in Varieties and his 1907 book, Pragmatism.

Religion, therefore, as I now ask you arbitrarily to take it, shall mean for us the feelings, acts, and experiences of individual men in their solitude, so far as they apprehend themselves to stand in relation to whatever they may consider the divine. Since the relation may be either moral, physical, or ritual, it is evident that out of religion in the sense in which we take it, theologies, philosophies, and ecclesiastical organizations may secondarily grow. Religion is a man’s total reaction upon life.

James was most interested in direct religious experiences. Theology and the organizational aspects of religion were of secondary interest. He believed that religious experiences were simply human experiences: “Religious happiness is happiness. Religious trance is trance.”

He believed that religious experiences can have “morbid origins” in brain pathology and can be irrational but nevertheless are largely positive. Unlike the bad ideas that people have under the influence of a high fever, after a religious experience, the ideas and insights usually remain and are often valued for the rest of the person’s life.

Under James’ pragmatism, the effectiveness of religious experiences proves their truth, whether they stem from religious practices or from drugs: “Nitrous oxide … stimulate[s] the mystical consciousness in an extraordinary degree.”

James had relatively little interest in the legitimacy or illegitimacy of religious experiences. Further, despite James’ examples being almost exclusively drawn from Christianity, he did not mean to limit his ideas to any single religion. Religious experiences are something that people sometimes have under certain conditions. In James’ description, these conditions are likely to be psychological or pharmaceutical rather than cultural.

Religion thus makes easy and felicitous what in any case is necessary; and if it be the only agency that can accomplish this result, its vital importance as a human faculty stands vindicated beyond dispute. It becomes an essential organ of our life, performing a function which no other portion of our nature can so successfully fulfill.

James believed that the origins of a religion shed little light upon its value. There is a distinction between an existential judgment (a judgment on “constitution, origin, and history”) and a proposition of value (a judgment on “importance, meaning, or significance”).

For example, if the founder of the Quaker religion, George Fox, had been a hereditary degenerate, the Quaker religion could yet be “a religion of veracity rooted in spiritual inwardness, and a return to something more like the original gospel truth than men had ever known in England.”

Furthermore, the potentially dubious psychological origins of religious beliefs apply just as well to non-religious beliefs:

Scientific theories are organically conditioned just as much as religious emotions are; and if we only knew the facts intimately enough, we should doubtless see “the liver” determining the dicta of the sturdy atheist as decisively as it does those of the Methodist under conviction anxious about his soul. Science… has ended by utterly repudiating the personal point of view.

James criticized scientists for ignoring unseen aspects of the universe. Science studies some of reality, but not all of it:

Vague impressions of something indefinable have no place in the rationalistic system…. Nevertheless, if we look on man’s whole mental life as it exists … we have to confess that the part of it of which rationalism can give an account is relatively superficial. It is the part that has the prestige undoubtedly, for it has the loquacity, it can challenge you for proofs, and chop logic, and put you down with words … Your whole subconscious life, your impulses, your faiths, your needs, your divinations, have prepared the premises, of which your consciousness now feels the weight of the result; and something in you absolutely knows that that result must be truer than any logic-chopping rationalistic talk, however clever, that may contradict it.

James saw “healthy-mindedness” as America’s main contribution to religion. This is the religious experience of optimism and positive thinking which James sees running from the transcendentalists Emerson and Whitman to Mary Baker Eddy’s Christian Science. At the extreme, the “healthy-minded” see sickness and evil as an illusion. James considered belief in the “mind cure” to be reasonable when compared to medicine as practiced at the beginning of the twentieth century.

The “sick souls” (“morbid-mindedness” / the “twice-born”) are merely those who hit bottom before their religious experience; those whose redemption gives relief from the pains they suffered beforehand. By contrast, the “healthy-minded” deny the need for such preparatory pain or suffering. James believes that “morbid-mindedness ranges over the wider scale of experience” and that while healthy-mindedness is a surprisingly effective “religious solution”,

healthy-mindedness is inadequate as a philosophical doctrine, because the evil facts which it refuses positively to account for are a genuine portion of reality; and they may after all be the best key to life’s significance, and possibly the only openers of our eyes to the deepest levels of truth.

James sees the two types as being a mere matter of temperament: the healthy minded having a “constitutional incapacity for prolonged suffering”; the morbid-minded being those prone to “religious melancholia”.

The basenesses so commonly charged to religion’s account are thus, almost all of them, not chargeable at all to religion proper, but rather to religion’s wicked practical partner, the spirit of corporate dominion. And the bigotries are most of them in their turn chargeable to religion’s wicked intellectual partner, the spirit of dogmatic dominion, the passion for laying down the law in the form of an absolutely closed-in theoretic system.

For James, a saintly character is one where “spiritual emotions are the habitual centre of the personal energy.” James states that saintliness includes:

1. A feeling of being in a wider life than that of this world’s selfish little interests; and a conviction … of the existence of an Ideal Power.

2. A sense of the friendly continuity of the ideal power with our own life, and a willing self-surrender to its control.

3. An immense elation and freedom, as the outlines of the confining selfhood melt down.

4. A shifting of the emotional centre towards loving and harmonious affections, towards “yes, yes” and away from “no,” where the claims of the non-ego are concerned.

For James, the practical consequences of saintliness are asceticism (pleasure in sacrifice), strength of soul (a “blissful equanimity” free from anxieties), purity (a withdrawal from the material world), and charity (tenderness to those most would naturally disdain).

James identified two main features to a mystical experience:

Ineffability —”No adequate report of its contents can be given in words. … its quality must be directly experienced; it cannot be imparted or transferred to others. … mystical states are more like states of feeling than like states of intellect. No one can make clear to another who has never had a certain feeling, in what the quality or worth of it consists.”

Noetic quality —”Although so similar to states of feeling, mystical states seem to those who experience them to be also states of knowledge. They are states of insight into depths of truth unplumbed by the discursive intellect. They are illuminations, revelations, full of significance and importance, all inarticulate though they remain; and as a rule they carry with them a curious sense of authority for after-time.”

He also identified two subsidiary features that are often, but not always, found with mystical experiences:

Transiency —”Mystical states cannot be sustained for long.”

Passivity —”The mystic feels as if his own will were in abeyance, and indeed sometimes as if he were grasped and held by a superior power.”

The only thing that religious experience, as we have studied it, unequivocally testifies to is that we can experience union with something larger than ourselves and in that union find our greatest peace.

Read Now: The Varieties of Religious Experience by William James (PDF)

Is Religion a Universal in Human Culture or an Academic Invention?

People give names to persons and things, and then suppose that if they know the names, they know that which the names refer to.

– Keiji Nishitani


Brett Colasacco | Aeon Ideas

If anything seems self-evident in human culture, it’s the widespread presence of religion. People do ‘religious’ stuff all the time; a commitment to gods, myths and rituals has been present in all societies. These practices and beliefs are diverse, to be sure, from Aztec human sacrifice to Christian baptism, but they appear to share a common essence. So what could compel the late Jonathan Zittell Smith, arguably the most influential scholar of religion of the past half-century, to declare in his book Imagining Religion: From Babylon to Jonestown (1982) that ‘religion is solely the creation of the scholar’s study’, and that it has ‘no independent existence apart from the academy’?

Smith wanted to dislodge the assumption that the phenomenon of religion needs no definition. He showed that what appears to us as religious says less about the ideas and practices themselves than it does about the framing concepts that we bring to their interpretation. Far from a universal phenomenon with a distinctive essence, the category of ‘religion’ emerges only through second-order acts of classification and comparison.

When Smith entered the field in the late 1960s, the academic study of religion was still quite young. In the United States, the discipline had been significantly shaped by the Romanian historian of religions Mircea Eliade, who, from 1957 until his death in 1986, taught at the University of Chicago Divinity School. There, Eliade trained a generation of scholars in the approach to religious studies that he had already developed in Europe.

What characterised religion, for Eliade, was ‘the sacred’ – the ultimate source of all reality. Simply put, the sacred was ‘the opposite of the profane’. Yet the sacred could ‘irrupt’ into profane existence in a number of predictable ways across archaic cultures and histories. Sky and earth deities were ubiquitous, for example; the Sun and Moon served as representations of rational power and cyclicality; certain stones were regarded as sacred; and water was seen as a source of potentiality and regeneration.

Eliade also developed the concepts of ‘sacred time’ and ‘sacred space’. According to Eliade, archaic man, or Homo religiosus, always told stories of what the gods did ‘in the beginning’. Archaic cultures consecrated time through repetitions of these cosmogonic myths, and dedicated sacred spaces according to their relationship to the ‘symbolism of the Centre’. This included the ‘sacred mountain’ or axis mundi – the archetypal point of intersection between the sacred and the profane – but also holy cities, palaces and temples. The exact myths, rituals and places were culturally and historically specific, of course, but Eliade saw them as examples of a universal pattern.

Smith was profoundly influenced by Eliade. As a graduate student, he set out to read nearly every work cited in the bibliographies of Eliade’s magnum opus, Patterns in Comparative Religion (1958). Smith’s move to join the faculty of the University of Chicago in 1968-69, he admitted, was motivated in part by a desire to work alongside his ‘master’. However, he soon began to set out his own intellectual agenda, which put him at odds with Eliade’s paradigm.

First, Smith challenged whether the Eliadean constructions of sacred time and sacred space were truly universal. He did not deny that these constructs mapped onto some archaic cultures quite well. But in his early essay ‘The Wobbling Pivot’ (1972), Smith noted that some cultures aspired to explode or escape from space and time, rather than revere or reify them. (Think of the various schools of Gnosticism that thrived during the first two centuries CE, which held that the material world was the work of a flawed, even malevolent spirit known as the demiurge, who was inferior to the true, hidden god.) Smith distinguished these ‘utopian’ patterns, which seek the sacred outside the prevailing natural and social order, from the ‘locative’ ones described by Eliade, which reinforce it – a move that undercut Eliade’s universalist vocabulary.

Second, Smith introduced a new self-awareness and humility to the study of religion. In the essay ‘Adde Parvum Parvo Magnus Acervus Erit’ (1971) – the title a quotation from Ovid, meaning ‘add a little to a little and there will be a great heap’ – Smith showed how comparisons of ‘religious’ data are laced with political and ideological values. What Smith identified as ‘Right-wing’ approaches, such as Eliade’s, strive for organic wholeness and unity; intertwined with this longing, he said, is a commitment to traditional social structures and authority. ‘Left-wing’ approaches, on the other hand, incline toward analysis and critique, which upset the established order and make possible alternative visions of society. By situating Eliade’s approach to religion on the conservative end of the spectrum, Smith did not necessarily intend to disparage it. Instead, he sought to distinguish these approaches so as to prevent scholars from carelessly combining them.

Behind Smith’s work was the motivating thesis that no theory or method for studying religion can be purely objective. Rather, the classifying devices we apply to decide whether something is ‘religious’ or not always rely on pre-existing norms. The selective taxonomy of ‘religious’ data from across cultures, histories and societies, Smith argued, is therefore a result of the scholar’s ‘imaginative acts of comparison and generalisation’. Where once we had the self-evident, universal phenomenon of religion, all that is left is a patchwork of particular beliefs, practices and experiences.

A vast number of traditions have existed over time that one could conceivably categorise as religions. But in order to decide one way or the other, an observer first has to formulate a definition according to which some traditions can be included and others excluded. As Smith wrote in the introduction to Imagining Religion: ‘while there is a staggering amount of data, of phenomena, of human experiences and expressions that might be characterised in one culture or another, by one criterion or another, as religious – there is no data for religion’. There might be evidence for various expressions of Hinduism, Judaism, Christianity, Islam and so forth. But these become ‘religions’ only through second-order, scholarly reflection. A scholar’s definition could even lead her to categorise some things as religions that are not conventionally thought of as such (Alcoholics Anonymous, for instance), while excluding others that are (certain strains of Buddhism).

Provocative and initially puzzling, Smith’s claim that religion ‘is created for the scholar’s analytic purposes’ is now widely accepted in the academy. Still, Smith reaffirmed his own critical appreciation for Eliade’s work in two of his last publications before his death in December 2017, and one of the final courses he taught at Chicago was a close reading of Patterns. Smith’s aim was never to exorcise Eliade from the field. His intention was instead to dispense with the temptations of self-evidence, to teach scholars of religion, whatever their preferred methods or political-ideological leanings, to be clear about the powers and limits of the decisions that they need to make. The student of religion, Smith said, must be self-conscious above all: ‘Indeed, this self-consciousness constitutes his primary expertise, his foremost object of study.’

Brett Colasacco

This article was originally published at Aeon and has been republished under Creative Commons.

Pragmatism & Postmodernism


To my best belief: just what is the pragmatic theory of truth?

Cheryl Misak | Aeon Ideas

What is it for something to be true? One might think that the answer is obvious. A true belief gets reality right: our words correspond to objects and relations in the world. But making sense of that idea involves one in ever more difficult workarounds to intractable problems. For instance, how do we account for the statement ‘It did not rain in Toronto on 20 May 2018’? There don’t seem to be negative facts in the world that might correspond to the belief. What about ‘Every human is mortal’? The generalisation covers all humans – past, present and future – not just the individuals who exist at any given time. (That is, a generalisation like ‘All Fs’ goes beyond the existing world of Fs, because ‘All Fs’ stretches into the future.) What about ‘Torture is wrong’? What are the objects in the world that might correspond to that? And what good is it explaining truth in terms of independently existing objects and facts, since we have access only to our interpretations of them?

Pragmatism can help us with some of these issues. The 19th-century American philosopher Charles Peirce, one of the founders of pragmatism, explained the core of this tradition beautifully: ‘We must not begin by talking of pure ideas, – vagabond thoughts that tramp the public roads without any human habitation, – but must begin with men and their conversation.’ Truth is a property of our beliefs. It is what we aim at, and is essentially connected to our practices of enquiry, action and evaluation. Truth, in other words, is the best that we could do.

The pragmatic theory of truth arose in Cambridge, Massachusetts in the 1870s, in a discussion group that included Peirce and William James. They called themselves the Metaphysical Club, with intentional irony. Though they shared the same broad outlook on truth, there was immediate disagreement about how to unpack the idea of the ‘best belief’. The debate stemmed from the different temperaments of Peirce and James.

Philosophy, James said, ‘is at once the most sublime and the most trivial of human pursuits. It works in the minutest crannies and it opens out the widest vistas.’ He was more a vista than a crannies man, dead set against technical philosophy. At the beginning of his book Pragmatism (1907), he said: ‘the philosophy which is so important to each of us is not a technical matter; it is our more or less dumb sense of what life honestly and deeply means.’ He wanted to write accessible philosophy for the public, and did so admirably. He became the most famous living academic in the United States.

The version of the pragmatist theory of truth made famous (or perhaps infamous) by James held that ‘Any idea upon which we can ride … any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, simplifying, saving labour, is … true INSTRUMENTALLY.’

‘Satisfactorily’ for James meant ‘more satisfactorily to ourselves, and individuals will emphasise their points of satisfaction differently. To a certain degree, therefore, everything here is plastic.’ He argued that if the available evidence underdetermines a matter, and if there are non-epistemic reasons for believing something (my people have always believed it, believing it would make me happier), then it is rational to believe it. He argued that if a belief in God has a positive impact on someone’s life, then it is true for that person. If it does not have a good impact on someone else’s life, it is not true for them.

Peirce, a crackerjack logician, was perfectly happy working in the crannies as well as opening out the vistas. He wrote much, but published little. A cantankerous man, Peirce described the difference in personality between himself and his friend James thus: ‘He so concrete, so living; I a mere table of contents, so abstract, a very snarl of twine.’

Peirce said that James’s version of the pragmatic theory of truth was ‘a very exaggerated utterance, such as injures a serious man very much’. It amounted to: ‘Oh, I could not believe so-and-so, because I should be wretched if I did.’ Peirce’s worries, in these days of fake news, are more pressing than ever.

On Peirce’s account, a belief is true if it would be ‘indefeasible’ or would not in the end be defeated by reasons, argument, evidence and the actions that ensue from it. A true belief is the belief that we would come to, were we to enquire as far as we could on a matter. He added an important rider: a true belief must be put in place in a manner ‘not extraneous to the facts’. We cannot believe something because we would like it to be true. The brute impinging of experience cannot be ignored.

The disagreement continues to this day. James influenced John Dewey (who, when a student at Johns Hopkins, avoided Peirce and his technical philosophy like the plague) and later Richard Rorty. Dewey argued that truth (although he tended to stay away from the word) is nothing more than a resolution of a problematic situation. Rorty, at his most extreme, held that truth is nothing more than what our peers will let us get away with saying. This radically subjective or plastic theory of truth is what is usually thought of as pragmatism.

Peirce, however, managed to influence a few people himself, despite being virtually unknown in his lifetime. One was the Harvard logician and Kant scholar C I Lewis. He argued for a position remarkably similar to what his student W V O Quine would take over (and fail to acknowledge as Lewis’s). Reality cannot be ‘alien’, wrote Lewis – ‘the only reality there for us is one delimited in concepts of the results of our own ways of acting’. We have something given to us in brute experience, which we then interpret. With all pragmatists, Lewis was set against conceptions of truth in which ‘the mind approaches the flux of immediacy with some godlike foreknowledge of principles’. There is no ‘natural light’, no ‘self-illuminating propositions’, no ‘innate ideas’ from which other certainties can be deduced. Our body of knowledge is a pyramid, with the most general beliefs, such as the laws of logic, at the top, and the least general, such as ‘all swans are birds’, at the bottom. When faced with recalcitrant experience, we make adjustments in this complex system of interrelated concepts. ‘The higher up a concept stands in our pyramid, the more reluctant we are to disturb it, because the more radical and far-reaching the results will be…’ But all beliefs are fallible, and we can indeed disturb any of them. A true belief would be one that survives this process of enquiry.

Lewis saw that the pragmatist theory of truth deals nicely with those beliefs that the correspondence theory stumbles over. For instance, there is no automatic bar to ethical beliefs being true. Beliefs about what is right and wrong might well be evaluable in ways similar to how other kinds of beliefs are evaluable – in terms of whether they fit with experience and survive scrutiny.

Cheryl Misak

This article was originally published at Aeon and has been republished under Creative Commons.