Why the Demoniac Stayed in his Comfortable Corner of Hell


Detail from The Drunkard (1912) by Marc Chagall. Courtesy Wikipedia

John Kaag | Aeon Ideas

I am not what one might call a religious man. I went to church, and then to confirmation class, under duress. My mother, whom I secretly regarded as more powerful than God, insisted that I go. So I went. Her insistence, however, had the unintended consequence of introducing me to a pastor whom I came to despise. So I eventually quit.

There were many problems with this pastor but the one that bothered me the most was his refusal to explain a story from the New Testament that I found especially hard to believe: the story of the demoniac.

This story from Mark 5:1-20 relates how Jesus and the disciples go to the town of Gerasenes and there encounter a man who is possessed by evil spirits. This demoniac – a self-imposed outcast from society – lived at the outskirts of town and ‘night and day among the tombs and in the hills he would cry out and cut himself with stones’. The grossest part of the story, however, isn’t the self-mutilation. It’s the demoniac’s insane refusal to accept help. When Jesus approached him, the demoniac threw himself to the ground and wailed: ‘What do you want with me? … In God’s name, don’t torture me!’ When you’re possessed by evil spirits, the worst thing in the world is to be healed. In short, the demoniac tells Jesus to bugger off, to leave him and his sharp little stones in his comfortable corner of hell.

When I first read about the demoniac, I was admittedly scared, but I eventually convinced myself that the parable was a manipulative attempt to persuade unbelievers such as me to find religion. And I wasn’t buying it. But when I entered university, went into philosophy, and began to cultivate an agnosticism that one might call atheism, I discovered that many a philosopher had been drawn to this scary story. So I took a second look.

The Danish philosopher Søren Kierkegaard, who spent years analysing the psychological and ethical dimensions of the demoniac, tells us that being demonic is more common than we might like to admit. He points out that when Jesus heals the possessed man, the spirits are exorcised en masse, flying out together as ‘the Legion’ – a vast army of evil forces. There are more than enough little demons to go around, and this explains why they come to roost in some rather mundane places. In Kierkegaard’s words: ‘One may hear the drunkard say: “Let me be the filth that I am.”’ Or, leave me alone with my bottle and let me ruin my life, thank you very much. I heard this first from my father, and then from an increasing number of close friends, and most recently from a voice that occasionally keeps me up at night when everyone else is asleep.

Those who are the most pointedly afflicted are often precisely those who are least able to recognise their affliction, or to save themselves. And those with the resources to rescue themselves are usually already saved. As Kierkegaard suggests, the virtue of sobriety makes perfect sense to one who is already sober. Eating well is second nature to the one who is already healthy; saving money is a no-brainer for one who is already rich; truth-telling is the good habit of one who is already honest. But for those in the grips of crisis or sin, getting out usually doesn’t make much sense.

Sharp stones can take a variety of forms.

In The Concept of Anxiety (1844), Kierkegaard tells us that the ‘essential nature of [the demoniac] is anxiety about the good’. I’ve been ‘anxious’ about many things – about exams, about spiders, about going to sleep – but Kierkegaard explains that the feeling I have about these nasty things isn’t anxiety at all. It’s fear. Anxiety, on the other hand, has no particular object. It is the sense of uneasiness that one has at the edge of a cliff, or climbing a ladder, or thinking about the prospects of a completely open future – it isn’t fear per se, but the feeling that we get when faced with possibility. It’s the unsettling feeling of freedom. Yes, freedom, that most precious of modern watchwords, is deeply unsettling.

What does this have to do with our demoniac? Everything. Kierkegaard explains that the demoniac reflects ‘an unfreedom that wants to close itself off’; when confronted with the possibility of being healed, he wants nothing to do with it. The free life that Jesus offers is, for the demoniac, pure torture. I’ve often thought that this is the fate of the characters in Jean-Paul Sartre’s play No Exit (1944): they are always free to leave, but leaving seems beyond impossible.

Yet Jesus manages to save the demoniac. And I wanted my pastor to tell me how. At the time, I chalked up most of the miracles from the Bible as exaggeration, or interpretation, or poetic licence. But the healing of the demoniac – unlike the bread and fish and resurrection – seemed really quite fantastic. So how did Jesus do it? I didn’t get a particularly good answer from my pastor, so I left the Church. And never came back.

Today, I still want to know.

I’m not here to explain the salvation of the demoniac. I’m here only to observe, as carefully as I can, that this demonic situation is a problem. Indeed, I suspect it is the problem for many, many readers. The demoniac reflects what theologians call the ‘religious paradox’, namely that it is impossible for fallen human beings – such craven creatures – to bootstrap themselves to heaven. Any redemptive resources at our disposal are probably exactly as botched as we are.

There are many ways to distract ourselves from this paradox – and we are very good at manufacturing them: movies and alcohol and Facebook and all the fixations and obsessions of modern life. But at the end of the day, these are pitifully little comfort.

So this year, as New Year’s Day recedes from memory and the winter darkness remains, I am making a resolution: I will try not to take all the usual escapes. Instead, I will try to simply sit with the plight of the demoniac, to ‘stew in it’ as my mother used to say, for a minute or two more. In his essay ‘Self-will’ (1919), the German author Hermann Hesse put it thus: ‘If you and you … are in pain, if you are sick in body or soul, if you are afraid and have a foreboding of danger – why not, if only to amuse yourselves … try to put the question in another way? Why not ask whether the source of your pain might not be you yourselves?’ I will not reach for my familiar demonic stones, blood-spattered yet comforting. I will ask why I need them in the first place. When I do this, and attempt to come to terms with the demoniac’s underlying suffering, I might notice that it is not unique to me.

When I do, when I let go of the things that I think are going to ease my suffering, I might have the chance to notice that I am not alone in my anxiety. And maybe this is recompense enough. Maybe this is freedom and the best that I can hope for.

John Kaag

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Having a Sense of Meaning in Life is Good for You – So How Do You Get One?


There’s a high degree of overlap between experiencing happiness and meaning.
Shutterstock/KieferPix


Lisa A Williams, UNSW

The pursuit of happiness and health is a popular endeavour, as the preponderance of self-help books would attest.

Yet it is also fraught. Despite ample advice from experts, individuals regularly engage in activities that may only have short-term benefit for well-being, or even backfire.

The search for the heart of well-being – that is, a nucleus from which other aspects of well-being and health might flow – has been the focus of decades of research. New findings recently reported in Proceedings of the National Academy of Sciences point towards an answer commonly overlooked: meaning in life.

Meaning in life: part of the well-being puzzle?

University College London’s psychology professor Andrew Steptoe and senior research associate Daisy Fancourt analysed a sample of 7,304 UK residents aged 50+ drawn from the English Longitudinal Study of Ageing.

Survey respondents answered a range of questions assessing social, economic, health, and physical activity characteristics, including:

…to what extent do you feel the things you do in your life are worthwhile?

Follow-up surveys two and four years later assessed those same characteristics again.

One key question addressed in this research is: what advantage might having a strong sense of meaning in life afford a few years down the road?

The data revealed that individuals reporting a higher meaning in life had:

  • lower risk of divorce
  • lower risk of living alone
  • increased connections with friends and engagement in social and cultural activities
  • lower incidence of new chronic disease and onset of depression
  • lower obesity and increased physical activity
  • increased adoption of positive health behaviours (exercising, eating fruit and veg).

On the whole, individuals with a higher sense of meaning in life a few years earlier were later living lives characterised by health and well-being.

You might wonder if these findings are attributable to other factors, or to factors already in play by the time participants joined the study. The authors undertook stringent analyses to account for this, which revealed largely similar patterns of findings.

The findings join a body of prior research documenting longitudinal relationships between meaning in life and social functioning, net wealth and reduced mortality, especially among older adults.

What is meaning in life?

The historical arc of consideration of the meaning in life (not to be confused with the meaning of life) starts as far back as Ancient Greece. It tracks through the popular works of people such as the Austrian neurologist and psychiatrist Viktor Frankl, and continues today in the field of psychology.

One definition, offered by well-being researcher Laura King and colleagues, says

…lives may be experienced as meaningful when they are felt to have a significance beyond the trivial or momentary, to have purpose, or to have a coherence that transcends chaos.

This definition is useful because it highlights three central components of meaning:

  1. purpose: having goals and direction in life
  2. significance: the degree to which a person believes his or her life has value, worth, and importance
  3. coherence: the sense that one’s life is characterised by predictability and routine.

Michael Steger’s TEDx talk What Makes Life Meaningful.


Curious about your own sense of meaning in life? You can take an interactive version of the Meaning in Life Questionnaire, developed by Steger and colleagues, yourself here.

This measure captures not just the presence of meaning in life (whether a person feels that their life has purpose, significance, and coherence), but also the desire to search for meaning in life.

Routes for cultivating meaning in life

Given the documented benefits, you may wonder: how might one go about cultivating a sense of meaning in life?

We know a few things about participants in Steptoe and Fancourt’s study who reported relatively higher meaning in life during the first survey. For instance, they contacted their friends frequently, belonged to social groups, engaged in volunteering, and maintained a suite of healthy habits relating to sleep, diet and exercise.

Backing up the idea that seeking out these qualities might be a good place to start in the quest for meaning, several studies have causally linked these indicators to meaning in life.

For instance, spending money on others and volunteering, eating fruit and vegetables, and being in a well-connected social network have all been prospectively linked to acquiring a sense of meaning in life.

For a temporary boost, some activities have documented benefits for meaning in the short term: envisioning a happier future, writing a note of gratitude to another person, engaging in nostalgic reverie, and bringing to mind one’s close relationships.

Happiness and meaning: is it one or the other?

There’s a high degree of overlap between experiencing happiness and meaning – most people who report one also report the other. Days when people report feeling happy are often also days that people report meaning.

Yet there’s a tricky relationship between the two. Moment-to-moment, happiness and meaning are often decoupled.

Research by social psychologist Roy Baumeister and colleagues suggests that satisfying basic needs promotes happiness, but not meaning. In contrast, linking a sense of self across one’s past, present, and future promotes meaning, but not happiness.

Connecting socially with others is important for both happiness and meaning, but doing so in a way that promotes meaning (such as via parenting) can happen at the cost of personal happiness, at least temporarily.

Given the now-documented long-term social, mental, and physical benefits of having a sense of meaning in life, the recommendation here is clear. Rather than pursuing happiness as an end-state, ensuring one’s activities provide a sense of meaning might be a better route to living well and flourishing throughout life.

Lisa A Williams, Senior Lecturer, School of Psychology, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out…

Read the rest at Handling Ideas, “a blog on (writing) philosophy”

Introduction to Deontology: Kantian Ethics

One popular moral theory that denies that morality is solely about the consequences of our actions is known as Deontology. The most influential and widely adhered-to version of Deontology was extensively laid out by Immanuel Kant (1724–1804). Kant’s ethics, as well as the overall philosophical system in which it is embedded, is vast and incredibly difficult. However, one relatively simple concept lies at the center of his ethical system: The Categorical Imperative.

via Introduction to Deontology: Kantian Ethics (1000-Word Philosophy)

Author: Andrew Chapman
Category: Ethics
Word Count: 1000

Philosophy Can Make the Previously Unthinkable Thinkable


Detail from Woman at a Window (1822) by Caspar David Friedrich. Courtesy Alte Nationalgalerie, Berlin


Rebecca Brown | Aeon Ideas

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

Overton was concerned with the activities of think tanks, but philosophers and practical ethicists might gain something from considering the Overton window. By its nature, practical ethics typically addresses controversial, politically sensitive topics. It is the job of philosophers to engage in ‘conceptual hygiene’ or, as the late British philosopher Mary Midgley described it, ‘philosophical plumbing’: clarifying and streamlining, diagnosing unjustified assertions and pointing out circularities.

Hence, philosophers can be eager to apply their skills to new subjects. This can provoke frustration from those embedded within a particular subject. Sometimes, this is deserved: philosophers can be naive in contributing their thoughts to complex areas with which they lack the kind of familiarity that requires time and immersion. But such an outside perspective can also be useful. Although such contributions will rarely get everything right, expecting them to is too demanding a standard in areas of great division and debate (such as practical ethics). Instead, we should expect philosophers to offer a counterpoint to received wisdom, established norms and doctrinal prejudice.

Ethicists, at least within their academic work, are encouraged to be skeptical of intuition and the naturalistic fallacy (the idea that values can be derived simply from facts). Philosophers are also familiar with tools such as thought experiments: hypothetical and contrived descriptions of events that can be useful for clarifying particular intuitions or the implications of a philosophical claim. These two factors make it unsurprising that philosophers often publicly adopt positions that are unintuitive and outside mainstream thought, and that they might not personally endorse.

This can serve to shift, and perhaps widen, the Overton window. Is this a good thing? Sometimes philosophers argue for conclusions far outside the domain of ‘respectable’ positions; conclusions that could be hijacked by those with intolerant, racist, sexist or fundamentalist beliefs to support their stance. It is understandable that those who are threatened by such beliefs want any argument that might conceivably support them to be absent from the debate, off the table, and ignored.

However, the freedom to test the limits of argumentation and intuition is vital to philosophical practice. There are sufficient and familiar examples of historical orthodoxies that have been overturned – women’s right to vote; the abolition of slavery; the decriminalisation of same-sex relationships – to establish that strength and pervasiveness of a belief indicate neither truth nor immutability.

It can be tedious to repeatedly debate women’s role in the workforce, abortion, animals’ capacity to feel pain and so on, but to silence discussion would be far worse. Genuine attempts to resolve difficult ethical dilemmas must recognise that understanding develops by getting things wrong and having this pointed out. Most (arguably, all) science fails to describe or predict how the world works with perfect accuracy. But as a collective enterprise, it can identify errors and gradually approximate ‘truth’. Ethical truths are less easy to come by, and a different methodology is required in seeking out satisfactory approximations. But part of this model requires allowing plenty of room to get things wrong.

It is unfortunate but true that bad ideas are sometimes undermined by bad reasoning, and also that sometimes those who espouse offensive and largely false views can say true things. Consider the ‘born this way’ argument, which endorses the flawed assumption that a genetic basis for homosexuality indicates the permissibility of same-sex relationships. While this might win over some individuals, it could cause problems down the line if it turns out that homosexuality isn’t genetically determined. Debates relating to the ‘culture wars’ on college campuses have attracted many ad hominem criticisms that set out to discredit the authors’ position by pointing to the fact that they fit a certain demographic (white, middle-class, male) or share some view with a villainous figure, and thus are not fit to contribute. The point of philosophy is to identify such illegitimate moves, and to keep the argument on topic; sometimes, this requires coming to the defence of bad ideas or villainous characters.

Participation in this process can be daunting. Defending an unpopular position can make one a target both for well-directed, thoughtful criticisms, and for emotional, sweeping attacks. Controversial positions on contentious topics attract far more scrutiny than abstract philosophical contributions to niche subjects. This means that, in effect, the former are required to be more rigorous than the latter, and to foresee and head off more potential misappropriations, misinterpretations and misunderstandings – all while contributing to an interdisciplinary area, which requires some understanding not only of philosophical theory but perhaps also medicine, law, natural and social science, politics and various other disciplines.

This can be challenging, though I do not mean to be an apologist for thoughtless, sensationalist provocation and controversy-courting, whether delivered by philosophers or others. We should see one important social function of practical ethicists as widening the Overton window and pushing the public and political debate towards reasoned deliberation and respectful disagreement. Widening the Overton window can yield opportunities for ideas that many find offensive, and straightforwardly mistaken, as well as for ideas that are well-defended and reasonable. It is understandable that those with deep personal involvement in these debates often want to narrow the window and push it in the direction of those views they find unthreatening. But philosophers have a professional duty, as conceptual plumbers, to keep the whole system in good working order. This depends upon philosophical contributors upholding the disciplinary standards of academic rigour and intellectual honesty that are essential to ethical reflection, and trusting that this will gradually, collectively lead us in the right direction.

Rebecca Brown

This article was originally published at Aeon and has been republished under Creative Commons.

The Existentialist Tradition



This just recently arrived in the mail: The Existentialist Tradition: Selected Writings, edited by Nino Langiulli. I’m very happy to have found this book in good condition. This was my first introduction to existentialism around 10 years ago. I originally found it at the University library and the ideas contained within are thought-provoking and sometimes even profound. Very glad to have found a copy for myself all these years later. Highly recommended as an introduction to existentialism and a guide to which authors you may wish to pursue further.

The Empathetic Humanities have much to teach our Adversarial Culture



Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Slaying the Snark: What Nonsense Verse tells us about Reality


The eighth of Henry Holiday’s original illustrations for The Hunting of the Snark by Lewis Carroll. Courtesy Wikipedia

Nina Lyon | Aeon Ideas

The English writer Lewis Carroll’s nonsense poem The Hunting of the Snark (1876) is an exceptionally difficult read. In it, a crew of improbable characters boards a ship to hunt a Snark, which might sound like a plot were it not for the fact that nobody knows what a Snark actually is. It doesn’t help that any attempt to describe a Snark turns into a pile-up of increasingly incoherent attributes: it is said to taste ‘meagre and hollow, but crisp: / Like a coat that is rather too tight in the waist’.

The only significant piece of information we have about the Snark’s identity is that it might be a Boojum. Unfortunately nobody knows what that is either, apart from the fact that anyone who encounters a Boojum will ‘softly and suddenly vanish away’ into nothingness.

Nothingness also characterises the crew’s map: a ‘perfect and absolute blank!’

‘What’s the good of Mercator’s North Poles and Equators,
Tropics, Zones and Meridian Lines?’
So the Bellman would cry: and the crew would reply,
‘They are merely conventional signs!’

Nonsense such as this might get tiresome to read, but it can make for a useful thought-experiment – particularly about language. In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?

Language can’t always convey meaning alone – it might need sense, the governing context that frames it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is until it goes missing. In 1892, the German logician Gottlob Frege used sense to describe a proposition’s meaning, as something distinct from what it denoted. Sense therefore appears to be a mental entity, resistant to fixed definition.

Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logicians such as Bertrand Russell sought to deploy logic and mathematics in order to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.

Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game, an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.

In 1901, the pragmatist philosopher and provocateur F C S Schiller created a parody Christmas edition of the philosophical journal Mind called Mind!. The frontispiece was a ‘Portrait of Its Immanence the Absolute’, which, Schiller noted, was ‘very like the Bellman’s map in The Hunting of the Snark’: completely blank.

The Absolute – or the Infinite or Ultimate Reality, among other grand aliases – was the sum of all experience and being, and inconceivable to the human mind. It was monistic, consuming all into the One. If it sounded like something you’d struggle to get your head around, that was pretty much the point. The Absolute was an emblem of metaphysical idealism, the doctrine that truth could exist only within the domain of thought. Idealism had dominated the academy for the entirety of Carroll’s career, and it was beginning to come under attack. The realist mission, headed by Russell, was to clean up philosophy’s act with the sound application of mathematics and objective facts, and it felt like a breath of fresh air.

Schiller delighted in trolling absolute idealists in general and the English idealist philosopher F H Bradley in particular. In Mind!, Schiller claimed that the Snark was a satire on the Absolute, whose notorious ineffability drove its seekers to derangement. But this was disingenuous. Bradley’s major work, Appearance and Reality (1893), mirrors the point of the Snark, insofar as there is one. When you home in on a thing and try to pin it down by describing its attributes, and then try to pin down what those are too – Bradley uses the example of a lump of sugar – it all begins to crumble, and must be something other instead. What appeared to be there was only ever an idea. Carroll was, contrariwise, in line with idealist thinking.

A passionate logician, Carroll had been working on a three-part book on symbolic logic that remained unfinished at his death. Two logical paradoxes that he posed in Mind and shared privately with friends and colleagues, such as Bradley, hint at a troublemaking sentiment regarding where logic might be headed. ‘A Logical Paradox’ (1894) resulted in two contradictory statements being simultaneously true; ‘What the Tortoise Said to Achilles’ (1895) set up a predicament in which each proposition requires an additional supporting proposition, creating an infinite regress.

A few years after Carroll’s death, Russell began to flex logic as a tool for denoting the world and testing the validity of propositions about it. Carroll’s paradoxes were problematic and demanded a solution. Russell’s response to ‘A Logical Paradox’ was to legislate nonsense away into a ‘null-class’ – a set of nonexistent propositions that, because it had no real members, didn’t exist either.

Russell’s solution to ‘What the Tortoise Said to Achilles’, tucked away in a footnote to the Principles of Mathematics (1903), entailed a recourse to sense in order to determine whether or not a proposition should be asserted in the first place, teetering into the mind-dependent realm of idealism. Mentally determining meaning is a bit like mentally determining reality, and it wasn’t a neat win for logic’s role as objective sword of truth.

In the Snark, the principles of narrative self-immolate, so that the story, rather than describing things and events in the world, undoes them into something other. It ends like this:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away –
For the Snark was a Boojum, you see.

Strip the plot down to those eight final words, and it is all there. The thing sought turned out, upon examination, to be something else entirely. Beyond the flimsy veil of appearance, formed from words and riddled with holes, lies an inexpressible reality.

By the late 20th century, when Russell had won the battle of ideas and commonsense realism prevailed, critics such as Martin Gardner, author of The Annotated Hunting of the Snark (2006), were rattled by Carroll’s antirealism. If the reality we perceive is all there is, and it falls apart, we are left with nothing.

Carroll’s attacks on realism might look nihilistic or radical to a postwar mind steeped in atheist scientism, but they were neither. Carroll was a man of his time, taking a philosophically conservative party line on absolute idealism and its theistic implications. But he was also prophetic, seeing conflict at the limits of language, logic and reality, and laying a series of conceptual traps that continue to provoke it.

The Snark is one such trap. Carroll rejected his illustrator Henry Holiday’s image of the Boojum on the basis that it needed to remain unimaginable, for, after all, how can you illustrate the incomprehensible nature of ultimate reality? It is a task as doomed as saying the unsayable – which, paradoxically, was a task Carroll himself couldn’t quite resist.

Nina Lyon

This article was originally published at Aeon and has been republished under Creative Commons.

Modern Technology is akin to the Metaphysics of Vedanta

whitehead-vedanta

Akhandadhi Das | Aeon Ideas

You might think that digital technologies, often considered a product of ‘the West’, would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend the quantum physics of the 20th century.

The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural networks and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.

Vedanta offers a model to integrate subjective consciousness and the information-processing systems of our body and brains. Its theory separates the brain and the senses from the mind. But it also distinguishes the mind from the function of consciousness, which it defines as the ability to experience mental output. We’re familiar with this notion from our digital devices. A camera, microphone or other sensors linked to a computer gather information about the world, and convert the various forms of physical energy – light waves, air pressure-waves and so forth – into digital data, just as our bodily senses do. The central processing unit processes this data and produces relevant outputs. The same is true of our brain. In both contexts, there seems to be little scope for subjective experience to play a role within these mechanisms.

While computers can handle all sorts of processing without our help, we furnish them with a screen as an interface between the machine and ourselves. Similarly, Vedanta postulates that the conscious entity – something it terms the atma – is the observer of the output of the mind. The atma possesses, and is said to be composed of, the fundamental property of consciousness. The concept is explored in many of the meditative practices of Eastern traditions.

You might think of the atma like this. Imagine you’re watching a film in the cinema. It’s a thriller, and you’re anxious about the lead character, trapped in a room. Suddenly, the door in the movie crashes open and there stands… You jump, as if startled. But what is the real threat to you, other than maybe spilling your popcorn? By suspending awareness of our bodies in the cinema and identifying with the character on the screen, we allow our emotional state to be manipulated. Vedanta suggests that the atma, the conscious self, identifies with the physical world in a similar fashion.

This idea can also be explored in the all-consuming realm of VR. On entering a game, we might be asked to choose our character or avatar – originally a Sanskrit word, aptly enough, meaning ‘one who descends from a higher dimension’. In older texts, the term often refers to divine incarnations. However, the etymology suits the gamer, as he or she chooses to descend from ‘normal’ reality and enter the VR world. Having specified our avatar’s gender, bodily features, attributes and skills, next we learn how to control its limbs and tools. Soon, our awareness diverts from our physical self to the VR capabilities of the avatar.

In Vedanta psychology, this is akin to the atma adopting the psychological persona-self it calls the ahankara, or the ‘pseudo-ego’. Instead of a detached conscious observer, we choose to define ourselves in terms of our social connections and the physical characteristics of the body. Thus, I come to believe in myself with reference to my gender, race, size, age and so forth, along with the roles and responsibilities of family, work and community. Conditioned by such identification, I indulge in the relevant emotions – some happy, some challenging or distressing – produced by the circumstances I witness myself undergoing.

Within a VR game, our avatar represents a pale imitation of our actual self and its entanglements. In our interactions with the avatar-selves of others, we might reveal little about our true personality or feelings, and know correspondingly little about others’. Indeed, encounters among avatars – particularly when competitive or combative – are often vitriolic, seemingly unrestrained by concern for the feelings of the people behind the avatars. Connections made through online gaming aren’t a substitute for other relationships. Rather, as researchers at Johns Hopkins University have noted, gamers with strong real-world social lives are less likely to fall prey to gaming addiction and depression.

These observations mirror the Vedantic claim that our ability to form meaningful relationships is diminished by absorption in the ahankara, the pseudo-ego. The more I regard myself as a physical entity requiring various forms of sensual gratification, the more likely I am to objectify those who can satisfy my desires, and to forge relationships based on mutual selfishness. But Vedanta suggests that love should emanate from the deepest part of the self, not its assumed persona. Love, it claims, is soul-to-soul experience. Interactions with others on the basis of the ahankara offer only a parody of affection.

As the atma, we remain the same subjective self throughout the whole of our life. Our body, mentality and personality change dramatically – but throughout it all, we know ourselves to be the constant observer. However, seeing everything shift and give way around us, we suspect that we’re also subject to change, ageing and heading for annihilation. Yoga, as systematised by Patanjali – an author or authors, like ‘Homer’, who lived in the 2nd century BCE – is intended to be a practical method for freeing the atma from relentless mental tribulation, and to be properly situated in the reality of pure consciousness.

In VR, we’re often called upon to do battle with evil forces, confronting jeopardy and virtual mortality along the way. Despite our efforts, the inevitable almost always happens: our avatar is killed. Game over. Gamers, especially pathological gamers, are known to become deeply attached to their avatars, and can suffer distress when their avatars are harmed. Fortunately, we’re usually offered another chance: Do you want to play again? Sure enough, we do. Perhaps we create a new avatar, someone more adept, based on the lessons learned last time around. This mirrors the Vedantic concept of reincarnation, specifically in its form of metempsychosis: the transmigration of the conscious self into a new physical vehicle.

Some commentators interpret Vedanta as suggesting that there is no real world, and that all that exists is conscious awareness. However, a broader take on Vedantic texts is more akin to VR. The VR world is wholly data, but it becomes ‘real’ when that information manifests itself to our senses as imagery and sounds on the screen or through a headset. Similarly, for Vedanta, it is the external world’s transitory manifestation as observable objects that makes it less ‘real’ than the perpetual, unchanging nature of the consciousness that observes it.

To the sages of old, immersing ourselves in the ephemeral world means allowing the atma to succumb to an illusion: the illusion that our consciousness is somehow part of an external scene, and must suffer or enjoy along with it. It’s amusing to think what Patanjali and the Vedantic fathers would make of VR: an illusion within an illusion, perhaps, but one that might help us to grasp the potency of their message.

Akhandadhi Das

This article was originally published at Aeon and has been republished under Creative Commons.


Reach out, listen, be patient. Good arguments can stop extremism

coming-together

Walter Sinnott-Armstrong | Aeon Ideas

Many of my best friends think that some of my deeply held beliefs about important issues are obviously false or even nonsense. Sometimes, they tell me so to my face. How can we still be friends? Part of the answer is that these friends and I are philosophers, and philosophers learn how to deal with positions on the edge of sanity. In addition, I explain and give arguments for my claims, and they patiently listen and reply with arguments of their own against my – and for their – stances. By exchanging reasons in the form of arguments, we show each other respect and come to understand each other better.

Philosophers are weird, so this kind of civil disagreement still might seem impossible among ordinary folk. However, some stories give hope and show how to overcome high barriers.

One famous example involved Ann Atwater and C P Ellis in my home town of Durham, North Carolina; it is described in Osha Gray Davidson’s book The Best of Enemies (1996) and a forthcoming movie. Atwater was a single, poor, black parent who led Operation Breakthrough, which tried to improve local black neighbourhoods. Ellis was an equally poor but white parent who was proud to be Exalted Cyclops of the local Ku Klux Klan. They could not have started further apart. At first, Ellis brought a gun and henchmen to town meetings in black neighbourhoods. Atwater once lurched toward Ellis with a knife and had to be held back by her friends.

Despite their mutual hatred, when courts ordered Durham to integrate its public schools, Atwater and Ellis were pressured into co-chairing a charrette – a series of public discussions that lasted eight hours per day for 10 days in July 1971 – about how to implement integration. To plan their ordeal, they met and began by asking questions, answering with reasons, and listening to each other. Atwater asked Ellis why he opposed integration. He replied that mainly he wanted his children to get a good education, but integration would ruin their schools. Atwater was probably tempted to scream at him, call him a racist, and walk off in a huff. But she didn’t. Instead, she listened and said that she also wanted his children – as well as hers – to get a good education. Then Ellis asked Atwater why she worked so hard to improve housing for blacks. She replied that she wanted her friends to have better homes and better lives. He wanted the same for his friends.

When each listened to the other’s reasons, they realised that they shared the same basic values. Both loved their children and wanted decent lives for their communities. As Ellis later put it: ‘I used to think that Ann Atwater was the meanest black woman I’d ever seen in my life … But, you know, her and I got together one day for an hour or two and talked. And she is trying to help her people like I’m trying to help my people.’ After realising their common ground, they were able to work together to integrate Durham schools peacefully. In large part, they succeeded.

None of this happened quickly or easily. Their heated discussions lasted 10 long days in the charrette. They could not have afforded to leave their jobs for so long if their employers (including Duke University, where Ellis worked in maintenance) had not granted them time off with pay. They were also exceptional individuals who had strong incentives to work together as well as many personal virtues, including intelligence and patience. Still, such cases prove that sometimes sworn enemies can become close friends and can accomplish a great deal for their communities.

Why can’t liberals and conservatives do the same today? Admittedly, extremists on both sides of the current political scene often hide in their echo chambers and homogeneous neighbourhoods. They never listen to the other side. When they do venture out, the level of rhetoric on the internet is abysmal. Trolls resort to slogans, name-calling and jokes. When they do bother to give arguments, their arguments often simply justify what suits their feelings and signals tribal alliances.

The spread of bad arguments is undeniable but not inevitable. Rare but valuable examples such as Atwater and Ellis show us how we can use philosophical tools to reduce political polarisation.

The first step is to reach out. Philosophers go to conferences to find critics who can help them improve their theories. Similarly, Atwater and Ellis arranged meetings with each other in order to figure out how to work together in the charrette. All of us need to recognise the value of listening carefully and charitably to opponents. Then we need to go to the trouble of talking with those opponents, even if it means leaving our comfortable neighbourhoods or favourite websites.

Second, we need to ask questions. Since Socrates, philosophers have been known as much for their questions as for their answers. And if Atwater and Ellis had not asked each other questions, they never would have learned that what they both cared about the most was their children and alleviating the frustrations of poverty. By asking the right questions in the right way, we can often discover shared values or at least avoid misunderstanding opponents.

Third, we need to be patient. Philosophers teach courses for months on a single issue. Similarly, Atwater and Ellis spent 10 days in a public charrette before they finally came to understand and appreciate each other. They also welcomed other members of the community to talk as long as they wanted, just as good teachers include conflicting perspectives and bring all students into the conversation. Today, we need to slow down and fight the tendency to exclude competing views or to interrupt and retort with quick quips and slogans that demean opponents.

Fourth, we need to give arguments. Philosophers typically recognise that they owe reasons for their claims. Similarly, Atwater and Ellis did not simply announce their positions. They referred to the concrete needs of their children and their communities in order to explain why they held their positions. On controversial issues, neither side is obvious enough to escape demands for evidence and reasons, which are presented in the form of arguments.

None of these steps is easy or quick, but books and online courses on reasoning – especially in philosophy – are available to teach us how to appreciate and develop arguments. We can also learn through practice by reaching out, asking questions, being patient, and giving arguments in our everyday lives.

We still cannot reach everyone. Even the best arguments sometimes fall on deaf ears. But we should not generalise hastily to the conclusion that arguments always fail. Moderates are often open to reason on both sides. So are those all-too-rare exemplars who admit that they (like most of us) do not know which position to hold on complex moral and political issues.

Two lessons emerge. First, we should not give up on trying to reach extremists, such as Atwater and Ellis, despite how hard it is. Second, it is easier to reach moderates, so it usually makes sense to try reasoning with them first. Practising on more receptive audiences can help us improve our arguments as well as our skills in presenting arguments. These lessons will enable us to do our part to shrink the polarisation that stunts our societies and our lives.

Walter Sinnott-Armstrong

This article was originally published at Aeon and has been republished under Creative Commons.