Descartes was Wrong: ‘A Person is a Person through Other Persons’


Detail from Young Moe (1938) by Paul Klee. Courtesy the Phillips Collection/Wikipedia

Abeba Birhane | Aeon Ideas

According to Ubuntu philosophy, which has its origins in ancient Africa, a newborn baby is not a person. People are born without ‘ena’, or selfhood, and instead must acquire it through interactions and experiences over time. So the ‘self’/‘other’ distinction that’s axiomatic in Western philosophy is much blurrier in Ubuntu thought. As the Kenyan-born philosopher John Mbiti put it in African Religions and Philosophy (1975): ‘I am because we are, and since we are, therefore I am.’

We know from everyday experience that a person is partly forged in the crucible of community. Relationships inform self-understanding. Who I am depends on many ‘others’: my family, my friends, my culture, my work colleagues. The self I take grocery shopping, say, differs in her actions and behaviours from the self that talks to my PhD supervisor. Even my most private and personal reflections are entangled with the perspectives and voices of different people, be it those who agree with me, those who criticise, or those who praise me.

Yet the notion of a fluctuating and ambiguous self can be disconcerting. We can chalk up this discomfort, in large part, to René Descartes. The 17th-century French philosopher believed that a human being was essentially self-contained and self-sufficient; an inherently rational, mind-bound subject, who ought to encounter the world outside her head with skepticism. While Descartes didn’t single-handedly create the modern mind, he went a long way towards defining its contours.

Descartes had set himself a very particular puzzle to solve. He wanted to find a stable point of view from which to look on the world without relying on God-decreed wisdoms; a place from which he could discern the permanent structures beneath the changeable phenomena of nature. But Descartes believed that there was a trade-off between certainty and a kind of social, worldly richness. The only thing you can be certain of is your own cogito – the fact that you are thinking. Other people and other things are inherently fickle and erratic. So they must have nothing to do with the basic constitution of the knowing self, which is a necessarily detached, coherent and contemplative whole.

Few respected philosophers and psychologists would identify as strict Cartesian dualists, in the sense of believing that mind and matter are completely separate. But the Cartesian cogito is still everywhere you look. The experimental design of memory testing, for example, tends to proceed from the assumption that it’s possible to draw a sharp distinction between the self and the world. If memory simply lives inside the skull, then it’s perfectly acceptable to remove a person from her everyday environment and relationships, and to test her recall using flashcards or screens in the artificial confines of a lab. A person is considered a standalone entity, irrespective of her surroundings, inscribed in the brain as a series of cognitive processes. Memory must be simply something you have, not something you do within a certain context.

Social psychology purports to examine the relationship between cognition and society. But even then, the investigation often presumes that a collective of Cartesian subjects are the real focus of the enquiry, not selves that co-evolve with others over time. In the 1960s, the American psychologists John Darley and Bibb Latané became interested in the murder of Kitty Genovese, a young white woman who had been stabbed and assaulted on her way home one night in New York. Multiple people had witnessed the crime but none stepped in to prevent it. Darley and Latané designed a series of experiments in which they simulated a crisis, such as an epileptic fit, or smoke billowing in from the next room, to observe what people did. They were the first to identify the so-called ‘bystander effect’, in which people seem to respond more slowly to someone in distress if others are around.

Darley and Latané suggested that this might come from a ‘diffusion of responsibility’, in which the obligation to react is diluted across a bigger group of people. But as the American psychologist Frances Cherry argued in The Stubborn Particulars of Social Psychology: Essays on the Research Process (1995), this numerical approach wipes away vital contextual information that might help to understand people’s real motives. Genovese’s murder had to be seen against a backdrop in which violence against women was not taken seriously, Cherry said, and in which people were reluctant to step into what might have been a domestic dispute. Moreover, the murder of a poor black woman would have attracted far less subsequent media interest. But Darley and Latané’s focus makes structural factors much harder to see.

Is there a way of reconciling these two accounts of the self – the relational, world-embracing version, and the autonomous, inward one? The 20th-century Russian philosopher Mikhail Bakhtin believed that the answer lay in dialogue. We need others in order to evaluate our own existence and construct a coherent self-image. Think of that luminous moment when a poet captures something you’d felt but had never articulated; or when you’d struggled to summarise your thoughts, but they crystallised in conversation with a friend. Bakhtin believed that it was only through an encounter with another person that you could come to appreciate your own unique perspective and see yourself as a whole entity. By ‘looking through the screen of the other’s soul,’ he wrote, ‘I vivify my exterior.’ Selfhood and knowledge are evolving and dynamic; the self is never finished – it is an open book.

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals. But, in fact, studies of such prisoners suggest that their sense of self dissolves if they are punished this way for long enough. Prisoners tend to suffer profound physical and psychological difficulties, such as confusion, anxiety, insomnia, feelings of inadequacy, and a distorted sense of time. Deprived of contact and interaction – the external perspective needed to consummate and sustain a coherent self-image – a person risks disappearing into non-existence.

The emerging fields of embodied and enactive cognition have started to take dialogic models of the self more seriously. But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’

Abeba Birhane

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Concept of Probability is not as Simple as You Think


Phil Long/Flickr

Nevin Climenhaga | Aeon Ideas

The gambler, the quantum physicist and the juror all reason about probabilities: the probability of winning, of a radioactive atom decaying, of a defendant’s guilt. But despite their ubiquity, experts dispute just what probabilities are. This leads to disagreements on how to reason about, and with, probabilities – disagreements that our cognitive biases can exacerbate, such as our tendency to ignore evidence that runs counter to a hypothesis we favour. Clarifying the nature of probability, then, can help to improve our reasoning.

Three popular theories analyse probabilities as either frequencies, propensities or degrees of belief. Suppose I tell you that a coin has a 50 per cent probability of landing heads up. These theories, respectively, say that this is:

  • The frequency with which that coin lands heads;
  • The propensity, or tendency, that the coin’s physical characteristics give it to land heads;
  • How confident I am that it lands heads.

But each of these interpretations faces problems. Consider the following case:

Adam flips a fair coin that self-destructs after being tossed four times. Adam’s friends Beth, Charles and Dave are present, but blindfolded. After the fourth flip, Beth says: ‘The probability that the coin landed heads the first time is 50 per cent.’

Adam then tells his friends that the coin landed heads three times out of four. Charles says: ‘The probability that the coin landed heads the first time is 75 per cent.’

Dave, despite having the same information as Charles, says: ‘I disagree. The probability that the coin landed heads the first time is 60 per cent.’

The frequency interpretation struggles with Beth’s assertion. The frequency with which the coin lands heads is three out of four, and it can never be tossed again. Still, it seems that Beth was right: the probability that the coin landed heads the first time is 50 per cent.

Meanwhile, the propensity interpretation falters on Charles’s assertion. Since the coin is fair, it had an equal propensity to land heads or tails. Yet Charles also seems right to say that the probability that the coin landed heads the first time is 75 per cent.

The confidence interpretation makes sense of the first two assertions, holding that they express Beth and Charles’s confidence that the coin landed heads. But consider Dave’s assertion. When Dave says that the probability that the coin landed heads is 60 per cent, he says something false. But if Dave really is 60 per cent confident that the coin landed heads, then on the confidence interpretation, he has said something true – he has truly reported how certain he is.

Some philosophers think that such cases support a pluralistic approach in which there are multiple kinds of probabilities. My own view is that we should adopt a fourth interpretation – a degree-of-support interpretation.

Here, probabilities are understood as relations of evidential support between propositions. ‘The probability of X given Y’ is the degree to which Y supports the truth of X. When we speak of ‘the probability of X’ on its own, this is shorthand for the probability of X conditional on any background information we have. When Beth says that there is a 50 per cent probability that the coin landed heads, she means that this is the probability that it lands heads conditional on the information that it was tossed and some information about its construction (for example, it being symmetrical).

Relative to different information, however, the proposition that the coin landed heads has a different probability. When Charles says that there is a 75 per cent probability that the coin landed heads, he means this is the probability that it landed heads relative to the information that three of four tosses landed heads. Meanwhile, Dave says there is a 60 per cent probability that the coin landed heads, relative to this same information – but since this information in fact supports heads more strongly than 60 per cent, what Dave says is false.
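Charles’s figure can be checked by brute force. The following short Python sketch (not from the original article) enumerates all sixteen equally likely outcomes of four fair-coin tosses, keeps only those consistent with Adam’s report that exactly three landed heads, and asks how often the first toss was heads in that restricted set:

```python
from itertools import product

# All 16 equally likely outcomes of four fair-coin tosses.
outcomes = list(product('HT', repeat=4))

# Condition on Adam's information: exactly three of the four tosses landed heads.
three_heads = [seq for seq in outcomes if seq.count('H') == 3]

# Among those outcomes, how often did the FIRST toss land heads?
p_first_heads = sum(seq[0] == 'H' for seq in three_heads) / len(three_heads)

print(p_first_heads)  # 0.75
```

Of the four sequences containing exactly three heads, three begin with heads, so the information Adam supplies supports heads on the first toss to degree 0.75 – which is why Charles is right and Dave’s 60 per cent is false.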

The degree-of-support interpretation incorporates what’s right about each of our first three approaches while correcting their problems. It captures the connection between probabilities and degrees of confidence. It does this not by identifying them – instead, it takes degrees of belief to be rationally constrained by degrees of support. The reason I should be 50 per cent confident that a coin lands heads, if all I know about it is that it is symmetrical, is because this is the degree to which my evidence supports this hypothesis.

Similarly, the degree-of-support interpretation allows the information that the coin landed heads with a 75 per cent frequency to make it 75 per cent probable that it landed heads on any particular toss. It captures the connection between frequencies and probabilities but, unlike the frequency interpretation, it denies that frequencies and probabilities are the same thing. Instead, probabilities sometimes relate claims about frequencies to claims about specific individuals.

Finally, the degree-of-support interpretation analyses the propensity of the coin to land heads as a relation between, on the one hand, propositions about the construction of the coin and, on the other, the proposition that it lands heads. That is, it concerns the degree to which the coin’s construction predicts the coin’s behaviour. More generally, propensities link claims about causes and claims about effects – eg, a description of an atom’s intrinsic characteristics and the hypothesis that it decays.

Because they turn probabilities into different kinds of entities, our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection. By contrast, degrees of support seem to be what philosophers call ‘abstract entities’ – neither in the world nor in our minds. While we know that a coin is symmetrical by observation, we know that the proposition ‘this coin is symmetrical’ supports the propositions ‘this coin lands heads’ and ‘this coin lands tails’ to equal degrees in the same way we know that ‘this coin lands heads’ entails ‘this coin lands heads or tails’: by thinking.

But a skeptic might point out that coin tosses are easy. Suppose we’re on a jury. How are we supposed to figure out the probability that the defendant committed the murder, so as to see whether there can be reasonable doubt about his guilt?

Answer: think more. First, ask: what is our evidence? What we want to figure out is how strongly this evidence supports the hypothesis that the defendant is guilty. Perhaps our salient evidence is that the defendant’s fingerprints are on the gun used to kill the victim.

Then, ask: can we use the mathematical rules of probability to break down the probability of our hypothesis in light of the evidence into more tractable probabilities? Here we are concerned with the probability of a cause (the defendant committing the murder) given an effect (his fingerprints being on the murder weapon). Bayes’s theorem lets us calculate this as a function of three further probabilities: the prior probability of the cause, the probability of the effect given this cause, and the probability of the effect without this cause.
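That decomposition can be written out directly. The sketch below implements Bayes’s theorem as stated above; the three input probabilities are purely illustrative, made-up numbers, not figures from any real case:

```python
def posterior(prior_cause, p_effect_given_cause, p_effect_given_not_cause):
    """Bayes's theorem: the probability of the cause given the effect,
    computed from the prior probability of the cause, the probability of
    the effect given the cause, and the probability of the effect without it."""
    numerator = p_effect_given_cause * prior_cause
    marginal = numerator + p_effect_given_not_cause * (1 - prior_cause)
    return numerator / marginal

# Illustrative numbers only: a 30 per cent prior that the defendant is guilty,
# fingerprints very likely (0.9) if guilty, unlikely (0.05) otherwise.
p = posterior(prior_cause=0.30, p_effect_given_cause=0.90, p_effect_given_not_cause=0.05)
print(round(p, 3))  # 0.885
```

Note how sensitive the output is to the third input: halving the probability of innocent fingerprints roughly halves the residual doubt, which is why breaking that term down into rival explanations matters so much.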

Since this is all relative to any background information we have, the first probability (of the cause) is informed by what we know about the defendant’s motives, means and opportunity. We can get a handle on the third probability (of the effect without the cause) by breaking down the possibility that the defendant is innocent into other possible causes of the victim’s death, and asking how probable each is, and how probable they make it that the defendant’s fingerprints would be on the gun. We will eventually reach probabilities that we cannot break down any further. At this point, we might search for general principles to guide our assignments of probabilities, or we might rely on intuitive judgments, as we do in the coin cases.

When we are reasoning about criminals rather than coins, this process is unlikely to lead to convergence on precise probabilities. But there’s no alternative. We can’t resolve disagreements about how much the information we possess supports a hypothesis just by gathering more information. Instead, we can make progress only by way of philosophical reflection on the space of possibilities, the information we have, and how strongly it supports some possibilities over others.

Nevin Climenhaga

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Why the Demoniac Stayed in his Comfortable Corner of Hell


Detail from The Drunkard (1912) by Marc Chagall. Courtesy Wikipedia

John Kaag | Aeon Ideas

I am not what one might call a religious man. I went to church, and then to confirmation class, under duress. My mother, whom I secretly regarded as more powerful than God, insisted that I go. So I went. Her insistence, however, had the unintended consequence of introducing me to a pastor whom I came to despise. So I eventually quit.

There were many problems with this pastor but the one that bothered me the most was his refusal to explain a story from the New Testament that I found especially hard to believe: the story of the demoniac.

This story from Mark 5:1-20 relates how Jesus and the disciples go to the town of Gerasenes and there encounter a man who is possessed by evil spirits. This demoniac – a self-imposed outcast from society – lived at the outskirts of town and ‘night and day among the tombs and in the hills he would cry out and cut himself with stones’. The grossest part of the story, however, isn’t the self-mutilation. It’s the demoniac’s insane refusal to accept help. When Jesus approached him, the demoniac threw himself to the ground and wailed: ‘What do you want with me? … In God’s name, don’t torture me!’ When you’re possessed by evil spirits, the worst thing in the world is to be healed. In short, the demoniac tells Jesus to bugger off, to leave him and his sharp little stones in his comfortable corner of hell.

When I first read about the demoniac, I was admittedly scared, but I eventually convinced myself that the parable was a manipulative attempt to persuade unbelievers such as me to find religion. And I wasn’t buying it. But when I entered university, went into philosophy, and began to cultivate an agnosticism that one might call atheism, I discovered that many a philosopher had been drawn to this scary story. So I took a second look.

The Danish philosopher Søren Kierkegaard, who spent years analysing the psychological and ethical dimensions of the demoniac, tells us that being demonic is more common than we might like to admit. He points out that when Jesus heals the possessed man, the spirits are exorcised en masse, flying out together as ‘the Legion’ – a vast army of evil forces. There are more than enough little demons to go around, and this explains why they come to roust in some rather mundane places. In Kierkegaard’s words: ‘One may hear the drunkard say: “Let me be the filth that I am.”’ Or, leave me alone with my bottle and let me ruin my life, thank you very much. I heard this first from my father, and then from an increasing number of close friends, and most recently from a voice that occasionally keeps me up at night when everyone else is asleep.

Those who are the most pointedly afflicted are often precisely those who are least able to recognise their affliction, or to save themselves. And those with the resources to rescue themselves are usually already saved. As Kierkegaard suggests, the virtue of sobriety makes perfect sense to one who is already sober. Eating well is second nature to the one who is already healthy; saving money is a no-brainer for one who is already rich; truth-telling is the good habit of one who is already honest. But for those in the grips of crisis or sin, getting out usually doesn’t make much sense.

Sharp stones can take a variety of forms.

In The Concept of Anxiety (1844), Kierkegaard tells us that the ‘essential nature of [the demoniac] is anxiety about the good’. I’ve been ‘anxious’ about many things – about exams, about spiders, about going to sleep – but Kierkegaard explains that the feeling I have about these nasty things isn’t anxiety at all. It’s fear. Anxiety, on the other hand, has no particular object. It is the sense of uneasiness that one has at the edge of a cliff, or climbing a ladder, or thinking about the prospects of a completely open future – it isn’t fear per se, but the feeling that we get when faced with possibility. It’s the unsettling feeling of freedom. Yes, freedom, that most precious of modern watchwords, is deeply unsettling.

What does this have to do with our demoniac? Everything. Kierkegaard explains that the demoniac reflects ‘an unfreedom that wants to close itself off’; when confronted with the possibility of being healed, he wants nothing to do with it. The free life that Jesus offers is, for the demoniac, pure torture. I’ve often thought that this is the fate of the characters in Jean-Paul Sartre’s play No Exit (1944): they are always free to leave, but leaving seems beyond impossible.

Yet Jesus manages to save the demoniac. And I wanted my pastor to tell me how. At the time, I chalked up most of the miracles from the Bible as exaggeration, or interpretation, or poetic licence. But the healing of the demoniac – unlike the bread and fish and resurrection – seemed really quite fantastic. So how did Jesus do it? I didn’t get a particularly good answer from my pastor, so I left the Church. And never came back.

Today, I still want to know.

I’m not here to explain the salvation of the demoniac. I’m here only to observe, as carefully as I can, that this demonic situation is a problem. Indeed, I suspect it is the problem for many, many readers. The demoniac reflects what theologians call the ‘religious paradox’, namely that it is impossible for fallen human beings – such craven creatures – to bootstrap themselves to heaven. Any redemptive resources at our disposal are probably exactly as botched as we are.

There are many ways to distract ourselves from this paradox – and we are very good at manufacturing them: movies and alcohol and Facebook and all the fixations and obsessions of modern life. But at the end of the day, these are pitifully little comfort.

So this year, as New Year’s Day recedes from memory and the winter darkness remains, I am making a resolution: I will try not to take all the usual escapes. Instead, I will try to simply sit with the plight of the demoniac, to ‘stew in it’ as my mother used to say, for a minute or two more. In his essay ‘Self-will’ (1919), the German author Hermann Hesse put it thus: ‘If you and you … are in pain, if you are sick in body or soul, if you are afraid and have a foreboding of danger – why not, if only to amuse yourselves … try to put the question in another way? Why not ask whether the source of your pain might not be you yourselves?’ I will not reach for my familiar demonic stones, blood-spattered yet comforting. I will ask why I need them in the first place. When I do this, and attempt to come to terms with the demoniac’s underlying suffering, I might notice that it is not unique to me.

When I do, when I let go of the things that I think are going to ease my suffering, I might have the chance to notice that I am not alone in my anxiety. And maybe this is recompense enough. Maybe this is freedom and the best that I can hope for.

John Kaag

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Having a sense of Meaning in life is Good for you — So how do you get one?


There’s a high degree of overlap between experiencing happiness and meaning.
Shutterstock/KieferPix


Lisa A Williams, UNSW

The pursuit of happiness and health is a popular endeavour, as the preponderance of self-help books would attest.

Yet it is also fraught. Despite ample advice from experts, individuals regularly engage in activities that may only have short-term benefit for well-being, or even backfire.

The search for the heart of well-being – that is, a nucleus from which other aspects of well-being and health might flow – has been the focus of decades of research. New findings recently reported in Proceedings of the National Academy of Sciences point towards an answer commonly overlooked: meaning in life.

Meaning in life: part of the well-being puzzle?

University College London’s psychology professor Andrew Steptoe and senior research associate Daisy Fancourt analysed a sample of 7,304 UK residents aged 50+ drawn from the English Longitudinal Study of Ageing.

Survey respondents answered a range of questions assessing social, economic, health, and physical activity characteristics, including:

…to what extent do you feel the things you do in your life are worthwhile?

Follow-up surveys two and four years later assessed those same characteristics again.

One key question addressed in this research is: what advantage might having a strong sense of meaning in life afford a few years down the road?

The data revealed that individuals reporting a higher meaning in life had:

  • lower risk of divorce
  • lower risk of living alone
  • increased connections with friends and engagement in social and cultural activities
  • lower incidence of new chronic disease and onset of depression
  • lower obesity and increased physical activity
  • increased adoption of positive health behaviours (exercising, eating fruit and veg).

On the whole, individuals with a higher sense of meaning in life a few years earlier were later living lives characterised by health and well-being.

You might wonder if these findings are attributable to other factors, or to factors already in play by the time participants joined the study. The authors undertook stringent analyses to account for this, which revealed largely similar patterns of findings.

The findings join a body of prior research documenting longitudinal relationships between meaning in life and social functioning, net wealth and reduced mortality, especially among older adults.

What is meaning in life?

The historical arc of consideration of the meaning in life (not to be confused with the meaning of life) starts as far back as Ancient Greece. It tracks through the popular works of people such as the Austrian neurologist and psychiatrist Viktor Frankl, and continues today in the field of psychology.

One definition, offered by well-being researcher Laura King and colleagues, says

…lives may be experienced as meaningful when they are felt to have a significance beyond the trivial or momentary, to have purpose, or to have a coherence that transcends chaos.

This definition is useful because it highlights three central components of meaning:

  1. purpose: having goals and direction in life
  2. significance: the degree to which a person believes his or her life has value, worth, and importance
  3. coherence: the sense that one’s life is characterised by predictability and routine.

Michael Steger’s TEDx talk What Makes Life Meaningful.


Curious about your own sense of meaning in life? You can take an interactive version of the Meaning in Life Questionnaire, developed by Steger and colleagues, yourself here.

This measure captures not just the presence of meaning in life (whether a person feels that their life has purpose, significance, and coherence), but also the desire to search for meaning in life.

Routes for cultivating meaning in life

Given the documented benefits, you may wonder: how might one go about cultivating a sense of meaning in life?

We know a few things about participants in Steptoe and Fancourt’s study who reported relatively higher meaning in life during the first survey. For instance, they contacted their friends frequently, belonged to social groups, engaged in volunteering, and maintained a suite of healthy habits relating to sleep, diet and exercise.

Backing up the idea that seeking out these qualities might be a good place to start in the quest for meaning, several studies have causally linked these indicators to meaning in life.

For instance, spending money on others and volunteering, eating fruit and vegetables, and being in a well-connected social network have all been prospectively linked to acquiring a sense of meaning in life.

For a temporary boost, some activities have documented benefits for meaning in the short term: envisioning a happier future, writing a note of gratitude to another person, engaging in nostalgic reverie, and bringing to mind one’s close relationships.

Happiness and meaning: is it one or the other?

There’s a high degree of overlap between experiencing happiness and meaning – most people who report one also report the other. Days when people report feeling happy are often also days that people report meaning.

Yet there’s a tricky relationship between the two. Moment-to-moment, happiness and meaning are often decoupled.

Research by social psychologist Roy Baumeister and colleagues suggests that satisfying basic needs promotes happiness, but not meaning. In contrast, linking a sense of self across one’s past, present, and future promotes meaning, but not happiness.

Connecting socially with others is important for both happiness and meaning, but doing so in a way that promotes meaning (such as via parenting) can happen at the cost of personal happiness, at least temporarily.

Given the now-documented long-term social, mental, and physical benefits of having a sense of meaning in life, the recommendation here is clear. Rather than pursuing happiness as an end-state, ensuring one’s activities provide a sense of meaning might be a better route to living well and flourishing throughout life.

Lisa A Williams, Senior Lecturer, School of Psychology, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out…

Read the rest at Handling Ideas, “a blog on (writing) philosophy”

Introduction to Deontology: Kantian Ethics

One popular moral theory that denies that morality is solely about the consequences of our actions is known as Deontology. The most influential and widely adhered to version of Deontology was extensively laid out by Immanuel Kant (1724–1804). Kant’s ethics, as well as the overall philosophical system in which it is embedded, is vast and incredibly difficult. However, one relatively simple concept lies at the center of his ethical system: The Categorical Imperative.

via Introduction to Deontology: Kantian Ethics (1000-Word Philosophy)

Author: Andrew Chapman
Category: Ethics
Word Count: 1000

Philosophy Can Make the Previously Unthinkable Thinkable

woman-at-window

Detail from Woman at a Window (1822) by Caspar David Friedrich. Courtesy Alte Nationalgalerie, Berlin


Rebecca Brown | Aeon Ideas

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

Overton was concerned with the activities of think tanks, but philosophers and practical ethicists might gain something from considering the Overton window. By its nature, practical ethics typically addresses controversial, politically sensitive topics. It is the job of philosophers to engage in ‘conceptual hygiene’ or, as the late British philosopher Mary Midgley described it, ‘philosophical plumbing’: clarifying and streamlining, diagnosing unjustified assertions and pointing out circularities.

Hence, philosophers can be eager to apply their skills to new subjects. This can provoke frustration from those embedded within a particular subject. Sometimes, this is deserved: philosophers can be naive in contributing their thoughts to complex areas with which they lack the kind of familiarity that requires time and immersion. But such an outside perspective can also be useful. Such contributions will rarely get everything right, but expecting them to is too demanding a standard in areas of great division and debate (such as practical ethics). Instead, we should expect philosophers to offer a counterpoint to received wisdom, established norms and doctrinal prejudice.

Ethicists, at least within their academic work, are encouraged to be skeptical of intuition and the naturalistic fallacy (the idea that values can be derived simply from facts). Philosophers are also familiar with tools such as thought experiments: hypothetical and contrived descriptions of events that can be useful for clarifying particular intuitions or the implications of a philosophical claim. These two factors make it unsurprising that philosophers often publicly adopt positions that are unintuitive and outside mainstream thought, and that they might not personally endorse.

This can serve to shift, and perhaps widen, the Overton window. Is this a good thing? Sometimes philosophers argue for conclusions far outside the domain of ‘respectable’ positions; conclusions that could be hijacked by those with intolerant, racist, sexist or fundamentalist beliefs to support their stance. It is understandable that those who are threatened by such beliefs want any argument that might conceivably support them to be absent from the debate, off the table, and ignored.

However, the freedom to test the limits of argumentation and intuition is vital to philosophical practice. There are sufficient and familiar examples of historical orthodoxies that have been overturned – women’s right to vote; the abolition of slavery; the decriminalisation of same-sex relationships – to establish that the strength and pervasiveness of a belief indicate neither its truth nor its immutability.

It can be tedious to repeatedly debate women’s role in the workforce, abortion, animals’ capacity to feel pain and so on, but to silence discussion would be far worse. Genuine attempts to resolve difficult ethical dilemmas must recognise that understanding develops by getting things wrong and having this pointed out. Most (arguably, all) science fails to describe or predict how the world works with perfect accuracy. But as a collective enterprise, it can identify errors and gradually approximate ‘truth’. Ethical truths are less easy to come by, and a different methodology is required in seeking out satisfactory approximations. But part of this model requires allowing plenty of room to get things wrong.

It is unfortunate but true that bad ideas are sometimes undermined by bad reasoning, and also that sometimes those who espouse offensive and largely false views can say true things. Consider the ‘born this way’ argument, which endorses the flawed assumption that a genetic basis for homosexuality indicates the permissibility of same-sex relationships. While this might win over some individuals, it could cause problems down the line if it turns out that homosexuality isn’t genetically determined. Debates relating to the ‘culture wars’ on college campuses have attracted many ad hominem criticisms that set out to discredit the authors’ position by pointing to the fact that they fit a certain demographic (white, middle-class, male) or share some view with a villainous figure, and thus are not fit to contribute. The point of philosophy is to identify such illegitimate moves, and to keep the argument on topic; sometimes, this requires coming to the defence of bad ideas or villainous characters.

Participation in this process can be daunting. Defending an unpopular position can make one a target both for well-directed, thoughtful criticisms, and for emotional, sweeping attacks. Controversial positions on contentious topics attract far more scrutiny than abstract philosophical contributions to niche subjects. This means that, in effect, the former are required to be more rigorous than the latter, and to foresee and head off more potential misappropriations, misinterpretations and misunderstandings – all while contributing to an interdisciplinary area, which requires some understanding not only of philosophical theory but perhaps also medicine, law, natural and social science, politics and various other disciplines.

This can be challenging, though I do not mean to be an apologist for thoughtless, sensationalist provocation and controversy-courting, whether delivered by philosophers or others. We should see one important social function of practical ethicists as widening the Overton window and pushing the public and political debate towards reasoned deliberation and respectful disagreement. Widening the Overton window can yield opportunities for ideas that many find offensive, and straightforwardly mistaken, as well as for ideas that are well-defended and reasonable. It is understandable that those with deep personal involvement in these debates often want to narrow the window and push it in the direction of those views they find unthreatening. But philosophers have a professional duty, as conceptual plumbers, to keep the whole system in good working order. This depends upon philosophical contributors upholding the disciplinary standards of academic rigour and intellectual honesty that are essential to ethical reflection, and trusting that this will gradually, collectively lead us in the right direction.

Rebecca Brown

This article was originally published at Aeon and has been republished under Creative Commons.

The Existentialist Tradition

existentialist-tradition


This just recently arrived in the mail: The Existentialist Tradition: Selected Writings, edited by Nino Langiulli. I’m very happy to have found this book in good condition. This was my first introduction to existentialism around 10 years ago. I originally found it at the University library and the ideas contained within are thought-provoking and sometimes even profound. Very glad to have found a copy for myself all these years later. Highly recommended as an introduction to existentialism and a guide to which authors you may wish to pursue further.

The Empathetic Humanities have much to teach our Adversarial Culture

Books


Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Slaying the Snark: What Nonsense Verse tells us about Reality

hunting-snark

Eighth of Henry Holiday’s original illustrations to “The Hunting of the Snark” by Lewis Carroll, Wikipedia

Nina Lyon | Aeon Ideas

The English writer Lewis Carroll’s nonsense poem The Hunting of the Snark (1876) is an exceptionally difficult read. In it, a crew of improbable characters boards a ship to hunt a Snark, which might sound like a plot were it not for the fact that nobody knows what a Snark actually is. It doesn’t help that any attempt to describe a Snark turns into a pile-up of increasingly incoherent attributes: it is said to taste ‘meagre and hollow, but crisp: / Like a coat that is rather too tight in the waist’.

The only significant piece of information we have about the Snark’s identity is that it might be a Boojum. Unfortunately nobody knows what that is either, apart from the fact that anyone who encounters a Boojum will ‘softly and suddenly vanish away’ into nothingness.

Nothingness also characterises the crew’s map: a ‘perfect and absolute blank!’

‘What’s the good of Mercator’s North Poles and Equators,
Tropics, Zones and Meridian Lines?’
So the Bellman would cry: and the crew would reply,
‘They are merely conventional signs!’

Nonsense such as this might get tiresome to read, but it can make for a useful thought-experiment – particularly about language. In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?

Language can’t always convey meaning alone – it might need sense, the governing context that frames it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is, until it goes missing. In 1892, the German logician Gottlob Frege used ‘sense’ to describe a proposition’s meaning, as something distinct from what it denotes. Sense therefore appears to be a mental entity, resistant to fixed definition.

Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logicians such as Bertrand Russell sought to deploy logic and mathematics in order to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.

Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game, an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.

In 1901, the pragmatist philosopher and provocateur F C S Schiller created a parody Christmas edition of the philosophical journal Mind called Mind!. The frontispiece was a ‘Portrait of Its Immanence the Absolute’, which, Schiller noted, was ‘very like the Bellman’s map in the Hunting of the Snark’: completely blank.

The Absolute – or the Infinite or Ultimate Reality, among other grand aliases – was the sum of all experience and being, and inconceivable to the human mind. It was monistic, consuming all into the One. If it sounded like something you’d struggle to get your head around, that was pretty much the point. The Absolute was an emblem of metaphysical idealism, the doctrine that truth could exist only within the domain of thought. Idealism had dominated the academy for the entirety of Carroll’s career, and it was beginning to come under attack. The realist mission, headed by Russell, was to clean up philosophy’s act with the sound application of mathematics and objective facts, and it felt like a breath of fresh air.

Schiller delighted in trolling absolute idealists in general and the English idealist philosopher F H Bradley in particular. In Mind!, Schiller claimed that the Snark was a satire on the Absolute, whose notorious ineffability drove its seekers to derangement. But this was disingenuous. Bradley’s major work, Appearance and Reality (1893), mirrors the point of the Snark, insofar as there is one. When you home in on a thing and try to pin it down by describing its attributes, and then try to pin down what those are too – Bradley uses the example of a lump of sugar – it all begins to crumble, and must be something other instead. What appeared to be there was only ever an idea. Carroll was, contrariwise, in line with idealist thinking.

A passionate logician, Carroll had been working on a three-part book on symbolic logic that remained unfinished at his death. Two logical paradoxes that he posed in Mind and shared privately with friends and colleagues, such as Bradley, hint at a troublemaking sentiment regarding where logic might be headed. ‘A Logical Paradox’ (1894) resulted in two contradictory statements being simultaneously true; ‘What the Tortoise Said to Achilles’ (1895) set up a predicament in which each proposition requires an additional supporting proposition, creating an infinite regress.

A few years after Carroll’s death, Russell began to flex logic as a tool for denoting the world and testing the validity of propositions about it. Carroll’s paradoxes were problematic and demanded a solution. Russell’s response to ‘A Logical Paradox’ was to legislate nonsense away into a ‘null-class’ – a set of nonexistent propositions that, because it had no real members, didn’t exist either.

Russell’s solution to ‘What the Tortoise Said to Achilles’, tucked away in a footnote to the Principles of Mathematics (1903), entailed a recourse to sense in order to determine whether or not a proposition should be asserted in the first place, teetering into the mind-dependent realm of idealism. Mentally determining meaning is a bit like mentally determining reality, and it wasn’t a neat win for logic’s role as objective sword of truth.

In the Snark, the principles of narrative self-immolate, so that the story, rather than describing things and events in the world, undoes them into something other. It ends like this:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away –
For the Snark was a Boojum, you see.

Strip the plot down to those eight final words, and it is all there. The thing sought turned out, upon examination, to be something else entirely. Beyond the flimsy veil of appearance, formed from words and riddled with holes, lies an inexpressible reality.

By the late-20th century, when Russell had won the battle of ideas and commonsense realism prevailed, critics such as Martin Gardner, author of The Annotated Hunting of the Snark (2006), were rattled by Carroll’s antirealism. If the reality we perceive is all there is, and it falls apart, we are left with nothing.

Carroll’s attacks on realism might look nihilistic or radical to a postwar mind steeped in atheist scientism, but they were neither. Carroll was a man of his time, taking a philosophically conservative party line on absolute idealism and its theistic implications. But he was also prophetic, seeing conflict at the limits of language, logic and reality, and laying a series of conceptual traps that continue to provoke it.

The Snark is one such trap. Carroll rejected his illustrator Henry Holiday’s image of the Boojum on the basis that it needed to remain unimaginable, for, after all, how can you illustrate the incomprehensible nature of ultimate reality? It is a task as doomed as saying the unsayable – which, paradoxically, was a task Carroll himself couldn’t quite resist.

Nina Lyon

This article was originally published at Aeon and has been republished under Creative Commons.