Sooner or later we all face death. Will a sense of meaning help us?


Detail from the Dance with Death by Johann Rudolf Feyerabend. Courtesy the Basel Historical Museum, Switzerland/Wikipedia

Warren Ward | Aeon Ideas

‘Despite all our medical advances,’ my friend Jason used to quip, ‘the mortality rate has remained constant – one per person.’

Jason and I studied medicine together back in the 1980s. Along with everyone else in our course, we spent six long years memorising everything that could go wrong with the human body. We diligently worked our way through a textbook called Pathologic Basis of Disease that described, in detail, every single ailment that could befall a human being. It’s no wonder medical students become hypochondriacal, attributing sinister causes to any lump, bump or rash they find on their own person.

Jason’s oft-repeated observation reminded me that death and disease are unavoidable aspects of life. It sometimes seems, though, that we’ve developed a delusional denial of this in the West. We pour billions into prolonging life with increasingly expensive medical and surgical interventions, most of them employed in our final, decrepit years. From a big-picture perspective, this seems a futile waste of our precious health-dollars.

Don’t get me wrong. If I get struck down with cancer, heart disease or any of the myriad life-threatening ailments I learnt about in medicine, I want all the futile and expensive treatments I can get my hands on. I value my life. In fact, like most humans, I value staying alive above pretty much everything else. But also, like most, I tend to not really value my life unless I’m faced with the imminent possibility of it being taken away from me.

Another old friend of mine, Ross, was studying philosophy while I studied medicine. At the time, he wrote an essay called ‘Death the Teacher’ that had a profound effect on me. It argued that the best thing we could do to appreciate life was to keep the inevitability of our death always at the forefront of our minds.

When the Australian palliative care nurse Bronnie Ware interviewed scores of people in the last 12 weeks of their lives, she asked them their greatest regrets. The most frequent, published in her book The Top Five Regrets of the Dying (2011), were:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me;
  2. I wish I hadn’t worked so hard;
  3. I wish I’d had the courage to express my feelings;
  4. I wish I had stayed in touch with my friends; and
  5. I wish that I had let myself be happier.

The relationship between death-awareness and leading a fulfilling life was a central concern of the German philosopher Martin Heidegger, whose work inspired Jean-Paul Sartre and other existentialist thinkers. Heidegger lamented that too many people wasted their lives running with the ‘herd’ rather than being true to themselves. But Heidegger actually struggled to live up to his own ideals; in 1933, he joined the Nazi Party, hoping it would advance his career.

Despite his shortcomings as a man, Heidegger’s ideas would go on to influence a wide range of philosophers, artists, theologians and other thinkers. Heidegger believed that Aristotle’s notion of Being – which had run as a thread through Western thinking for more than 2,000 years, and been instrumental in the development of scientific thinking – was flawed at a most fundamental level. Whereas Aristotle saw all of existence, including human beings, as things we could classify and analyse to increase our understanding of the world, in Being and Time (1927) Heidegger argued that, before we start classifying Being, we should first ask the question: ‘Who or what is doing all this questioning?’

Heidegger pointed out that we who are asking questions about Being are qualitatively different to the rest of existence: the rocks, oceans, trees, birds and insects that we are asking about. He invented a special word for this Being that asks, looks and cares. He called it Dasein, which loosely translates as ‘being there’. He coined the term Dasein because he believed that we had become immune to words such as ‘person’, ‘human’ and ‘human being’, losing our sense of wonder about our own consciousness.

Heidegger’s philosophy remains attractive to many today who see how science struggles to explain the experience of being a moral, caring person aware that his precious, mysterious, beautiful life will, one day, come to an end. According to Heidegger, this awareness of our own inevitable demise makes us, unlike the rocks and trees, hunger to make our life worthwhile, to give it meaning, purpose and value.

While Western medical science, which is based on Aristotelian thinking, sees the human body as a material thing that can be understood by examining it and breaking it down to its constituent parts like any other piece of matter, Heidegger’s ontology puts human experience at the centre of our understanding of the world.

Ten years ago, I was diagnosed with melanoma. As a doctor, I knew how aggressive and rapidly fatal this cancer could be. Fortunately for me, the surgery seemed to achieve a cure (touch wood). But I was also fortunate in another sense. I became aware, in a way I never had before, that I was going to die – if not from melanoma, then from something else, eventually. I have been much happier since then. For me, this realisation, this acceptance, this awareness that I am going to die is at least as important to my wellbeing as all the advances of medicine, because it reminds me to live my life to the full every day. I don’t want to experience the regret that Ware heard about more than any other, of not living ‘a life true to myself’.

Most Eastern philosophical traditions appreciate the importance of death-awareness for a well-lived life. The Tibetan Book of the Dead, for example, is a central text of Tibetan culture. The Tibetans spend a lot of time living with death, if that isn’t an oxymoron.

The East’s greatest philosopher, Siddhartha Gautama, also known as the Buddha, realised the importance of keeping the end in sight. He saw desire as the cause of all suffering, and counselled us not to get too attached to worldly pleasures but, rather, to focus on more important things such as loving others, developing equanimity of mind, and staying in the present.

The last thing the Buddha said to his followers was: ‘Decay is inherent in all component things! Work out your salvation with diligence!’ As a doctor, I am reminded every day of the fragility of the human body, how closely mortality lurks just around the corner. As a psychiatrist and psychotherapist, however, I am also reminded how empty life can be if we have no sense of meaning or purpose. An awareness of our mortality, of our precious finitude, can, paradoxically, move us to seek – and, if necessary, create – the meaning that we so desperately crave.


Warren Ward is an associate professor of psychiatry at the University of Queensland. He is the author of the forthcoming book, Lovers of Philosophy (2021).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Why do you believe what you do? Run some diagnostics on it


A public school serving the Mennonite community in Red Run, Pennsylvania, March 1942. Photo by John Collier Jnr/Library of Congress

Miriam Schoenfield | Aeon Ideas

Many of the beliefs that play a fundamental role in our worldview are largely the result of the communities in which we’ve been immersed. Religious parents tend to beget religious children, liberal educational institutions tend to produce liberal graduates, blue states stay mostly blue, and red ones stay mostly red. Of course, some people, through their own sheer intelligence, might be able to see through fallacious reasoning, detect biases and, as a result, resist the social influences that lead most of us to belief. But I’m not that special, and so learning how susceptible my beliefs are to these sorts of influences makes me a bit squirmy.

Let’s work with a hypothetical example. Suppose I’m raised among atheists and firmly believe that God doesn’t exist. I realise that, had I grown up in a religious community, I would almost certainly have believed in God. Furthermore, we can imagine that, had I grown up a theist, I would have been exposed to all the considerations that I take to be relevant to the question of whether God exists: I would have learned science and history, I would have heard all the same arguments for and against the existence of God. The difference is that I would interpret this evidence differently. Divergences in belief result from the fact that people weigh the evidence for and against theism in varying ways. It’s not as if pooling resources and having a conversation would result in one side convincing the other – we wouldn’t have had centuries of religious conflict if things were so simple. Rather, each side will insist that the balance of considerations supports its position – and this insistence will be a product of the social environments that people on that side were raised in.

The you-just-believe-that-because challenge is meant to make us suspicious of our beliefs, to motivate us to reduce our confidence, or even abandon them completely. But what exactly does this challenge amount to? The fact that I have my particular beliefs as a result of growing up in a certain community is just a boring psychological fact about me and is not, in itself, evidence for or against anything so grand as the existence of God. So, you might wonder, if these psychological facts about us are not themselves evidence for or against our worldview, why would learning them motivate any of us to reduce our confidence in such matters?

The method of believing whatever one’s social surroundings tell one to believe is not reliable. So, when I learn about the social influences on my belief, I learn that I’ve formed my beliefs using an unreliable method. If it turns out that my thermometer produces its readings using an unreliable mechanism, I cease to trust the thermometer. Similarly, learning that my beliefs were produced by an unreliable process means that I should cease to trust them too.

But in the hypothetical example, do I really hold that my beliefs were formed by an unreliable mechanism? I might think as follows: ‘I formed my atheistic beliefs as a result of growing up in my particular community, not as a result of growing up in some community or another. The fact that there are a bunch of communities out there that inculcate their members with false beliefs doesn’t mean that my community does. So I deny that my beliefs were formed by an unreliable method. Luckily for me, they were formed by an extremely reliable method: they are the result of growing up among intelligent well-informed people with a sensible worldview.’

The thermometer analogy, then, is inapt. Learning that I would have believed differently if I’d been raised by a different community is not like learning that my thermometer is unreliable. It’s more like learning that my thermometer came from a store that sells a large number of unreliable thermometers. But the fact that the store sells unreliable thermometers doesn’t mean I shouldn’t trust the readings of my particular thermometer. After all, I might have excellent reasons to think that I got lucky and bought one of the few reliable ones.

There’s something fishy about the ‘I got lucky’ response because I would think the very same thing if I were raised in a community that I take to believe falsehoods. If I’m an atheist, I might think: ‘Luckily, I was raised by people who are well-educated, take science seriously, and aren’t in the grip of old-fashioned religious dogma.’ But if I were a theist, I would think something along the lines of: ‘If I’d been raised among arrogant people who believe that there is nothing greater than themselves, I might never have personally experienced God’s grace, and would have ended up with a completely distorted view of reality.’ The fact that the ‘I got lucky’ response is a response anyone could give seems to undermine its legitimacy.

Despite the apparent fishiness of the ‘I got lucky’ response in the case of religious belief, this response is perfectly sensible in other cases. Return to the thermometers. Suppose that, when I was looking for a thermometer, I knew very little about the different types and picked a random one off the shelf. After learning that the store sells many unreliable thermometers, I get worried and do some serious research. I discover that the particular thermometer I bought is produced by a reputable company whose thermometers are extraordinarily reliable. There’s nothing wrong with thinking: ‘How lucky I am to have ended up with this excellent thermometer!’

What’s the difference? Why does it seem perfectly reasonable to think I got lucky about the thermometer I bought but not to think that I got lucky with the community I was raised in? Here’s the answer: my belief that the community I was raised in is a reliable one is itself, plausibly, a result of growing up in that community. If I don’t take for granted the beliefs that my community instilled in me, then I’ll find that I have no particular reason to think that my community is more reliable than others. If we’re evaluating the reliability of some belief-forming method, we can’t use beliefs that are the result of that very method in support of that method’s reliability.

So, if we ought to abandon our socially influenced beliefs, it is for the following reason: deliberation about whether to maintain or abandon a belief, or set of beliefs, due to the worries about how the beliefs were formed must be conducted from a perspective that doesn’t rely on the beliefs in question. Here’s another way of putting the point: when we’re concerned about some belief we have, and are wondering whether to give it up, we’re engaged in doubt. When we doubt, we set aside some belief or cluster of beliefs, and we wonder whether the beliefs in question can be recovered from a perspective that doesn’t rely on those beliefs. Sometimes, we learn that they can be recovered once they’ve been subject to doubt, and other times we learn that they can’t.

What’s worrisome about the realisation that our moral, religious and political beliefs are heavily socially influenced is that many ways of recovering belief from doubt are not available to us in this case. We can’t make use of ordinary arguments in support of these beliefs because, in the perspective of doubt, the legitimacy of those very arguments is being questioned: after all, we are imagining that we find the arguments for our view more compelling than the arguments for alternative views as a result of the very social influences with which we’re concerned. In the perspective of doubt, we also can’t take the fact that we believe what we do as evidence for the belief’s truth, because we know that we believe what we do simply because we were raised in a certain environment, and the fact that we were raised here rather than there is no good reason to think that our beliefs are the correct ones.

It’s important to realise that the concern about beliefs being socially influenced is worrisome only if we’re deliberating about whether to maintain belief from the perspective of doubt. For recall that the facts about how my particular beliefs were caused are not, in themselves, evidence for or against any particular religious, moral or political outlook. So if you were thinking about whether to abandon your beliefs from a perspective in which you’re willing to make use of all of the reasoning and arguments that you normally use, you would simply think that you got lucky – just as you might have got lucky buying a particular thermometer, or reaching the train moments before it shuts its doors, or striking up a conversation on an airplane with someone who ends up being the love of your life.

There’s no general problem with thinking that we’ve been lucky – sometimes we are. The worry is just that, from the perspective of doubt, we don’t have the resources to justify the claim that we’ve been lucky. What’s needed to support such a belief is part of what’s being questioned.


Miriam Schoenfield is associate professor in the Department of Philosophy at the University of Texas at Austin.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Humanitarian Crisis of Deaths of Despair


Image by cocoparisienne from Pixabay

David V. Johnson | Blog of the APA

Last April, Princeton University economists and married partners Anne Case and Sir Angus Deaton delivered the Tanner Lectures on Human Values at Stanford University. The title of their talks, “Deaths of Despair and the Future of Capitalism,” is also the provisional name of their forthcoming book, to be published in 2020.

The couple’s research has focused on disturbing mortality data for a specific demographic: white non-Hispanic Americans without college degrees. This century, they have been dying at alarming rates from what Case and Deaton call “deaths of despair,” which cover suicide, alcohol-related disease, and drug overdoses (primarily driven by opioids). These deaths have, along with US obesity, heart disease, and cancer rates, contributed to a shocking recent decline in US life expectancy for three straight years—something which hasn’t happened since World War I and the 1918 Spanish flu pandemic. The rates for “deaths of despair” are not as high for college-educated whites or for racial minorities, and there are many potential economic and sociological reasons for this.

Case and Deaton’s research raises important questions for the US political economy and the legacy of neoliberalism. But I am more interested in the framing of the mortality statistics as “deaths of despair.” Assume for the sake of argument that a large segment of the US population—non-Hispanic white Americans without college degrees—are suffering despair. What does it mean to say this?

We can gain some insight by considering its opposite, hope, which has received a lot of philosophical attention for the puzzles it raises about rationality and agency. Hope is a forward-looking emotion with cognitive and desiderative elements. We hope for things that are possible in the future (we don’t hope for the impossible or the certain), which means we make a judgement about their possibility. And when we hope for them, we desire for them to come about, and this desire can motivate our action if we think our acting can help bring it about. Is it rational to hope for something that has a minuscule chance of happening, and if so, under what circumstances? And when is it rational to act based on hope? Much ink has been spilt on these questions.

Philosophers have also thought about hopefulness—about hope as an emotional tendency or character trait that undergirds agency. People who are hopeful or optimistic are generally better able to pursue their plans and succeed, which gives the adoption of a hopeful outlook a pragmatic justification. One could argue that some minimal level of hopefulness is requisite for anyone to plan, act, and live one’s life, insofar as these involve forward-looking judgments and desires that are characteristic of hope.

We can see why despair, as a condition opposed to hope and hopefulness, can be such a debilitating state of mind. Despair undermines agency. The despairing person may conceive of plans and goals but feel that he is so unlikely to achieve them that they are not worth the investment of time and energy, or that even if he does achieve them, it won’t make a substantive difference to his life. So despair undermines the requisite motivation to pursue our plans and goals. A despairing person tends to passivity, to go along with the flow of life and focus on getting by, making do, and assuaging pain and foreboding however she can at the moment.

But despair—or at least the sort of despair I identify in Case and Deaton’s analysis—has a very different structure from hope. If despair were structurally like hope, then it would also be a forward-looking emotion with the appropriate cognitive and desiderative elements. We would be in a state of despair if we believed there was something that could possibly happen in the future that we do not want to have happen, so much so that its possibility gives us anguish and depresses us, to the point that we have difficulty summoning the motivation to avoid it or to go about our lives generally. To be sure, there are forms of despair that are like this. If my boss gives me a poor performance review and warns that I may be subject to termination, and the livelihood of my family depends upon my employment, this may send me into despair. I see my future firing as possible and something I desperately want to avoid, to the point of anxiety and depression. My despondent feelings may undermine my ability to perform better, making my firing even more likely. I may also have trouble living my life in general due to my negative feelings. I may struggle to talk to my spouse about her day or plan my daughter’s after-school activities.

But there is another form of despair that is not like this. This kind of despair is not forward looking, per se, but rather focused narrowly on the present. It sees the present as dark, dreary, painful, and uninteresting, and anticipates that this state of consciousness will extend indefinitely into the future. It’s the feeling of unrelenting misery and ennui. No one wants to feel like this, but the person who despairs in this way does not form the desire to avoid it, or is not motivated by such a desire, because he does not see a means of escape or because the present sense of pain and dreariness is so overwhelming that it disrupts his ability to imagine such means. This form of despair is what Case and Deaton have in mind: people who have not only lost the will to live—i.e. to direct their lives, make plans, pursue them—but are so miserable and distressed that they either die by suicide or self-medicate with drugs and binge drinking to lessen their immediate pain, and do so as a way of slowly dying by suicide. It is the constant feeling associated with present consciousness that life is bad, and that it will continue to be bad indefinitely into the future. A sizable portion of the American public feels this way.

Case and Deaton’s appeal to despair, if we understand it correctly, should shock us. The prevalence of despair represents a horrific communal collapse. It goes well beyond statistics of poor welfare outcomes that alarm economists. It is about the obliteration of human lives—the undermining of the very basis of living a life, the ability to enjoy experience moment to moment, have enough peace of mind and stability to anticipate the future, make plans, and pursue them. It is nothing less than a humanitarian crisis.


David V. Johnson is the public philosophy editor of the APA Blog and deputy editor of Stanford Social Innovation Review. He is a former philosophy professor turned journalist with more than a decade of experience as an editor and writer. Previously, he was senior opinion editor at Al Jazeera America, where he edited the op-ed section of the news channel’s website. Earlier in his career, he served as online editor at Boston Review and research editor at San Francisco magazine the year it won a National Magazine Award for general excellence. He has written for The New York Times, USA Today, The New Republic, Bookforum, Aeon, Dissent, and The Baffler, among other publications.

This article was republished with the permission of the APA Blog and the author. View the original article here.

How Mengzi came up with something better than the Golden Rule


Family Training, unknown artist, Ming (1368-1644) or Qing (1644-1911) dynasty. Courtesy the Met Museum, New York

Eric Schwitzgebel | Aeon Ideas

There’s something I don’t like about the ‘Golden Rule’, the admonition to do unto others as you would have others do unto you. Consider this passage from the ancient Chinese philosopher Mengzi (Mencius):

That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one’s parents as parents is benevolence. Revering one’s elders is righteousness. There is nothing else to do but extend these to the world.

One thing I like about the passage is that it assumes love and reverence for one’s family as a given, rather than as a special achievement. It portrays moral development simply as a matter of extending that natural love and reverence more widely.

In another passage, Mengzi notes the kindness that the vicious tyrant King Xuan exhibits in saving a frightened ox from slaughter, and he urges the king to extend similar kindness to the people of his kingdom. Such extension, Mengzi says, is a matter of ‘weighing’ things correctly – a matter of treating similar things similarly, and not overvaluing what merely happens to be nearby. If you have pity for an innocent ox being led to slaughter, you ought to have similar pity for the innocent people dying in your streets and on your battlefields, despite their invisibility beyond your beautiful palace walls.

Mengzian extension starts from the assumption that you are already concerned about nearby others, and takes the challenge to be extending that concern beyond a narrow circle. The Golden Rule works differently – and so too the common advice to imagine yourself in someone else’s shoes. In contrast with Mengzian extension, Golden Rule/others’ shoes advice assumes self-interest as the starting point, and implicitly treats overcoming egoistic selfishness as the main cognitive and moral challenge.

Maybe we can model Golden Rule/others’ shoes thinking like this:

  1. If I were in the situation of person x, I would want to be treated according to principle p.
  2. Golden Rule: do unto others as you would have others do unto you.
  3. Thus, I will treat person x according to principle p.

And maybe we can model Mengzian extension like this:

  1. I care about person y and want to treat that person according to principle p.
  2. Person x, though perhaps more distant, is relevantly similar.
  3. Thus, I will treat person x according to principle p.

There will be other more careful and detailed formulations, but this sketch captures the central difference between these two approaches to moral cognition. Mengzian extension models general moral concern on the natural concern we already have for people close to us, while the Golden Rule models general moral concern on concern for oneself.

I like Mengzian extension better for three reasons. First, Mengzian extension is more psychologically plausible as a model of moral development. People do, naturally, have concern and compassion for others around them. Explicit exhortations aren’t needed to produce this natural concern and compassion, and these natural reactions are likely to be the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies. If you need to reason or analogise your way into concern even for close family members, you’re already in deep moral trouble.

Second, Mengzian extension is less ambitious – in a good way. The Golden Rule imagines a leap from self-interest to generalised good treatment of others. This might be excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, requiring less of a leap. Self-to-other is a huge moral and ontological divide. Family-to-neighbour, neighbour-to-fellow citizen – that’s much less of a divide.

Third, you can turn Mengzian extension back on yourself, if you are one of those people who has trouble standing up for your own interests – if you’re the type of person who is excessively hard on yourself or who tends to defer a bit too much to others. You would want to stand up for your loved ones and help them flourish. Apply Mengzian extension, and offer the same kindness to yourself. If you’d want your father to be able to take a vacation, realise that you probably deserve a vacation too. If you wouldn’t want your sister to be insulted by her spouse in public, realise that you too shouldn’t have to suffer that indignity.

Although Mengzi and the 18th-century French philosopher Jean-Jacques Rousseau both endorse mottoes standardly translated as ‘human nature is good’ and have views that are similar in important ways, this is one difference between them. In both Emile (1762) and Discourse on Inequality (1755), Rousseau emphasises self-concern as the root of moral development, making pity and compassion for others secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that ‘love of men derived from love of self is the principle of human justice’.

This difference between Mengzi and Rousseau is not a general difference between East and West. Confucius, for example, endorses something like the Golden Rule in the Analects: ‘Do not impose on others what you yourself do not desire.’ Mozi and Xunzi, also writing in ancient China, imagine people acting mostly or entirely selfishly until society artificially imposes its regulations, and so they see the enforcement of rules rather than Mengzian extension as the foundation of moral development. Moral extension is thus specifically Mengzian rather than generally Chinese.

Care about me not because you can imagine what you would selfishly want if you were me. Care about me because you see how I am not really so different from others you already love.


This is an edited extract from ‘A Theory of Jerks and Other Philosophical Misadventures’ © 2019 by Eric Schwitzgebel, published by MIT Press.

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside. He blogs at The Splintered Mind and is the author of Perplexities of Consciousness (2011) and A Theory of Jerks and Other Philosophical Misadventures (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Meaning to Life? A Darwinian Existentialist has his Answers


Michael Ruse | Aeon Ideas

I was raised as a Quaker, but around the age of 20 my faith faded. It would be easiest to say that this was because I took up philosophy – my lifelong occupation as a teacher and scholar. This is not true. More accurately, I joke that having had one headmaster in this life, I’ll be damned if I want another in the next. I was convinced back then that, by the age of 70, I would be getting back onside with the Powers That Be. But faith did not then return and, as I approach 80, is nowhere on the horizon. I feel more at peace with myself than ever before. It’s not that I don’t care about the meaning or purpose of life – I am a philosopher! Nor does my sense of peace mean that I am complacent or that I have delusions about my achievements and successes. Rather, I feel that deep contentment that religious people tell us is the gift or reward for proper living.

I come to my present state for two separate reasons. As a student of Charles Darwin, I am totally convinced – God or no God – that we are (as the 19th-century biologist Thomas Henry Huxley used to say) modified monkeys rather than modified mud. Culture is hugely important, but to ignore our biology is just wrong. Second, I am drawn, philosophically, to existentialism. A century after Darwin, Jean-Paul Sartre said that we are condemned to freedom, and I think he is right. Even if God does exist, He or She is irrelevant. The choices are ours.

Sartre denied that there is any such thing as human nature. From this quintessential Frenchman, I take that with a pinch of salt: we are free, within the context of our Darwinian-created human nature. What am I talking about? A lot of philosophers today are uncomfortable even raising the idea of ‘human nature’. They feel that, too quickly, it is used against minorities – gay people, the disabled, and others – to suggest that they are not really human. This is a challenge, not a refutation. If a definition of human nature cannot take account of the fact that up to 10 per cent of us have same-sex orientation, then the problem is not with human nature but with the definition.

What, then, is human nature? In the middle of the 20th century, it was popular to suggest that we are killer apes: we can and do make weapons, and we use them. But modern primatologists have little time for this. Their findings suggest that most apes would far rather fornicate than fight. In making war we are really not doing what comes naturally. I don’t deny that humans are violent; our essence, however, goes the other way. It is one of sociability. We are not that fast, we are not that strong, we are hopeless in bad weather; but we succeed because we work together. Indeed, our lack of natural weapons points that way. We cannot get all we want through violence. We must cooperate.

Darwinians did not discover this fact about our nature. Listen to the metaphysical poet John Donne in 1624:

No man is an island,
Entire of itself,
Every man is a piece of the continent,
A part of the main.
If a clod be washed away by the sea,
Europe is the less.
As well as if a promontory were.
As well as if a manor of thy friend’s
Or of thine own were:
Any man’s death diminishes me,
Because I am involved in mankind,
And therefore never send to know for whom the bell tolls;
It tolls for thee.

Darwinian evolutionary theory shows how this all came about, historically, through the forces of nature. It suggests that there is no eternal future or, if there is, it is not relevant for the here and now. Rather, we must live life to the full, within the context of – liberated by – our Darwinian-created human nature. I see three basic ways in which this occurs.

First, family. Humans are not like male orangutans whose home life is made up mainly of one-night stands. A male turns up, does his business, and then, sexually sated, vanishes. The impregnated female births and raises the children by herself. This is possible simply because she can. If she couldn’t, then biologically it would be in the interests of the males to lend a hand. Male birds help at the nest because, exposed as they are up trees, the chicks need to grow as quickly as possible. Humans face different challenges, but with the same end. We have big brains that need time to develop. Our young cannot fend for themselves within weeks or days. Therefore humans need lots of parental care, and our biology fits us for home life, as it were: spouses, offspring, parents, and more. Men don’t push the pram just by chance. Nor do they boast to their co-workers about their kid getting into Harvard.

Second, society. Co-workers, shop attendants, teachers, doctors, hotel clerks – the list is endless. Our evolutionary strength is that we work together, helping and expecting help. I am a teacher, not just of my children, but of yours (and others) too. You are a doctor: you give medical care not just to your children, but to mine (and others) too. In this way, we all benefit. As Adam Smith pointed out in 1776, none of this happens by chance or because nature has suddenly become soft: ‘It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.’ Smith invoked the ‘invisible hand’. The Darwinian puts it down to evolution through natural selection.

Though life can be a drag sometimes, biology ensures that we generally get on with the job, and do it as part of our fulfilled lives. John Stuart Mill had it exactly right in 1863: ‘When people who are fairly fortunate in their material circumstances don’t find sufficient enjoyment to make life valuable to them, this is usually because they care for nobody but themselves.’

Third, culture. Works of art and entertainment, TV, movies, plays, novels, paintings and sport. Note how social it all is. Romeo and Juliet, about two kids in ill-fated love. The Sopranos, about a mob family. A Roy Lichtenstein faux-comic painting; a girl on the phone: ‘Oh, Jeff… I love you, too… but…’ England beating Australia at cricket. There are evolutionists who doubt that culture is so tightly bound to biology, and who are inclined to see it as a side-product of evolution, what Stephen Jay Gould in 1982 called an ‘exaptation’. This is surely true in part. But probably only in part. Darwin thought that culture might have something to do with sexual selection: protohumans using songs and melodies, say, to attract mates. Sherlock Holmes agreed; in A Study in Scarlet (1887), he tells Watson that musical ability predates speech, according to Darwin: ‘Perhaps that is why we are so subtly influenced by it. There are vague memories in our souls of those misty centuries when the world was in its childhood.’

Draw it together. I have had a full family life, a loving spouse and children. I even liked teenagers. I have been a college professor for 55 years. I have not always done the job as well as I could, but I am not lying when I say that Monday morning is my favourite time of the week. I’m not much of a creative artist, and I’m hopeless at sports. But I have done my scholarship and shared with others. Why else am I writing this? And I have enjoyed the work of fellow humans. A great performance of Mozart’s opera The Marriage of Figaro is heaven. I speak literally.

This is my meaning to life. When I meet my nonexistent God, I shall say to Him: ‘God, you gave me talents and it’s been a hell of a lot of fun using them. Thank you.’ I need no more. As George Meredith wrote in his poem ‘In the Woods’ (1870):

The lover of life knows his labour divine,
And therein is at peace.


A Meaning to Life (2019) by Michael Ruse is published by Princeton University Press.

Michael Ruse is the Lucyle T Werkmeister Professor of Philosophy and director of the history and philosophy of science program at Florida State University. He has written or edited more than 50 books, including most recently On Purpose (2017), Darwinism as Religion (2016), The Problem of War (2018) and A Meaning to Life (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Can you step in the same river twice? Wittgenstein v Heraclitus


Photo Pixabay

David Egan | Aeon Ideas

‘I am not a religious man,’ the philosopher Ludwig Wittgenstein once said to a friend, ‘but I cannot help seeing every problem from a religious point of view.’ These problems that he claims to see from a religious point of view tend to be technical matters of logic and language. Wittgenstein trained as an engineer before he turned to philosophy, and he draws on mundane metaphors of gears, levers and machinery. Where you find the word ‘transcendent’ in Wittgenstein’s writings, you’ll likely find ‘misunderstanding’ or ‘nonsense’ nearby.

When he does respond to philosophers who set their sights on higher mysteries, Wittgenstein can be stubbornly dismissive. Consider: ‘The man who said one cannot step into the same river twice was wrong; one can step into the same river twice.’ With such blunt statements, Wittgenstein seems less a religious thinker and more a stodgy literalist. But a close examination of this remark can show us not only what Wittgenstein means by a ‘religious point of view’ but also reveal Wittgenstein as a religious thinker of striking originality.

‘The man’ who made the remark about rivers is Heraclitus, a philosopher at once pre-Socratic and postmodern, misquoted on New Age websites and quoted out of context by everyone, since all we have of his corpus are isolated fragments. What is it that Heraclitus thinks we can’t do? Obviously I can do a little in-and-out-and-back-in-again shuffle with my foot at a riverbank. But is it the same river from moment to moment – the water flowing over my foot spills toward the ocean while new waters join the river at its source – and am I the same person?

One reading of Heraclitus has him conveying a mystical message. We use this one word, river, to talk about something that’s in constant flux, and that might dispose us to think that things are more fixed than they are – indeed, to think that there are stable things at all. Our noun-bound language can’t capture the ceaseless flow of existence. Heraclitus is saying that language is an inadequate tool for the purpose of limning reality.

What Wittgenstein finds intriguing about so many of our philosophical pronouncements is that while they seem profoundly important, it’s unclear what difference they make to anything. Imagine Heraclitus spending an afternoon down by the river (or the constantly changing flux of river-like moments, if you prefer) with his friend Parmenides, who says that change is impossible. They might have a heated argument about whether the so-called river is many or one, but afterwards they can both go for a swim, get a cool drink to refresh themselves, or slip into some waders for a bit of fly fishing. None of these activities is in the least bit altered by the metaphysical commitments of the disputants.

Wittgenstein thinks that we can get clearer about such disputes by likening the things that people say to moves in a game. Just as every move in a game of chess alters the state of play, so does every conversational move alter the state of play in what he calls the language-game. The point of talking, like the point of moving a chess piece, is to do something. But a move only counts as that move in that game provided a certain amount of stage-setting. To make sense of a chess game, you need to be able to distinguish knights from bishops, know how the different pieces move, and so on. Placing pieces on the board at the start of the game isn’t a sequence of moves. It’s something we do to make the game possible in the first place.

One way we get confused by language, Wittgenstein thinks, is that the rule-stating and place-setting activities happen in the same medium as the actual moves of the language-game – that is, in words. ‘The river is overflowing its banks’ and ‘The word river is a noun’ are both grammatically sound English sentences, but only the former is a move in a language-game. The latter states a rule for using language: it’s like saying ‘The bishop moves diagonally’, and it’s no more a move in a language-game than a demonstration of how the bishop moves is a move in chess.

What Heraclitus and Parmenides disagree about, Wittgenstein wants us to see, isn’t a fact about the river but the rules for talking about the river. Heraclitus is recommending a new language-game: one in which the rule for using the word river prohibits us from saying that we stepped into the same one twice, just as the rules of our own language-game prohibit us from saying that the same moment occurred at two different times. There’s nothing wrong with proposing alternative rules, provided you’re clear that that’s what you’re doing. If you say: ‘The king moves just like the queen,’ you’re either saying something false about our game of chess or you’re proposing an alternative version of the game – which might or might not turn out to be any good. The trouble with Heraclitus is that he imagines he’s talking about rivers and not rules – and, in that case, he’s simply wrong. The mistake we so often make in philosophy, according to Wittgenstein, is that we think we’re doing one thing when in fact we’re doing another.

But if we dismiss the remark about rivers as a naive blunder, we learn nothing from it. ‘In a certain sense one cannot take too much care in handling philosophical mistakes, they contain so much truth,’ Wittgenstein cautions. Heraclitus and Parmenides might not do anything different as a result of their metaphysical differences, but those differences bespeak profoundly different attitudes toward everything they do. That attitude might be deep or shallow, bold or timorous, grateful or crabbed, but it isn’t true or false. Similarly, the rules of a game aren’t right or wrong – they’re the measure by which we determine whether moves within the game are right or wrong – but which games you think are worth playing, and how you relate to the rules as you play them, says a lot about you.

What, then, inclines us – and Heraclitus – to regard this expression of an attitude as a metaphysical fact? Recall that Heraclitus wants to reform our language-games because he thinks they misrepresent the way things really are. But consider what you’d need to do in order to assess whether our language-games are more or less adequate to some ultimate reality. You’d need to compare two things: our language-game and the reality that it’s meant to represent. In other words, you’d need to compare reality as we represent it to ourselves with reality free of all representation. But that makes no sense: how can you represent to yourself how things look free of all representation?

The fact that we might even be tempted to suppose we can do that bespeaks a deeply human longing to step outside our own skins. We can feel trapped by our bodily, time-bound existence. There’s a kind of religious impulse that seeks liberation from these limits: it seeks to transcend our finite selves and make contact with the infinite. Wittgenstein’s religious impulse pushes us in the opposite direction: he doesn’t try to satisfy our aspiration for transcendence but to wean us from that aspiration altogether. The liberation he offers isn’t liberation from our bounded selves but for our bounded selves.

Wittgenstein’s remark about Heraclitus comes from a typescript from the early 1930s, when Wittgenstein was just beginning to work out the mature philosophy that would be published posthumously as Philosophical Investigations (1953). Part of what makes that late work special is the way in which the Wittgenstein who sees every problem from a religious point of view merges with the practical-minded engineer. Metaphysical speculations, for Wittgenstein, are like gears that have slipped free from the mechanism of language and are spinning wildly out of control. Wittgenstein the engineer wants to get the mechanism running smoothly. And this is precisely where the spiritual insight resides: our aim, properly understood, isn’t transcendence but a fully invested immanence. In this respect, he offers a peculiarly technical approach to an aspiration that finds expression in mystics from Meister Eckhart to the Zen patriarchs: not to ascend to a state of perfection but to recognise that where you are, already, in this moment, is all the perfection you need.


David Egan is a visiting assistant professor in the Department of Philosophy at CUNY Hunter College in New York. He is the author of The Pursuit of an Authentic Philosophy: Wittgenstein, Heidegger, and the Everyday (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Avoid Moral Failure, Don’t See People as Sherlock Does


Suspicious minds; William Gillette as Sherlock Holmes (right) and Bruce McRae as Dr John Watson in the play Sherlock Holmes (c1900). Courtesy Wikimedia

Rima Basu | Aeon Ideas

If we’re the kind of people who care both about not being racist, and also about basing our beliefs on the evidence that we have, then the world presents us with a challenge. The world is pretty racist. It shouldn’t be surprising then that sometimes it seems as if the evidence is stacked in favour of some racist belief. For example, it’s racist to assume that someone’s a staff member on the basis of his skin colour. But what if it’s the case that, because of historical patterns of discrimination, the members of staff with whom you interact are predominantly of one race? When the late John Hope Franklin, professor of history at Duke University in North Carolina, hosted a dinner party at his private club in Washington, DC in 1995, he was mistaken for a member of staff. Did the woman who made this mistake do something wrong? Yes. It was indeed racist of her, even though Franklin was, since 1962, that club’s first black member.

To begin with, we don’t relate to people in the same way that we relate to objects. Human beings are different in an important way. In the world, there are things – tables, chairs, desks and other objects that aren’t furniture – and we try our best to understand how this world works. We ask why plants grow when watered, why dogs give birth to dogs and never to cats, and so on. But when it comes to people, ‘we have a different way of going on, though it is hard to capture just what that is’, as Rae Langton, now professor of philosophy at the University of Cambridge, put it so nicely in 1991.

Once you accept this general intuition, you might begin to wonder how can we capture that different way in which we ought to relate to others. To do this, first we must recognise that, as Langton goes on to write, ‘we don’t simply observe people as we might observe planets, we don’t simply treat them as things to be sought out when they can be of use to us, and avoid when they are a nuisance. We are, as [the British philosopher P F] Strawson says, involved.’

This way of being involved has been played out in many different ways, but here’s the basic thought: being involved is thinking that others’ attitudes and intentions towards us are important in a special way, and that our treatment of others should reflect that importance. We are, each of us, in virtue of being social beings, vulnerable. We depend upon others for our self-esteem and self-respect.

For example, we each think of ourselves as having a variety of more or less stable characteristics, from marginal ones such as being born on a Friday to central ones such as being a philosopher or a spouse. The more central self-descriptions are important to our sense of self-worth, to our self-understanding, and they constitute our sense of identity. When these central self-descriptions are ignored by others in favour of expectations on the basis of our race, gender or sexual orientation, we’re wronged. Perhaps our self-worth shouldn’t be based on something so fragile, but not only are we all-too-human, these self-descriptions also allow us to understand who we are and where we stand in the world.

This thought is echoed in the American sociologist and civil rights activist W E B DuBois’s concept of double consciousness. In The Souls of Black Folk (1903), DuBois notes a common feeling: ‘this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity’.

When you believe that John Hope Franklin must be a staff member rather than a club member, you’ve made predictions of him and observed him in the same way that one might observe the planets. Our private thoughts can wrong other people. When someone forms beliefs about you in this predictive way, they fail to see you, they fail to interact with you as a person. This is not only upsetting. It is a moral failing.

The English philosopher W K Clifford argued in 1877 that we were morally criticisable if our beliefs weren’t formed in the right way. He warned that we have a duty to humanity to never believe on the basis of insufficient evidence because to do so would be to put society at risk. As we look at the world around us and the epistemic crisis in which we find ourselves, we see what happens when Clifford’s imperative is ignored. And if we combine Clifford’s warning with DuBois’s and Langton’s observations, it becomes clear that, for our belief-forming practices, the stakes aren’t just high because we depend on one another for knowledge – the stakes are also high because we depend on one another for respect and dignity.

Consider how upset Arthur Conan Doyle’s characters get with Sherlock Holmes for the beliefs this fictional detective forms about them. Without fail, the people whom Holmes encounters find the way he forms beliefs about others to be insulting. Sometimes it’s because it is a negative belief. Often, however, the belief is mundane: eg, what they ate on the train or which shoe they put on first in the morning. There’s something improper about the way that Holmes relates to other human beings. Holmes’s failure to relate is not just a matter of his actions or his words (though sometimes it is also that), but what really rubs us up the wrong way is that Holmes observes us all as objects to be studied, predicted and managed. He doesn’t relate to us as human beings.

Maybe in an ideal world, what goes on inside our heads wouldn’t matter. But just as the personal is the political, our private thoughts aren’t really only our own. If a man believes of every woman he meets: ‘She’s someone I can sleep with,’ it’s no excuse that he never acts on the belief or reveals the belief to others. He has objectified her and failed to relate to her as a human being, and he has done so in a world in which women are routinely objectified and made to feel less-than.

This kind of indifference to the effect one has on others is morally criticisable. It has always struck me as odd that everyone grants that our actions and words are apt for moral critique, but once we enter the realm of thought we’re off the hook. Our beliefs about others matter. We care what others think of us.

When we mistake a person of colour for a staff member, that challenges this person’s central self-descriptions, the descriptions from which he draws his sense of self-worth. This is not to say that there is anything wrong with being a staff member, but if your reason for thinking that someone is staff is tied not only to something he has no control over (his skin colour) but also to a history of oppression (being denied access to more prestigious forms of employment), then that should give you pause.

The facts might not be racist, but the facts that we often rely on can be the result of racism, including racist institutions and policies. So when forming beliefs using evidence that is a result of racist history, we are accountable for failing to show more care and for believing so easily that someone is a staff member. Precisely what is owed can vary along a number of dimensions, but nonetheless we can recognise that some extra care with our beliefs is owed along these lines. We owe each other not only better actions and better words, but also better thoughts.


Rima Basu is an assistant professor of philosophy at Claremont McKenna College in California. Her work has been published in Philosophical Studies, among others.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How the Dualism of Descartes Ruined our Mental Health


Yard with Lunatics 1794, (detail) by Francisco José de Goya y Lucientes. Courtesy Wikimedia/Meadows Museum, Dallas

James Barnes | Aeon Ideas

Toward the end of the Renaissance period, a radical epistemological and metaphysical shift overcame the Western psyche. The advances of Nicolaus Copernicus, Galileo Galilei and Francis Bacon posed a serious problem for Christian dogma and its dominion over the natural world. Following Bacon’s arguments, the natural world was now to be understood solely in terms of efficient causes (ie, external effects). Any inherent meaning or purpose to the natural world (ie, its ‘formal’ or ‘final’ causes) was deemed surplus to requirements. Insofar as it could be predicted and controlled in terms of efficient causes, not only was any notion of nature beyond this conception redundant, but God too could be effectively dispensed with.

In the 17th century, René Descartes’s dualism of matter and mind was an ingenious solution to the problem this created. ‘The ideas’ that had hitherto been understood as inhering in nature as ‘God’s thoughts’ were rescued from the advancing army of empirical science and withdrawn into the safety of a separate domain, ‘the mind’. On the one hand, this maintained a dimension proper to God, and on the other, served to ‘make the intellectual world safe for Copernicus and Galileo’, as the American philosopher Richard Rorty put it in Philosophy and the Mirror of Nature (1979). In one fell swoop, God’s substance-divinity was protected, while empirical science was given reign over nature-as-mechanism – something ungodly and therefore free game.

Nature was thereby drained of her inner life, rendered a deaf and blind apparatus of indifferent and value-free law, and humankind was faced with a world of inanimate, meaningless matter, upon which it projected its psyche – its aliveness, meaning and purpose – only in fantasy. It was this disenchanted vision of the world, at the dawn of the industrial revolution that followed, that the Romantics found so revolting, and feverishly revolted against.

The French philosopher Michel Foucault in The Order of Things (1966) termed it a shift in ‘episteme’ (roughly, a system of knowledge). The Western psyche, Foucault argued, had once been typified by ‘resemblance and similitude’. In this episteme, knowledge of the world was derived from participation and analogy (the ‘prose of the world’, as he called it), and the psyche was essentially extroverted and world-involved. But after the bifurcation of mind and nature, an episteme structured around ‘identity and difference’ came to possess the Western psyche. The episteme that now prevailed was, in Rorty’s terms, solely concerned with ‘truth as correspondence’ and ‘knowledge as accuracy of representations’. Psyche, as such, became essentially introverted and untangled from the world.

Foucault argued, however, that this move was not a supersession per se, but rather constituted an ‘othering’ of the prior experiential mode. As a result, its experiential and epistemological dimensions were not only denied validity as an experience, but became the ‘occasion of error’. Irrational experience (ie, experience inaccurately corresponding to the ‘objective’ world) then became a meaningless mistake – and disorder the perpetuation of that mistake. This is where Foucault located the beginning of the modern conception of ‘madness’.

Although Descartes’s dualism did not win the philosophical day, we in the West are still very much the children of the disenchanted bifurcation it ushered in. Our experience remains characterised by the separation of ‘mind’ and ‘nature’ instantiated by Descartes. Its present incarnation – what we might call the empiricist-materialist position – predominates not only in academia but also in our everyday assumptions about ourselves and the world. This is particularly clear in the case of mental disorder.

Common notions of mental disorder remain only elaborations of ‘error’, conceived of in the language of ‘internal dysfunction’ relative to a mechanistic world devoid of any meaning and influence. These dysfunctions are either to be cured by psychopharmacology, or remedied by therapy meant to lead the patient to rediscover the ‘objective truth’ of the world. To conceive of it in this way is not only simplistic, but highly biased.

While it is true that there is value in ‘normalising’ irrational experiences like this, it comes at a great cost. These interventions work (to the extent that they do) by emptying our irrational experiences of their intrinsic value or meaning. In doing so, not only are these experiences cut off from any world-meaning they might harbour, but so too from any agency and responsibility we or those around us have – they are only errors to be corrected.

In the previous episteme, before the bifurcation of mind and nature, irrational experiences were not just ‘error’ – they were speaking a language as meaningful as rational experiences, perhaps even more so. Imbued with the meaning and rhyme of nature herself, they were themselves pregnant with the amelioration of the suffering they brought. Within the world experienced this way, we had a ground, guide and container for our ‘irrationality’, but these crucial psychic presences vanished along with the withdrawal of nature’s inner life and the move to ‘identity and difference’.

In the face of an indifferent and unresponsive world that neglects to render our experience meaningful outside of our own minds – for nature-as-mechanism is powerless to do this – our minds have been left fixated on empty representations of a world that was once their source and being. All we have, if we are lucky enough to have them, are therapists and parents who try to take on what is, in reality, and given the magnitude of the loss, an impossible task.

But I’m not going to argue that we just need to ‘go back’ somehow. On the contrary, the bifurcation of mind and nature was at the root of immeasurable secular progress – medical and technological advance, the rise of individual rights and social justice, to name just a few. It also protected us all from being bound up in the inherent uncertainty and flux of nature. It gave us a certain omnipotence – just as it gave science empirical control over nature – and most of us readily accept, and willingly spend, the inheritance bequeathed by it, and rightly so.

It cannot be emphasised enough, however, that this history is much less a ‘linear progress’ and much more a dialectic. Just as the unity of psyche and nature once stunted material progress, material progress has now degraded the psyche. Perhaps, then, we might argue for a new swing of this pendulum. Given the dramatic increase in substance-use problems, recent reports of a teenage ‘mental health crisis’, and teen suicide rates rising in the US, the UK and elsewhere, to name only the most conspicuous signs, perhaps the time is in fact overripe.

However, one might ask, by what means? There has been a resurgence of ‘pan-experiential’ and idealist-leaning theories in several disciplines, largely concerned with undoing the very knot of bifurcation and the excommunication of a living nature, and creating something afresh in its wake. This is because attempts at explaining subjective experience in empiricist-materialist terms have all but failed (principally due to what the Australian philosopher David Chalmers in 1995 termed the ‘hard problem’ of consciousness). The notion that metaphysics is ‘dead’ would in fact be met with very significant qualification in certain quarters – indeed, the Canadian philosopher Evan Thompson et al argued along the same lines in a recent essay in Aeon.

It must be remembered that mental disorder as ‘error’ rises and falls with the empiricist-materialist metaphysics and the episteme it is a product of. Therefore, we might also think it justified to begin to reconceptualise the notion of mental disorder in the same terms as these theories. There has been a decisive shift in psychotherapeutic theory and practice away from the changing of parts or structures of the individual, and towards the idea that it is the very process of the therapeutic encounter itself that is ameliorative. Here, correct or incorrect judgments about ‘objective reality’ start to lose meaning, and psyche as open and organic starts to come back into focus, but the metaphysics remains. We ultimately need to be thinking about mental disorder on a metaphysical level, and not just within the confines of the status quo.

James Barnes

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How do we Pry Apart the True and Compelling from the False and Toxic?

cpu-stack

Stack of CPUs. Photo by Shawn Stutzman/Pexels

David V Johnson | Aeon Ideas

When false and malicious speech roils the body politic, when racism and violence surge, the right and role of freedom of speech in society comes into crisis. People rightly begin to wonder what the limits are, and what the rules should be. It is a complicated issue, and resolving it requires care about the exact problems targeted and the solutions proposed. Otherwise the risk to free speech is real.

Propaganda from Russian-funded troll farms (boosted by Facebook data breaches) might have contributed to the United Kingdom’s vote to exit the European Union and aided the United States’ election of Donald Trump as president. Conspiracy theories spread by alternative news outlets or over social media sometimes lead to outbreaks of violence. Politicians exploit the mainstream news media’s commitment to balance and to covering newsworthy public statements, and its need for viewers or readers, by making baseless, sensational claims.

In On Liberty (1859), John Stuart Mill offers the most compelling defence of freedom of speech, conscience and autonomy ever written. Mill argues that the only reason to restrict speech is to prevent harm to others, such as with hate speech and incitement to violence. Otherwise, all speech must be protected. Even if we know a view is false, Mill says, it is wrong to suppress it. We avoid prejudice and dogmatism, and achieve understanding, through freely discussing and defending what we believe against contrary claims.

Today, a growing number of people see these views as naive. Mill’s arguments are better suited to those who still believe in the open marketplace of ideas, where free and rational debate is the best way to settle all disputes about truth and falsity. Who could possibly believe we live in such a world anymore? Instead, what we have is a Wild West of partisanship and manipulation, where social media gurus exploit research in behavioural psychology to compel users to affirm and echo absurd claims. We have a world where people live in cognitive bubbles of the like-minded and share one another’s biases and prejudices. According to this savvy view, our brave new world is too prone to propaganda and conspiracy-mongering to rely on Mill’s optimism about free speech. To do so is to risk abetting the rise of fascist and absolutist tendencies.

In his book How Fascism Works (2018), the American philosopher Jason Stanley cites the Russian television network RT, which presents all sorts of misleading and slanted views. If Mill is right, claims Stanley, then RT and such propaganda outfits ‘should be the paradigm of knowledge production’ because they force us to scrutinise their claims. But this is a reductio ad absurdum of Mill’s argument. Similarly, Alexis Papazoglou in The New Republic questions whether Nick Clegg, the former British deputy prime minister turned Facebook’s new vice president of global affairs and communication, will be led astray by his appreciation of Mill’s On Liberty. ‘Mill seemed to believe that an open, free debate meant the truth would usually prevail, whereas under censorship, truth could end up being accidentally suppressed, along with falsehood,’ writes Papazoglou. ‘It’s a view that seems a bit archaic in the age of an online marketplace of memes and clickbait, where false stories tend to spread faster and wider than their true counterpoints.’

When beliefs and theories that are both consequential and false gain traction in public conversation, Mill’s protection of speech can be frustrating. But there is nothing new about ‘fake news’, whether in Mill’s age of sensationalist newspapers or in our age of digital media. Nonetheless, to seek a solution in restricting speech is foolish and counterproductive – it lends credibility to the illiberal forces you, paradoxically, seek to silence. It also betrays an elitism about engaging with those of different opinions, and a cynicism about affording your fellow citizens the freedom to muddle through the morass on their own. If we want to live in a liberal democratic society, rational engagement is the only solution on offer. Rather than restricting speech, we should look to supplement Mill’s view with effective tools for dealing with bad actors and with beliefs that, although false, seem compelling to some.

Fake news and propaganda are certainly problems, as they were in Mill’s day, but the problems they raise are more serious than the falsity of their claims. After all, they are not unique in saying false things, as the latest newspaper corrections will tell you. More importantly, they involve bad actors: people and organisations who intentionally pass off false views as the truth, and hide their nature and motives. (Think Russian troll farms.) Anyone who knows that they are dealing with bad actors – people trying to mislead – ignores them, and justifiably so. It’s not worth your time to consider the claim of someone you know is trying to deceive you.

There is nothing in Mill that demands that we engage any and all false views. After all, there are too many out there and so people have to be selective. Transparency is key, helping people know with whom, or what, they are dealing. Transparency helps filter out noise and fosters accountability, so that bad actors – those who hide their identity for the purpose of misleading others – are eliminated.

Mill’s critics fail to see the truth that is mixed in with the false views that they wish to restrict, and that makes those views compelling. RT, for instance, has covered many issues, such as the US financial crisis, economic inequality and imperialism, more accurately than mainstream news channels. RT also includes informed sources who are ignored by other outlets. The channel might be biased toward demeaning the US and fomenting division, but it often pursues this agenda by speaking truths that are not covered in mainstream US media. Informed news-watchers know to view RT and all news sources with scepticism, and there is no reason not to extend the same respect to the entire viewing public, unless you presume you are a better judge of what to believe than your fellow citizens.

Mill rightly thought that the typical case wasn’t one of views that are false, but views that have a mixture of true and false. It would be far more effective to try to engage with the truth in views we despise than to try to ban them for their alleged falsity. The Canadian psychologist and YouTube sensation Jordan Peterson, for example, says things that are false, misogynistic and illiberal, but one possible reason for his following is that he recognises and speaks to a deficit of meaning and values in many young men’s lives. Here, the right approach is to pry apart the true and compelling from the false and toxic, through reasoned consideration. This way, following Mill’s path, presents a better chance of winning over those who are lost to views we despise. It also helps us improve our own understanding, as Mill wisely suggests.

David V Johnson

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Boost your Self-esteem, Write about Chapters of your Life

1980s-car

New car, 1980s. Photo by Don Pugh/Flickr

Christian Jarrett | Aeon Ideas

In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity. Writing in the 1950s, the psychologist Erik Erikson put it this way:

To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and in prospect … to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it.

Alongside your chosen values and goals in life, and your personality traits – how sociable you are, how much of a worrier and so on – your life story as you tell it makes up the final part of what in 2015 the personality psychologist Dan P McAdams at Northwestern University in Illinois called the ‘personological trinity’.

Of course, some of us tell these stories more explicitly than others – one person’s narrative identity might be a barely formed story at the edge of their consciousness, whereas another person might literally write out their past and future in a diary or memoir.

Intriguingly, there’s some evidence that prompting people to reflect on and tell their life stories – a process called ‘life review therapy’ – could be psychologically beneficial. However, most of this work has been on older adults and people with pre-existing problems such as depression or chronic physical illnesses. It remains to be established through careful experimentation whether prompting otherwise healthy people to reflect on their lives will have any immediate benefits.

A relevant factor in this regard is the tone, complexity and mood of the stories that people tell themselves. For instance, it’s been shown that people who tell more positive stories, including referring to more instances of personal redemption, tend to enjoy higher self-esteem and greater ‘self-concept clarity’ (the confidence and lucidity in how you see yourself). Perhaps engaging in writing or talking about one’s past will have immediate benefits only for people whose stories are more positive.

In a recent paper in the Journal of Personality, Kristina L Steiner at Denison University in Ohio and her colleagues looked into these questions and reported that writing about chapters in your life does indeed lead to a modest, temporary self-esteem boost, and that in fact this benefit arises regardless of how positive your stories are. However, there were no effects on self-concept clarity, and many questions on this topic remain for future study.

Steiner’s team tested three groups of healthy American participants across three studies. The first two groups – involving more than 300 people between them – were young undergraduates, most of them female. The final group, a balanced mix of 101 men and women, was recruited from the community, and they were older, with an average age of 62.

The format was essentially the same for each study. The participants were asked to complete various questionnaires measuring their mood, self-esteem and self-concept clarity, among other things. Then half of them were allocated to write about four chapters in their lives, spending 10 minutes on each. They were instructed to be as specific and detailed as possible, and to reflect on main themes, how each chapter related to their lives as a whole, and to think about any causes and effects of the chapter on them and their lives. The other half of the participants, who acted as a control group, spent the same time writing about four famous Americans of their choosing (to make this task more intellectually comparable, they were also instructed to reflect on the links between the individuals they chose, how they became famous, and other similar questions). After the writing tasks, all the participants retook the same psychological measures they’d completed at the start.

The participants who wrote about chapters in their lives displayed small, but statistically significant, increases in their self-esteem, whereas the control-group participants did not. This self-esteem boost wasn’t explained by any changes to their mood, and – to the researchers’ surprise – it didn’t matter whether the participants rated their chapters as mostly positive or negative, nor did it depend on whether they featured themes of agency (that is, being in control) and communion (pertaining to meaningful relationships). Disappointingly, there was no effect of the life-chapter task on self-concept clarity, nor on meaning and identity.
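
To make the shape of this kind of result concrete, here is a minimal illustrative sketch in Python of a two-group pre/post comparison like the one described above. The sample size, scale, effect size and the simple change-score t-test are all invented for illustration; this is not the researchers’ dataset or their actual statistical analysis.

```python
# Illustrative simulation only: invented numbers, not the study's data or analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

n_per_group = 150                                              # hypothetical group size
pre = rng.normal(loc=30.0, scale=5.0, size=(2, n_per_group))   # baseline self-esteem scores

# Assume a small boost for the life-chapter group (row 0) and none for the control group (row 1).
boost = np.array([1.0, 0.0]).reshape(2, 1)
post = pre + boost + rng.normal(loc=0.0, scale=2.0, size=(2, n_per_group))

change = post - pre                               # pre-to-post change per participant
result = stats.ttest_ind(change[0], change[1])    # compare mean change between the two groups

print(f"Mean change, life-chapter group: {change[0].mean():.2f}")
print(f"Mean change, control group:      {change[1].mean():.2f}")
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```

Whether a small average change like this matters in practice is a separate question from whether it is statistically detectable, which is why the researchers’ caution about the modest size of the effect is worth keeping in mind.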

How long do the self-esteem benefits of the life-chapter task last, and might they accumulate by repeating the exercise? Clues come from the second of the studies, which involved two life chapter-writing tasks (and two tasks writing about famous Americans for the control group), with the second task coming 48 hours after the first. The researchers wanted to see if the self-esteem boost arising from the first life-chapter task would still be apparent at the start of the second task two days later – but it wasn’t. They also wanted to see if the self-esteem benefits might accumulate over the two tasks – they didn’t (the second life-chapter task had its own self-esteem benefit, but it wasn’t cumulative with the benefits of the first).

It remains unclear exactly why the life-chapter task had the self-esteem benefits that it did. It’s possible that the task led participants to consider how they had changed in positive ways. They might also have benefited from expressing and confronting their emotional reactions to these periods of their lives – this would certainly be consistent with the well-documented benefits of expressive writing and ‘affect labelling’ (the calming effect of putting our emotions into words). Future research will need to compare different life chapter-writing instructions to tease apart these different potential beneficial mechanisms. It would also be helpful to test more diverse groups of participants and different ‘dosages’ of the writing task to see if it is at all possible for the benefits to accrue over time.

The researchers said: ‘Our findings suggest that the experience of systematically reviewing one’s life and identifying, describing and conceptually linking life chapters may serve to enhance the self, even in the absence of increased self-concept clarity and meaning.’ If you are currently lacking much confidence and feel like you could benefit from an ego boost, it could be worth giving the life-chapter task a go. It’s true that the self-esteem benefits of the exercise were small, but as Steiner’s team noted, ‘the costs are low’ too.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.