The Future of the History of Philosophy

by Josh Platzky Miller and Lea Cantor


From The Philosopher, vol. 111, no. 1 (“Where is Philosophy Going?”).
If you enjoy reading this, please consider becoming a patron or making a small donation.
We are unfunded and your support is greatly appreciated.


One way to scry the future of philosophy is to look at its past. However, the history of philosophy – both as a field of academic study and in more popular literature – tends to tell a rather narrow and parochial story. This story predominantly focuses on Europe to the exclusion of almost everywhere else. The shift away from such a bias has already begun, especially in the specialist history of philosophy literature, but there are still deeply Eurocentric assumptions built into the most influential general histories of philosophy available today. One invisible assumption, still widely adopted, is that there is such a thing as “Western Philosophy”. As we will argue, the history of philosophy – both in Europe and globally – would be better understood if we abandoned the idea of a “Western Philosophy”. To see why, we start with the most widespread narratives about philosophy’s past.

***

Mainstream histories of philosophy contain what we might call a “Standard Narrative”: that philosophy begins in ancient Greece, usually starting with Thales; that it is continuous to the present day (the “Plato to NATO” picture); and that it is a largely self-standing European achievement with minimal influence from elsewhere. Some form of this picture is present in most influential histories of philosophy, from Bertrand Russell’s History of Western Philosophy (1945) to more recent works like Anthony Gottlieb’s The Dream of Reason: A History of Western Philosophy (2000), Anthony Kenny’s New History of Western Philosophy (2010), James Garvey and Jeremy Stangroom’s The Story of Philosophy: a History of Western Thought (2012), and A.C. Grayling’s History of Philosophy (2019). In these histories, the Standard Narrative tends to be equated to the history of “Western Philosophy”, although it is sometimes used interchangeably with philosophy as such, for instance in Philip Stokes’ Philosophy: 100 Essential Thinkers (2016).

So far, so familiar. But there are real problems with the Standard Narrative. Most obviously, for a supposedly continuous tradition, we might have some questions about a glaring c. 600-year gap (about 450-1050 CE). The gap seems to suggest that there weren’t really any philosophers for over half a millennium – or, as Bryan Magee presents it, “for a long time scarcely any new intellectual work of lasting importance was done.”

We might further wonder about a history of philosophy that tells a story of almost entirely men boasting an age-old European lineage. How have the Ancient Greeks become equated to Western Europeans when their main interactions were with the Eastern Mediterranean, and they themselves often hailed from the Levant and North Africa? What of the “canonical” thinkers in the Graeco-Roman world who were actually from contemporary Turkey (e.g., Thales), Egypt (e.g., Plotinus), and Algeria (e.g., Augustine)? And that’s just the start of it: what about philosophers prior to the Greeks, or altogether excluded from the ambit of Ancient Philosophy, who wrote in languages other than Greek or Latin, such as Sanskrit or classical Chinese?

The Standard Narrative is presented by historians of philosophy in Europe as having been passed down since antiquity. Yet, one of its most striking features is how recently it was fabricated. Even until the late 1700s, many European histories of philosophy offered a significantly different picture. For instance, Gilles Ménage in France published a History of Women Philosophers (1690), while in Germany, Johann Jakob Brucker’s 1742 Critical History of Philosophy contained hundreds of pages on philosophy prior to the Greeks and beyond Europe.


How, then, did we arrive at the Standard Narrative? The story of a Greek origin of philosophy became common in late-18th century Eurocentric historiography. It was used to cement the exclusion of non-European traditions from the mainstream canon of philosophy in the 19th century. Echoing Peter Park’s important 2013 book, Africa, Asia, and the History of Philosophy, Yoko Arisaka recently emphasised that the broader Standard Narrative “is in fact a particular post-19th century construction arising out of the German tradition and establishing itself as the canonical Eurocentric history of philosophy”. It emerged from a long history of exclusion and marginalisation that is tied up with a host of extra-philosophical concerns, including European colonial expansion, slavery, pseudoscientific racial theorising, gendered social restructuring, academic disciplinary specialisation, religious sectarianism, and political expediency. Prominent European philosophers increasingly made a lot of noise about ancient Greece having inaugurated an unprecedented era of logic and reason, of logos, freed from superstition and murky mythos.

By the early 20th century, the Standard Narrative had largely assumed its contemporary form in specialist texts. Amongst Anglophones, it then became popularised through best-selling books like Will Durant’s The Story of Philosophy (1926) – the biggest-selling book in the United States that year, with some four million copies sold overall – and Bertrand Russell’s History of Western Philosophy, which has sold an estimated two million copies since first publication in 1945 and made the Standard Narrative widely known under the label of “Western Philosophy”. The Standard Narrative has since spread to become globally influential, especially in former European settler-colonies.

Contemporary, 21st-century histories of philosophy have an ambivalent relationship to the Standard Narrative. There is usually some recognition of its inadequacy and parochialism, especially amongst feminist historians of philosophy such as Mary Ellen Waithe and Eileen O’Neill. This is also true amongst figures working on less Eurocentric, more global histories (or histories “without any gaps”), such as Hajime Nakamura, Souleymane Bachir Diagne, and Peter Adamson. However, many continue to replicate the Standard Narrative as the basis for a specifically “Western Philosophy”, and hence remain wedded to its basic premises (Greek origins, insularity, and continuity with contemporary Europe). In so doing, even contemporary histories of philosophy set up a false dichotomy between so-called “Western” and “non-Western” philosophy, trapped by the Eurocentric biases that birthed it, and are thus unable to offer a truly global history of philosophy.

***

If the future of the history of philosophy is global rather than Eurocentric, how do we get there? One lesson is from feminist critiques of male-dominated history of philosophy: simply adding [excluded group XX] and stirring is inadequate; genuine integration in the history of philosophy might mean reimagining what counts as philosophy. The same is likely to be true for rewriting the history of philosophy from a non-Eurocentric or global perspective. This will require much painstaking work, from historiographical challenges (that is, how to write such history) to exploring how philosophising itself has been conceptualised beyond Europe.

Meanwhile, however, there is a major hurdle to address: the idea of a “Western Philosophy” itself. The idea of “Western Philosophy” is largely taken for granted: few authors have attempted to define what the term picks out, mostly leaving it implicit and equivalent to the Standard Narrative (noteworthy exceptions include Ben Kies in the 1950s, and Lucy Allais and Christoph Schuringa more recently). When explanations are attempted, these turn out to be implausible, unstable or nonspecific to this supposed “tradition”: from a purely geographical descriptor, to supposed characteristics like “secular” or “scientific” thinking, “rational inquiry” or “concern with argumentation”, to simply a “legacy of the Greeks”.


The idea of “Western Philosophy” cannot be purely geographical, since “west” is a relational term. Does it rule out “any sources east of Suez”, as Antony Flew put it in his Introduction to Western Philosophy (1971)? If so, this would exclude Australia and New Zealand while including indigenous thinkers from the Americas. Nor is “Western Philosophy” easily defined by putative characteristics. Take secular thinking: as Grayling puts it in his recent History of Philosophy, “this is a history of philosophy, not of theology and religion”. But if “Western Philosophy” is defined by a commitment to secular thinking, then most Greek philosophers probably wouldn’t qualify (interest in the nature of the divine and theological concepts underpinned many of their philosophical theories and scientific explanations), let alone Medieval Christian thinkers in the “Latin West”. In Europe, you would have to wait until about the 18th or even 19th century before finding widespread secular theorisations in metaphysics, ethics, and so on. On the other hand, you can find plenty of evidence of “secular thinking” amongst, say, ancient Indian Cārvāka/Lokāyata thinkers, but nobody sees Cārvāka as part of “Western Philosophy”.

What about the “legacy of the Greeks” idea? On this conception, philosophy in the Islamic world (as Peter Adamson frames it) would be a much stronger contender for being characteristic of “Western Philosophy” than anything happening across medieval Latin Christendom in Europe for, roughly, 600 years. As it happens, this is precisely the issue with the 600-year-gap in the continuity story. If there is any continuity in philosophising with Greek sources in or around Europe, the story predominantly runs through scholars east and south of Greece, in Byzantium and the Islamic world. In this period, translations of Greek texts proliferated in numerous languages, including Syriac, Arabic, Aramaic, Hebrew, Armenian, Coptic, and Ge’ez.

The incoherence of the idea of “Western Philosophy” doesn’t stop at the 600-year gap: one exemplar is Ibn Rushd (Latinised as Averroes, 1126-1198), a rationalist scholar working between Al-Andalus – contemporary Spain, one of the westernmost regions of Europe, no less – and northwest Africa, especially contemporary Morocco. Ibn Rushd’s commentaries on Aristotle and distinctive philosophical views were hugely influential in Europe up to the 16th century. If we wanted to tell a story that was continuous, Greek-responding, and in a geographical “West” (of Europe), then Ibn Rushd would appear to be an essential part of such a narrative. However, he is rarely foregrounded in Histories of “Western Philosophy”, and sometimes excluded entirely. Often, he is presented in passing as having merely “preserved” and “transmitted” Aristotle.


This leads us to the final major problem with the idea of “Western Philosophy”: insularity. It is presented as a purely European phenomenon (at most, perhaps, extending to North America and Australasia), hermetically sealed from outside influence. Even some of the “global” histories of philosophy, such as Julian Baggini’s How the World Thinks (2018), recreate the narrative of hermetically sealed traditions in isolation from one another. Although these connections have been written out of histories of “Western Philosophy”, there is increasing scholarly interest in the histories of exchange, connection, and conversation (or even outright theft of ideas) between canonically “Western” philosophers and the rest of the world. Some examples are well known, such as the influence of Indian and East Asian philosophy on Schopenhauer and Heidegger, while others have been the subject of more recent scholarly work, such as Leibniz’s interest in China.

This trend also holds within the ancient periodisation of “Western Philosophy”, which downplays the exchanges between ancient Greece and much of Egypt, Babylonia, Persia, and India, as well as between the Roman Empire and much of North Africa and Eurasia. In fact, some scholars have argued that quintessential periods in so-called “Western” history, such as the Renaissance and the Enlightenment, are in fact the product of European learning from Islamic and, later, indigenous American, African, Indian, and Chinese thinkers.

Historical entanglement is, perhaps, the key problem with the narrative of “Western Philosophy”: if philosophers in Europe have, throughout history, been in conversation with those outside of Europe, then it becomes difficult to justify sectioning off a “Western Philosophy” that is distinctive from all others (much less holding “Western Philosophy” as unique and equivalent to philosophy proper). This is precisely the argument raised by Ben Kies (1917-1979), a South African school teacher, anti-colonial activist, and public intellectual – and perhaps the first person to challenge the idea of a “Western Philosophy”. As Kies argued in 1953, the formation of this narrative is primarily “a matter of myth and political metaphysics”. Moreover, as Kies argues, the project of “Western Civilisation”, with an attendant “Western Philosophy”, only becomes widespread in post-World War II attempts to recuperate a racial category of “white civilisation”. If Kies is right, then “Western Philosophy” is fundamentally an ideological construction, tied to forms of political dominance. This would explain why none of its explanations can coherently track the cast of characters and intellectual movements associated with it.

***

The idea of a “Western Philosophy” is a recent invention: a political project that masks its origins in, to no small degree, racial and imperialist thinking. Indeed, the idea itself is the product of a fabricated history that does not fit the facts, and it inhibits our understanding of both philosophy and its history. As a result, we should abandon the idea of a “Western Philosophy” and re-examine the history of philosophy without its distorting effects. In doing so, we have much to learn from the past. Throughout history, thinkers around the world have engaged in philosophy that is “cross-cultural”, even globally entangled, but today their insights and methods are largely missing from historiographical and metaphilosophical debates. We suggest that a crucial step to rectify this situation is to draw these approaches into the history and historiography of philosophy, without reusing and reinforcing the Eurocentric category of “Western Philosophy”.


Josh Platzky Miller is a Lecturer in Sociology at the University of the Free State (South Africa), with a PhD from the University of Cambridge. Josh’s primary research interests are social movements, African and Latin American politics and political thought, social epistemology and the imagination, and the global history and historiography of philosophy.
(Not really on) Twitter: @jplatzkymiller

Lea Cantor is a doctoral candidate in Philosophy at Worcester College, University of Oxford, and a British Society for the History of Philosophy Postgraduate Fellow (2022-2023). Lea’s primary research interests are in classical Chinese philosophy, early Greek philosophy, the reception of ancient Chinese and Greek philosophy in European philosophy, comparative methodology, and the global history and historiography of philosophy.
Website: leacantor.com
Twitter: @LeaMundi

Josh and Lea are organising a conference addressing these themes in April 2023.




This article is shared from The Philosopher – Published since 1923.

The Philosopher is the journal of the PSE (Philosophical Society of England), a charitable organisation founded in 1913 to provide an alternative to the formal university-based discipline. You can find out more about the history of the PSE here.

Read the original article here.

Female Gaming Experience and Representation in Video Game Virtual Worlds

Video games, and the carefully crafted virtual worlds they offer for exploration and engagement, are important to understand. In this article, we consider why they matter in particular for female commentators and those with feminist concerns, although research has also examined matters as varied as racism, racial representation, colonialism, cyberbullying, and much more in virtual worlds. Their importance is suggested by the fact that millions of people access them and participate in their worlds; video games thus become entangled with self-identity, social interaction, existentialism, and more.

Continue reading at Bishop’s Encyclopedia of Religion, Society and Philosophy

Sooner or later we all face death. Will a sense of meaning help us?


Detail from the Dance with Death by Johann Rudolf Feyerabend. Courtesy the Basel Historical Museum, Switzerland/Wikipedia

Warren Ward | Aeon Ideas

‘Despite all our medical advances,’ my friend Jason used to quip, ‘the mortality rate has remained constant – one per person.’

Jason and I studied medicine together back in the 1980s. Along with everyone else in our course, we spent six long years memorising everything that could go wrong with the human body. We diligently worked our way through a textbook called Pathologic Basis of Disease that described, in detail, every single ailment that could befall a human being. It’s no wonder medical students become hypochondriacal, attributing sinister causes to any lump, bump or rash they find on their own person.

Jason’s oft-repeated observation reminded me that death (and disease) are unavoidable aspects of life. It sometimes seems, though, that we’ve developed a delusional denial of this in the West. We pour billions into prolonging life with increasingly expensive medical and surgical interventions, most of them employed in our final, decrepit years. From a big-picture perspective, this seems a futile waste of our precious health-dollars.

Don’t get me wrong. If I get struck down with cancer, heart disease or any of the myriad life-threatening ailments I learnt about in medicine, I want all the futile and expensive treatments I can get my hands on. I value my life. In fact, like most humans, I value staying alive above pretty much everything else. But also, like most, I tend to not really value my life unless I’m faced with the imminent possibility of it being taken away from me.

Another old friend of mine, Ross, was studying philosophy while I studied medicine. At the time, he wrote an essay called ‘Death the Teacher’ that had a profound effect on me. It argued that the best thing we could do to appreciate life was to keep the inevitability of our death always at the forefront of our minds.

When the Australian palliative care nurse Bronnie Ware interviewed scores of people in the last 12 weeks of their lives, she asked them their greatest regrets. The most frequent, published in her book The Top Five Regrets of the Dying (2011), were:

  1. I wish I’d had the courage to live a life true to myself, not the life others expected of me;
  2. I wish I hadn’t worked so hard;
  3. I wish I’d had the courage to express my feelings;
  4. I wish I had stayed in touch with my friends; and
  5. I wish that I had let myself be happier.

The relationship between death-awareness and leading a fulfilling life was a central concern of the German philosopher Martin Heidegger, whose work inspired Jean-Paul Sartre and other existentialist thinkers. Heidegger lamented that too many people wasted their lives running with the ‘herd’ rather than being true to themselves. But Heidegger actually struggled to live up to his own ideals; in 1933, he joined the Nazi Party, hoping it would advance his career.

Despite his shortcomings as a man, Heidegger’s ideas would go on to influence a wide range of philosophers, artists, theologians and other thinkers. Heidegger believed that Aristotle’s notion of Being – which had run as a thread through Western thinking for more than 2,000 years, and been instrumental in the development of scientific thinking – was flawed at a most fundamental level. Whereas Aristotle saw all of existence, including human beings, as things we could classify and analyse to increase our understanding of the world, in Being and Time (1927) Heidegger argued that, before we start classifying Being, we should first ask the question: ‘Who or what is doing all this questioning?’

Heidegger pointed out that we who are asking questions about Being are qualitatively different to the rest of existence: the rocks, oceans, trees, birds and insects that we are asking about. He invented a special word for this Being that asks, looks and cares. He called it Dasein, which loosely translates as ‘being there’. He coined the term Dasein because he believed that we had become immune to words such as ‘person’, ‘human’ and ‘human being’, losing our sense of wonder about our own consciousness.

Heidegger’s philosophy remains attractive to many today who see how science struggles to explain the experience of being a moral, caring person aware that his precious, mysterious, beautiful life will, one day, come to an end. According to Heidegger, this awareness of our own inevitable demise makes us, unlike the rocks and trees, hunger to make our life worthwhile, to give it meaning, purpose and value.

While Western medical science, which is based on Aristotelian thinking, sees the human body as a material thing that can be understood by examining it and breaking it down to its constituent parts like any other piece of matter, Heidegger’s ontology puts human experience at the centre of our understanding of the world.

Ten years ago, I was diagnosed with melanoma. As a doctor, I knew how aggressive and rapidly fatal this cancer could be. Fortunately for me, the surgery seemed to achieve a cure (touch wood). But I was also fortunate in another sense. I became aware, in a way I never had before, that I was going to die – if not from melanoma, then from something else, eventually. I have been much happier since then. For me, this realisation, this acceptance, this awareness that I am going to die is at least as important to my wellbeing as all the advances of medicine, because it reminds me to live my life to the full every day. I don’t want to experience the regret that Ware heard about more than any other, of not living ‘a life true to myself’.

Most Eastern philosophical traditions appreciate the importance of death-awareness for a well-lived life. The Tibetan Book of the Dead, for example, is a central text of Tibetan culture. The Tibetans spend a lot of time living with death, if that isn’t an oxymoron.

The East’s greatest philosopher, Siddhartha Gautama, also known as the Buddha, realised the importance of keeping the end in sight. He saw desire as the cause of all suffering, and counselled us not to get too attached to worldly pleasures but, rather, to focus on more important things such as loving others, developing equanimity of mind, and staying in the present.

The last thing the Buddha said to his followers was: ‘Decay is inherent in all component things! Work out your salvation with diligence!’ As a doctor, I am reminded every day of the fragility of the human body, how closely mortality lurks just around the corner. As a psychiatrist and psychotherapist, however, I am also reminded how empty life can be if we have no sense of meaning or purpose. An awareness of our mortality, of our precious finitude, can, paradoxically, move us to seek – and, if necessary, create – the meaning that we so desperately crave.


Warren Ward is an associate professor of psychiatry at the University of Queensland. He is the author of the forthcoming book, Lovers of Philosophy (2021).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Why do you believe what you do? Run some diagnostics on it


A public school serving the Mennonite community in Red Run, Pennsylvania, March 1942. Photo by John Collier Jnr/Library of Congress

Miriam Schoenfield | Aeon Ideas

Many of the beliefs that play a fundamental role in our worldview are largely the result of the communities in which we’ve been immersed. Religious parents tend to beget religious children, liberal educational institutions tend to produce liberal graduates, blue states stay mostly blue, and red ones stay mostly red. Of course, some people, through their own sheer intelligence, might be able to see through fallacious reasoning, detect biases and, as a result, resist the social influences that lead most of us to belief. But I’m not that special, and so learning how susceptible my beliefs are to these sorts of influences makes me a bit squirmy.

Let’s work with a hypothetical example. Suppose I’m raised among atheists and firmly believe that God doesn’t exist. I realise that, had I grown up in a religious community, I would almost certainly have believed in God. Furthermore, we can imagine that, had I grown up a theist, I would have been exposed to all the considerations that I take to be relevant to the question of whether God exists: I would have learned science and history, I would have heard all the same arguments for and against the existence of God. The difference is that I would interpret this evidence differently. Divergences in belief result from the fact that people weigh the evidence for and against theism in varying ways. It’s not as if pooling resources and having a conversation would result in one side convincing the other – we wouldn’t have had centuries of religious conflict if things were so simple. Rather, each side will insist that the balance of considerations supports its position – and this insistence will be a product of the social environments that people on that side were raised in.

The you-just-believe-that-because challenge is meant to make us suspicious of our beliefs, to motivate us to reduce our confidence, or even abandon them completely. But what exactly does this challenge amount to? The fact that I have my particular beliefs as a result of growing up in a certain community is just a boring psychological fact about me and is not, in itself, evidence for or against anything so grand as the existence of God. So, you might wonder, if these psychological facts about us are not themselves evidence for or against our worldview, why would learning them motivate any of us to reduce our confidence in such matters?

The method of believing whatever one’s social surroundings tell one to believe is not reliable. So, when I learn about the social influences on my belief, I learn that I’ve formed my beliefs using an unreliable method. If it turns out that my thermometer produces its readings using an unreliable mechanism, I cease to trust the thermometer. Similarly, learning that my beliefs were produced by an unreliable process means that I should cease to trust them too.

But in the hypothetical example, do I really hold that my beliefs were formed by an unreliable mechanism? I might think as follows: ‘I formed my atheistic beliefs as a result of growing up in my particular community, not as a result of growing up in some community or another. The fact that there are a bunch of communities out there that inculcate their members with false beliefs doesn’t mean that my community does. So I deny that my beliefs were formed by an unreliable method. Luckily for me, they were formed by an extremely reliable method: they are the result of growing up among intelligent well-informed people with a sensible worldview.’

The thermometer analogy, then, is inapt. Learning that I would have believed differently if I’d been raised by a different community is not like learning that my thermometer is unreliable. It’s more like learning that my thermometer came from a store that sells a large number of unreliable thermometers. But the fact that the store sells unreliable thermometers doesn’t mean I shouldn’t trust the readings of my particular thermometer. After all, I might have excellent reasons to think that I got lucky and bought one of the few reliable ones.

There’s something fishy about the ‘I got lucky’ response because I would think the very same thing if I were raised in a community that I take to believe falsehoods. If I’m an atheist, I might think: ‘Luckily, I was raised by people who are well-educated, take science seriously, and aren’t in the grip of old-fashioned religious dogma.’ But if I were a theist, I would think something along the lines of: ‘If I’d been raised among arrogant people who believe that there is nothing greater than themselves, I might never have personally experienced God’s grace, and would have ended up with a completely distorted view of reality.’ The fact that the ‘I got lucky’ response is a response anyone could give seems to undermine its legitimacy.

Despite the apparent fishiness of the ‘I got lucky’ response in the case of religious belief, this response is perfectly sensible in other cases. Return to the thermometers. Suppose that, when I was looking for a thermometer, I knew very little about the different types and picked a random one off the shelf. After learning that the store sells many unreliable thermometers, I get worried and do some serious research. I discover that the particular thermometer I bought is produced by a reputable company whose thermometers are extraordinarily reliable. There’s nothing wrong with thinking: ‘How lucky I am to have ended up with this excellent thermometer!’

What’s the difference? Why does it seem perfectly reasonable to think I got lucky about the thermometer I bought but not to think that I got lucky with the community I was raised in? Here’s the answer: my belief that the community I was raised in is a reliable one is itself, plausibly, a result of growing up in that community. If I don’t take for granted the beliefs that my community instilled in me, then I’ll find that I have no particular reason to think that my community is more reliable than others. If we’re evaluating the reliability of some belief-forming method, we can’t use beliefs that are the result of that very method in support of that method’s reliability.

So, if we ought to abandon our socially influenced beliefs, it is for the following reason: deliberation about whether to maintain or abandon a belief, or set of beliefs, due to the worries about how the beliefs were formed must be conducted from a perspective that doesn’t rely on the beliefs in question. Here’s another way of putting the point: when we’re concerned about some belief we have, and are wondering whether to give it up, we’re engaged in doubt. When we doubt, we set aside some belief or cluster of beliefs, and we wonder whether the beliefs in question can be recovered from a perspective that doesn’t rely on those beliefs. Sometimes, we learn that they can be recovered once they’ve been subject to doubt, and other times we learn that they can’t.

What’s worrisome about the realisation that our moral, religious and political beliefs are heavily socially influenced is that many ways of recovering belief from doubt are not available to us in this case. We can’t make use of ordinary arguments in support of these beliefs because, in the perspective of doubt, the legitimacy of those very arguments is being questioned: after all, we are imagining that we find the arguments for our view more compelling than the arguments for alternative views as a result of the very social influences with which we’re concerned. In the perspective of doubt, we also can’t take the fact that we believe what we do as evidence for the belief’s truth, because we know that we believe what we do simply because we were raised in a certain environment, and the fact that we were raised here rather than there is no good reason to think that our beliefs are the correct ones.

It’s important to realise that the concern about beliefs being socially influenced is worrisome only if we’re deliberating about whether to maintain belief from the perspective of doubt. For recall that the facts about how my particular beliefs were caused are not, in themselves, evidence for or against any particular religious, moral or political outlook. So if you were thinking about whether to abandon your beliefs from a perspective in which you’re willing to make use of all of the reasoning and arguments that you normally use, you would simply think that you got lucky – just as you might have got lucky buying a particular thermometer, or reaching the train moments before it shuts its doors, or striking up a conversation on an airplane with someone who ends up being the love of your life.

There’s no general problem with thinking that we’ve been lucky – sometimes we are. The worry is just that, from the perspective of doubt, we don’t have the resources to justify the claim that we’ve been lucky. What’s needed to support such a belief is part of what’s being questioned.


Miriam Schoenfield is associate professor in the Department of Philosophy at the University of Texas at Austin.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Would you rather have a fish or know how to fish?


Public domain

Jonny Robinson | Aeon Ideas

Imagine the following. You are living a life with enough money and health and time so as to allow an hour or two of careless relaxation, sitting on the sofa at the end of the day in front of a large television, half-heartedly watching a documentary about solar energy with a glass of wine and scrolling through your phone. You happen to hear a fact about climate change, something to do with recent emission figures. Now, on that same night, a friend who is struggling to meet her financial commitments has just arrived at her second job and misses out on the documentary (and the relaxation). Later in the week, when the two of you meet for a drink and your friend is ignorant of recent emission figures, what kind of intellectual or moral superiority is really justified on your part?

This example is designed to show that knowledge of the truth might very well have nothing to do with our own efforts or character. Many are born into severe poverty with a slim chance at a good education, and others grow up in religious or social communities that prohibit certain lines of enquiry. Others still face restrictions because of language, transport, money, sickness, technology, bad luck and so on. The truth, for various reasons, is much harder to access at these times. At the opposite end of the scale, some are effectively handed the truth about some matter as if it were a mint on their pillow, pleasantly materialising and not a big deal. Pride in this mere knowledge of the truth ignores the way in which some people come to possess it without any care or effort, and the way that others strive relentlessly against the odds for it and still miss out. The phrase ‘We know the truth [and, perhaps, you don’t]’, weaponised and presented without any qualifying modesty, fails to recognise the extraordinary privileges so often involved in that very acquisition, drawing an exclusionary line that overlooks almost everything else of significance.

A good attitude towards knowledge shines through various character traits that put us in a healthy relationship with it. Philosophers call these traits epistemic virtues. Instead of praising those people who happen to possess some piece of knowledge, we ought to praise those who have the right attitude towards it, since only this benchmark also includes those who strive for the truth and miss out on it for reasons not entirely under their control. Consider traits such as intellectual humility (a willingness to be wrong), intellectual courage (to pursue truths that make us uncomfortable), open-mindedness (to contemplate all sides of the argument, limiting preconceptions), and curiosity (to be continually seeking). You can see that the person ready to correct herself, courageous in her pursuit of the truth, open-minded in her deliberation, and driven by a deep curiosity has a better relationship to truth even where she occasionally fails to obtain it than does the indifferent person who is occasionally handed the truth on a silver platter.

In a sense, it’s difficult to answer the disjunction ‘Is it better to know, or to seek to know?’ because there is not quite enough information in it. With respect to knowing (the first half of the disjunction), we also want to hear how that knowledge came about. That is, was the knowledge acquired despite the disinterest and laziness of the possessor, or was it acquired through diligent seeking? If the latter, then it is better to know, since the second half of the disjunction is also accommodated in the first: the possession of knowledge and the attitude of seeking it. We can build on the idea with another example.

Would you rather have a fish or know how to fish? Again, we need some more information. If having the fish is the result of knowing how to fish, then once more the two halves of the disjunction are not necessarily mutually exclusive, and this combination is the ideal. But, if the having is the result of waiting around for someone to give you a fish, it would be better to know how to do it yourself. For where the waiting agent hopes for luck or charity, the agent who knows how to fish can return to the river each morning and each evening, throwing her line into the water over and over until she is satisfied with the catch.

And so it is with knowledge. Yes, it’s better to know, but only where this implies an accompanying attitude. If, instead, the possession of knowledge relies primarily upon the sporadic pillars of luck or privilege (as it so often does), one’s position is uncertain and in danger of an unfounded pride (not to mention pride’s own concomitant complications). Split into two discrete categories, then, we should prefer seeking to knowing. As with the agent who knows how to fish, the one who seeks knowledge can go out into the world, sometimes failing and sometimes succeeding, but in any case able to continue until she is satisfied with her catch, a knowledge attained. And then, the next day, she might return to the river and do it all again.

A person will eventually come up against the world, logically, morally, socially, even physically. Some collisions will be barely noticeable, others will be catastrophic. The consistent posture of seeking the truth gives us the best shot at seeing clearly, and that is what we should praise and value.


Jonny Robinson is a tutor and casual lecturer in the department of philosophy at Macquarie University. He lives in Sydney.

This article was originally published at Aeon and has been republished under Creative Commons. View the original article here.

What Viktor Frankl’s logotherapy can offer in the Anthropocene


Viktor Frankl in New York, 1968. Photo by Imago/Getty

Ed Simon | Aeon Ideas

With our collapsing democracies and imploding biosphere, it’s no wonder that people despair. The Austrian psychoanalyst and Holocaust survivor Viktor Frankl presciently described such sentiments in his book Man’s Search for Meaning (1946). He wrote of something that ‘so many patients complain [about] today, namely, the feeling of the total and ultimate meaninglessness of their lives’. A nihilistic wisdom emerges when staring down the apocalypse. There’s something predictable in our current pandemics, from addiction to belief in pseudoscientific theories, for in Frankl’s analysis, ‘An abnormal reaction to an abnormal situation is normal behaviour.’ When scientists worry that humanity might have just one generation left, we can agree that ours is an abnormal situation. Which is why Man’s Search for Meaning is the work to return to in these humid days of the Anthropocene.

Already a successful psychotherapist before he was sent to Auschwitz and then Dachau, Frankl was part of what’s known as the ‘third wave’ of Viennese psychoanalysis. Reacting against both Sigmund Freud and Alfred Adler, Frankl rejected the former’s theories concerning the ‘will to pleasure’ and the latter’s ‘will to power’. By contrast, Frankl writes that ‘Man’s search for meaning is the primary motivation in his life and not a “secondary rationalisation” of instinctual drives.’

Frankl argued that literature, art, religion and all the other cultural phenomena that place meaning at their core are things-unto-themselves, and furthermore are the very basis for how we find purpose. In private practice, Frankl developed a methodology he called ‘logotherapy’ – from logos, Greek for ‘reason’ – describing it as defined by the fact that ‘this striving to find a meaning in one’s life is the primary motivational force in man’. He believed that there was much that humanity can live without, but if we’re devoid of a sense of purpose and meaning then we ensure our eventual demise.

In Vienna, he was Dr Viktor Frankl, head of the neurology department of the Rothschild Hospital. In Auschwitz, he was ‘number 119,104’. The concentration camp was the null point of meaning, a type of absolute zero for purpose in life. Already having developed his theories about logotherapy, Frankl smuggled a manuscript he was working on into the camp, only to lose it, later forced to recreate it from memory. While in the camps, he informally worked as a physician, finding that acting as analyst to his fellow prisoners gave him purpose, even as he ostensibly assisted others. In those discussions, he came to conclusions that became foundational for humanistic psychology.

One was that the ‘prisoner who had lost faith in the future – his future – was doomed’. Frankl recounts how even in the camps, where suicide was endemic, the prisoners who seemed to have the best chance of survival were not necessarily the strongest or physically healthiest, but those somehow capable of directing their thoughts towards a sense of meaning. A few prisoners were ‘able to retreat from their terrible surroundings to a life of inner riches and spiritual freedom’, and in the imagining of such a space there was the potential for survival.

Frankl imagined intricate conversations with his wife Tilly (who, he later discovered, had been murdered at another camp), or lecturing a future crowd about the psychology of the camps – which was precisely his work for the rest of his life. Man’s Search for Meaning – with its conviction that ‘Man can preserve a vestige of spiritual freedom, of independence of mind, even in such terrible conditions’ – became a postwar bestseller. Translated into more than two dozen languages, selling more than 12 million copies, and frequently chosen by book clubs and college psychology, philosophy and religion courses, Man’s Search for Meaning has its place in the cultural zeitgeist, with whole university and hospital departments geared around both humanistic psychology and logotherapy. Even though Frankl was a physician, his form of psychoanalysis often seemed to have more in common with a form of secularised rabbinic Judaism than with science.

Man’s Search for Meaning is structured in two parts. The first constitutes Frankl’s Holocaust testimony, bearing similarity to writings by Elie Wiesel and Primo Levi. In the second part, he elaborates on logotherapy, arguing that the meaning of life is found in ‘experiencing something – such as goodness, truth and beauty – by experiencing nature and culture or … by experiencing another human being in his very uniqueness – by loving him’, not simply in spite of apocalyptic situations, but because of them.

The book has been maligned as superficial pop-existentialism; a vestige of middle-brow culture offering platitudinous New Age panaceas. Such a reading isn’t entirely unfair. And seven decades later, one might blanch at the sexist language, or the hokey suggestion that a ‘Statue of Responsibility’ be constructed on the US West Coast. However, a fuller consideration of Frankl’s concept of ‘tragic optimism’ should give more attention to the former than to the latter before the therapist is impugned as overly rosy. When he writes ‘Since Auschwitz we know what man is capable of. And since Hiroshima we know what is at stake,’ it’s hard to accuse him of being a Pollyanna.

Some critics accuse Frankl of victim-blaming. The American scholar Lawrence Langer in 1982 even wrote that Man’s Search for Meaning is ‘almost sinister’. According to him, Frankl reduced survival to an issue of positivity; Langer argues that the book does a profound disservice to the millions who perished. A critique such as this has some merit to it, and yet Frankl’s actual implications are different. His book evidences no moralising against those who’d lost a sense of meaning. Frankl’s study doesn’t advocate logotherapy as an ethical response to tragedy but as a strategic one.

When identifying meaninglessness, it would be a mistake to find it within the individual who suffers. Frankl’s fellow prisoners weren’t responsible for the concentration camps, just as somebody born into a cycle of poverty isn’t at fault, nor is any one of us (unless you happen to be an oil executive) the cause of our collapsing ecosystem. Nothing in logotherapy implies acceptance of the status quo, for the struggle to alter political, material, social, cultural and economic conditions is paramount. What logotherapy offers is something different, a way to envision meaning, despite things not being in your control. In his preface to the book’s 2006 edition, Rabbi Harold Kushner glosses Frankl’s argument by saying that: ‘Forces beyond your control can take away everything you possess except one thing, your freedom to choose how you will respond to the situation.’

Far from being obsessed with the meaning of life, logotherapy demands that patients orient themselves to the idea of individual meaning, to ‘think of ourselves as those who were being questioned by life – daily and hourly’, as Frankl writes. Logotherapy – asking patients to clear an imaginative space to orient themselves towards some higher meaning – provides a response to intolerable situations.

Frankl writes that he ‘grasped the meaning of the greatest secret that human poetry and human thought and belief have to impart: The salvation of man is through love and in love.’ It is easy to be cynical about such a claim, proving Frankl’s point. In our small, petty, limited, cruel era, it seems hard to come across much collective human affection, and yet our pettiness, limitations and cruelty are in their own way a response to the looming apocalypse. ‘Every age has its own collective neurosis,’ Frankl writes, ‘and every age needs its own psychotherapy to cope with it.’ If we’re exhausted, fatigued, anxious, enraged, despairing and confused at the collapse of our individual fortunes, our social networks, our communities, our industries, our democracy, our very planet, it’s no wonder we’ve developed a certain collective neurosis. Yet humanistic psychology has not been in vogue for decades; in its place, we have fashionable sociobiology and misapplied neuroscience in the form of the Panglossian Steven Pinker and the Svengali platitudes of Jordan Peterson.

In one of the book’s most remarkable passages, Frankl recounts how, when his work group was allowed a meagre few hours of rest, a fellow prisoner interrupted them and ‘asked us to run out to the assembly grounds and see a wonderful sunset’. With a prose style that tends towards the clinical, albeit with a distinct sense of the sacred, Frankl here gives himself over to the transcendent:

Standing outside we saw sinister clouds glowing in the west and the whole sky alive with clouds of ever-changing shapes and colours, from steel blue to blood red. The desolate grey mud huts provided a sharp contrast, while the puddles on the muddy ground reflected the glowing sky.

From this vision, here in a place whose very definition was the nullification of meaning, another prisoner remarked: ‘How beautiful the world could be!’ Such is the promise of logotherapy – not to ensure that there will be more sunsets, for that is our individual and societal responsibility. What logotherapy offers, rather, is the promise to be in awe at a sunset, even if it does happen to be our last one; to find wonder, meaning, beauty and grace even in the apocalypse, even in hell. The rest is up to us.


Ed Simon is staff writer at the literary site The Millions and an editor at Berfrois. His latest book is Furnace of This World; or, 36 Observations about Goodness (2019), and he is the author of America and Other Fictions (2018). He lives in Boston.

This article was originally published at Aeon and has been republished under Creative Commons. View the original article here.

We all know that we will die, so why do we struggle to believe it?


Tolstoy photographed by Karl Bulla in 1902. Courtesy Wikipedia

James Baillie | Aeon Ideas

In the novella The Death of Ivan Ilyich (1886), Leo Tolstoy presents a man who is shocked by suddenly realising that his death is inevitable. While we can easily appreciate that the diagnosis of a terminal illness came as an unpleasant surprise, how could he only then discover the fact of his mortality? But that is Ivan’s situation. Not only is it news to him, but he can’t fully take it in:

The syllogism he had learned from Kiesewetter’s logic – ‘Caius is a man, men are mortal, therefore Caius is mortal’ – had always seemed to him correct as applied to Caius, but by no means to himself. That man Caius represented man in the abstract, and so the reasoning was perfectly sound; but he was not Caius, not an abstract man; he had always been a creature quite, quite distinct from all the others.

Tolstoy’s story would not be the masterpiece that it is were it describing an anomaly, a psychological quirk of a fictional character with no analogue in real life. The book’s power resides in its evocative depiction of a mysterious experience that gets to the heart of what it is to be human.

In 1984, on the eve of my 27th birthday, I shared in Ivan’s realisation: that one day I will cease to exist. That was my first and most intense episode of what I call ‘existential shock’. It was by far the most disorienting event of my life, like nothing I’d ever experienced.

While you need to have undergone existential shock to really know what it is like, the experience need not yield any understanding of what you have gone through, either at the time or later. The acute anxiety induced by the state renders you incapable of thinking clearly. And once the state has passed, it is almost impossible to remember in any detail. Getting back in touch with existential shock is like trying to reconstruct a barely remembered dream, except that the struggle is to recall a time when one was unusually awake.

While granting the strangeness of existential shock, the revealed content itself is not peculiar. Indeed, it is undeniable. That’s what makes the phenomenon so puzzling. I learned that I would die? Obviously, I already knew that, so how could it come as a revelation? It is too simple to merely say that I had long known that I would die, because there is also a sense in which I didn’t – and still don’t – really believe it. These conflicting attitudes emerge from the two most basic ways of thinking about oneself, that I will call the outside and inside views.

Let’s consider the way in which my inevitable death is old news. It stems from the uniquely human capacity to disengage from our actions and commitments, so that each of us can consider him or herself as an inhabitant of the mind-independent world, one human being among billions. When I regard myself ‘from the outside’ in this manner, I have no trouble in affirming that I will die. I understand that I exist because of innumerable contingencies, and that the world will go on without me just as it did before my coming to be. These reflections do not disturb me. My equanimity is due to the fact that, even though I am reflecting on my inevitable annihilation, it is almost as if I am thinking about someone else. That is, the outside view places a cognitive distance between myself as the thinker of these thoughts and myself as their subject.

The other basic way of conceiving of ourselves consists of how our lives feel ‘from the inside’ as we go about our everyday activities. One important aspect of the inside view has recently been discussed by Mark Johnston in Surviving Death (2010), namely the perspectival nature of perceptual experience. The world is presented to me as if it were framed around my body, particularly my head, where my sensory apparatus is mostly located. I never experience the world except with me ‘at the centre’, as if I were the axis on which it all turned. As I change location, this phenomenologically central position moves with me. This locus of perceptual experiences is also the source from which my thoughts, feelings and bodily sensations arise. Johnston calls it the ‘arena of presence and action’. When we think of ourselves as the one at the centre of this arena, we find it inconceivable that this consciousness, this point of view on the world, will cease to be.

The inside view is the default. That is, the automatic tendency is to experience the world as if it literally revolved around oneself, and this prevents us from fully assimilating what we know from the outside view, that the world can and will go on without us.

In order to fully digest the fact of my mortality, I would need to realise, not just intellectually, that my everyday experience is misleading, not in the details, but as a whole. Buddhism can help identify another source of radical distortion. As Jay L Garfield puts it in Engaging Buddhism (2015), we suffer from the ‘primal confusion’ of seeing the world, and ourselves, through the lens of a substance-based metaphysics. For example, I take myself as a self-contained individual with a permanent essence that makes me who I am. This core ‘me-ness’ underpins the constant changes in my physical and mental properties. Garfield is not saying that we all explicitly endorse this position. In fact, speaking for myself, I reject it. Rather, the primal confusion is the product of a non-rational reflex, and typically operates well below the level of conscious awareness.

When we combine the phenomenological fact of our apparent centrality to the world with the implicit view of ourselves as substances, it is easy to see how these factors make our non-existence unthinkable ‘from the inside’, so that the best understanding of our own mortality we can achieve is the detached acknowledgement that comes with the outside view.

The Buddhist alternative to a substance-based view of persons is the ‘no-self’ account, which was independently discovered by David Hume. Hume introspected only a constantly shifting array of thoughts, feelings and sensations. He took the absence of evidence of a substantial self to be evidence of its absence, and concluded in A Treatise of Human Nature (1739-40) that the notion of a ‘self’ is merely a convenient device for referring to a causally linked network of mental states, rather than something distinct from them.

While remarkably similar lines of thought can be found within Buddhist texts, philosophical argument comprises only part of their teaching. Buddhists maintain that a developed practice of meditation allows one to directly experience the fact of no-self, rather than just inferring it. The theoretical and experiential methods are mutually supporting, and ideally develop in tandem.

Let us return to existential shock. One might be tempted to look for some unusual factor that has to be added to our normal condition in order to bring the state about. However, I believe that a better approach is to consider what must be subtracted from our everyday experience. Existential shock emerges from a radical alteration of the inside view, where the primal confusion lifts so that the person directly experiences herself as insubstantial. I see the truth of no-self, not merely as an idea, but in an impression. I see that my ego is an imposter, masquerading as a permanent self. The most perplexing feature of existential shock, namely the sense of revelation about my inevitable death, comes from my mortality being re-contextualised as part of a visceral recognition of the more fundamental truth of no-self.

But this raises the question of what causes the primal confusion to withdraw temporarily when it does. The answer lies in Hume’s observation that the natural movement of our mental states is governed by associative principles, where the train of thought and feelings tends to run on familiar tracks, with one state effortlessly leading to another. The relentless operation of our associative mechanisms keeps the shock at bay, and the collapse of these mechanisms lets it come through.

It is no coincidence that my first encounter with existential shock took place towards the end of a long and rigorous retreat. Being away from my habitual surroundings – my social routines, my ready-to-hand possessions, all my trusted distractors and de-stressers – created conditions in which I functioned a little less on autopilot. This created an opening for existential shock, which brought about an inner STOP! – a sudden and radical break in my mental associations. Just for a moment, I see myself for what I am.


James Baillie is a professor of philosophy at the University of Portland in Oregon. He is the author of the Routledge Philosophy GuideBook to Hume on Morality (2000).

This article was originally published at Aeon and has been republished under Creative Commons.

The Humanitarian Crisis of Deaths of Despair

Image by cocoparisienne from Pixabay

David V. Johnson | Blog of the APA

Last April, Princeton University economists and married partners Anne Case and Sir Angus Deaton delivered the Tanner Lectures on Human Values at Stanford University. The title of their talks, “Deaths of Despair and the Future of Capitalism,” is also the provisional name of their forthcoming book, to be published in 2020.

The couple’s research has focused on disturbing mortality data for a specific demographic: white non-Hispanic Americans without college degrees. This century, they have been dying at alarming rates from what Case and Deaton call “deaths of despair,” which cover suicide, alcohol-related disease, and drug overdoses (primarily driven by opioids). These deaths have, along with US obesity, heart disease, and cancer rates, contributed to a shocking recent decline in US life expectancy for three straight years—something which hasn’t happened since World War I and the 1918 Spanish flu pandemic. The rates for “deaths of despair” are not as high for college-educated whites or for racial minorities, and there are many potential economic and sociological reasons for this.

Case and Deaton’s research raises important questions for the US political economy and the legacy of neoliberalism. But I am more interested in the framing of the mortality statistics as “deaths of despair.” Assume for the sake of argument that a large segment of the US population—non-Hispanic white Americans without college degrees—is suffering despair. What does it mean to say this?

We can gain some insight by contrasting despair with its opposite, hope, which has received a lot of philosophical attention for the puzzles it raises about rationality and agency. Hope is a forward-looking emotion with cognitive and desiderative elements. We hope for things that are possible in the future (we don’t hope for the impossible or the certain), which means we make a judgement about their possibility. And when we hope for them, we desire for them to come about, and this desire can motivate our action if we think our acting can help bring it about. Is it rational to hope for something that has a minuscule chance of happening, and if so, under what circumstances? And when is it rational to act based on hope? Much ink has been spilt on these questions.

Philosophers have also thought about hopefulness—about hope as an emotional tendency or character trait that undergirds agency. People who are hopeful or optimistic are generally better able to pursue their plans and succeed, which gives the adoption of a hopeful outlook a pragmatic justification. One could argue that some minimal level of hopefulness is requisite for anyone to plan, act, and live one’s life, insofar as these involve forward-looking judgments and desires that are characteristic of hope.

We can see why despair, as a condition opposed to hope and hopefulness, can be such a debilitating state of mind. Despair undermines agency. The despairing person may conceive of plans and goals but feel that he is so unlikely to achieve them that they are not worth the investment of time and energy, or that even if he does achieve them, it won’t make a substantive difference to his life. So despair undermines the requisite motivation to pursue our plans and goals. A despairing person tends to passivity, to go along with the flow of life and focus on getting by, making do, and assuaging pain and foreboding however she can at the moment.

But despair—or at least the sort of despair I identify in Case and Deaton’s analysis—has a very different structure from hope. If despair were structurally like hope, then it would also be a forward-looking emotion with the appropriate cognitive and desiderative elements. We would be in a state of despair if we believed there was something that could possibly happen in the future that we do not want to have happen, so much so that its possibility gives us anguish and depresses us, to the point that we have difficulty summoning the motivation to avoid it or to go about our lives generally. To be sure, there are forms of despair that are like this. If my boss gives me a poor performance review and warns that I may be subject to termination, and the livelihood of my family depends upon my employment, this may send me into despair. I see my future firing as possible and something I desperately want to avoid, to the point of anxiety and depression. My despondent feelings may undermine my ability to perform better, making my firing even more likely. I may also have trouble living my life in general due to my negative feelings. I may struggle to talk to my spouse about her day or plan my daughter’s after-school activities.

But there is another form of despair that is not like this. This kind of despair is not forward-looking, per se, but rather focused narrowly on the present. It sees the present as dark, dreary, painful, and uninteresting, and anticipates that this state of consciousness will extend indefinitely into the future. It’s the feeling of unrelenting misery and ennui. No one wants to feel like this, but the person who despairs in this way does not form the desire to avoid it, or is not motivated by such a desire, because he does not see a means of escape or because the present sense of pain and dreariness is so overwhelming that it disrupts his ability to imagine such means. This form of despair is what Case and Deaton have in mind: people who have not only lost the will to live—i.e. to direct their lives, make plans, pursue them—but are so miserable and distressed that they either die by suicide or self-medicate with drugs and binge drinking to lessen their immediate pain, making self-medication a form of slow suicide. It is the constant feeling, associated with present consciousness, that life is bad and that it will continue to be bad indefinitely into the future. A sizable portion of the American public feels this way.

Case and Deaton’s appeal to despair, if we understand it correctly, should shock us. The prevalence of despair represents a horrific communal collapse. It goes well beyond statistics of poor welfare outcomes that alarm economists. It is about the obliteration of human lives—the undermining of the very basis of living a life, the ability to enjoy experience moment to moment, have enough peace of mind and stability to anticipate the future, make plans, and pursue them. It is nothing less than a humanitarian crisis.


David V. Johnson is the public philosophy editor of the APA Blog and deputy editor of Stanford Social Innovation Review. He is a former philosophy professor turned journalist with more than a decade of experience as an editor and writer. Previously, he was senior opinion editor at Al Jazeera America, where he edited the op-ed section of the news channel’s website. Earlier in his career, he served as online editor at Boston Review and research editor at San Francisco magazine the year it won a National Magazine Award for general excellence. He has written for The New York Times, USA Today, The New Republic, Bookforum, Aeon, Dissent, and The Baffler, among other publications.

This article was republished with the permission of the APA Blog and the author. View the original article here.

How Mengzi came up with something better than the Golden Rule

Family Training, unknown artist, Ming (1368-1644) or Qing (1644-1911) dynasty. Courtesy the Met Museum, New York

Eric Schwitzgebel | Aeon Ideas

There’s something I don’t like about the ‘Golden Rule’, the admonition to do unto others as you would have others do unto you. Consider this passage from the ancient Chinese philosopher Mengzi (Mencius):

That which people are capable of without learning is their genuine capability. That which they know without pondering is their genuine knowledge. Among babes in arms there are none that do not know to love their parents. When they grow older, there are none that do not know to revere their elder brothers. Treating one’s parents as parents is benevolence. Revering one’s elders is righteousness. There is nothing else to do but extend these to the world.

One thing I like about the passage is that it assumes love and reverence for one’s family as a given, rather than as a special achievement. It portrays moral development simply as a matter of extending that natural love and reverence more widely.

In another passage, Mengzi notes the kindness that the vicious tyrant King Xuan exhibits in saving a frightened ox from slaughter, and he urges the king to extend similar kindness to the people of his kingdom. Such extension, Mengzi says, is a matter of ‘weighing’ things correctly – a matter of treating similar things similarly, and not overvaluing what merely happens to be nearby. If you have pity for an innocent ox being led to slaughter, you ought to have similar pity for the innocent people dying in your streets and on your battlefields, despite their invisibility beyond your beautiful palace walls.

Mengzian extension starts from the assumption that you are already concerned about nearby others, and takes the challenge to be extending that concern beyond a narrow circle. The Golden Rule works differently – and so too the common advice to imagine yourself in someone else’s shoes. In contrast with Mengzian extension, Golden Rule/others’ shoes advice assumes self-interest as the starting point, and implicitly treats overcoming egoistic selfishness as the main cognitive and moral challenge.

Maybe we can model Golden Rule/others’ shoes thinking like this:

  1. If I were in the situation of person x, I would want to be treated according to principle p.
  2. Golden Rule: do unto others as you would have others do unto you.
  3. Thus, I will treat person x according to principle p.

And maybe we can model Mengzian extension like this:

  1. I care about person y and want to treat that person according to principle p.
  2. Person x, though perhaps more distant, is relevantly similar.
  3. Thus, I will treat person x according to principle p.

There will be other more careful and detailed formulations, but this sketch captures the central difference between these two approaches to moral cognition. Mengzian extension models general moral concern on the natural concern we already have for people close to us, while the Golden Rule models general moral concern on concern for oneself.

I like Mengzian extension better for three reasons. First, Mengzian extension is more psychologically plausible as a model of moral development. People do, naturally, have concern and compassion for others around them. Explicit exhortations aren’t needed to produce this natural concern and compassion, and these natural reactions are likely to be the main seed from which mature moral cognition grows. Our moral reactions to vivid, nearby cases become the bases for more general principles and policies. If you need to reason or analogise your way into concern even for close family members, you’re already in deep moral trouble.

Second, Mengzian extension is less ambitious – in a good way. The Golden Rule imagines a leap from self-interest to generalised good treatment of others. This might be excellent and helpful advice, perhaps especially for people who are already concerned about others and thinking about how to implement that concern. But Mengzian extension has the advantage of starting the cognitive project much nearer the target, requiring less of a leap. Self-to-other is a huge moral and ontological divide. Family-to-neighbour, neighbour-to-fellow citizen – that’s much less of a divide.

Third, you can turn Mengzian extension back on yourself, if you are one of those people who has trouble standing up for your own interests – if you’re the type of person who is excessively hard on yourself or who tends to defer a bit too much to others. You would want to stand up for your loved ones and help them flourish. Apply Mengzian extension, and offer the same kindness to yourself. If you’d want your father to be able to take a vacation, realise that you probably deserve a vacation too. If you wouldn’t want your sister to be insulted by her spouse in public, realise that you too shouldn’t have to suffer that indignity.

Although Mengzi and the 18th-century French philosopher Jean-Jacques Rousseau both endorse mottoes standardly translated as ‘human nature is good’ and have views that are similar in important ways, this is one difference between them. In both Emile (1762) and Discourse on Inequality (1755), Rousseau emphasises self-concern as the root of moral development, making pity and compassion for others secondary and derivative. He endorses the foundational importance of the Golden Rule, concluding that ‘love of men derived from love of self is the principle of human justice’.

This difference between Mengzi and Rousseau is not a general difference between East and West. Confucius, for example, endorses something like the Golden Rule in the Analects: ‘Do not impose on others what you yourself do not desire.’ Mozi and Xunzi, also writing in China in the same period, imagine people acting mostly or entirely selfishly until society artificially imposes its regulations, and so they see the enforcement of rules rather than Mengzian extension as the foundation of moral development. Moral extension is thus specifically Mengzian rather than generally Chinese.

Care about me not because you can imagine what you would selfishly want if you were me. Care about me because you see how I am not really so different from others you already love.


This is an edited extract from ‘A Theory of Jerks and Other Philosophical Misadventures’ © 2019 by Eric Schwitzgebel, published by MIT Press.

Eric Schwitzgebel is professor of philosophy at the University of California, Riverside. He blogs at The Splintered Mind and is the author of Perplexities of Consciousness (2011) and A Theory of Jerks and Other Philosophical Misadventures (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Meaning to Life? A Darwinian Existentialist has his Answers

Michael Ruse | Aeon Ideas

I was raised as a Quaker, but around the age of 20 my faith faded. It would be easiest to say that this was because I took up philosophy – my lifelong occupation as a teacher and scholar. This is not true. More accurately, I joke that having had one headmaster in this life, I’ll be damned if I want another in the next. I was convinced back then that, by the age of 70, I would be getting back onside with the Powers That Be. But faith did not return then, and now, as I approach 80, it is nowhere on the horizon. I feel more at peace with myself than ever before. It’s not that I don’t care about the meaning or purpose of life – I am a philosopher! Nor does my sense of peace mean that I am complacent or that I have delusions about my achievements and successes. Rather, I feel that deep contentment that religious people tell us is the gift or reward for proper living.

I come to my present state for two separate reasons. As a student of Charles Darwin, I am totally convinced – God or no God – that we are (as the 19th-century biologist Thomas Henry Huxley used to say) modified monkeys rather than modified mud. Culture is hugely important, but to ignore our biology is just wrong. Second, I am drawn, philosophically, to existentialism. A century after Darwin, Jean-Paul Sartre said that we are condemned to freedom, and I think he is right. Even if God does exist, He or She is irrelevant. The choices are ours.

Sartre denied that there is such a thing as human nature. Coming from this quintessential Frenchman, I take that with a pinch of salt: we are free, within the context of our Darwinian-created human nature. What am I talking about? A lot of philosophers today are uncomfortable even raising the idea of ‘human nature’. They feel that, too quickly, it is used against minorities – gay people, the disabled, and others – to suggest that they are not really human. This is a challenge, not a refutation. If a definition of human nature cannot take account of the fact that up to 10 per cent of us have same-sex orientation, then the problem is not with human nature but with the definition.

What, then, is human nature? In the middle of the 20th century, it was popular to suggest that we are killer apes: we can and do make weapons, and we use them. But modern primatologists have little time for this. Their findings suggest that most apes would far rather fornicate than fight. In making war we are really not doing what comes naturally. I don’t deny that humans are violent; however, our essence goes the other way. It is one of sociability. We are not that fast, we are not that strong, we are hopeless in bad weather; but we succeed because we work together. Indeed, our lack of natural weapons points that way. We cannot get all we want through violence. We must cooperate.

Darwinians did not discover this fact about our nature. Listen to the metaphysical poet John Donne in 1624:

No man is an island,
Entire of itself,
Every man is a piece of the continent,
A part of the main.
If a clod be washed away by the sea,
Europe is the less.
As well as if a promontory were.
As well as if a manor of thy friend’s
Or of thine own were:
Any man’s death diminishes me,
Because I am involved in mankind,
And therefore never send to know for whom the bell tolls;
It tolls for thee.

Darwinian evolutionary theory shows how this all came about, historically, through the forces of nature. It suggests that there is no eternal future or, if there is, it is not relevant for the here and now. Rather, we must live life to the full, within the context of – liberated by – our Darwinian-created human nature. I see three basic ways in which this occurs.

First, family. Humans are not like male orangutans, whose home life is made up mainly of one-night stands. A male turns up, does his business, and then, sexually sated, vanishes. The impregnated female births and raises the children by herself. This is possible simply because she can manage the job alone. If she couldn’t, then biologically it would be in the interests of the males to lend a hand. Male birds help at the nest because, exposed as they are up trees, the chicks need to grow as quickly as possible. Humans face different challenges, but with the same end. We have big brains that need time to develop. Our young cannot fend for themselves within weeks or days. Therefore humans need lots of parental care, and our biology fits us for home life, as it were: spouses, offspring, parents, and more. Men don’t push the pram just by chance. Nor do they boast to their co-workers about their kid getting into Harvard.

Second, society. Co-workers, shop attendants, teachers, doctors, hotel clerks – the list is endless. Our evolutionary strength is that we work together, helping and expecting help. I am a teacher, not just of my children, but of yours (and others) too. You are a doctor: you give medical care not just to your children, but to mine (and others) too. In this way, we all benefit. As Adam Smith pointed out in 1776, none of this happens by chance or because nature has suddenly become soft: ‘It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.’ Smith invoked the ‘invisible hand’. The Darwinian puts it down to evolution through natural selection.

Though life can be a drag sometimes, biology ensures that we generally get on with the job, and do it as part of our fulfilled lives. John Stuart Mill had it exactly right in 1863: ‘When people who are fairly fortunate in their material circumstances don’t find sufficient enjoyment to make life valuable to them, this is usually because they care for nobody but themselves.’

Third, culture. Works of art and entertainment, TV, movies, plays, novels, paintings and sport. Note how social it all is. Romeo and Juliet, about two kids in ill-fated love. The Sopranos, about a mob family. A Roy Lichtenstein faux-comic painting; a girl on the phone: ‘Oh, Jeff… I love you, too… but…’ England beating Australia at cricket. There are evolutionists who doubt that culture is so tightly bound to biology, and who are inclined to see it as a side-product of evolution, what Stephen Jay Gould in 1982 called an ‘exaptation’. This is surely true in part. But probably only in part. Darwin thought that culture might have something to do with sexual selection: protohumans using songs and melodies, say, to attract mates. Sherlock Holmes agreed; in A Study in Scarlet (1887), he tells Watson that musical ability predates speech, according to Darwin: ‘Perhaps that is why we are so subtly influenced by it. There are vague memories in our souls of those misty centuries when the world was in its childhood.’

Draw it together. I have had a full family life, a loving spouse and children. I even liked teenagers. I have been a college professor for 55 years. I have not always done the job as well as I could, but I am not lying when I say that Monday morning is my favourite time of the week. I’m not much of a creative artist, and I’m hopeless at sports. But I have done my scholarship and shared with others. Why else am I writing this? And I have enjoyed the work of fellow humans. A great performance of Mozart’s opera The Marriage of Figaro is heaven. I speak literally.

This is my meaning to life. When I meet my nonexistent God, I shall say to Him: ‘God, you gave me talents and it’s been a hell of a lot of fun using them. Thank you.’ I need no more. As George Meredith wrote in his poem ‘In the Woods’ (1870):

The lover of life knows his labour divine,
And therein is at peace.


A Meaning to Life (2019) by Michael Ruse is published via Princeton University Press.

Michael Ruse is the Lucyle T Werkmeister Professor of Philosophy and director of the history and philosophy of science at Florida State University. He has written or edited more than 50 books, including most recently On Purpose (2017), Darwinism as Religion (2016), The Problem of War (2018) and A Meaning to Life (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.