Diseases of Despair

From Wikipedia, the free encyclopedia

The diseases of despair are three classes of behavior-related medical conditions that increase in groups of people who experience despair due to a sense that their long-term social and economic outlook is bleak. The three disease types are drug overdose (including alcohol overdose), suicide, and alcoholic liver disease.

Diseases of despair, and the resulting deaths of despair, are high in the Appalachia region of the United States. The prevalence increased markedly during the first decades of the 21st century, especially among middle-aged and older working-class white Americans. The phenomenon gained media attention because of its connection to the opioid epidemic.

Risk Factors

Although addiction and depression affect people of every age, every race, and every demographic group, the excess mortality and morbidity from diseases of despair affect a smaller group. In the US, the group most affected by these diseases of despair consists of non-Hispanic white men and women who have not attended university. Compared to previous generations, this group is less likely to be married, less likely to be working, less likely to be able to provide for their families, and more likely to report physical pain, overall poor health, and mental health problems, such as depression.

Causes

The factors that seem to exacerbate diseases of despair are not fully known, but they are generally recognized as including worsening economic inequality and a feeling of hopelessness about personal financial success. This can take many forms and appear in different situations. For example, people feel inadequate and disadvantaged when products are marketed to them as being important, but these products repeatedly prove to be unaffordable for them. The overall loss of employment in affected geographic regions, the worsening of pay and working conditions, and the decline of labor unions are also widely hypothesized factors.

The changes in the labor market also affect social connections that might otherwise provide protection, as people at risk for this problem are less likely to get married, more likely to get divorced, and more likely to experience social isolation. Economists Anne Case and Angus Deaton argue that the ultimate cause is the sense that life is meaningless, unsatisfying, or unfulfilling, rather than strictly the basic economic security that makes these higher-order feelings more likely.

Diseases of despair differ from diseases of poverty because poverty itself is not the central factor. Groups of impoverished people with a sense that their lives or their children’s lives will improve are not affected as much by diseases of despair. Instead, this affects people who have little reason to believe that the future will be better. As a result, the problem is distributed unevenly: for example, it affects working-class people in the United States more than working-class people in Europe, even when the European economy was weaker. It also affects white people more than racially disadvantaged groups, possibly because working-class white people are more likely to believe that they are not doing better than their parents did, while non-white people in similar economic situations are more likely to believe that they are better off than their parents.

Effects

Starting in 1998, a rise in deaths of despair resulted in an unexpected increase in the age-specific mortality rate among middle-aged white Americans. By 2014, the increasing number of deaths of despair had resulted in a drop in overall life expectancy. Anne Case and Angus Deaton propose that the increase in mid-life mortality is the result of cumulative disadvantages that occurred over decades and that solving it will require patience and perseverance for many years, rather than a quick fix that produces immediate results.

Terminology

The name disease of despair has been criticized for being unfair to the people who are adversely affected by social and economic forces beyond their control, and for underplaying the role of specific drugs, such as OxyContin, in increasing deaths.


References

Cunningham, Paige Winfield (30 October 2017). “Appalachian deaths from drug overdoses far outpace nation’s”. The Washington Post.

Dorling, Danny (2015-06-03). Injustice (revised edition): Why social inequality still persists. Policy Press. ISBN 9781447320777. “Part of the mechanism behind the worldwide rise in diseases of despair is suggested, with evidence provided below, to be the anxiety caused when particular forms of competition are enhanced… The effects of the advertising industry in making both adults, and especially children, feel inadequate, are also documented here.”

McGreal, Chris. American overdose: The opioid tragedy in three acts (First ed.). New York, NY. pp. 109–112. ISBN 9781610398619. OCLC 1039238075.

Case, Anne; Deaton, Angus (Spring 2017). “Mortality and Morbidity in the 21st Century”. Brookings Papers on Economic Activity.

Further Reading

Michael Meit, Megan Heffernan, Erin Tanenbaum, and Topher Hoffmann (August 2017). Appalachian Diseases of Despair (PDF). The Walsh Center for Rural Health Analysis at the University of Chicago.

Chris McGreal (12 November 2015). “Abandoned by coal, swallowed by drugs”. The Guardian.

Can you step in the same river twice? Wittgenstein v Heraclitus


David Egan | Aeon Ideas

‘I am not a religious man,’ the philosopher Ludwig Wittgenstein once said to a friend, ‘but I cannot help seeing every problem from a religious point of view.’ These problems that he claims to see from a religious point of view tend to be technical matters of logic and language. Wittgenstein trained as an engineer before he turned to philosophy, and he draws on mundane metaphors of gears, levers and machinery. Where you find the word ‘transcendent’ in Wittgenstein’s writings, you’ll likely find ‘misunderstanding’ or ‘nonsense’ nearby.

When he does respond to philosophers who set their sights on higher mysteries, Wittgenstein can be stubbornly dismissive. Consider: ‘The man who said one cannot step into the same river twice was wrong; one can step into the same river twice.’ With such blunt statements, Wittgenstein seems less a religious thinker and more a stodgy literalist. But a close examination of this remark can show us not only what Wittgenstein means by a ‘religious point of view’ but also reveal Wittgenstein as a religious thinker of striking originality.

‘The man’ who made the remark about rivers is Heraclitus, a philosopher at once pre-Socratic and postmodern, misquoted on New Age websites and quoted out of context by everyone, since all we have of his corpus are isolated fragments. What is it that Heraclitus thinks we can’t do? Obviously I can do a little in-and-out-and-back-in-again shuffle with my foot at a riverbank. But is it the same river from moment to moment – the water flowing over my foot spills toward the ocean while new waters join the river at its source – and am I the same person?

One reading of Heraclitus has him conveying a mystical message. We use this one word, river, to talk about something that’s in constant flux, and that might dispose us to think that things are more fixed than they are – indeed, to think that there are stable things at all. Our noun-bound language can’t capture the ceaseless flow of existence. Heraclitus is saying that language is an inadequate tool for the purpose of limning reality.

What Wittgenstein finds intriguing about so many of our philosophical pronouncements is that while they seem profoundly important, it’s unclear what difference they make to anything. Imagine Heraclitus spending an afternoon down by the river (or the constantly changing flux of river-like moments, if you prefer) with his friend Parmenides, who says that change is impossible. They might have a heated argument about whether the so-called river is many or one, but afterwards they can both go for a swim, get a cool drink to refresh themselves, or slip into some waders for a bit of fly fishing. None of these activities is in the least bit altered by the metaphysical commitments of the disputants.

Wittgenstein thinks that we can get clearer about such disputes by likening the things that people say to moves in a game. Just as every move in a game of chess alters the state of play, so does every conversational move alter the state of play in what he calls the language-game. The point of talking, like the point of moving a chess piece, is to do something. But a move only counts as that move in that game provided a certain amount of stage-setting. To make sense of a chess game, you need to be able to distinguish knights from bishops, know how the different pieces move, and so on. Placing pieces on the board at the start of the game isn’t a sequence of moves. It’s something we do to make the game possible in the first place.

One way we get confused by language, Wittgenstein thinks, is that the rule-stating and place-setting activities happen in the same medium as the actual moves of the language-game – that is, in words. ‘The river is overflowing its banks’ and ‘The word river is a noun’ are both grammatically sound English sentences, but only the former is a move in a language-game. The latter states a rule for using language: it’s like saying ‘The bishop moves diagonally’, and it’s no more a move in a language-game than a demonstration of how the bishop moves is a move in chess.

What Heraclitus and Parmenides disagree about, Wittgenstein wants us to see, isn’t a fact about the river but the rules for talking about the river. Heraclitus is recommending a new language-game: one in which the rule for using the word river prohibits us from saying that we stepped into the same one twice, just as the rules of our own language-game prohibit us from saying that the same moment occurred at two different times. There’s nothing wrong with proposing alternative rules, provided you’re clear that that’s what you’re doing. If you say: ‘The king moves just like the queen,’ you’re either saying something false about our game of chess or you’re proposing an alternative version of the game – which might or might not turn out to be any good. The trouble with Heraclitus is that he imagines he’s talking about rivers and not rules – and, in that case, he’s simply wrong. The mistake we so often make in philosophy, according to Wittgenstein, is that we think we’re doing one thing when in fact we’re doing another.

But if we dismiss the remark about rivers as a naive blunder, we learn nothing from it. ‘In a certain sense one cannot take too much care in handling philosophical mistakes, they contain so much truth,’ Wittgenstein cautions. Heraclitus and Parmenides might not do anything different as a result of their metaphysical differences, but those differences bespeak profoundly different attitudes toward everything they do. That attitude might be deep or shallow, bold or timorous, grateful or crabbed, but it isn’t true or false. Similarly, the rules of a game aren’t right or wrong – they’re the measure by which we determine whether moves within the game are right or wrong – but which games you think are worth playing, and how you relate to the rules as you play them, says a lot about you.

What, then, inclines us – and Heraclitus – to regard this expression of an attitude as a metaphysical fact? Recall that Heraclitus wants to reform our language-games because he thinks they misrepresent the way things really are. But consider what you’d need to do in order to assess whether our language-games are more or less adequate to some ultimate reality. You’d need to compare two things: our language-game and the reality that it’s meant to represent. In other words, you’d need to compare reality as we represent it to ourselves with reality free of all representation. But that makes no sense: how can you represent to yourself how things look free of all representation?

The fact that we might even be tempted to suppose we can do that bespeaks a deeply human longing to step outside our own skins. We can feel trapped by our bodily, time-bound existence. There’s a kind of religious impulse that seeks liberation from these limits: it seeks to transcend our finite selves and make contact with the infinite. Wittgenstein’s religious impulse pushes us in the opposite direction: he doesn’t try to satisfy our aspiration for transcendence but to wean us from that aspiration altogether. The liberation he offers isn’t liberation from our bounded selves but for our bounded selves.

Wittgenstein’s remark about Heraclitus comes from a typescript from the early 1930s, when Wittgenstein was just beginning to work out the mature philosophy that would be published posthumously as Philosophical Investigations (1953). Part of what makes that late work special is the way in which the Wittgenstein who sees every problem from a religious point of view merges with the practical-minded engineer. Metaphysical speculations, for Wittgenstein, are like gears that have slipped free from the mechanism of language and are spinning wildly out of control. Wittgenstein the engineer wants to get the mechanism running smoothly. And this is precisely where the spiritual insight resides: our aim, properly understood, isn’t transcendence but a fully invested immanence. In this respect, he offers a peculiarly technical approach to an aspiration that finds expression in mystics from Meister Eckhart to the Zen patriarchs: not to ascend to a state of perfection but to recognise that where you are, already, in this moment, is all the perfection you need.


David Egan is a visiting assistant professor in the Department of Philosophy at CUNY Hunter College in New York. He is the author of The Pursuit of an Authentic Philosophy: Wittgenstein, Heidegger, and the Everyday (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Avoid Moral Failure, Don’t See People as Sherlock Does


Suspicious minds; William Gillette as Sherlock Holmes (right) and Bruce McRae as Dr John Watson in the play Sherlock Holmes (c1900). Courtesy Wikimedia

Rima Basu | Aeon Ideas

If we’re the kind of people who care both about not being racist, and also about basing our beliefs on the evidence that we have, then the world presents us with a challenge. The world is pretty racist. It shouldn’t be surprising then that sometimes it seems as if the evidence is stacked in favour of some racist belief. For example, it’s racist to assume that someone’s a staff member on the basis of his skin colour. But what if it’s the case that, because of historical patterns of discrimination, the members of staff with whom you interact are predominantly of one race? When the late John Hope Franklin, professor of history at Duke University in North Carolina, hosted a dinner party at his private club in Washington, DC in 1995, a woman mistook him for a member of staff. Did she do something wrong? Yes. It was indeed racist of her, even though Franklin was, since 1962, that club’s first black member.

To begin with, we don’t relate to people in the same way that we relate to objects. Human beings are different in an important way. In the world, there are things – tables, chairs, desks and other objects that aren’t furniture – and we try our best to understand how this world works. We ask why plants grow when watered, why dogs give birth to dogs and never to cats, and so on. But when it comes to people, ‘we have a different way of going on, though it is hard to capture just what that is’, as Rae Langton, now professor of philosophy at the University of Cambridge, put it so nicely in 1991.

Once you accept this general intuition, you might begin to wonder how can we capture that different way in which we ought to relate to others. To do this, first we must recognise that, as Langton goes on to write, ‘we don’t simply observe people as we might observe planets, we don’t simply treat them as things to be sought out when they can be of use to us, and avoid when they are a nuisance. We are, as [the British philosopher P F] Strawson says, involved.’

This way of being involved has been played out in many different ways, but here’s the basic thought: being involved is thinking that others’ attitudes and intentions towards us are important in a special way, and that our treatment of others should reflect that importance. We are, each of us, in virtue of being social beings, vulnerable. We depend upon others for our self-esteem and self-respect.

For example, we each think of ourselves as having a variety of more or less stable characteristics, from marginal ones such as being born on a Friday to central ones such as being a philosopher or a spouse. The more central self-descriptions are important to our sense of self-worth, to our self-understanding, and they constitute our sense of identity. When these central self-descriptions are ignored by others in favour of expectations on the basis of our race, gender or sexual orientation, we’re wronged. Perhaps our self-worth shouldn’t be based on something so fragile, but not only are we all-too-human, these self-descriptions also allow us to understand who we are and where we stand in the world.

This thought is echoed in the American sociologist and civil rights activist W E B DuBois’s concept of double consciousness. In The Souls of Black Folk (1903), DuBois notes a common feeling: ‘this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity’.

When you believe that John Hope Franklin must be a staff member rather than a club member, you’ve made predictions of him and observed him in the same way that one might observe the planets. Our private thoughts can wrong other people. When someone forms beliefs about you in this predictive way, they fail to see you, they fail to interact with you as a person. This is not only upsetting. It is a moral failing.

The English philosopher W K Clifford argued in 1877 that we were morally criticisable if our beliefs weren’t formed in the right way. He warned that we have a duty to humanity to never believe on the basis of insufficient evidence because to do so would be to put society at risk. As we look at the world around us and the epistemic crisis in which we find ourselves, we see what happens when Clifford’s imperative is ignored. And if we combine Clifford’s warning with DuBois’s and Langton’s observations, it becomes clear that, for our belief-forming practices, the stakes aren’t just high because we depend on one another for knowledge – the stakes are also high because we depend on one another for respect and dignity.

Consider how upset Arthur Conan Doyle’s characters get with Sherlock Holmes for the beliefs this fictional detective forms about them. Without fail, the people whom Holmes encounters find the way he forms beliefs about others to be insulting. Sometimes it’s because it is a negative belief. Often, however, the belief is mundane: eg, what they ate on the train or which shoe they put on first in the morning. There’s something improper about the way that Holmes relates to other human beings. Holmes’s failure to relate is not just a matter of his actions or his words (though sometimes it is also that), but what really rubs us up the wrong way is that Holmes observes us all as objects to be studied, predicted and managed. He doesn’t relate to us as human beings.

Maybe in an ideal world, what goes on inside our heads wouldn’t matter. But just as the personal is the political, our private thoughts aren’t really only our own. If a man believes of every woman he meets: ‘She’s someone I can sleep with,’ it’s no excuse that he never acts on the belief or reveals the belief to others. He has objectified her and failed to relate to her as a human being, and he has done so in a world in which women are routinely objectified and made to feel less-than.

This kind of indifference to the effect one has on others is morally criticisable. It has always struck me as odd that everyone grants that our actions and words are apt for moral critique, but once we enter the realm of thought we’re off the hook. Our beliefs about others matter. We care what others think of us.

When we mistake a person of colour for a staff member, that challenges this person’s central self-descriptions, the descriptions from which he draws his sense of self-worth. This is not to say that there is anything wrong with being a staff member, but if your reason for thinking that someone is staff is tied not only to something he has no control over (his skin colour) but also to a history of oppression (being denied access to more prestigious forms of employment), then that should give you pause.

The facts might not be racist, but the facts that we often rely on can be the result of racism, including racist institutions and policies. So when forming beliefs using evidence that is a result of racist history, we are accountable for failing to show more care and for believing so easily that someone is a staff member. Precisely what is owed can vary along a number of dimensions, but nonetheless we can recognise that some extra care with our beliefs is owed along these lines. We owe each other not only better actions and better words, but also better thoughts.


Rima Basu is an assistant professor of philosophy at Claremont McKenna College in California. Her work has been published in Philosophical Studies, among others.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How the Dualism of Descartes Ruined our Mental Health


Yard with Lunatics 1794, (detail) by Francisco José de Goya y Lucientes. Courtesy Wikimedia/Meadows Museum, Dallas

James Barnes | Aeon Ideas

Toward the end of the Renaissance period, a radical epistemological and metaphysical shift overcame the Western psyche. The advances of Nicolaus Copernicus, Galileo Galilei and Francis Bacon posed a serious problem for Christian dogma and its dominion over the natural world. Following Bacon’s arguments, the natural world was now to be understood solely in terms of efficient causes (ie, external effects). Any inherent meaning or purpose to the natural world (ie, its ‘formal’ or ‘final’ causes) was deemed surplus to requirements. Insofar as it could be predicted and controlled in terms of efficient causes, not only was any notion of nature beyond this conception redundant, but God too could be effectively dispensed with.

In the 17th century, René Descartes’s dualism of matter and mind was an ingenious solution to the problem this created. ‘The ideas’ that had hitherto been understood as inhering in nature as ‘God’s thoughts’ were rescued from the advancing army of empirical science and withdrawn into the safety of a separate domain, ‘the mind’. On the one hand, this maintained a dimension proper to God, and on the other, served to ‘make the intellectual world safe for Copernicus and Galileo’, as the American philosopher Richard Rorty put it in Philosophy and the Mirror of Nature (1979). In one fell swoop, God’s substance-divinity was protected, while empirical science was given reign over nature-as-mechanism – something ungodly and therefore free game.

Nature was thereby drained of her inner life, rendered a deaf and blind apparatus of indifferent and value-free law, and humankind was faced with a world of inanimate, meaningless matter, upon which it projected its psyche – its aliveness, meaning and purpose – only in fantasy. It was this disenchanted vision of the world, at the dawn of the industrial revolution that followed, that the Romantics found so revolting, and feverishly revolted against.

The French philosopher Michel Foucault in The Order of Things (1966) termed it a shift in ‘episteme’ (roughly, a system of knowledge). The Western psyche, Foucault argued, had once been typified by ‘resemblance and similitude’. In this episteme, knowledge of the world was derived from participation and analogy (the ‘prose of the world’, as he called it), and the psyche was essentially extroverted and world-involved. But after the bifurcation of mind and nature, an episteme structured around ‘identity and difference’ came to possess the Western psyche. The episteme that now prevailed was, in Rorty’s terms, solely concerned with ‘truth as correspondence’ and ‘knowledge as accuracy of representations’. Psyche, as such, became essentially introverted and untangled from the world.

Foucault argued, however, that this move was not a supersession per se, but rather constituted an ‘othering’ of the prior experiential mode. As a result, its experiential and epistemological dimensions were not only denied validity as an experience, but became the ‘occasion of error’. Irrational experience (ie, experience inaccurately corresponding to the ‘objective’ world) then became a meaningless mistake – and disorder the perpetuation of that mistake. This is where Foucault located the beginning of the modern conception of ‘madness’.

Although Descartes’s dualism did not win the philosophical day, we in the West are still very much the children of the disenchanted bifurcation it ushered in. Our experience remains characterised by the separation of ‘mind’ and ‘nature’ instantiated by Descartes. Its present incarnation – what we might call the empiricist-materialist position – predominates not only in academia but also in our everyday assumptions about ourselves and the world. This is particularly clear in the case of mental disorder.

Common notions of mental disorder remain only elaborations of ‘error’, conceived of in the language of ‘internal dysfunction’ relative to a mechanistic world devoid of any meaning and influence. These dysfunctions are either to be cured by psychopharmacology, or remedied by therapy meant to lead the patient to rediscover ‘objective truth’ of the world. To conceive of it in this way is not only simplistic, but highly biased.

While it is true that there is value in ‘normalising’ irrational experiences like this, it comes at a great cost. These interventions work (to the extent that they do) by emptying our irrational experiences of their intrinsic value or meaning. In doing so, not only are these experiences cut off from any world-meaning they might harbour, but so too from any agency and responsibility we or those around us have – they are only errors to be corrected.

In the previous episteme, before the bifurcation of mind and nature, irrational experiences were not just ‘error’ – they were speaking a language as meaningful as rational experiences, perhaps even more so. Imbued with the meaning and rhyme of nature herself, they were themselves pregnant with the amelioration of the suffering they brought. Within the world experienced this way, we had a ground, guide and container for our ‘irrationality’, but these crucial psychic presences vanished along with the withdrawal of nature’s inner life and the move to ‘identity and difference’.

In the face of an indifferent and unresponsive world that neglects to render our experience meaningful outside of our own minds – for nature-as-mechanism is powerless to do this – our minds have been left fixated on empty representations of a world that was once their source and being. All we have, if we are lucky enough to have them, are therapists and parents who try to take on what is, in reality, and given the magnitude of the loss, an impossible task.

But I’m not going to argue that we just need to ‘go back’ somehow. On the contrary, the bifurcation of mind and nature was at the root of immeasurable secular progress –  medical and technological advance, the rise of individual rights and social justice, to name just a few. It also protected us all from being bound up in the inherent uncertainty and flux of nature. It gave us a certain omnipotence – just as it gave science empirical control over nature – and most of us readily accept, and willingly spend, the inheritance bequeathed by it, and rightly so.

It cannot be emphasised enough, however, that this history is much less a ‘linear progress’ than a dialectic. Just as unified psyche-nature stunted material progress, material progress has now degenerated psyche. Perhaps, then, we might argue for a new swing of the pendulum. Given the dramatic increase in substance-use issues and recent reports of a teenage ‘mental health crisis’ and teen suicide rates rising in the US, the UK and elsewhere, to name only the most conspicuous examples, perhaps the time is in fact overripe.

However, one might ask, by what means? There has been a resurgence of ‘pan-experiential’ and idealist-leaning theories in several disciplines, largely concerned with undoing the very knot of bifurcation and the excommunication of a living nature, and creating in its wake something afresh. This is because attempts at explaining subjective experience in empiricist-materialist terms have all but failed (principally due to what the Australian philosopher David Chalmers in 1995 termed the ‘hard problem’ of consciousness). The notion that metaphysics is ‘dead’ would in fact be met with very significant qualification in certain quarters – indeed, the Canadian philosopher Evan Thompson et al argued along the same lines in a recent essay in Aeon.

It must be remembered that mental disorder as ‘error’ rises and falls with the empiricist-materialist metaphysics and the episteme it is a product of. Therefore, we might also think it justified to begin to reconceptualise the notion of mental disorder in the same terms as these theories. There has been a decisive shift in psychotherapeutic theory and practice away from the changing of parts or structures of the individual, and towards the idea that it is the very process of the therapeutic encounter itself that is ameliorative. Here, correct or incorrect judgments about ‘objective reality’ start to lose meaning, and psyche as open and organic starts to come back into focus, but the metaphysics remains. We ultimately need to be thinking about mental disorder on a metaphysical level, and not just within the confines of the status quo.

James Barnes

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How do we Pry Apart the True and Compelling from the False and Toxic?


Stack of CPUs. Photo by Shawn Stutzman/Pexels

David V Johnson | Aeon Ideas

When false and malicious speech roils the body politic, when racism and violence surge, the right and role of freedom of speech in society comes into crisis. People rightly begin to wonder what are the limits, what should be the rules. It is a complicated issue, and resolving it requires care about the exact problems targeted and solutions proposed. Otherwise the risk to free speech is real.

Propaganda from Russian-funded troll farms (boosted by Facebook data breaches) might have contributed to the United Kingdom’s vote to exit the European Union and aided the United States’ election of Donald Trump as president. Conspiracy theories spread by alternative news outlets or over social media sometimes lead to outbreaks of violence. Politicians exploit the mainstream news media’s commitment to balance, to covering newsworthy public statements and their need for viewers or readers by making baseless, sensational claims.

In On Liberty (1859), John Stuart Mill offers the most compelling defence of freedom of speech, conscience and autonomy ever written. Mill argues that the only reason to restrict speech is to prevent harm to others, such as with hate speech and incitement to violence. Otherwise, all speech must be protected. Even if we know a view is false, Mill says, it is wrong to suppress it. We avoid prejudice and dogmatism, and achieve understanding, through freely discussing and defending what we believe against contrary claims.

Today, a growing number of people see these views as naive. Mill’s arguments are better suited to those who still believe in the open marketplace of ideas, where free and rational debate is the best way to settle all disputes about truth and falsity. Who could possibly believe we live in such a world anymore? Instead, what we have is a Wild West of partisanship and manipulation, where social media gurus exploit research in behavioural psychology to compel users to affirm and echo absurd claims. We have a world where people live in cognitive bubbles of the like-minded and share one another’s biases and prejudices. According to this savvy view, our brave new world is too prone to propaganda and conspiracy-mongering to rely on Mill’s optimism about free speech. To do so is to risk abetting the rise of fascist and absolutist tendencies.

In his book How Fascism Works (2018), the American philosopher Jason Stanley cites the Russian television network RT, which presents all sorts of misleading and slanted views. If Mill is right, claims Stanley, then RT and such propaganda outfits ‘should be the paradigm of knowledge production’ because they force us to scrutinise their claims. But this is a reductio ad absurdum of Mill’s argument. Similarly, Alexis Papazoglou in The New Republic questions whether Nick Clegg, the former British deputy prime minister turned Facebook’s new vice president of global affairs and communication, will be led astray by his appreciation of Mill’s On Liberty. ‘Mill seemed to believe that an open, free debate meant the truth would usually prevail, whereas under censorship, truth could end up being accidentally suppressed, along with falsehood,’ writes Papazoglou. ‘It’s a view that seems a bit archaic in the age of an online marketplace of memes and clickbait, where false stories tend to spread faster and wider than their true counterpoints.’

When important and false beliefs and theories gain traction in public conversation, Mill’s protection of speech can be frustrating. But there is nothing new about ‘fake news’, whether in Mill’s age of sensationalist newspapers or in our age of digital media. Nonetheless to seek a solution in restricting speech is foolish and counterproductive – it lends credibility to the illiberal forces you, paradoxically, seek to silence. It also betrays an elitism about engaging with those of different opinions and a cynicism about affording your fellow citizens the freedom to muddle through the morass on their own. If we want to live in a liberal democratic society, rational engagement is the only solution on offer. Rather than restricting speech, we should look to supplement Mill’s view with effective tools for dealing with bad actors and with beliefs that, although false, seem compelling to some.

Fake news and propaganda are certainly problems, as they were in Mill’s day, but the problems they raise are more serious than the falsity of their claims. After all, they are not unique in saying false things, as the latest newspaper corrections will tell you. More importantly, they involve bad actors: people and organisations who intentionally pass off false views as the truth, and hide their nature and motives. (Think Russian troll farms.) Anyone who knows that they are dealing with bad actors – people trying to mislead – ignores them, and justifiably so. It’s not worth your time to consider the claim of someone you know is trying to deceive you.

There is nothing in Mill that demands that we engage any and all false views. After all, there are too many out there and so people have to be selective. Transparency is key, helping people know with whom, or what, they are dealing. Transparency helps filter out noise and fosters accountability, so that bad actors – those who hide their identity for the purpose of misleading others – are eliminated.

Mill’s critics fail to see the truth that is mixed in with the false views that they wish to restrict, and that makes those views compelling. RT, for instance, has covered many issues, such as the US financial crisis, economic inequality and imperialism more accurately than mainstream news channels. RT also includes informed sources who are ignored by other outlets. The channel might be biased toward demeaning the US and fomenting division, but it often pursues this agenda by speaking truths that are not covered in mainstream US media. Informed news-watchers know to view RT and all news sources with skepticism, and there is no reason not to extend the same respect to the entire viewing public, unless you presume you are a better judge of what to believe than your fellow citizens.

Mill rightly thought that the typical case wasn’t one of views that are false, but views that have a mixture of true and false. It would be far more effective to try to engage with the truth in views we despise than to try to ban them for their alleged falsity. The Canadian psychologist and YouTube sensation Jordan Peterson, for example, says things that are false, misogynistic and illiberal, but one possible reason for his following is that he recognises and speaks to a deficit of meaning and values in many young men’s lives. Here, the right approach is to pry apart the true and compelling from the false and toxic, through reasoned consideration. This way, following Mill’s path, presents a better chance of winning over those who are lost to views we despise. It also helps us improve our own understanding, as Mill wisely suggests.

David V Johnson

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Boost your Self-esteem, Write about Chapters of your Life


New car, 1980s. Photo by Don Pugh/Flickr

Christian Jarrett | Aeon Ideas

In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity. Writing in the 1950s, the psychologist Erik Erikson put it this way:

To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and in prospect … to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it.

Alongside your chosen values and goals in life, and your personality traits – how sociable you are, how much of a worrier and so on – your life story as you tell it makes up the final part of what in 2015 the personality psychologist Dan P McAdams at Northwestern University in Illinois called the ‘personological trinity’.

Of course, some of us tell these stories more explicitly than others – one person’s narrative identity might be a barely formed story at the edge of their consciousness, whereas another person might literally write out their past and future in a diary or memoir.

Intriguingly, there’s some evidence that prompting people to reflect on and tell their life stories – a process called ‘life review therapy’ – could be psychologically beneficial. However, most of this work has been on older adults and people with pre-existing problems such as depression or chronic physical illnesses. It remains to be established through careful experimentation whether prompting otherwise healthy people to reflect on their lives will have any immediate benefits.

A relevant factor in this regard is the tone, complexity and mood of the stories that people tell themselves. For instance, it’s been shown that people who tell more positive stories, including referring to more instances of personal redemption, tend to enjoy higher self-esteem and greater ‘self-concept clarity’ (the confidence and lucidity in how you see yourself). Perhaps engaging in writing or talking about one’s past will have immediate benefits only for people whose stories are more positive.

In a recent paper in the Journal of Personality, Kristina L Steiner at Denison University in Ohio and her colleagues looked into these questions and reported that writing about chapters in your life does indeed lead to a modest, temporary self-esteem boost, and that in fact this benefit arises regardless of how positive your stories are. However, there were no effects on self-concept clarity, and many questions on this topic remain for future study.

Steiner’s team tested three groups of healthy American participants across three studies. The first two groups – involving more than 300 people between them – were young undergraduates, most of them female. The final group, a balanced mix of 101 men and women, was recruited from the community, and they were older, with an average age of 62.

The format was essentially the same for each study. The participants were asked to complete various questionnaires measuring their mood, self-esteem and self-concept clarity, among other things. Then half of them were allocated to write about four chapters in their lives, spending 10 minutes on each. They were instructed to be as specific and detailed as possible, and to reflect on main themes, how each chapter related to their lives as a whole, and to think about any causes and effects of the chapter on them and their lives. The other half of the participants, who acted as a control group, spent the same time writing about four famous Americans of their choosing (to make this task more intellectually comparable, they were also instructed to reflect on the links between the individuals they chose, how they became famous, and other similar questions). After the writing tasks, all the participants retook the same psychological measures they’d completed at the start.

The participants who wrote about chapters in their lives displayed small, but statistically significant, increases to their self-esteem, whereas the control-group participants did not. This self-esteem boost wasn’t explained by any changes to their mood, and – to the researchers’ surprise – it didn’t matter whether the participants rated their chapters as mostly positive or negative, nor did it depend on whether they featured themes of agency (that is, being in control) and communion (pertaining to meaningful relationships). Disappointingly, there was no effect of the life-chapter task on self-concept clarity, nor on meaning and identity.

How long do the self-esteem benefits of the life-chapter task last, and might they accumulate by repeating the exercise? Clues come from the second of the studies, which involved two life chapter-writing tasks (and two tasks writing about famous Americans for the control group), with the second task coming 48 hours after the first. The researchers wanted to see if the self-esteem boost arising from the first life-chapter task would still be apparent at the start of the second task two days later – but it wasn’t. They also wanted to see if the self-esteem benefits might accumulate over the two tasks – they didn’t (the second life-chapter task had its own self-esteem benefit, but it wasn’t cumulative with the benefits of the first).

It remains unclear exactly why the life-chapter task had the self-esteem benefits that it did. It’s possible that the task led participants to consider how they had changed in positive ways. They might also have benefited from expressing and confronting their emotional reactions to these periods of their lives – this would certainly be consistent with the well-documented benefits of expressive writing and ‘affect labelling’ (the calming effect of putting our emotions into words). Future research will need to compare different life chapter-writing instructions to tease apart these different potential beneficial mechanisms. It would also be helpful to test more diverse groups of participants and different ‘dosages’ of the writing task to see if it is at all possible for the benefits to accrue over time.

The researchers said: ‘Our findings suggest that the experience of systematically reviewing one’s life and identifying, describing and conceptually linking life chapters may serve to enhance the self, even in the absence of increased self-concept clarity and meaning.’ If you are currently lacking much confidence and feel like you could benefit from an ego boost, it could be worth giving the life-chapter task a go. It’s true that the self-esteem benefits of the exercise were small, but as Steiner’s team noted, ‘the costs are low’ too.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Is Consciousness a Battle between your Beliefs and Perceptions?


Now you see it… Magician Harry Houdini moments before ‘disappearing’ Jennie the 10,000lb elephant at the Hippodrome, New York, in 1918. Photo courtesy Library of Congress

Hakwan Lau | Aeon Ideas

Imagine you’re at a magic show, in which the performer suddenly vanishes. Of course, you ultimately know that the person is probably just hiding somewhere. Yet it continues to look as if the person has disappeared. We can’t reason away that appearance, no matter what logic dictates. Why are our conscious experiences so stubborn?

The fact that our perception of the world appears to be so intransigent, however much we might reflect on it, tells us something unique about how our brains are wired. Compare the magician scenario with how we usually process information. Say you have five friends who tell you it’s raining outside, and one weather website indicating that it isn’t. You’d probably just consider the website to be wrong and write it off. But when it comes to conscious perception, there seems to be something strangely persistent about what we see, hear and feel. Even when a perceptual experience is clearly ‘wrong’, we can’t just mute it.

Why is that so? Recent advances in artificial intelligence (AI) shed new light on this puzzle. In computer science, we know that neural networks for pattern-recognition – so-called deep learning models – can benefit from a process known as predictive coding. Instead of just taking in information passively, from the bottom up, networks can make top-down hypotheses about the world, to be tested against observations. They generally work better this way. When a neural network identifies a cat, for example, it first develops a model that allows it to predict or imagine what a cat looks like. It can then examine any incoming data that arrives to see whether or not it fits that expectation.
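
To make the idea concrete, here is a minimal sketch of the predictive-coding loop just described: the system keeps a top-down hypothesis about the hidden cause of its input, predicts what the input should look like, and nudges the hypothesis to shrink the prediction error. This is only an illustrative toy; the linear generative model W, the noise level and the step size are assumptions made for the example, not details drawn from the essay or from any particular deep learning system.

```python
# Toy predictive-coding loop (NumPy). Rather than passively reading the input,
# the system keeps a top-down hypothesis about its hidden cause, predicts what
# the input should look like, and updates the hypothesis to reduce the
# prediction error. W, the noise level and the step size are assumptions made
# for this example.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))          # generative model: hidden cause -> predicted input
true_cause = rng.normal(size=4)
observation = W @ true_cause + 0.05 * rng.normal(size=16)  # noisy bottom-up signal

estimate = np.zeros(4)                # the top-down hypothesis, initially blank
for _ in range(300):
    prediction = W @ estimate         # what the hypothesis says we should observe
    error = observation - prediction  # bottom-up prediction error
    estimate += 0.02 * (W.T @ error)  # adjust the hypothesis to explain the error

# The inferred cause should end up close to the true one (up to observation noise).
print(np.round(estimate - true_cause, 3))
```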

The trouble is, while these generative models can be super efficient once they’re up and running, they usually demand huge amounts of time and information to train. One solution is to use generative adversarial networks (GANs) – hailed as the ‘coolest idea in deep learning in the last 20 years’ by Facebook’s head of AI research Yann LeCun. In GANs, we might train one network (the generator) to create pictures of cats, mimicking real cats as closely as it can. And we train another network (the discriminator) to distinguish between the manufactured cat images and the real ones. We can then pit the two networks against each other, such that the discriminator is rewarded for catching fakes, while the generator is rewarded for getting away with them. When they are set up to compete, the networks grow together in prowess, not unlike an arch art-forger trying to outwit an art expert. This makes learning very efficient for each of them.
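
A bare-bones version of that adversarial setup is sketched below, assuming PyTorch and a one-dimensional toy distribution standing in for ‘pictures of cats’; the network sizes, learning rates and number of steps are arbitrary illustrative choices, not anything specified in the essay. The discriminator is rewarded for catching fakes, the generator for getting away with them.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic samples from a
# 1-D Gaussian while a discriminator learns to tell real samples from fakes.
# Illustrative only; all sizes and hyperparameters are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # 'Real' data: samples from N(3, 0.5), standing in for pictures of cats.
    return 3.0 + 0.5 * torch.randn(n, 1)

for step in range(2000):
    # 1) Train the discriminator: reward it for catching fakes.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()   # detach: don't update the generator here
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: reward it for fooling the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should cluster near the real mean (about 3.0).
print(generator(torch.randn(1000, 8)).mean().item())
```

In the essay’s analogy, the brain’s ‘discriminator’ would be judging not cat pictures but whether a given pattern of perceptual activity reflects the world or merely the imagination.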

As well as a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. You have to be able to smell something before you can go about deciding whether or not the perception is real or fake; so why not just build in a check to the detecting mechanism itself?

In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.

If this account is right, it’s fair to say that conscious experience is probably akin to a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.

This perceptual stubbornness is not just a feature of humans. Some primates have it too, as shown by their capacity to be amazed and amused by magic tricks. That is, they seem to understand that there’s a tension between what they’re seeing and what they know to be true. Given what we understand about their brains – specifically, that their perceptual neurons are also ‘recyclable’ for top-down functioning – the GAN theory suggests that these nonhuman animals probably have conscious experiences not dissimilar to ours.

The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either. At least by fleshing out some conjectures about the machinery of consciousness, we can begin to test them against our intuitions – and, more importantly, in experiments. What we do know is that a model of the mind involving an inner mechanism of doubt – a nit-picking system that’s constantly on the lookout for fakes and forgeries in perception – is one of the most promising ideas we’ve come up with so far.

Hakwan Lau

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

A Philosophical Approach to Routines can Illuminate Who We Really Are

Elias Anttila | Aeon Ideas

There are hundreds of things we do – repeatedly, routinely – every day. We wake up, check our phones, eat our meals, brush our teeth, do our jobs, satisfy our addictions. In recent years, such habitual actions have become an arena for self-improvement: bookshelves are saturated with bestsellers about ‘life hacks’, ‘life design’ and how to ‘gamify’ our long-term projects, promising everything from enhanced productivity to a healthier diet and huge fortunes. These guides vary in scientific accuracy, but they tend to depict habits as routines that follow a repeated sequence of behaviours, into which we can intervene to set ourselves on a more desirable track.

The problem is that this account has been bleached of much of its historical richness. Today’s self-help books have in fact inherited a highly contingent version of habit – specifically, one that arises in the work of early 20th-century psychologists such as B F Skinner, Clark Hull, John B Watson and Ivan Pavlov. These thinkers are associated with behaviourism, an approach to psychology that prioritises observable, stimulus-response reactions over the role of inner feelings or thoughts. The behaviourists defined habits in a narrow, individualistic sense; they believed that people were conditioned to respond automatically to certain cues, which produced repeated cycles of action and reward.

The behaviourist image of habit has since been updated in light of contemporary neuroscience. For example, the fact that the brain is plastic and changeable allows habits to inscribe themselves in our neural wiring over time by forming privileged connections between brain regions. The influence of behaviourism has enabled researchers to study habits quantitatively and rigorously. But it has also bequeathed a flattened notion of habit that overlooks the concept’s wider philosophical implications.

Philosophers used to look at habits as ways of contemplating who we are, what it means to have faith, and why our daily routines reveal something about the world at large. In his Nicomachean Ethics, Aristotle uses the terms hexis and ethos – both translated today as ‘habit’ – to study stable qualities in people and things, especially regarding their morals and intellect. Hexis denotes the lasting characteristics of a person or thing, like the smoothness of a table or the kindness of a friend, which can guide our actions and emotions. A hexis is a characteristic, capacity or disposition that one ‘owns’; its etymology is the Greek word ekhein, the term for ownership. For Aristotle, a person’s character is ultimately a sum of their hexeis (plural).

An ethos, on the other hand, is what allows one to develop hexeis. It is both a way of life and the basic calibre of one’s personality. Ethos is what gives rise to the essential principles that help to guide moral and intellectual development. Honing hexeis out of an ethos thus takes both time and practice. This version of habit fits with the tenor of ancient Greek philosophy, which often emphasised the cultivation of virtue as a path to the ethical life.

Millennia later, in medieval Christian Europe, Aristotle’s hexis was Latinised into habitus. The translation tracks a shift away from the virtue ethics of the Ancients towards Christian morality, by which habit acquired distinctly divine connotations. In the middle ages, Christian ethics moved away from the idea of merely shaping one’s moral dispositions, and proceeded instead from the belief that ethical character was handed down by God. In this way, the desired habitus should become entwined with the exercise of Christian virtue.

The great theologian Thomas Aquinas saw habit as a vital component of spiritual life. According to his Summa Theologica (1265-1274), habitus involved a rational choice, and led the true believer to a sense of faithful freedom. By contrast, Aquinas used consuetudo to refer to the habits we acquire that inhibit this freedom: the irreligious, quotidian routines that do not actively engage with faith. Consuetudo signifies mere association and regularity, whereas habitus conveys sincere thoughtfulness and consciousness of God. Consuetudo is also where we derive the terms ‘custom’ and ‘costume’ – a lineage which suggests that the medievals considered habit to extend beyond single individuals.

For the Enlightenment philosopher David Hume, these ancient and medieval interpretations of habit were far too limiting. Hume conceived of habit via what it empowers and enables us to do as human beings. He came to the conclusion that habit is the ‘cement of the universe’, which all ‘operations of the mind … depend on’. For instance, we might throw a ball in the air and watch it rise and descend to Earth. By habit, we come to associate these actions and perceptions – the movement of our limb, the trajectory of the ball – in a way that eventually lets us grasp the relationship between cause and effect. Causality, for Hume, is little more than habitual association. Likewise language, music, relationships – any skills we use to transform experiences into something that’s useful are built from habits, he believed. Habits are thus crucial instruments that enable us to navigate the world and to understand the principles by which it operates. For Hume, habit is nothing less than the ‘great guide of human life’.

It’s clear that we ought to see habits as more than mere routines, tendencies and tics. They encompass our identities and ethics; they teach us how to practise our faiths; if Hume is to be believed, they do no less than bind the world together. Seeing habits in this new-yet-old way requires a certain conceptual and historical about-face, but this U-turn offers much more than shallow self-help. It should show us that the things we do every day aren’t just routines to be hacked, but windows through which we might glimpse who we truly are.

Elias Anttila

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Ibn Tufayl and the Story of the Feral Child of Philosophy

Album folio fragment with scholar in a garden. Attributed to Muhammad Ali, 1610-15. Courtesy Museum of Fine Arts, Boston

Marwa Elshakry & Murad Idris | Aeon Ideas

Ibn Tufayl, a 12th-century Andalusian, fashioned the feral child in philosophy. His story Hayy ibn Yaqzan is the tale of a child raised by a doe on an unnamed Indian Ocean island. Hayy ibn Yaqzan (literally ‘Living Son of Awakeness’) reaches a state of perfect, ecstatic understanding of the world. A meditation on the possibilities (and pitfalls) of the quest for the good life, Hayy offers not one, but two ‘utopias’: a eutopia (εὖ ‘good’, τόπος ‘place’) of the mind in perfect isolation, and an ethical community under the rule of law. Each has a version of human happiness. Ibn Tufayl pits them against each other, but each unfolds ‘no where’ (οὐ ‘not’, τόπος ‘place’) in the world.

Ibn Tufayl begins with a vision of humanity isolated from society and politics. (Modern European political theorists who employed this literary device called it ‘the state of nature’.) He introduces Hayy by speculating about his origin. Whether Hayy was placed in a basket by his mother to sail through the waters of life (like Moses) or born by spontaneous generation on the island is irrelevant, Ibn Tufayl says. His divine station remains the same, as does much of his life, spent in the company only of animals. Later philosophers held that society elevates humanity from its natural animal state to an advanced, civilised one. Ibn Tufayl took a different view. He maintained that humans can be perfected only outside society, through a progress of the soul, not the species.

In contrast to Thomas Hobbes’s view that ‘man is a wolf to man’, Hayy’s island has no wolves. It proves easy enough for him to fend off other creatures by waving sticks at them or donning terrifying costumes of hides and feathers. For Hobbes, the fear of violent death is the origin of the social contract and the apologia for the state; but Hayy’s first encounter with fear of death is when his doe-mother dies. Desperate to revive her, Hayy dissects her heart only to find one of its chambers is empty. The coroner-turned-theologian concludes that what he loved in his mother no longer resides in her body. Death therefore was the first lesson of metaphysics, not politics.

Hayy then observes the island’s plants and animals. He meditates upon the idea of an elemental, ‘vital spirit’ upon discovering fire. Pondering the plurality of matter leads him to conclude that it must originate from a singular, non-corporeal source or First Cause. He notes the perfect motion of the celestial spheres and begins a series of ascetic exercises (such as spinning until dizzy) to emulate this hidden, universal order. By the age of 50, he retreats from the physical world, meditating in his cave until, finally, he attains a state of ecstatic illumination. Reason, for Ibn Tufayl, is thus no absolute guide to Truth.

The difference between Hayy’s ecstatic journeys of the mind and later rationalist political thought is the role of reason. Yet many later modern European commentaries or translations of Hayy confuse this by framing the allegory in terms of reason. In 1671, Edward Pococke entitled his Latin translation The Self-Taught Philosopher: In Which It Is Demonstrated How Human Reason Can Ascend from Contemplation of the Inferior to Knowledge of the Superior. In 1708, Simon Ockley’s English translation was The Improvement of Human Reason, and it too emphasised reason’s capacity to attain ‘knowledge of God’. For Ibn Tufayl, however, true knowledge of God and the world – as a eutopia for the ‘mind’ (or soul) – could come only through perfect contemplative intuition, not absolute rational thought.

This is Ibn Tufayl’s first utopia: an uninhabited island where a feral philosopher retreats to a cave to reach ecstasy through contemplation and withdrawal from the world. Friedrich Nietzsche’s Zarathustra would be impressed: ‘Flee, my friend, into your solitude!’

The rest of the allegory introduces the problem of communal life and a second utopia. After Hayy achieves his perfect condition, an ascetic is shipwrecked on his island. Hayy is surprised to discover another being who so resembles him. Curiosity leads him to befriend the wanderer, Absal. Absal teaches Hayy language, and describes the mores of his own island’s law-abiding people. The two men determine that the islanders’ religion is a lesser version of the Truth that Hayy discovered, shrouded in symbols and parables. Hayy is driven by compassion to teach them the Truth. They travel to Absal’s home.

The encounter is disastrous. Absal’s islanders feel compelled by their ethical principles of hospitality towards foreigners, friendship with Absal, and association with all people to welcome Hayy. But soon Hayy’s constant attempts to preach irritate them. Hayy realises that they are incapable of understanding. They are driven by satisfactions of the body, not the mind. There can be no perfect society because not everyone can achieve a state of perfection in their soul. Illumination is possible only for the select, in accordance with a sacred order, or a hieros archein. (This hierarchy of being and knowing is a fundamental message of neo-Platonism.) Hayy concludes that persuading people away from their ‘natural’ stations would only corrupt them further. The laws that the ‘masses’ venerate, be they revealed or reasoned, he decides, are their only chance to achieve a good life.

The islanders’ ideals – lawfulness, hospitality, friendship, association – might seem reasonable, but these too exist ‘no where’ in the world. Hence their dilemma: either they adhere to these and endure Hayy’s criticisms, or violate them by shunning him. This is a radical critique of the law and its ethical principles: they are normatively necessary for social life yet inherently contradictory and impossible. It’s a sly reproach of political life, one whose bite endures. Like the islanders, we follow principles that can undermine themselves. To be hospitable, we must be open to the stranger who violates hospitality. To be democratic, we must include those who are antidemocratic. To be worldly, our encounters with other people must be opportunities to learn from them, not just about them.

In the end, Hayy returns to his island with Absal, where they enjoy a life of ecstatic contemplation unto death. They abandon the search for a perfect society of laws. Their eutopia is the quest of the mind left unto itself, beyond the imperfections of language, law and ethics – perhaps beyond even life itself.

The islanders offer a less obvious lesson: our ideals and principles undermine themselves, but this is itself necessary for political life. For an island of pure ethics and law is an impossible utopia. Perhaps, like Ibn Tufayl, all we can say on the search for happiness is (quoting Al-Ghazali):

It was – what it was is harder to say.
Think the best, but don’t make me describe it away.

After all, we don’t know what happened to Hayy and Absal after their deaths – or to the islanders after they left.

Marwa Elshakry & Murad Idris

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Descartes was Wrong: ‘A Person is a Person through Other Persons’

Detail from Young Moe (1938) by Paul Klee. Courtesy Phillips Collection/Wikipedia

Abeba Birhane | Aeon Ideas

According to Ubuntu philosophy, which has its origins in ancient Africa, a newborn baby is not a person. People are born without ‘ena’, or selfhood, and instead must acquire it through interactions and experiences over time. So the ‘self’/‘other’ distinction that’s axiomatic in Western philosophy is much blurrier in Ubuntu thought. As the Kenyan-born philosopher John Mbiti put it in African Religions and Philosophy (1975): ‘I am because we are, and since we are, therefore I am.’

We know from everyday experience that a person is partly forged in the crucible of community. Relationships inform self-understanding. Who I am depends on many ‘others’: my family, my friends, my culture, my work colleagues. The self I take grocery shopping, say, differs in her actions and behaviours from the self that talks to my PhD supervisor. Even my most private and personal reflections are entangled with the perspectives and voices of different people, be it those who agree with me, those who criticise, or those who praise me.

Yet the notion of a fluctuating and ambiguous self can be disconcerting. We can chalk up this discomfort, in large part, to René Descartes. The 17th-century French philosopher believed that a human being was essentially self-contained and self-sufficient; an inherently rational, mind-bound subject, who ought to encounter the world outside her head with scepticism. While Descartes didn’t single-handedly create the modern mind, he went a long way towards defining its contours.

Descartes had set himself a very particular puzzle to solve. He wanted to find a stable point of view from which to look on the world without relying on God-decreed wisdoms; a place from which he could discern the permanent structures beneath the changeable phenomena of nature. But Descartes believed that there was a trade-off between certainty and a kind of social, worldly richness. The only thing you can be certain of is your own cogito – the fact that you are thinking. Other people and other things are inherently fickle and erratic. So they must have nothing to do with the basic constitution of the knowing self, which is a necessarily detached, coherent and contemplative whole.

Few respected philosophers and psychologists would identify as strict Cartesian dualists, in the sense of believing that mind and matter are completely separate. But the Cartesian cogito is still everywhere you look. The experimental design of memory testing, for example, tends to proceed from the assumption that it’s possible to draw a sharp distinction between the self and the world. If memory simply lives inside the skull, then it’s perfectly acceptable to remove a person from her everyday environment and relationships, and to test her recall using flashcards or screens in the artificial confines of a lab. A person is considered a standalone entity, irrespective of her surroundings, inscribed in the brain as a series of cognitive processes. Memory must be simply something you have, not something you do within a certain context.

Social psychology purports to examine the relationship between cognition and society. But even then, the investigation often presumes that a collective of Cartesian subjects are the real focus of the enquiry, not selves that co-evolve with others over time. In the 1960s, the American psychologists John Darley and Bibb Latané became interested in the murder of Kitty Genovese, a young white woman who had been stabbed and assaulted on her way home one night in New York. Multiple people had witnessed the crime but none stepped in to prevent it. Darley and Latané designed a series of experiments in which they simulated a crisis, such as an epileptic fit, or smoke billowing in from the next room, to observe what people did. They were the first to identify the so-called ‘bystander effect’, in which people seem to respond more slowly to someone in distress if others are around.

Darley and Latané suggested that this might come from a ‘diffusion of responsibility’, in which the obligation to react is diluted across a bigger group of people. But as the American psychologist Frances Cherry argued in The Stubborn Particulars of Social Psychology: Essays on the Research Process (1995), this numerical approach wipes away vital contextual information that might help to understand people’s real motives. Genovese’s murder had to be seen against a backdrop in which violence against women was not taken seriously, Cherry said, and in which people were reluctant to step into what might have been a domestic dispute. Moreover, the murder of a poor black woman would have attracted far less subsequent media interest. But Darley and Latané’s focus makes such structural factors much harder to see.

Is there a way of reconciling these two accounts of the self – the relational, world-embracing version, and the autonomous, inward one? The 20th-century Russian philosopher Mikhail Bakhtin believed that the answer lay in dialogue. We need others in order to evaluate our own existence and construct a coherent self-image. Think of that luminous moment when a poet captures something you’d felt but had never articulated; or when you’d struggled to summarise your thoughts, but they crystallised in conversation with a friend. Bakhtin believed that it was only through an encounter with another person that you could come to appreciate your own unique perspective and see yourself as a whole entity. By ‘looking through the screen of the other’s soul,’ he wrote, ‘I vivify my exterior.’ Selfhood and knowledge are evolving and dynamic; the self is never finished – it is an open book.

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals. But, in fact, studies of such prisoners suggest that their sense of self dissolves if they are punished this way for long enough. Prisoners tend to suffer profound physical and psychological difficulties, such as confusion, anxiety, insomnia, feelings of inadequacy, and a distorted sense of time. Deprived of contact and interaction – the external perspective needed to consummate and sustain a coherent self-image – a person risks disappearing into non-existence.

The emerging fields of embodied and enactive cognition have started to take dialogic models of the self more seriously. But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’

Abeba Birhane

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.