Philosophical Writing Should Read like a Letter Written to Oneself

Søren Kierkegaard at his high desk (1920) by Luplau Janssen. Courtesy Wikipedia

John Lysaker | Aeon Ideas

In memory of Stanley Cavell (1926-2018)

I came to philosophy bursting with things to say. Somewhere along the way, that changed. Not that I stopped talking, or, as time went on, writing. But the mood of it, the key in which it was pitched, moved. I came to feel answerable. And not just to myself or those I knew but to some broader public, some open, indefinite ‘you’. ‘Answer for yourself’ wove into ‘know thyself’.

How though does one register a key change in prose? If philosophy is bound, in part, to the feeling of being answerable, shouldn’t it have more of an epistolary feel? ‘Dear you, here is where I stand, for the time being… Yours, me.’ One ventures thoughts, accounts for them and awaits a reply, only to begin again: ‘Dear you, thank you for your response. So much (or very little) has changed since I received your letter…’

A move toward the epistolary seems right to me, at least for philosophy. Still a gadfly perhaps, but also working through having been stung, and with the vulnerability of doing so before, even for, others. But how much philosophy has the feel of a letter? And when we philosophise, are we cognisant of our addressees and the varied situations in which they find us? The view from nowhere has been more or less exiled from epistemology. We know that we know in concrete, situated locales. But has philosophical writing kept pace and developed a feel for what to consider when pondering: how should I write?

Survey philosophy’s history, and the plot thickens. Philosophical writing is a varied affair. Some texts prioritise demonstration, arguing, for example, that ‘truth’ names a working touch between belief and the world. Others favour provocation, as when a dialogue concerning the nature of friendship concludes before a working definition is reached. If we want a definition, we need to generate our own, or ponder what a lack of one implies. Still other texts offer exemplification, as when Simone de Beauvoir in The Second Sex (1949) proves herself to be the agent-intellect that patriarchy insists she’s not. By confronting her historical fate, she shows us how wrong, how unjust that historical fate has been. And she shows us what patriarchy has kept us from.

Genre considerations intensify the question of what should organise philosophical writing: dialogue, treatise, aphorism, essay, professional article and monograph, fragment, autobiography. And if one’s sensibility is more inclusive, letters, manifestos and interviews also become possibilities. No genre is fully scripted, however, hence the need to also consider logical-rhetorical operations: modus ponens, irony, transcendental arguments, allegory, images, analogies, examples, quotations, translation, even voice, a distinctive way of being there and for the reader. So much seems to count when we answer for how we write.

Questions concerning writing sometimes arise when philosophers worry about accessibility and a broader readership. But the possibilities I have enumerated bear directly on thought itself. Writing generates discovery, and genre impacts rather than simply transfers ideas; so too logical-rhetorical operations. Francis Bacon was drawn to the aphorism because it freed observation from scholastic habits, whereas the professional article defers to its lingua franca. The treatise exhausts whatever might be said about a topic – call this the view from everywhere – whereas the essay accepts its partiality and tests its reach relative to topics such as friendship, feminine sexuality, even a fierce love for film. When writing becomes the question, more than outreach calls for consideration.

Here’s a start. How will my thought unfold through this genre and these logical-rhetorical operations? Where will the aphorism, essay or professional article take me, or an exchange of letters? And so too examples, open disagreements, quotation, the labour of translation, or irony for that matter? It is a celebrated trope of surprise and displacement. But a good deal of irony, at least when one turns to the ironist, facilitates self-preservation. It is the reader who is surprised by an encounter with some covert meaning while the author’s overt and covert meanings are fairly settled. (I thus wonder: what does irony keep safe?)

Questions regarding which possibilities to enact cannot be answered through critique, which, following Immanuel Kant, interrogates the character of our judgments and operative concepts, seeking rules that might govern their use. The discoveries that writing occasions are evidence that philosophy belongs too intimately to language to play charioteer to its steeds. Writing is a gamble and, when it’s honest, one faces unexpected results.

Facing a blank page, one might also ask: what relations will this establish with addressees? The polemic seeks converts rather than interlocutors, and at the expense of discovery. And even when an outright polemic is avoided, some schematise opponents rather than read them publicly and carefully, thereby preaching to the converted, which seems a misstep.

Unwilling to proceed dogmatically, one might favour provocation at the expense of doctrine, as some take Plato to do. But any provocation has its own commitments, beginning with the end toward which it provokes its readers. Socrates is one kind of interlocutor, Gaius Laelius quite another, and that is because Plato and Cicero approach education, the soul and their respective states differently. Strict distinctions between provocation and doctrine (or form and content, for that matter) are thus untenable.

Other operations also engage one’s addressees. Examples allow readers to review what’s on offer, something also made possible when meaningful disagreements are staged. (When authors never pause to imagine a disagreement, I feel claustrophobic and throw open a window.) And if one begins to acknowledge how varied one’s addressees could be, other habits become salient. Looking back at my citations, I know that I’ve written texts that suggest ‘whites only’ or ‘women need not apply’.

Texts and readers do not meet in a vacuum, however. I thus wonder: how does one also address prevailing contextual forces, from ethno-nationalisms to white supremacy to the commodification of higher education? It is tempting to imagine a text without footnotes, as if they were ornaments. But in a period so averse to the rigours of knowledge, and so ahistorical in its feel for the truths we have, why not underscore the contested history of a thought, if only to insist: thought is work, the results fragile, and there will be disagreements. Clarity poses another question, and a particular challenge for philosophy, which is not underwritten by experiments. Instead, its ‘results’ are won (or lost) in the presentation. Moreover, philosophical conclusions do not remain philosophical if freed from the path that led to them. ‘God exists’ says one thing in prayer and something else at the close of a proof. Experts often are asked to share their results without showing their work. But showing one’s work is very much the work of philosophy. Can one do so and reach beyond the academy?

Every reader of Plato knows that Socrates, by way of exemplification, is an image of philosophy, from his modes of interrogation to who is interrogated to his reminders that philosophy demands courage. And so too the dialogue itself – it models philosophy. But every text announces: here too is philosophy. The overall bearing of one’s writing thus merits scrutiny. Is it generous or hasty? Has it earned its ‘therefores’ or, after ripping opponents for nuanced failings, does it invoke the intuitively plausible? Does it acknowledge the full range of possible addressees or cloister itself within the narrow folds of the like-minded? Does it challenge its starting points or hide cut corners with jargon and massive generalisations?

Taking my cue from Ludwig Wittgenstein, I would say: philosophy no longer knows its way around writing. And what it does know – the professional article and monograph – is underwritten by conformity rather than philosophical reflection and commitment. Not for all. And many have led elsewhere by example. But on the whole, and thinking of the present moment, the writer’s life remains unexamined in the aspirational context of philosophy.

Looking into a garden of genres and logical-rhetorical operations, I have proposed four orienting questions. How will my thought unfold along these lines? What relationships will they establish with my varied addressees? Will my address be able to navigate the currents of our varied lives and be ‘equal to the moment’, as Walter Benjamin would ask? And finally, what, in the name of philosophy, does my text exemplify? Have I offered a compelling image? ‘Dear you, here is where I stand, for the time being… Yours, me.’

John Lysaker

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Boost your Self-esteem, Write about Chapters of your Life

New car, 1980s. Photo by Don Pugh/Flickr

Christian Jarrett | Aeon Ideas

In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity. Writing in the 1950s, the psychologist Erik Erikson put it this way:

To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and in prospect … to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it.

Alongside your chosen values and goals in life, and your personality traits – how sociable you are, how much of a worrier and so on – your life story as you tell it makes up the final part of what in 2015 the personality psychologist Dan P McAdams at Northwestern University in Illinois called the ‘personological trinity’.

Of course, some of us tell these stories more explicitly than others – one person’s narrative identity might be a barely formed story at the edge of their consciousness, whereas another person might literally write out their past and future in a diary or memoir.

Intriguingly, there’s some evidence that prompting people to reflect on and tell their life stories – a process called ‘life review therapy’ – could be psychologically beneficial. However, most of this work has been on older adults and people with pre-existing problems such as depression or chronic physical illnesses. It remains to be established through careful experimentation whether prompting otherwise healthy people to reflect on their lives will have any immediate benefits.

A relevant factor in this regard is the tone, complexity and mood of the stories that people tell themselves. For instance, it’s been shown that people who tell more positive stories, including referring to more instances of personal redemption, tend to enjoy higher self-esteem and greater ‘self-concept clarity’ (the confidence and lucidity in how you see yourself). Perhaps engaging in writing or talking about one’s past will have immediate benefits only for people whose stories are more positive.

In a recent paper in the Journal of Personality, Kristina L Steiner at Denison University in Ohio and her colleagues looked into these questions and reported that writing about chapters in your life does indeed lead to a modest, temporary self-esteem boost, and that in fact this benefit arises regardless of how positive your stories are. However, there were no effects on self-concept clarity, and many questions on this topic remain for future study.

Steiner’s team tested three groups of healthy American participants across three studies. The first two groups – involving more than 300 people between them – were young undergraduates, most of them female. The final group, a balanced mix of 101 men and women, was recruited from the community, and they were older, with an average age of 62.

The format was essentially the same for each study. The participants were asked to complete various questionnaires measuring their mood, self-esteem and self-concept clarity, among other things. Then half of them were allocated to write about four chapters in their lives, spending 10 minutes on each. They were instructed to be as specific and detailed as possible, and to reflect on main themes, how each chapter related to their lives as a whole, and to think about any causes and effects of the chapter on them and their lives. The other half of the participants, who acted as a control group, spent the same time writing about four famous Americans of their choosing (to make this task more intellectually comparable, they were also instructed to reflect on the links between the individuals they chose, how they became famous, and other similar questions). After the writing tasks, all the participants retook the same psychological measures they’d completed at the start.
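To make the protocol concrete, here is a minimal sketch in Python of the kind of group-by-time comparison this design invites; the simulated scores, the scale, and the simple t-test on change scores are my assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch (assumed analysis, not the authors' code) of the design
# described above: each participant completes a self-esteem measure before
# and after writing, in either the life-chapter condition or the
# famous-Americans control condition.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150  # participants per condition (illustrative)

# Simulated pre/post scores standing in for the questionnaire data.
pre_chapters = rng.normal(3.5, 0.6, n)
post_chapters = pre_chapters + rng.normal(0.15, 0.3, n)  # small boost
pre_control = rng.normal(3.5, 0.6, n)
post_control = pre_control + rng.normal(0.0, 0.3, n)     # no change

# Compare the change in self-esteem between the two conditions.
change_chapters = post_chapters - pre_chapters
change_control = post_control - pre_control
t, p = stats.ttest_ind(change_chapters, change_control)
print(f"mean change, chapters: {change_chapters.mean():+.3f}")
print(f"mean change, control:  {change_control.mean():+.3f}")
print(f"t = {t:.2f}, p = {p:.4f}")
```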

The participants who wrote about chapters in their lives displayed small, but statistically significant, increases to their self-esteem, whereas the control-group participants did not. This self-esteem boost wasn’t explained by any changes to their mood, and – to the researchers’ surprise – it didn’t matter whether the participants rated their chapters as mostly positive or negative, nor did it depend on whether they featured themes of agency (that is, being in control) and communion (pertaining to meaningful relationships). Disappointingly, there was no effect of the life-chapter task on self-concept clarity, nor on meaning and identity.

How long do the self-esteem benefits of the life-chapter task last, and might they accumulate by repeating the exercise? Clues come from the second of the studies, which involved two life chapter-writing tasks (and two tasks writing about famous Americans for the control group), with the second task coming 48 hours after the first. The researchers wanted to see if the self-esteem boost arising from the first life-chapter task would still be apparent at the start of the second task two days later – but it wasn’t. They also wanted to see if the self-esteem benefits might accumulate over the two tasks – they didn’t (the second life-chapter task had its own self-esteem benefit, but it wasn’t cumulative with the benefits of the first).

It remains unclear exactly why the life-chapter task had the self-esteem benefits that it did. It’s possible that the task led participants to consider how they had changed in positive ways. They might also have benefited from expressing and confronting their emotional reactions to these periods of their lives – this would certainly be consistent with the well-documented benefits of expressive writing and ‘affect labelling’ (the calming effect of putting our emotions into words). Future research will need to compare different life chapter-writing instructions to tease apart these different potential beneficial mechanisms. It would also be helpful to test more diverse groups of participants and different ‘dosages’ of the writing task to see if it is at all possible for the benefits to accrue over time.

The researchers said: ‘Our findings suggest that the experience of systematically reviewing one’s life and identifying, describing and conceptually linking life chapters may serve to enhance the self, even in the absence of increased self-concept clarity and meaning.’ If you are currently lacking much confidence and feel like you could benefit from an ego boost, it could be worth giving the life-chapter task a go. It’s true that the self-esteem benefits of the exercise were small, but as Steiner’s team noted, ‘the costs are low’ too.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Was the Real Socrates more Worldly and Amorous than We Knew?

Detail from Socrates Dragging Alcibiades from the Embrace of Aspasia (1785) by Jean-Baptiste Regnault. Louvre, Paris. Courtesy Wikipedia

Armand D’Angour | Aeon Ideas

Socrates is widely considered to be the founding figure of Western philosophy – a thinker whose ideas, transmitted by the extensive writings of his devoted follower Plato, have shaped thinking for more than 2,000 years. ‘For better or worse,’ wrote the Classical scholar Diskin Clay in Platonic Questions (2000), ‘Plato’s Socrates is our Socrates.’ The enduring image of Socrates that comes from Plato is of a man of humble background, little education, few means and unappealing looks, who became a brilliant and disputatious philosopher married to an argumentative woman called Xanthippe. Both Plato and Xenophon, Socrates’ other principal biographer, were born c424 BCE, so they knew Socrates (born c469 BCE) only as an old man. Keen to defend his reputation from the charges of ‘introducing new kinds of gods’ and ‘corrupting young men’ on which he was eventually brought to trial and executed, they painted a picture of Socrates in late middle age as a pious teacher and unremitting ethical thinker, a man committed to shunning bodily pleasures for higher educational purposes.

Yet this clearly idealised picture of Socrates is not the whole story, and it gives us no indication of the genesis of his ideas. Plato’s pupil Aristotle and other Ancient writers provide us with correctives to the Platonic Socrates. For instance, Aristotle’s followers Aristoxenus and Clearchus of Soli preserve biographical snippets that they might have known from their teacher. From them we learn that Socrates in his teens was intimate with a distinguished older philosopher, Archelaus; that he married more than once, the first time to an aristocratic woman called Myrto, with whom he had two sons; and that he had an affair with Aspasia of Miletus, the clever and influential woman who was later to become the partner of Pericles, a leading citizen of Athens.

If these statements are to be believed, a different Socrates emerges: that of a highly placed young Athenian, whose personal experiences within an elevated milieu inspired him to embark on a new style of philosophy that was to change the way people thought ever afterwards. But can we trust these later authors? How could writers two or more generations removed from Socrates’ own time have felt entitled to contradict Plato? One answer is that Aristotle might have derived some information from Plato in person, rather than from his writings, and passed this on to his pupils; another is that, as a member of Plato’s Academy for 20 years, Aristotle might have known that Plato had elided certain facts to defend Socrates’ reputation; a third is that the later authors had access to further sources (oral and written) other than Plato, which they considered to be reliable.

Plato’s Socrates is an eccentric. Socrates claimed to have heard voices in his head from youth, and is described as standing still in public places for long stretches of time, deep in thought. Plato notes these phenomena without comment, accepting Socrates’ own description of the voices as his ‘divine sign’, and reporting on his awe-inspiring ability to meditate for hours on end. Aristotle, the son of a doctor, took a more medical approach: he suggested that Socrates (along with other thinkers) suffered from a medical condition he calls ‘melancholy’. Recent medical investigators have agreed, speculating that Socrates’ behaviour was consistent with a medical condition known as catalepsy. Such a condition might well have made Socrates feel estranged from his peers in early life, encouraging him to embark on a different kind of lifestyle.

If the received picture of Socrates’ life and personality merits reconsideration, what about his thought? Aristotle makes clear in his Metaphysics that Plato misrepresented Socrates regarding the so-called Theory of Forms:

Socrates concerned himself with ethics, neglecting the natural world but seeking the universal in ethical matters, and he was the first to insist on definitions. Plato took over this doctrine, but argued that what was universal applied not to objects of sense but to entities of another kind. He thought a single description could not define things that are perceived, since such things are always changing. Unchanging entities he called ‘Forms’…

Aristotle himself had little sympathy for such otherworldly views. As a biologist and scientist, he was mainly concerned with the empirical investigation of the world. In his own writings he dismissed the Forms, replacing them with a logical account of universals and their particular instantiations. For him, Socrates was also a more down-to-earth thinker than Plato sought to depict.

Sources from late antiquity, such as the 5th-century CE Christian writers Theodoret of Cyrrhus and Cyril of Alexandria, state that Socrates was, at least as a younger man, a lover of both sexes. They corroborate occasional glimpses of an earthy Socrates in Plato’s own writings, such as in the dialogue Charmides where Socrates claims to be intensely aroused by the sight of a young man’s bare chest. However, the only partner of Socrates’ whom Plato names is Xanthippe; but since she was carrying a baby in her arms when Socrates was aged 70, it is unlikely they met more than a decade or so earlier, when Socrates was already in his 50s. Plato’s failure to mention the earlier aristocratic wife Myrto might be an attempt to minimise any perception that Socrates came from a relatively wealthy background with connections to high-ranking members of his community; it was largely because Socrates was believed to be associated with the antidemocratic aristocrats who took power in Athens that he was put on trial and executed in 399 BCE.

Aristotle’s testimony, therefore, is a valuable reminder that the picture of Socrates bequeathed by Plato should not be accepted uncritically. Above all, if Socrates at some point in his early manhood became the companion of Aspasia – a woman famous as an instructor of eloquence and relationship counsellor – it potentially changes our understanding not only of Socrates’ early life, but of the formation of his philosophical ideas. He is famous for saying: ‘All I know is that I know nothing.’ But the one thing he claims, in Plato’s Symposium, that he does know about, is love, which he learned about from a clever woman. Might that woman have been Aspasia, once his beloved companion? The real Socrates must remain elusive but, in the statements of Aristotle, Aristoxenus and Clearchus of Soli, we get intriguing glimpses of a different Socrates from the one portrayed so eloquently in Plato’s writings.

For more from Armand D’Angour and his extraordinary research bringing the music of Ancient Greece to life, see this Video and read this Idea.

Armand D’Angour

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Muhammad: an Anticlerical Hero of the European Enlightenment

Thomas Jefferson’s copy of George Sale’s 1734 translation of the Quran is used in the swearing in ceremony of US Representative Keith Ellison at the United States Capitol in Washington, DC, on 4 January 2007. Photo by Getty

John Tolan | Aeon Ideas

Publishing the Quran and making it available in translation was a dangerous enterprise in the 16th century, apt to confuse or seduce the faithful Christian. This, at least, was the opinion of the Protestant city councillors of Basel in 1542, when they briefly jailed a local printer for planning to publish a Latin translation of the Muslim holy book. The Protestant reformer Martin Luther intervened to salvage the project: there was no better way to combat the Turk, he wrote, than to expose the ‘lies of Muhammad’ for all to see.

The resulting publication in 1543 made the Quran available to European intellectuals, most of whom studied it in order to better understand and combat Islam. There were others, however, who used their reading of the Quran to question Christian doctrine. The Catalonian polymath and theologian Michael Servetus found numerous Quranic arguments to employ in his anti-Trinitarian tract, Christianismi Restitutio (1553), in which he called Muhammad a true reformer who preached a return to the pure monotheism that Christian theologians had corrupted by inventing the perverse and irrational doctrine of the Trinity. After publishing these heretical ideas, Servetus was condemned by the Catholic Inquisition in Vienne, and finally burned with his own books in Calvin’s Geneva.

During the European Enlightenment, a number of authors presented Muhammad in a similar vein, as an anticlerical hero; some saw Islam as a pure form of monotheism close to philosophic Deism and the Quran as a rational paean to the Creator. In 1734, George Sale published a new English translation. In his introduction, he traced the early history of Islam and idealised the Prophet as an iconoclastic, anticlerical reformer who had banished the ‘superstitious’ beliefs and practices of early Christians – the cult of the saints, holy relics – and quashed the power of a corrupt and avaricious clergy.

Sale’s translation of the Quran was widely read and appreciated in England: for many of his readers, Muhammad had become a symbol of anticlerical republicanism. It was influential outside England too. The US founding father Thomas Jefferson bought a copy from a bookseller in Williamsburg, Virginia, in 1765, which helped him conceive of a philosophical deism that surpassed confessional boundaries. (Jefferson’s copy, now in the Library of Congress, has been used for the swearing in of Muslim representatives to Congress, starting with Keith Ellison in 2007.) And in Germany, the Romantic Johann Wolfgang von Goethe read a translation of Sale’s version, which helped to colour his evolving notion of Muhammad as an inspired poet and archetypal prophet.

In France, Voltaire also cited Sale’s translation with admiration: in his world history Essai sur les mœurs et l’esprit des nations (1756), he portrayed Muhammad as an inspired reformer who abolished superstitious practices and eradicated the power of corrupt clergy. By the end of the century, the English Whig Edward Gibbon (an avid reader of both Sale and Voltaire) presented the Prophet in glowing terms in The History of the Decline and Fall of the Roman Empire (1776-89):

The creed of Mahomet is free from suspicion or ambiguity; and the Koran is a glorious testimony to the unity of God. The prophet of Mecca rejected the worship of idols and men, of stars and planets, on the rational principle that whatever rises must set, that whatever is born must die, that whatever is corruptible must decay and perish. In the author of the universe, his rational enthusiasm confessed and adored an infinite and eternal being, without form or place, without issue or similitude, present to our most secret thoughts, existing by the necessity of his own nature, and deriving from himself all moral and intellectual perfection … A philosophic theist might subscribe the popular creed of the Mahometans: a creed too sublime, perhaps, for our present faculties.

But it was Napoleon Bonaparte who took the Prophet most keenly to heart, styling himself a ‘new Muhammad’ after reading the French translation of the Quran that Claude-Étienne Savary produced in 1783. Savary wrote his translation in Egypt: there, surrounded by the music of the Arabic language, he sought to render into French the beauty of the Arabic text. Like Sale, Savary wrote a long introduction presenting Muhammad as a ‘great’ and ‘extraordinary’ man, a ‘genius’ on the battlefield, a man who knew how to inspire loyalty among his followers. Napoleon read this translation on the ship that took him to Egypt in 1798. Inspired by Savary’s portrait of the Prophet as a brilliant general and sage lawgiver, Napoleon sought to become a new Muhammad, and hoped that Cairo’s ulama (scholars) would accept him and his French soldiers as friends of Islam, come to liberate Egyptians from Ottoman tyranny. He even claimed that his own arrival in Egypt had been announced in the Quran.

Napoleon had an idealised, bookish, Enlightenment vision of Islam as pure monotheism: indeed, the failure of his Egyptian expedition owed partly to his idea of Islam being quite different from the religion of Cairo’s ulama. Yet Napoleon was not alone in seeing himself as a new Muhammad: Goethe enthusiastically proclaimed that the emperor was the ‘Mahomet der Welt’ (Muhammad of the world), and the French author Victor Hugo portrayed him as a ‘Mahomet d’occident’ (Muhammad of the West). Napoleon himself, at the end of his life, exiled on Saint Helena and ruminating on his defeat, wrote about Muhammad and defended his legacy as a ‘great man who changed the course of history’. Napoleon’s Muhammad, conqueror and lawgiver, persuasive and charismatic, resembles Napoleon himself – but a Napoleon who was more successful, and certainly never exiled to a cold windswept island in the South Atlantic.

The idea of Muhammad as one of the world’s great legislators persisted into the 20th century. Adolph A Weinman, a German-born American sculptor, depicted Muhammad in his 1935 frieze in the main chamber of the US Supreme Court, where the Prophet takes his place among 18 lawgivers. Various European Christians called on their churches to recognise Muhammad’s special role as prophet of the Muslims. For Catholic scholars of Islam such as Louis Massignon or Hans Küng, or for the Scottish Protestant scholar of Islam William Montgomery Watt, such recognition was the best way to promote peaceful, constructive dialogue between Christians and Muslims.

This kind of dialogue continues today, but it has been largely drowned out by the din of conflict, as extreme-Right politicians in Europe and elsewhere diabolise Muhammad to justify anti-Muslim policies. The Dutch politician Geert Wilders calls him a terrorist, paedophile and psychopath. The negative image of the Prophet is paradoxically promoted by fundamentalist Muslims who adulate him and reject all historical contextualisation of his life and teachings; meanwhile, violent extremists claim to defend Islam and its prophet from ‘insults’ through murder and terror. All the more reason, then, to step back and examine the diverse and often surprising Western portraits of the myriad faces of Muhammad.


Faces of Muhammad: Western Perceptions of the Prophet of Islam from the Middle Ages to Today by John Tolan is published via Princeton University Press.

John Tolan

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

A Philosophical Approach to Routines can Illuminate Who We Really Are

Elias Anttila | Aeon Ideas

There are hundreds of things we do – repeatedly, routinely – every day. We wake up, check our phones, eat our meals, brush our teeth, do our jobs, satisfy our addictions. In recent years, such habitual actions have become an arena for self-improvement: bookshelves are saturated with bestsellers about ‘life hacks’, ‘life design’ and how to ‘gamify’ our long-term projects, promising everything from enhanced productivity to a healthier diet and huge fortunes. These guides vary in scientific accuracy, but they tend to depict habits as routines that follow a repeated sequence of behaviours, into which we can intervene to set ourselves on a more desirable track.

The problem is that this account has been bleached of much of its historical richness. Today’s self-help books have in fact inherited a highly contingent version of habit – specifically, one that arises in the work of early 20th-century psychologists such as B F Skinner, Clark Hull, John B Watson and Ivan Pavlov. These thinkers are associated with behaviourism, an approach to psychology that prioritises observable, stimulus-response reactions over the role of inner feelings or thoughts. The behaviourists defined habits in a narrow, individualistic sense; they believed that people were conditioned to respond automatically to certain cues, which produced repeated cycles of action and reward.

The behaviourist image of habit has since been updated in light of contemporary neuroscience. For example, the fact that the brain is plastic and changeable allows habits to inscribe themselves in our neural wiring over time by forming privileged connections between brain regions. The influence of behaviourism has enabled researchers to study habits quantitatively and rigorously. But it has also bequeathed a flattened notion of habit that overlooks the concept’s wider philosophical implications.

Philosophers used to look at habits as ways of contemplating who we are, what it means to have faith, and why our daily routines reveal something about the world at large. In his Nicomachean Ethics, Aristotle uses the terms hexis and ethos – both translated today as ‘habit’ – to study stable qualities in people and things, especially regarding their morals and intellect. Hexis denotes the lasting characteristics of a person or thing, like the smoothness of a table or the kindness of a friend, which can guide our actions and emotions. A hexis is a characteristic, capacity or disposition that one ‘owns’; its etymology is the Greek word ekhein, the term for ownership. For Aristotle, a person’s character is ultimately a sum of their hexeis (plural).

An ethos, on the other hand, is what allows one to develop hexeis. It is both a way of life and the basic calibre of one’s personality. Ethos is what gives rise to the essential principles that help to guide moral and intellectual development. Honing hexeis out of an ethos thus takes both time and practice. This version of habit fits with the tenor of ancient Greek philosophy, which often emphasised the cultivation of virtue as a path to the ethical life.

Millennia later, in medieval Christian Europe, Aristotle’s hexis was Latinised into habitus. The translation tracks a shift away from the virtue ethics of the Ancients towards Christian morality, by which habit acquired distinctly divine connotations. In the Middle Ages, Christian ethics moved away from the idea of merely shaping one’s moral dispositions, and proceeded instead from the belief that ethical character was handed down by God. In this way, the desired habitus should become entwined with the exercise of Christian virtue.

The great theologian Thomas Aquinas saw habit as a vital component of spiritual life. According to his Summa Theologica (1265-1274), habitus involved a rational choice, and led the true believer to a sense of faithful freedom. By contrast, Aquinas used consuetudo to refer to the habits we acquire that inhibit this freedom: the irreligious, quotidian routines that do not actively engage with faith. Consuetudo signifies mere association and regularity, whereas habitus conveys sincere thoughtfulness and consciousness of God. Consuetudo is also where we derive the terms ‘custom’ and ‘costume’ – a lineage which suggests that the medievals considered habit to extend beyond single individuals.

For the Enlightenment philosopher David Hume, these ancient and medieval interpretations of habit were far too limiting. Hume conceived of habit via what it empowers and enables us to do as human beings. He came to the conclusion that habit is the ‘cement of the universe’, which all ‘operations of the mind … depend on’. For instance, we might throw a ball in the air and watch it rise and descend to Earth. By habit, we come to associate these actions and perceptions – the movement of our limb, the trajectory of the ball – in a way that eventually lets us grasp the relationship between cause and effect. Causality, for Hume, is little more than habitual association. Likewise language, music, relationships – any skills we use to transform experiences into something that’s useful are built from habits, he believed. Habits are thus crucial instruments that enable us to navigate the world and to understand the principles by which it operates. For Hume, habit is nothing less than the ‘great guide of human life’.

It’s clear that we ought to see habits as more than mere routines, tendencies and tics. They encompass our identities and ethics; they teach us how to practise our faiths; if Hume is to be believed, they do no less than bind the world together. Seeing habits in this new-yet-old way requires a certain conceptual and historical about-face, but this U-turn offers much more than shallow self-help. It should show us that the things we do every day aren’t just routines to be hacked, but windows through which we might glimpse who we truly are.

Elias Anttila

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Ibn Tufayl and the Story of the Feral Child of Philosophy

Album folio fragment with scholar in a garden. Attributed to Muhammad Ali 1610-15. Courtesy Museum of Fine Arts, Boston

Marwa Elshakry & Murad Idris | Aeon Ideas

Ibn Tufayl, a 12th-century Andalusian, fashioned the feral child in philosophy. His story Hayy ibn Yaqzan is the tale of a child raised by a doe on an unnamed Indian Ocean island. Hayy ibn Yaqzan (literally ‘Living Son of Awakeness’) reaches a state of perfect, ecstatic understanding of the world. A meditation on the possibilities (and pitfalls) of the quest for the good life, Hayy offers not one, but two ‘utopias’: a eutopia (εὖ ‘good’, τόπος ‘place’) of the mind in perfect isolation, and an ethical community under the rule of law. Each has a version of human happiness. Ibn Tufayl pits them against each other, but each unfolds ‘no where’ (οὐ ‘not’, τόπος ‘place’) in the world.

Ibn Tufayl begins with a vision of humanity isolated from society and politics. (Modern European political theorists who employed this literary device called it ‘the state of nature’.) He introduces Hayy by speculating about his origin. Whether Hayy was placed in a basket by his mother to sail through the waters of life (like Moses) or born by spontaneous generation on the island is irrelevant, Ibn Tufayl says. His divine station remains the same, as does much of his life, spent in the company only of animals. Later philosophers held that society elevates humanity from its natural animal state to an advanced, civilised one. Ibn Tufayl took a different view. He maintained that humans can be perfected only outside society, through a progress of the soul, not the species.

In contrast to Thomas Hobbes’s view that ‘man is a wolf to man’, Hayy’s island has no wolves. It proves easy enough for him to fend off other creatures by waving sticks at them or donning terrifying costumes of hides and feathers. For Hobbes, the fear of violent death is the origin of the social contract and the apologia for the state; but Hayy’s first encounter with fear of death is when his doe-mother dies. Desperate to revive her, Hayy dissects her heart only to find one of its chambers is empty. The coroner-turned-theologian concludes that what he loved in his mother no longer resides in her body. Death therefore was the first lesson of metaphysics, not politics.

Hayy then observes the island’s plants and animals. He meditates upon the idea of an elemental, ‘vital spirit’ upon discovering fire. Pondering the plurality of matter leads him to conclude that it must originate from a singular, non-corporeal source or First Cause. He notes the perfect motion of the celestial spheres and begins a series of ascetic exercises (such as spinning until dizzy) to emulate this hidden, universal order. By the age of 50, he retreats from the physical world, meditating in his cave until, finally, he attains a state of ecstatic illumination. Reason, for Ibn Tufayl, is thus no absolute guide to Truth.

The difference between Hayy’s ecstatic journeys of the mind and later rationalist political thought is the role of reason. Yet many later modern European commentaries or translations of Hayy confuse this by framing the allegory in terms of reason. In 1671, Edward Pococke entitled his Latin translation The Self-Taught Philosopher: In Which It Is Demonstrated How Human Reason Can Ascend from Contemplation of the Inferior to Knowledge of the Superior. In 1708, Simon Ockley’s English translation was The Improvement of Human Reason, and it too emphasised reason’s capacity to attain ‘knowledge of God’. For Ibn Tufayl, however, true knowledge of God and the world – as a eutopia for the ‘mind’ (or soul) – could come only through perfect contemplative intuition, not absolute rational thought.

This is Ibn Tufayl’s first utopia: an uninhabited island where a feral philosopher retreats to a cave to reach ecstasy through contemplation and withdrawal from the world. Friedrich Nietzsche’s Zarathustra would be impressed: ‘Flee, my friend, into your solitude!’

The rest of the allegory introduces the problem of communal life and a second utopia. After Hayy achieves his perfect condition, an ascetic is shipwrecked on his island. Hayy is surprised to discover another being who so resembles him. Curiosity leads him to befriend the wanderer, Absal. Absal teaches Hayy language, and describes the mores of his own island’s law-abiding people. The two men determine that the islanders’ religion is a lesser version of the Truth that Hayy discovered, shrouded in symbols and parables. Hayy is driven by compassion to teach them the Truth. They travel to Absal’s home.

The encounter is disastrous. Absal’s islanders feel compelled by their ethical principles of hospitality towards foreigners, friendship with Absal, and association with all people to welcome Hayy. But soon Hayy’s constant attempts to preach irritate them. Hayy realises that they are incapable of understanding. They are driven by satisfactions of the body, not the mind. There can be no perfect society because not everyone can achieve a state of perfection in their soul. Illumination is possible only for the select, in accordance with a sacred order, or a hieros archein. (This hierarchy of being and knowing is a fundamental message of neo-Platonism.) Hayy concludes that persuading people away from their ‘natural’ stations would only corrupt them further. The laws that the ‘masses’ venerate, be they revealed or reasoned, he decides, are their only chance to achieve a good life.

The islanders’ ideals – lawfulness, hospitality, friendship, association – might seem reasonable, but these too exist ‘no where’ in the world. Hence their dilemma: either they adhere to these and endure Hayy’s criticisms, or violate them by shunning him. This is a radical critique of the law and its ethical principles: they are normatively necessary for social life yet inherently contradictory and impossible. It’s a sly reproach of political life, one whose bite endures. Like the islanders, we follow principles that can undermine themselves. To be hospitable, we must be open to the stranger who violates hospitality. To be democratic, we must include those who are antidemocratic. To be worldly, our encounters with other people must be opportunities to learn from them, not just about them.

In the end, Hayy returns to his island with Absal, where they enjoy a life of ecstatic contemplation unto death. They abandon the search for a perfect society of laws. Their eutopia is the quest of the mind left unto itself, beyond the imperfections of language, law and ethics – perhaps beyond even life itself.

The islanders offer a less obvious lesson: our ideals and principles undermine themselves, but this is itself necessary for political life. For an island of pure ethics and law is an impossible utopia. Perhaps, like Ibn Tufayl, all we can say on the search for happiness is (quoting Al-Ghazali):

It was – what it was is harder to say.
Think the best, but don’t make me describe it away.

After all, we don’t know what happened to Hayy and Absal after their deaths – or to the islanders after they left.

Marwa Elshakry & Murad Idris

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Descartes was Wrong: ‘A Person is a Person through Other Persons’

Detail from Young Moe (1938) by Paul Klee. Courtesy Phillips collection/Wikipedia

Abeba Birhane | Aeon Ideas

According to Ubuntu philosophy, which has its origins in ancient Africa, a newborn baby is not a person. People are born without ‘ena’, or selfhood, and instead must acquire it through interactions and experiences over time. So the ‘self’/‘other’ distinction that’s axiomatic in Western philosophy is much blurrier in Ubuntu thought. As the Kenyan-born philosopher John Mbiti put it in African Religions and Philosophy (1975): ‘I am because we are, and since we are, therefore I am.’

We know from everyday experience that a person is partly forged in the crucible of community. Relationships inform self-understanding. Who I am depends on many ‘others’: my family, my friends, my culture, my work colleagues. The self I take grocery shopping, say, differs in her actions and behaviours from the self that talks to my PhD supervisor. Even my most private and personal reflections are entangled with the perspectives and voices of different people, be it those who agree with me, those who criticise, or those who praise me.

Yet the notion of a fluctuating and ambiguous self can be disconcerting. We can chalk up this discomfort, in large part, to René Descartes. The 17th-century French philosopher believed that a human being was essentially self-contained and self-sufficient; an inherently rational, mind-bound subject, who ought to encounter the world outside her head with scepticism. While Descartes didn’t single-handedly create the modern mind, he went a long way towards defining its contours.

Descartes had set himself a very particular puzzle to solve. He wanted to find a stable point of view from which to look on the world without relying on God-decreed wisdoms; a place from which he could discern the permanent structures beneath the changeable phenomena of nature. But Descartes believed that there was a trade-off between certainty and a kind of social, worldly richness. The only thing you can be certain of is your own cogito – the fact that you are thinking. Other people and other things are inherently fickle and erratic. So they must have nothing to do with the basic constitution of the knowing self, which is a necessarily detached, coherent and contemplative whole.

Few respected philosophers and psychologists would identify as strict Cartesian dualists, in the sense of believing that mind and matter are completely separate. But the Cartesian cogito is still everywhere you look. The experimental design of memory testing, for example, tends to proceed from the assumption that it’s possible to draw a sharp distinction between the self and the world. If memory simply lives inside the skull, then it’s perfectly acceptable to remove a person from her everyday environment and relationships, and to test her recall using flashcards or screens in the artificial confines of a lab. A person is considered a standalone entity, irrespective of her surroundings, inscribed in the brain as a series of cognitive processes. Memory must be simply something you have, not something you do within a certain context.

Social psychology purports to examine the relationship between cognition and society. But even then, the investigation often presumes that a collective of Cartesian subjects are the real focus of the enquiry, not selves that co-evolve with others over time. In the 1960s, the American psychologists John Darley and Bibb Latané became interested in the murder of Kitty Genovese, a young white woman who had been stabbed and assaulted on her way home one night in New York. Multiple people had witnessed the crime but none stepped in to prevent it. Darley and Latané designed a series of experiments in which they simulated a crisis, such as an epileptic fit, or smoke billowing in from the next room, to observe what people did. They were the first to identify the so-called ‘bystander effect’, in which people seem to respond more slowly to someone in distress if others are around.

Darley and Latané suggested that this might come from a ‘diffusion of responsibility’, in which the obligation to react is diluted across a bigger group of people. But as the American psychologist Frances Cherry argued in The Stubborn Particulars of Social Psychology: Essays on the Research Process (1995), this numerical approach wipes away vital contextual information that might help to understand people’s real motives. Genovese’s murder had to be seen against a backdrop in which violence against women was not taken seriously, Cherry said, and in which people were reluctant to step into what might have been a domestic dispute. Moreover, the murder of a poor black woman would have attracted far less subsequent media interest. But Darley and Latané’s focus makes structural factors much harder to see.

Is there a way of reconciling these two accounts of the self – the relational, world-embracing version, and the autonomous, inward one? The 20th-century Russian philosopher Mikhail Bakhtin believed that the answer lay in dialogue. We need others in order to evaluate our own existence and construct a coherent self-image. Think of that luminous moment when a poet captures something you’d felt but had never articulated; or when you’d struggled to summarise your thoughts, but they crystallised in conversation with a friend. Bakhtin believed that it was only through an encounter with another person that you could come to appreciate your own unique perspective and see yourself as a whole entity. By ‘looking through the screen of the other’s soul,’ he wrote, ‘I vivify my exterior.’ Selfhood and knowledge are evolving and dynamic; the self is never finished – it is an open book.

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals. But, in fact, studies of such prisoners suggest that their sense of self dissolves if they are punished this way for long enough. Prisoners tend to suffer profound physical and psychological difficulties, such as confusion, anxiety, insomnia, feelings of inadequacy, and a distorted sense of time. Deprived of contact and interaction – the external perspective needed to consummate and sustain a coherent self-image – a person risks disappearing into non-existence.

The emerging fields of embodied and enactive cognition have started to take dialogic models of the self more seriously. But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’

Abeba Birhane

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Do you have a Self-Actualised Personality? Maslow Revisited

View of the second Pyramid from the top of the Great Pyramid. Photo courtesy of the Library of Congress

Christian Jarrett | Aeon Ideas

Abraham Maslow was the 20th-century American psychologist best-known for explaining motivation through his hierarchy of needs, which he represented in a pyramid. At the base, our physiological needs include food, water, warmth and rest. Moving up the ladder, Maslow mentions safety, love and belonging, and esteem and accomplishment. But after all those have been satisfied, the motivating factor at the top of the pyramid involves striving to achieve our full potential and satisfy creative goals. As one of the founders of humanistic psychology, Maslow proposed that the path to self-transcendence and, ultimately, greater compassion for all of humanity requires the ‘self-actualisation’ at the top of his pyramid – fulfilling your true potential, and becoming your authentic self.

Now Scott Barry Kaufman, a psychologist at Barnard College, Columbia University, believes it is time to revive the concept, and link it with contemporary psychological theory. ‘We live in times of increasing divides, selfish concerns, and individualistic pursuits of power,’ Kaufman wrote recently in a blog in Scientific American introducing his new research. He hopes that rediscovering the principles of self-actualisation might be just the tonic that the modern world is crying out for. To this end, he’s used modern statistical methods to create a test of self-actualisation or, more specifically, of the 10 characteristics exhibited by self-actualised people, and it was recently published in the Journal of Humanistic Psychology.

Kaufman first surveyed online participants using 17 characteristics Maslow believed were shared by self-actualised people. Kaufman found that seven of these were redundant or irrelevant and did not correlate with others, leaving 10 key characteristics of self-actualisation.

Next, he reworded some of Maslow’s original language and labelling to compile a modern 30-item questionnaire featuring three items tapping each of these 10 remaining characteristics: continued freshness of appreciation; acceptance; authenticity; equanimity; purpose; efficient perception of reality; humanitarianism; peak experiences; good moral intuition; and creative spirit (see the full questionnaire below, and take the test on Kaufman’s website).

So what did Kaufman report? In a survey of more than 500 people on Amazon’s Mechanical Turk website, Kaufman found that scores on each of these 10 characteristics tended to correlate, but also that they each made a unique contribution to a unifying factor of self-actualisation – suggesting that this is a valid concept comprised of 10 subtraits.
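To illustrate what a ‘unifying factor’ amounts to here, the sketch below simulates 10 subscale scores that share a single latent factor, checks that they all correlate, and fits a one-factor model. The data and the scikit-learn pipeline are assumptions for illustration, not Kaufman’s actual analysis.

```python
# Illustrative check (not Kaufman's published pipeline) that 10 correlated
# subscales can be described by one general factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_people, n_subscales = 500, 10

# Simulate subscale scores partly driven by a single latent factor.
latent = rng.normal(size=(n_people, 1))
scores = 0.6 * latent + 0.8 * rng.normal(size=(n_people, n_subscales))

# If one factor is at work, all pairwise correlations should be positive.
corr = np.corrcoef(scores, rowvar=False)
off_diagonal = corr[np.triu_indices(n_subscales, k=1)]
print("mean inter-subscale correlation:", off_diagonal.mean().round(2))

# Fit a one-factor model; each subscale's loading is its contribution
# to the single unifying factor.
fa = FactorAnalysis(n_components=1, random_state=0).fit(scores)
print("loadings on the unifying factor:", fa.components_[0].round(2))
```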

Participants’ total scores on the test also correlated with their scores on the main five personality traits (that is, with higher extraversion, agreeableness, emotional stability, openness and conscientiousness) and with the metatrait of ‘stability’, indicative of an ability to avoid impulses in the pursuit of one’s goals. That the new test corresponded in this way with established personality measures provides further evidence of its validity.

Next, Kaufman turned to modern theories of wellbeing, such as self-determination theory, to see if people’s scores on his self-actualisation scale correlated with these contemporary measures. Sure enough, he found that people with more characteristics of self-actualisation also tended to score higher on curiosity, life-satisfaction, self-acceptance, personal growth and autonomy, among other factors – just as Maslow would have predicted.

‘Taken together, this total pattern of data supports Maslow’s contention that self-actualised individuals are more motivated by growth and exploration than by fulfilling deficiencies in basic needs,’ Kaufman writes. He adds that the new empirical support for Maslow’s ideas is ‘quite remarkable’ given that Maslow put them together with ‘a paucity of actual evidence’.

A criticism often levelled at Maslow’s notion of self-actualisation is that its pursuit encourages an egocentric focus on one’s own goals and needs. However, Maslow always contended that it is only through becoming our true, authentic selves that we can transcend the self and look outward with compassion to the rest of humanity. Kaufman explored this too, and found that higher scorers on his self-actualisation scale also tended to score higher on feelings of oneness with the world, but not on decreased self-salience – that is, a reduced sense of one’s own independence and of bias toward self-relevant information. (These are the two main factors in a modern measure of self-transcendence developed by the psychologist David Yaden at the University of Pennsylvania.)

Kaufman said that this last finding supports ‘Maslow’s contention that self-actualising individuals are able to paradoxically merge with a common humanity while at the same time able to maintain a strong identity and sense of self’.

Where the new data contradicts Maslow is on the demographic factors that correlate with characteristics of self-actualisation – he thought that self-actualisation was rare and almost impossible for young people. Kaufman, by contrast, found scores on his new scale to be normally distributed throughout his sample (that is, forming a bell curve, much as height and weight do) and unrelated to factors such as age, gender and educational attainment (although, in personal correspondence, Kaufman informs me that newer data – more than 3,000 people have since taken the new test – is showing a small but statistically significant association between older age and having more characteristics of self-actualisation).

In conclusion, Kaufman writes that: ‘[H]opefully the current study … brings Maslow’s motivational framework and the central personality characteristics described by the founding humanistic psychologists, into the 21st century.’

The new test is sure to reinvigorate Maslow’s ideas, but if this is to help heal our divided world, then the characteristics required for self-actualisation, rather than being a permanent feature of our personalities, must be something we can develop deliberately. I put this point to Kaufman and he is optimistic. ‘I think there is significant room to develop these characteristics [by changing your habits],’ he told me. ‘A good way to start with that,’ he added, ‘is by first identifying where you stand on those characteristics and assessing your weakest links. Capitalise on your highest characteristics but also don’t forget to intentionally be mindful about what might be blocking your self-actualisation … Identify your patterns and make a concerted effort to change. I do think it’s possible with conscientiousness and willpower.’

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Concept of Probability is not as Simple as You Think

probability

Phil Long/Flickr

Nevin Climenhaga | Aeon Ideas

The gambler, the quantum physicist and the juror all reason about probabilities: the probability of winning, of a radioactive atom decaying, of a defendant’s guilt. But despite their ubiquity, experts dispute just what probabilities are. This leads to disagreements on how to reason about, and with, probabilities – disagreements that our cognitive biases can exacerbate, such as our tendency to ignore evidence that runs counter to a hypothesis we favour. Clarifying the nature of probability, then, can help to improve our reasoning.

Three popular theories analyse probabilities as either frequencies, propensities or degrees of belief. Suppose I tell you that a coin has a 50 per cent probability of landing heads up. These theories, respectively, say that this is:

  • The frequency with which that coin lands heads;
  • The propensity, or tendency, that the coin’s physical characteristics give it to land heads;
  • How confident I am that it lands heads.

But each of these interpretations faces problems. Consider the following case:

Adam flips a fair coin that self-destructs after being tossed four times. Adam’s friends Beth, Charles and Dave are present, but blindfolded. After the fourth flip, Beth says: ‘The probability that the coin landed heads the first time is 50 per cent.’

Adam then tells his friends that the coin landed heads three times out of four. Charles says: ‘The probability that the coin landed heads the first time is 75 per cent.’

Dave, despite having the same information as Charles, says: ‘I disagree. The probability that the coin landed heads the first time is 60 per cent.’

The frequency interpretation struggles with Beth’s assertion. The frequency with which the coin lands heads is three out of four, and it can never be tossed again. Still, it seems that Beth was right: the probability that the coin landed heads the first time is 50 per cent.

Meanwhile, the propensity interpretation falters on Charles’s assertion. Since the coin is fair, it had an equal propensity to land heads or tails. Yet Charles also seems right to say that the probability that the coin landed heads the first time is 75 per cent.

The confidence interpretation makes sense of the first two assertions, holding that they express Beth and Charles’s confidence that the coin landed heads. But consider Dave’s assertion. When Dave says that the probability that the coin landed heads is 60 per cent, he says something false. But if Dave really is 60 per cent confident that the coin landed heads, then on the confidence interpretation, he has said something true – he has truly reported how certain he is.

Some philosophers think that such cases support a pluralistic approach in which there are multiple kinds of probabilities. My own view is that we should adopt a fourth interpretation – a degree-of-support interpretation.

Here, probabilities are understood as relations of evidential support between propositions. ‘The probability of X given Y’ is the degree to which Y supports the truth of X. When we speak of ‘the probability of X’ on its own, this is shorthand for the probability of X conditional on any background information we have. When Beth says that there is a 50 per cent probability that the coin landed heads, she means that this is the probability that it landed heads, conditional on the information that it was tossed and on some information about its construction (for example, that it is symmetrical).

Relative to different information, however, the proposition that the coin landed heads has a different probability. When Charles says that there is a 75 per cent probability that the coin landed heads, he means this is the probability that it landed heads relative to the information that three of four tosses landed heads. Meanwhile, Dave says there is a 60 per cent probability that the coin landed heads, relative to this same information – but since this information in fact supports heads more strongly than 60 per cent, what Dave says is false.
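To see where Charles’s 75 per cent comes from – and why Dave’s 60 per cent undersells the evidence – one can simply count. For a fair coin, all 16 four-toss sequences are equally likely, and of the four sequences containing exactly three heads, three begin with heads. A minimal Python sketch (the enumeration is mine, added for illustration) makes the counting explicit:

```python
from itertools import product

# All 16 equally likely four-toss sequences of a fair coin.
sequences = list(product('HT', repeat=4))

# Condition on the evidence: exactly three of the four tosses were heads.
consistent = [s for s in sequences if s.count('H') == 3]

# Degree to which that evidence supports 'the first toss was heads'.
support = sum(s[0] == 'H' for s in consistent) / len(consistent)
print(support)  # 0.75 - Charles is right; Dave's 60 per cent is too low
```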

The degree-of-support interpretation incorporates what’s right about each of our first three approaches while correcting their problems. It captures the connection between probabilities and degrees of confidence. It does this not by identifying them – instead, it takes degrees of belief to be rationally constrained by degrees of support. The reason I should be 50 per cent confident that a coin lands heads, if all I know about it is that it is symmetrical, is because this is the degree to which my evidence supports this hypothesis.

Similarly, the degree-of-support interpretation allows the information that the coin landed heads with a 75 per cent frequency to make it 75 per cent probable that it landed heads on any particular toss. It captures the connection between frequencies and probabilities but, unlike the frequency interpretation, it denies that frequencies and probabilities are the same thing. Instead, probabilities sometimes relate claims about frequencies to claims about specific individuals.

Finally, the degree-of-support interpretation analyses the propensity of the coin to land heads as a relation between, on the one hand, propositions about the construction of the coin and, on the other, the proposition that it lands heads. That is, it concerns the degree to which the coin’s construction predicts the coin’s behaviour. More generally, propensities link claims about causes and claims about effects – eg, a description of an atom’s intrinsic characteristics and the hypothesis that it decays.

Because they turn probabilities into different kinds of entities, our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection. By contrast, degrees of support seem to be what philosophers call ‘abstract entities’ – neither in the world nor in our minds. While we know that a coin is symmetrical by observation, we know that the proposition ‘this coin is symmetrical’ supports the propositions ‘this coin lands heads’ and ‘this coin lands tails’ to equal degrees in the same way we know that ‘this coin lands heads’ entails ‘this coin lands heads or tails’: by thinking.

But a sceptic might point out that coin tosses are easy. Suppose we’re on a jury. How are we supposed to figure out the probability that the defendant committed the murder, so as to see whether there can be reasonable doubt about his guilt?

Answer: think more. First, ask: what is our evidence? What we want to figure out is how strongly this evidence supports the hypothesis that the defendant is guilty. Perhaps our salient evidence is that the defendant’s fingerprints are on the gun used to kill the victim.

Then, ask: can we use the mathematical rules of probability to break down the probability of our hypothesis in light of the evidence into more tractable probabilities? Here we are concerned with the probability of a cause (the defendant committing the murder) given an effect (his fingerprints being on the murder weapon). Bayes’s theorem lets us calculate this as a function of three further probabilities: the prior probability of the cause, the probability of the effect given this cause, and the probability of the effect without this cause.
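In symbols (the notation is mine, not the article’s), write G for the cause – the defendant committed the murder – and E for the effect – his fingerprints on the weapon. Bayes’s theorem then reads:

$$P(G \mid E) = \frac{P(E \mid G)\,P(G)}{P(E \mid G)\,P(G) + P(E \mid \neg G)\,P(\neg G)}$$

where P(G) is the prior probability of the cause, P(E | G) the probability of the effect given the cause, and P(E | ¬G) the probability of the effect without it.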

Since this is all relative to any background information we have, the first probability (of the cause) is informed by what we know about the defendant’s motives, means and opportunity. We can get a handle on the third probability (of the effect without the cause) by breaking down the possibility that the defendant is innocent into other possible causes of the victim’s death, and asking how probable each is, and how probable they make it that the defendant’s fingerprints would be on the gun. We will eventually reach probabilities that we cannot break down any further. At this point, we might search for general principles to guide our assignments of probabilities, or we might rely on intuitive judgments, as we do in the coin cases.
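As a toy illustration of how those three inputs combine, here is a minimal Python sketch. The numbers are invented for the sake of the example – they come from no real case and are not the article’s:

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes's theorem: probability of a hypothesis given one piece of evidence."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical inputs: a 10 per cent prior of guilt, fingerprints
# near-certain if guilty, and a 5 per cent chance of the prints
# being on the gun through some innocent route.
print(posterior(0.10, 0.95, 0.05))  # ~0.68
```

Even with fingerprints all but certain under guilt, a modest prior and a non-negligible innocent explanation leave the posterior well short of proof beyond reasonable doubt – which is why the further breaking-down of these probabilities matters.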

When we are reasoning about criminals rather than coins, this process is unlikely to lead to convergence on precise probabilities. But there’s no alternative. We can’t resolve disagreements about how much the information we possess supports a hypothesis just by gathering more information. Instead, we can make progress only by way of philosophical reflection on the space of possibilities, the information we have, and how strongly it supports some possibilities over others.

Nevin Climenhaga

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Why the Demoniac Stayed in his Comfortable Corner of Hell

the-drunkard

Detail from The Drunkard (1912) by Marc Chagall. Courtesy Wikipedia

John Kaag | Aeon Ideas

I am not what one might call a religious man. I went to church, and then to confirmation class, under duress. My mother, whom I secretly regarded as more powerful than God, insisted that I go. So I went. Her insistence, however, had the unintended consequence of introducing me to a pastor whom I came to despise. So I eventually quit.

There were many problems with this pastor but the one that bothered me the most was his refusal to explain a story from the New Testament that I found especially hard to believe: the story of the demoniac.

This story from Mark 5:1-20 relates how Jesus and the disciples go to the town of Gerasenes and there encounter a man who is possessed by evil spirits. This demoniac – a self-imposed outcast from society – lived at the outskirts of town and ‘night and day among the tombs and in the hills he would cry out and cut himself with stones’. The grossest part of the story, however, isn’t the self-mutilation. It’s the demoniac’s insane refusal to accept help. When Jesus approached him, the demoniac threw himself to the ground and wailed: ‘What do you want with me? … In God’s name, don’t torture me!’ When you’re possessed by evil spirits, the worst thing in the world is to be healed. In short, the demoniac tells Jesus to bugger off, to leave him and his sharp little stones in his comfortable corner of hell.

When I first read about the demoniac, I was admittedly scared, but I eventually convinced myself that the parable was a manipulative attempt to persuade unbelievers such as me to find religion. And I wasn’t buying it. But when I entered university, went into philosophy, and began to cultivate an agnosticism that one might call atheism, I discovered that many a philosopher had been drawn to this scary story. So I took a second look.

The Danish philosopher Søren Kierkegaard, who spent years analysing the psychological and ethical dimensions of the demoniac, tells us that being demonic is more common than we might like to admit. He points out that when Jesus heals the possessed man, the spirits are exorcised en masse, flying out together as ‘the Legion’ – a vast army of evil forces. There are more than enough little demons to go around, and this explains why they come to roost in some rather mundane places. In Kierkegaard’s words: ‘One may hear the drunkard say: “Let me be the filth that I am.”’ Or, leave me alone with my bottle and let me ruin my life, thank you very much. I heard this first from my father, and then from an increasing number of close friends, and most recently from a voice that occasionally keeps me up at night when everyone else is asleep.

Those who are the most pointedly afflicted are often precisely those who are least able to recognise their affliction, or to save themselves. And those with the resources to rescue themselves are usually already saved. As Kierkegaard suggests, the virtue of sobriety makes perfect sense to one who is already sober. Eating well is second nature to the one who is already healthy; saving money is a no-brainer for one who is already rich; truth-telling is the good habit of one who is already honest. But for those in the grips of crisis or sin, getting out usually doesn’t make much sense.

Sharp stones can take a variety of forms.

In The Concept of Anxiety (1844), Kierkegaard tells us that the ‘essential nature of [the demoniac] is anxiety about the good’. I’ve been ‘anxious’ about many things – about exams, about spiders, about going to sleep – but Kierkegaard explains that the feeling I have about these nasty things isn’t anxiety at all. It’s fear. Anxiety, on the other hand, has no particular object. It is the sense of uneasiness that one has at the edge of a cliff, or climbing a ladder, or thinking about the prospects of a completely open future – it isn’t fear per se, but the feeling that we get when faced with possibility. It’s the unsettling feeling of freedom. Yes, freedom, that most precious of modern watchwords, is deeply unsettling.

What does this have to do with our demoniac? Everything. Kierkegaard explains that the demoniac reflects ‘an unfreedom that wants to close itself off’; when confronted with the possibility of being healed, he wants nothing to do with it. The free life that Jesus offers is, for the demoniac, pure torture. I’ve often thought that this is the fate of the characters in Jean-Paul Sartre’s play No Exit (1944): they are always free to leave, but leaving seems beyond impossible.

Yet Jesus manages to save the demoniac. And I wanted my pastor to tell me how. At the time, I chalked up most of the miracles from the Bible as exaggeration, or interpretation, or poetic licence. But the healing of the demoniac – unlike the bread and fish and resurrection – seemed really quite fantastic. So how did Jesus do it? I didn’t get a particularly good answer from my pastor, so I left the Church. And never came back.

Today, I still want to know.

I’m not here to explain the salvation of the demoniac. I’m here only to observe, as carefully as I can, that this demonic situation is a problem. Indeed, I suspect it is the problem for many, many readers. The demoniac reflects what theologians call the ‘religious paradox’, namely that it is impossible for fallen human beings – such craven creatures – to bootstrap themselves to heaven. Any redemptive resources at our disposal are probably exactly as botched as we are.

There are many ways to distract ourselves from this paradox – and we are very good at manufacturing them: movies and alcohol and Facebook and all the fixations and obsessions of modern life. But at the end of the day, these are pitifully little comfort.

So this year, as New Year’s Day recedes from memory and the winter darkness remains, I am making a resolution: I will try not to take all the usual escapes. Instead, I will try to simply sit with the plight of the demoniac, to ‘stew in it’ as my mother used to say, for a minute or two more. In his essay ‘Self-will’ (1919), the German author Hermann Hesse put it thus: ‘If you and you … are in pain, if you are sick in body or soul, if you are afraid and have a foreboding of danger – why not, if only to amuse yourselves … try to put the question in another way? Why not ask whether the source of your pain might not be you yourselves?’ I will not reach for my familiar demonic stones, blood-spattered yet comforting. I will ask why I need them in the first place. When I do this, and attempt to come to terms with the demoniac’s underlying suffering, I might notice that it is not unique to me.

When I do, when I let go of the things that I think are going to ease my suffering, I might have the chance to notice that I am not alone in my anxiety. And maybe this is recompense enough. Maybe this is freedom and the best that I can hope for.

John Kaag

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.