To Boost your Self-esteem, Write about Chapters of your Life


New car, 1980s. Photo by Don Pugh/Flickr

Christian Jarrett | Aeon Ideas

In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity. Writing in the 1950s, the psychologist Erik Erikson put it this way:

To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and in prospect … to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it.

Alongside your chosen values and goals in life, and your personality traits – how sociable you are, how much of a worrier and so on – your life story as you tell it makes up the final part of what in 2015 the personality psychologist Dan P McAdams at Northwestern University in Illinois called the ‘personological trinity’.

Of course, some of us tell these stories more explicitly than others – one person’s narrative identity might be a barely formed story at the edge of their consciousness, whereas another person might literally write out their past and future in a diary or memoir.

Intriguingly, there’s some evidence that prompting people to reflect on and tell their life stories – a process called ‘life review therapy’ – could be psychologically beneficial. However, most of this work has been on older adults and people with pre-existing problems such as depression or chronic physical illnesses. It remains to be established through careful experimentation whether prompting otherwise healthy people to reflect on their lives will have any immediate benefits.

A relevant factor in this regard is the tone, complexity and mood of the stories that people tell themselves. For instance, it’s been shown that people who tell more positive stories, including referring to more instances of personal redemption, tend to enjoy higher self-esteem and greater ‘self-concept clarity’ (the confidence and lucidity in how you see yourself). Perhaps engaging in writing or talking about one’s past will have immediate benefits only for people whose stories are more positive.

In a recent paper in the Journal of Personality, Kristina L Steiner at Denison University in Ohio and her colleagues looked into these questions and reported that writing about chapters in your life does indeed lead to a modest, temporary self-esteem boost, and that in fact this benefit arises regardless of how positive your stories are. However, there were no effects on self-concept clarity, and many questions on this topic remain for future study.

Steiner’s team tested three groups of healthy American participants across three studies. The first two groups – involving more than 300 people between them – were young undergraduates, most of them female. The final group, a balanced mix of 101 men and women, was recruited from the community, and they were older, with an average age of 62.

The format was essentially the same for each study. The participants were asked to complete various questionnaires measuring their mood, self-esteem and self-concept clarity, among other things. Then half of them were allocated to write about four chapters in their lives, spending 10 minutes on each. They were instructed to be as specific and detailed as possible, and to reflect on main themes, how each chapter related to their lives as a whole, and to think about any causes and effects of the chapter on them and their lives. The other half of the participants, who acted as a control group, spent the same time writing about four famous Americans of their choosing (to make this task more intellectually comparable, they were also instructed to reflect on the links between the individuals they chose, how they became famous, and other similar questions). After the writing tasks, all the participants retook the same psychological measures they’d completed at the start.

The participants who wrote about chapters in their lives displayed small but statistically significant increases in their self-esteem, whereas the control-group participants did not. This self-esteem boost wasn’t explained by any changes to their mood, and – to the researchers’ surprise – it didn’t matter whether the participants rated their chapters as mostly positive or negative, nor did it depend on whether they featured themes of agency (that is, being in control) and communion (pertaining to meaningful relationships). Disappointingly, there was no effect of the life-chapter task on self-concept clarity, nor on meaning and identity.

How long do the self-esteem benefits of the life-chapter task last, and might they accumulate by repeating the exercise? Clues come from the second of the studies, which involved two life-chapter writing tasks (and two tasks writing about famous Americans for the control group), with the second task coming 48 hours after the first. The researchers wanted to see if the self-esteem boost arising from the first life-chapter task would still be apparent at the start of the second task two days later – but it wasn’t. They also wanted to see if the self-esteem benefits might accumulate over the two tasks – they didn’t (the second life-chapter task had its own self-esteem benefit, but it wasn’t cumulative with the benefits of the first).

It remains unclear exactly why the life-chapter task had the self-esteem benefits that it did. It’s possible that the task led participants to consider how they had changed in positive ways. They might also have benefited from expressing and confronting their emotional reactions to these periods of their lives – this would certainly be consistent with the well-documented benefits of expressive writing and ‘affect labelling’ (the calming effect of putting our emotions into words). Future research will need to compare different life-chapter writing instructions to tease apart these different potential beneficial mechanisms. It would also be helpful to test more diverse groups of participants and different ‘dosages’ of the writing task to see if it is at all possible for the benefits to accrue over time.

The researchers said: ‘Our findings suggest that the experience of systematically reviewing one’s life and identifying, describing and conceptually linking life chapters may serve to enhance the self, even in the absence of increased self-concept clarity and meaning.’ If you are currently lacking much confidence and feel like you could benefit from an ego boost, it could be worth giving the life-chapter task a go. It’s true that the self-esteem benefits of the exercise were small, but as Steiner’s team noted, ‘the costs are low’ too.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons.

Is Consciousness a Battle between your Beliefs and Perceptions?


Now you see it… Magician Harry Houdini moments before ‘disappearing’ Jennie the 10,000lb elephant at the Hippodrome, New York, in 1918. Photo courtesy Library of Congress

Hakwan Lau | Aeon Ideas

Imagine you’re at a magic show, in which the performer suddenly vanishes. Of course, you ultimately know that the person is probably just hiding somewhere. Yet it continues to look as if the person has disappeared. We can’t reason away that appearance, no matter what logic dictates. Why are our conscious experiences so stubborn?

The fact that our perception of the world appears to be so intransigent, however much we might reflect on it, tells us something unique about how our brains are wired. Compare the magician scenario with how we usually process information. Say you have five friends who tell you it’s raining outside, and one weather website indicating that it isn’t. You’d probably just consider the website to be wrong and write it off. But when it comes to conscious perception, there seems to be something strangely persistent about what we see, hear and feel. Even when a perceptual experience is clearly ‘wrong’, we can’t just mute it.

Why is that so? Recent advances in artificial intelligence (AI) shed new light on this puzzle. In computer science, we know that neural networks for pattern-recognition – so-called deep learning models – can benefit from a process known as predictive coding. Instead of just taking in information passively, from the bottom up, networks can make top-down hypotheses about the world, to be tested against observations. They generally work better this way. When a neural network identifies a cat, for example, it first develops a model that allows it to predict or imagine what a cat looks like. It can then examine incoming data to see whether or not it fits that expectation.
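The compare-and-revise loop at the heart of predictive coding can be caricatured in a few lines of code. This is a minimal sketch, not any particular model or library: the system holds a top-down hypothesis, measures the mismatch with the bottom-up observation, and nudges the hypothesis to shrink the prediction error. All the names here are illustrative.

```python
# Toy sketch of a predictive-coding loop: hold a top-down hypothesis,
# compare it with the incoming observation, and revise the hypothesis
# to reduce the prediction error. Names and numbers are illustrative.

observation = [1.0, 0.5, -0.2]   # bottom-up sensory input
hypothesis = [0.0, 0.0, 0.0]     # initial top-down guess
learning_rate = 0.1

def total_error(obs, hyp):
    return sum(abs(o - h) for o, h in zip(obs, hyp))

initial_error = total_error(observation, hypothesis)

for step in range(100):
    # mismatch between what the model expects and what actually arrives
    prediction_error = [o - h for o, h in zip(observation, hypothesis)]
    # revise the top-down model so it predicts the input better next time
    hypothesis = [h + learning_rate * e
                  for h, e in zip(hypothesis, prediction_error)]

final_error = total_error(observation, hypothesis)
print(final_error < initial_error)  # the hypothesis converges on the data
```

The point of the sketch is only the direction of information flow: the ‘perception’ is not read off the input directly, but is the hypothesis that survives repeated testing against it.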

The trouble is, while these generative models can be super efficient once they’re up and running, they usually demand huge amounts of time and information to train. One solution is to use generative adversarial networks (GANs) – hailed as the ‘coolest idea in deep learning in the last 20 years’ by Facebook’s head of AI research Yann LeCun. In GANs, we might train one network (the generator) to create pictures of cats, mimicking real cats as closely as it can. And we train another network (the discriminator) to distinguish between the manufactured cat images and the real ones. We can then pit the two networks against each other, such that the discriminator is rewarded for catching fakes, while the generator is rewarded for getting away with them. When they are set up to compete, the networks grow together in prowess, not unlike an arch art-forger trying to outwit an art expert. This makes learning very efficient for each of them.
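The adversarial structure of that game can also be sketched without any neural networks at all. The following is a deliberately tiny caricature, under stated assumptions: the ‘real data’ are numbers near a fixed value, the ‘discriminator’ just places a boundary between the fake and real samples it sees, and the ‘generator’ is rewarded for pushing its output past that boundary. None of this is a real GAN implementation; it only shows the two-player dynamic.

```python
import random

# Non-neural caricature of the GAN game: a generator tries to pass its
# samples off as real, while a discriminator keeps moving its decision
# boundary to separate real from fake. Names and numbers are illustrative.

random.seed(0)

REAL_MEAN = 4.0  # "real cats" live around this value

def real_sample():
    return random.gauss(REAL_MEAN, 0.1)

g = 0.0    # the generator's current output level (starts far from real)
lr = 0.05

for step in range(500):
    fake = g + random.gauss(0, 0.1)
    real = real_sample()
    # Discriminator: put its boundary midway between the fake and real
    # samples it has just seen.
    boundary = (fake + real) / 2
    # Generator: rewarded for getting past the boundary, so it moves its
    # output toward the side classified as real.
    g += lr * (boundary - g)

print(abs(g - REAL_MEAN) < 0.5)  # the generator ends up mimicking real data
```

As the discriminator’s boundary tightens, the generator’s output is driven ever closer to the real distribution – the forger and the expert improving one another, as the essay describes.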

As well as a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. You have to be able to smell something before you can go about deciding whether the perception is real or fake; so why not just build in a check to the detecting mechanism itself?

In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.

If this account is right, it’s fair to say that conscious experience is probably a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.

This perceptual stubbornness is not just a feature of humans. Some primates have it too, as shown by their capacity to be amazed and amused by magic tricks. That is, they seem to understand that there’s a tension between what they’re seeing and what they know to be true. Given what we understand about their brains – specifically, that their perceptual neurons are also ‘recyclable’ for top-down functioning – the GAN theory suggests that these nonhuman animals probably have conscious experiences not dissimilar to ours.

The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either. At least by fleshing out some conjectures about the machinery of consciousness, we can begin to test them against our intuitions – and, more importantly, in experiments. What we do know is that a model of the mind involving an inner mechanism of doubt – a nit-picking system that’s constantly on the lookout for fakes and forgeries in perception – is one of the most promising ideas we’ve come up with so far.

Hakwan Lau

This article was originally published at Aeon and has been republished under Creative Commons.

A Philosophical Approach to Routines can Illuminate Who We Really Are

Elias Anttila | Aeon Ideas

There are hundreds of things we do – repeatedly, routinely – every day. We wake up, check our phones, eat our meals, brush our teeth, do our jobs, satisfy our addictions. In recent years, such habitual actions have become an arena for self-improvement: bookshelves are saturated with bestsellers about ‘life hacks’, ‘life design’ and how to ‘gamify’ our long-term projects, promising everything from enhanced productivity to a healthier diet and huge fortunes. These guides vary in scientific accuracy, but they tend to depict habits as routines that follow a repeated sequence of behaviours, into which we can intervene to set ourselves on a more desirable track.

The problem is that this account has been bleached of much of its historical richness. Today’s self-help books have in fact inherited a highly contingent version of habit – specifically, one that arises in the work of early 20th-century psychologists such as B F Skinner, Clark Hull, John B Watson and Ivan Pavlov. These thinkers are associated with behaviourism, an approach to psychology that prioritises observable, stimulus-response reactions over the role of inner feelings or thoughts. The behaviourists defined habits in a narrow, individualistic sense; they believed that people were conditioned to respond automatically to certain cues, which produced repeated cycles of action and reward.

The behaviourist image of habit has since been updated in light of contemporary neuroscience. For example, the fact that the brain is plastic and changeable allows habits to inscribe themselves in our neural wiring over time by forming privileged connections between brain regions. The influence of behaviourism has enabled researchers to study habits quantitatively and rigorously. But it has also bequeathed a flattened notion of habit that overlooks the concept’s wider philosophical implications.

Philosophers used to look at habits as ways of contemplating who we are, what it means to have faith, and why our daily routines reveal something about the world at large. In his Nicomachean Ethics, Aristotle uses the terms hexis and ethos – both translated today as ‘habit’ – to study stable qualities in people and things, especially regarding their morals and intellect. Hexis denotes the lasting characteristics of a person or thing, like the smoothness of a table or the kindness of a friend, which can guide our actions and emotions. A hexis is a characteristic, capacity or disposition that one ‘owns’; its etymology is the Greek word ekhein, the term for ownership. For Aristotle, a person’s character is ultimately a sum of their hexeis (plural).

An ethos, on the other hand, is what allows one to develop hexeis. It is both a way of life and the basic calibre of one’s personality. Ethos is what gives rise to the essential principles that help to guide moral and intellectual development. Honing hexeis out of an ethos thus takes both time and practice. This version of habit fits with the tenor of ancient Greek philosophy, which often emphasised the cultivation of virtue as a path to the ethical life.

Millennia later, in medieval Christian Europe, Aristotle’s hexis was Latinised into habitus. The translation tracks a shift away from the virtue ethics of the Ancients towards Christian morality, by which habit acquired distinctly divine connotations. In the middle ages, Christian ethics moved away from the idea of merely shaping one’s moral dispositions, and proceeded instead from the belief that ethical character was handed down by God. In this way, the desired habitus should become entwined with the exercise of Christian virtue.

The great theologian Thomas Aquinas saw habit as a vital component of spiritual life. According to his Summa Theologica (1265-1274), habitus involved a rational choice, and led the true believer to a sense of faithful freedom. By contrast, Aquinas used consuetudo to refer to the habits we acquire that inhibit this freedom: the irreligious, quotidian routines that do not actively engage with faith. Consuetudo signifies mere association and regularity, whereas habitus conveys sincere thoughtfulness and consciousness of God. Consuetudo is also where we derive the terms ‘custom’ and ‘costume’ – a lineage which suggests that the medievals considered habit to extend beyond single individuals.

For the Enlightenment philosopher David Hume, these ancient and medieval interpretations of habit were far too limiting. Hume conceived of habit via what it empowers and enables us to do as human beings. He came to the conclusion that habit is the ‘cement of the universe’, which all ‘operations of the mind … depend on’. For instance, we might throw a ball in the air and watch it rise and descend to Earth. By habit, we come to associate these actions and perceptions – the movement of our limb, the trajectory of the ball – in a way that eventually lets us grasp the relationship between cause and effect. Causality, for Hume, is little more than habitual association. Likewise language, music, relationships – any skill we use to transform experience into something useful is built from habits, he believed. Habits are thus crucial instruments that enable us to navigate the world and to understand the principles by which it operates. For Hume, habit is nothing less than the ‘great guide of human life’.

It’s clear that we ought to see habits as more than mere routines, tendencies and tics. They encompass our identities and ethics; they teach us how to practise our faiths; if Hume is to be believed, they do no less than bind the world together. Seeing habits in this new-yet-old way requires a certain conceptual and historical about-face, but this U-turn offers much more than shallow self-help. It should show us that the things we do every day aren’t just routines to be hacked, but windows through which we might glimpse who we truly are.

Elias Anttila

This article was originally published at Aeon and has been republished under Creative Commons.

Ibn Tufayl and the Story of the Feral Child of Philosophy


Album folio fragment with scholar in a garden. Attributed to Muhammad Ali 1610-15. Courtesy Museum of Fine Arts, Boston

Marwa Elshakry & Murad Idris | Aeon Ideas

Ibn Tufayl, a 12th-century Andalusian, fashioned the feral child in philosophy. His story Hayy ibn Yaqzan is the tale of a child raised by a doe on an unnamed Indian Ocean island. Hayy ibn Yaqzan (literally ‘Living Son of Awakeness’) reaches a state of perfect, ecstatic understanding of the world. A meditation on the possibilities (and pitfalls) of the quest for the good life, Hayy offers not one, but two ‘utopias’: a eutopia (εὖ ‘good’, τόπος ‘place’) of the mind in perfect isolation, and an ethical community under the rule of law. Each has a version of human happiness. Ibn Tufayl pits them against each other, but each unfolds ‘no where’ (οὐ ‘not’, τόπος ‘place’) in the world.

Ibn Tufayl begins with a vision of humanity isolated from society and politics. (Modern European political theorists who employed this literary device called it ‘the state of nature’.) He introduces Hayy by speculating about his origin. Whether Hayy was placed in a basket by his mother to sail through the waters of life (like Moses) or born by spontaneous generation on the island is irrelevant, Ibn Tufayl says. His divine station remains the same, as does much of his life, spent in the company only of animals. Later philosophers held that society elevates humanity from its natural animal state to an advanced, civilised one. Ibn Tufayl took a different view. He maintained that humans can be perfected only outside society, through a progress of the soul, not the species.

In contrast to Thomas Hobbes’s view that ‘man is a wolf to man’, Hayy’s island has no wolves. It proves easy enough for him to fend off other creatures by waving sticks at them or donning terrifying costumes of hides and feathers. For Hobbes, the fear of violent death is the origin of the social contract and the apologia for the state; but Hayy’s first encounter with fear of death is when his doe-mother dies. Desperate to revive her, Hayy dissects her heart only to find one of its chambers is empty. The coroner-turned-theologian concludes that what he loved in his mother no longer resides in her body. Death therefore was the first lesson of metaphysics, not politics.

Hayy then observes the island’s plants and animals. He meditates upon the idea of an elemental, ‘vital spirit’ upon discovering fire. Pondering the plurality of matter leads him to conclude that it must originate from a singular, non-corporeal source or First Cause. He notes the perfect motion of the celestial spheres and begins a series of ascetic exercises (such as spinning until dizzy) to emulate this hidden, universal order. By the age of 50, he retreats from the physical world, meditating in his cave until, finally, he attains a state of ecstatic illumination. Reason, for Ibn Tufayl, is thus no absolute guide to Truth.

The difference between Hayy’s ecstatic journeys of the mind and later rationalist political thought is the role of reason. Yet many later modern European commentaries or translations of Hayy confuse this by framing the allegory in terms of reason. In 1671, Edward Pococke entitled his Latin translation The Self-Taught Philosopher: In Which It Is Demonstrated How Human Reason Can Ascend from Contemplation of the Inferior to Knowledge of the Superior. In 1708, Simon Ockley’s English translation was The Improvement of Human Reason, and it too emphasised reason’s capacity to attain ‘knowledge of God’. For Ibn Tufayl, however, true knowledge of God and the world – as a eutopia for the ‘mind’ (or soul) – could come only through perfect contemplative intuition, not absolute rational thought.

This is Ibn Tufayl’s first utopia: an uninhabited island where a feral philosopher retreats to a cave to reach ecstasy through contemplation and withdrawal from the world. Friedrich Nietzsche’s Zarathustra would be impressed: ‘Flee, my friend, into your solitude!’

The rest of the allegory introduces the problem of communal life and a second utopia. After Hayy achieves his perfect condition, an ascetic is shipwrecked on his island. Hayy is surprised to discover another being who so resembles him. Curiosity leads him to befriend the wanderer, Absal. Absal teaches Hayy language, and describes the mores of his own island’s law-abiding people. The two men determine that the islanders’ religion is a lesser version of the Truth that Hayy discovered, shrouded in symbols and parables. Hayy is driven by compassion to teach them the Truth. They travel to Absal’s home.

The encounter is disastrous. Absal’s islanders feel compelled by their ethical principles of hospitality towards foreigners, friendship with Absal, and association with all people to welcome Hayy. But soon Hayy’s constant attempts to preach irritate them. Hayy realises that they are incapable of understanding. They are driven by satisfactions of the body, not the mind. There can be no perfect society because not everyone can achieve a state of perfection in their soul. Illumination is possible only for the select, in accordance with a sacred order, or a hieros archein. (This hierarchy of being and knowing is a fundamental message of neo-Platonism.) Hayy concludes that persuading people away from their ‘natural’ stations would only corrupt them further. The laws that the ‘masses’ venerate, be they revealed or reasoned, he decides, are their only chance to achieve a good life.

The islanders’ ideals – lawfulness, hospitality, friendship, association – might seem reasonable, but these too exist ‘no where’ in the world. Hence their dilemma: either they adhere to these and endure Hayy’s criticisms, or violate them by shunning him. This is a radical critique of the law and its ethical principles: they are normatively necessary for social life yet inherently contradictory and impossible. It’s a sly reproach of political life, one whose bite endures. Like the islanders, we follow principles that can undermine themselves. To be hospitable, we must be open to the stranger who violates hospitality. To be democratic, we must include those who are antidemocratic. To be worldly, our encounters with other people must be opportunities to learn from them, not just about them.

In the end, Hayy returns to his island with Absal, where they enjoy a life of ecstatic contemplation unto death. They abandon the search for a perfect society of laws. Their eutopia is the quest of the mind left unto itself, beyond the imperfections of language, law and ethics – perhaps beyond even life itself.

The islanders offer a less obvious lesson: our ideals and principles undermine themselves, but this is itself necessary for political life. For an island of pure ethics and law is an impossible utopia. Perhaps, like Ibn Tufayl, all we can say on the search for happiness is (quoting Al-Ghazali):

It was – what it was is harder to say.
Think the best, but don’t make me describe it away.

After all, we don’t know what happened to Hayy and Absal after their deaths – or to the islanders after they left.

Marwa Elshakry & Murad Idris

This article was originally published at Aeon and has been republished under Creative Commons.

Descartes was Wrong: ‘A Person is a Person through Other Persons’


Detail from Young Moe (1938) by Paul Klee. Courtesy Phillips collection/Wikipedia

Abeba Birhane | Aeon Ideas

According to Ubuntu philosophy, which has its origins in ancient Africa, a newborn baby is not a person. People are born without ‘ena’, or selfhood, and instead must acquire it through interactions and experiences over time. So the ‘self’/‘other’ distinction that’s axiomatic in Western philosophy is much blurrier in Ubuntu thought. As the Kenyan-born philosopher John Mbiti put it in African Religions and Philosophy (1975): ‘I am because we are, and since we are, therefore I am.’

We know from everyday experience that a person is partly forged in the crucible of community. Relationships inform self-understanding. Who I am depends on many ‘others’: my family, my friends, my culture, my work colleagues. The self I take grocery shopping, say, differs in her actions and behaviours from the self that talks to my PhD supervisor. Even my most private and personal reflections are entangled with the perspectives and voices of different people, be it those who agree with me, those who criticise, or those who praise me.

Yet the notion of a fluctuating and ambiguous self can be disconcerting. We can chalk up this discomfort, in large part, to René Descartes. The 17th-century French philosopher believed that a human being was essentially self-contained and self-sufficient; an inherently rational, mind-bound subject, who ought to encounter the world outside her head with scepticism. While Descartes didn’t single-handedly create the modern mind, he went a long way towards defining its contours.

Descartes had set himself a very particular puzzle to solve. He wanted to find a stable point of view from which to look on the world without relying on God-decreed wisdoms; a place from which he could discern the permanent structures beneath the changeable phenomena of nature. But Descartes believed that there was a trade-off between certainty and a kind of social, worldly richness. The only thing you can be certain of is your own cogito – the fact that you are thinking. Other people and other things are inherently fickle and erratic. So they must have nothing to do with the basic constitution of the knowing self, which is a necessarily detached, coherent and contemplative whole.

Few respected philosophers and psychologists would identify as strict Cartesian dualists, in the sense of believing that mind and matter are completely separate. But the Cartesian cogito is still everywhere you look. The experimental design of memory testing, for example, tends to proceed from the assumption that it’s possible to draw a sharp distinction between the self and the world. If memory simply lives inside the skull, then it’s perfectly acceptable to remove a person from her everyday environment and relationships, and to test her recall using flashcards or screens in the artificial confines of a lab. A person is considered a standalone entity, irrespective of her surroundings, inscribed in the brain as a series of cognitive processes. Memory must be simply something you have, not something you do within a certain context.

Social psychology purports to examine the relationship between cognition and society. But even then, the investigation often presumes that a collective of Cartesian subjects is the real focus of the enquiry, not selves that co-evolve with others over time. In the 1960s, the American psychologists John Darley and Bibb Latané became interested in the murder of Kitty Genovese, a young white woman who had been stabbed and assaulted on her way home one night in New York. Multiple people had witnessed the crime but none stepped in to prevent it. Darley and Latané designed a series of experiments in which they simulated a crisis, such as an epileptic fit, or smoke billowing in from the next room, to observe what people did. They were the first to identify the so-called ‘bystander effect’, in which people seem to respond more slowly to someone in distress if others are around.

Darley and Latané suggested that this might come from a ‘diffusion of responsibility’, in which the obligation to react is diluted across a bigger group of people. But as the American psychologist Frances Cherry argued in The Stubborn Particulars of Social Psychology: Essays on the Research Process (1995), this numerical approach wipes away vital contextual information that might help to understand people’s real motives. Genovese’s murder had to be seen against a backdrop in which violence against women was not taken seriously, Cherry said, and in which people were reluctant to step into what might have been a domestic dispute. Moreover, the murder of a poor black woman would have attracted far less subsequent media interest. But Darley and Latané’s focus makes structural factors much harder to see.

Is there a way of reconciling these two accounts of the self – the relational, world-embracing version, and the autonomous, inward one? The 20th-century Russian philosopher Mikhail Bakhtin believed that the answer lay in dialogue. We need others in order to evaluate our own existence and construct a coherent self-image. Think of that luminous moment when a poet captures something you’d felt but had never articulated; or when you’d struggled to summarise your thoughts, but they crystallised in conversation with a friend. Bakhtin believed that it was only through an encounter with another person that you could come to appreciate your own unique perspective and see yourself as a whole entity. By ‘looking through the screen of the other’s soul,’ he wrote, ‘I vivify my exterior.’ Selfhood and knowledge are evolving and dynamic; the self is never finished – it is an open book.

So reality is not simply out there, waiting to be uncovered. ‘Truth is not born nor is it to be found inside the head of an individual person, it is born between people collectively searching for truth, in the process of their dialogic interaction,’ Bakhtin wrote in Problems of Dostoevsky’s Poetics (1929). Nothing simply is itself, outside the matrix of relationships in which it appears. Instead, being is an act or event that must happen in the space between the self and the world.

Accepting that others are vital to our self-perception is a corrective to the limitations of the Cartesian view. Consider two different models of child psychology. Jean Piaget’s theory of cognitive development conceives of individual growth in a Cartesian fashion, as the reorganisation of mental processes. The developing child is depicted as a lone learner – an inventive scientist, struggling independently to make sense of the world. By contrast, ‘dialogical’ theories, brought to life in experiments such as Lisa Freund’s ‘doll house study’ from 1990, emphasise interactions between the child and the adult who can provide ‘scaffolding’ for how she understands the world.

A grimmer example might be solitary confinement in prisons. The punishment was originally designed to encourage introspection: to turn the prisoner’s thoughts inward, to prompt her to reflect on her crimes, and to eventually help her return to society as a morally cleansed citizen. A perfect policy for the reform of Cartesian individuals. But, in fact, studies of such prisoners suggest that their sense of self dissolves if they are punished this way for long enough. Prisoners tend to suffer profound physical and psychological difficulties, such as confusion, anxiety, insomnia, feelings of inadequacy, and a distorted sense of time. Deprived of contact and interaction – the external perspective needed to consummate and sustain a coherent self-image – a person risks disappearing into non-existence.

The emerging fields of embodied and enactive cognition have started to take dialogic models of the self more seriously. But for the most part, scientific psychology is only too willing to adopt individualistic Cartesian assumptions that cut away the webbing that ties the self to others. There is a Zulu phrase, ‘Umuntu ngumuntu ngabantu’, which means ‘A person is a person through other persons.’ This is a richer and better account, I think, than ‘I think, therefore I am.’

Abeba Birhane

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Do you have a Self-Actualised Personality? Maslow Revisited


View of the second Pyramid from the top of the Great Pyramid. Photo courtesy of the Library of Congress

Christian Jarrett | Aeon Ideas

Abraham Maslow was the 20th-century American psychologist best-known for explaining motivation through his hierarchy of needs, which he represented in a pyramid. At the base, our physiological needs include food, water, warmth and rest. Moving up the pyramid, Maslow places safety, then love and belonging, then esteem and accomplishment. But after all those have been satisfied, the motivating factor at the top of the pyramid involves striving to achieve our full potential and satisfy creative goals. As one of the founders of humanistic psychology, Maslow proposed that the path to self-transcendence and, ultimately, greater compassion for all of humanity requires the ‘self-actualisation’ at the top of his pyramid – fulfilling your true potential, and becoming your authentic self.

Now Scott Barry Kaufman, a psychologist at Barnard College, Columbia University, believes it is time to revive the concept, and link it with contemporary psychological theory. ‘We live in times of increasing divides, selfish concerns, and individualistic pursuits of power,’ Kaufman wrote recently in a blog post in Scientific American introducing his new research. He hopes that rediscovering the principles of self-actualisation might be just the tonic that the modern world is crying out for. To this end, he’s used modern statistical methods to create a test of self-actualisation or, more specifically, of the 10 characteristics exhibited by self-actualised people, and it was recently published in the Journal of Humanistic Psychology.

Kaufman first surveyed online participants using 17 characteristics Maslow believed were shared by self-actualised people. Kaufman found that seven of these were redundant or irrelevant and did not correlate with others, leaving 10 key characteristics of self-actualisation.

Next, he reworded some of Maslow’s original language and labelling to compile a modern 30-item questionnaire featuring three items tapping each of these 10 remaining characteristics: continued freshness of appreciation; acceptance; authenticity; equanimity; purpose; efficient perception of reality; humanitarianism; peak experiences; good moral intuition; and creative spirit (see the full questionnaire below, and take the test on Kaufman’s website).
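A scale structured this way – 30 items, with three items tapping each of 10 characteristics – lends itself to a simple scoring scheme. The sketch below is purely illustrative: the item order, grouping and 1–5 Likert response range are assumptions for the example, not Kaufman’s published scoring key.

```python
# Hypothetical scoring sketch for a 30-item scale with three items
# per characteristic. Item order and the 1-5 response range are
# assumptions, not the published scoring key.

CHARACTERISTICS = [
    "continued freshness of appreciation", "acceptance", "authenticity",
    "equanimity", "purpose", "efficient perception of reality",
    "humanitarianism", "peak experiences", "good moral intuition",
    "creative spirit",
]

def score(responses):
    """responses: list of 30 Likert ratings (1-5), grouped so that
    items 3k..3k+2 tap characteristic k. Returns per-characteristic
    means and the overall mean."""
    assert len(responses) == 30, "expected exactly 30 item ratings"
    subscales = {}
    for k, name in enumerate(CHARACTERISTICS):
        items = responses[3 * k : 3 * k + 3]
        subscales[name] = sum(items) / 3
    total = sum(responses) / 30
    return subscales, total
```

Averaging within each triplet gives the 10 subtrait scores whose intercorrelations Kaufman examined; averaging all 30 items gives an overall self-actualisation score.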

So what did Kaufman report? In a survey of more than 500 people on Amazon’s Mechanical Turk website, Kaufman found that scores on each of these 10 characteristics tended to correlate, but also that they each made a unique contribution to a unifying factor of self-actualisation – suggesting that this is a valid concept comprising 10 subtraits.

Participants’ total scores on the test also correlated with their scores on the ‘Big Five’ personality traits (that is, with higher extraversion, agreeableness, emotional stability, openness and conscientiousness) and with the metatrait of ‘stability’, indicative of an ability to resist impulses in the pursuit of one’s goals. That the new test corresponded in this way with established personality measures provides further evidence of its validity.

Next, Kaufman turned to modern theories of wellbeing, such as self-determination theory, to see if people’s scores on his self-actualisation scale correlated with these contemporary measures. Sure enough, he found that people with more characteristics of self-actualisation also tended to score higher on curiosity, life-satisfaction, self-acceptance, personal growth and autonomy, among other factors – just as Maslow would have predicted.

‘Taken together, this total pattern of data supports Maslow’s contention that self-actualised individuals are more motivated by growth and exploration than by fulfilling deficiencies in basic needs,’ Kaufman writes. He adds that the new empirical support for Maslow’s ideas is ‘quite remarkable’ given that Maslow put them together with ‘a paucity of actual evidence’.

A criticism often levelled at Maslow’s notion of self-actualisation is that its pursuit encourages an egocentric focus on one’s own goals and needs. However, Maslow always contended that it is only through becoming our true, authentic selves that we can transcend the self and look outward with compassion to the rest of humanity. Kaufman explored this too, and found that higher scorers on his self-actualisation scale tended also to score higher on feelings of oneness with the world, but not on decreased self-salience (a reduced sense of independence and of bias toward information relevant to oneself). These are the two main factors in a modern measure of self-transcendence developed by the psychologist David Yaden at the University of Pennsylvania.

Kaufman said that this last finding supports ‘Maslow’s contention that self-actualising individuals are able to paradoxically merge with a common humanity while at the same time able to maintain a strong identity and sense of self’.

Where the new data contradicts Maslow is on the demographic factors that correlate with characteristics of self-actualisation – he thought that self-actualisation was rare and almost impossible for young people. Kaufman, by contrast, found scores on his new scale to be normally distributed through his sample (that is, spread evenly like height or weight) and unrelated to factors such as age, gender and educational attainment (although, in personal correspondence, Kaufman informs me that newer data – more than 3,000 people have since taken the new test – is showing a small, but statistically significant association between older age and having more characteristics of self-actualisation).

In conclusion, Kaufman writes that: ‘[H]opefully the current study … brings Maslow’s motivational framework and the central personality characteristics described by the founding humanistic psychologists, into the 21st century.’

The new test is sure to reinvigorate Maslow’s ideas, but if this is to help heal our divided world, then the characteristics required for self-actualisation, rather than being a permanent feature of our personalities, must be something we can develop deliberately. I put this point to Kaufman and he is optimistic. ‘I think there is significant room to develop these characteristics [by changing your habits],’ he told me. ‘A good way to start with that,’ he added, ‘is by first identifying where you stand on those characteristics and assessing your weakest links. Capitalise on your highest characteristics but also don’t forget to intentionally be mindful about what might be blocking your self-actualisation … Identify your patterns and make a concerted effort to change. I do think it’s possible with conscientiousness and willpower.’

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Having a sense of Meaning in life is Good for you — So how do you get one?


There’s a high degree of overlap between experiencing happiness and meaning.

Lisa A Williams, UNSW

The pursuit of happiness and health is a popular endeavour, as the preponderance of self-help books would attest.

Yet it is also fraught. Despite ample advice from experts, individuals regularly engage in activities that may only have short-term benefit for well-being, or even backfire.

The search for the heart of well-being – that is, a nucleus from which other aspects of well-being and health might flow – has been the focus of decades of research. New findings recently reported in Proceedings of the National Academy of Sciences point towards an answer commonly overlooked: meaning in life.

Meaning in life: part of the well-being puzzle?

University College London’s psychology professor Andrew Steptoe and senior research associate Daisy Fancourt analysed a sample of 7,304 UK residents aged 50+ drawn from the English Longitudinal Study of Ageing.

Survey respondents answered a range of questions assessing social, economic, health, and physical activity characteristics, including:

…to what extent do you feel the things you do in your life are worthwhile?

Follow-up surveys two and four years later assessed those same characteristics again.

One key question addressed in this research is: what advantage might having a strong sense of meaning in life afford a few years down the road?

The data revealed that individuals reporting a higher meaning in life had:

  • lower risk of divorce
  • lower risk of living alone
  • increased connections with friends and engagement in social and cultural activities
  • lower incidence of new chronic disease and onset of depression
  • lower obesity and increased physical activity
  • increased adoption of positive health behaviours (exercising, eating fruit and veg).

On the whole, individuals with a higher sense of meaning in life a few years earlier were later living lives characterised by health and well-being.

You might wonder if these findings are attributable to other factors, or to factors already in play by the time participants joined the study. The authors undertook stringent analyses to account for this, which revealed largely similar patterns of findings.

The findings join a body of prior research documenting longitudinal relationships between meaning in life and social functioning, net wealth and reduced mortality, especially among older adults.

What is meaning in life?

The historical arc of consideration of the meaning in life (not to be confused with the meaning of life) starts as far back as Ancient Greece. It tracks through the popular works of people such as the Austrian neurologist and psychiatrist Viktor Frankl, and continues today in the field of psychology.

One definition, offered by well-being researcher Laura King and colleagues, says

…lives may be experienced as meaningful when they are felt to have a significance beyond the trivial or momentary, to have purpose, or to have a coherence that transcends chaos.

This definition is useful because it highlights three central components of meaning:

  1. purpose: having goals and direction in life
  2. significance: the degree to which a person believes his or her life has value, worth, and importance
  3. coherence: the sense that one’s life is characterised by predictability and routine.

Michael Steger’s TEDx talk What Makes Life Meaningful.

Curious about your own sense of meaning in life? You can take an interactive version of the Meaning in Life Questionnaire, developed by Steger and colleagues, yourself here.

This measure captures not just the presence of meaning in life (whether a person feels that their life has purpose, significance, and coherence), but also the desire to search for meaning in life.

Routes for cultivating meaning in life

Given the documented benefits, you may wonder: how might one go about cultivating a sense of meaning in life?

We know a few things about participants in Steptoe and Fancourt’s study who reported relatively higher meaning in life during the first survey. For instance, they contacted their friends frequently, belonged to social groups, engaged in volunteering, and maintained a suite of healthy habits relating to sleep, diet and exercise.

Backing up the idea that seeking out these qualities might be a good place to start in the quest for meaning, several studies have causally linked these indicators to meaning in life.

For instance, spending money on others and volunteering, eating fruit and vegetables, and being in a well-connected social network have all been prospectively linked to acquiring a sense of meaning in life.

For a temporary boost, some activities have documented benefits for meaning in the short term: envisioning a happier future, writing a note of gratitude to another person, engaging in nostalgic reverie, and bringing to mind one’s close relationships.

Happiness and meaning: is it one or the other?

There’s a high degree of overlap between experiencing happiness and meaning – most people who report one also report the other. Days when people report feeling happy are often also days that people report meaning.

Yet there’s a tricky relationship between the two. Moment-to-moment, happiness and meaning are often decoupled.

Research by social psychologist Roy Baumeister and colleagues suggests that satisfying basic needs promotes happiness, but not meaning. In contrast, linking a sense of self across one’s past, present, and future promotes meaning, but not happiness.

Connecting socially with others is important for both happiness and meaning, but doing so in a way that promotes meaning (such as via parenting) can happen at the cost of personal happiness, at least temporarily.

Given the now-documented long-term social, mental, and physical benefits of having a sense of meaning in life, the recommendation here is clear. Rather than pursuing happiness as an end-state, ensuring one’s activities provide a sense of meaning might be a better route to living well and flourishing throughout life.

Lisa A Williams, Senior Lecturer, School of Psychology, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Does Microdosing Improve your Mood and Performance? Here’s what the Research Says

Microdosers take such small quantities of psychedelic substances that there are no noticeable effects.
By AppleZoomZoom

Vince Polito, Macquarie University

Microdosing means regularly taking very small doses of psychedelic substances such as LSD or psilocybin (magic mushrooms) over a period of weeks or months. The practice has made countless headlines over the past couple of years, with claims it can improve health, strengthen relationships, and increase productivity.

These claims are surprising because microdosers take doses so small there are no noticeable effects. These can be just 1/20th of a typical recreational dose, often every three or four days. With such small amounts, microdosers go about their daily business, including going to work, without experiencing any typical drug effects.

Previous research suggests microdosing may lead to better mood and energy levels, improved creativity, increased wisdom, and changes to how we perceive time.

Read more:
LSD ‘microdosing’ is trending in Silicon Valley – but can it actually make you more creative?

But these previous studies have mainly involved asking people to complete ratings or behavioural tasks as one-off measures.

Our study, published today in PLOS One, tracked the experience of 98 users over a longer period – six weeks – to systematically measure any psychological changes.

Overall, the participants reported both positive and negative effects from microdosing, including improved attention and mental health; but also more neuroticism.

What we did

As you would expect, there are many legal and bureaucratic barriers to psychedelic research. It wasn’t possible for us to run a study where we actually provided participants with psychedelic substances. Instead, we tried to come up with the most rigorous design possible in the current restrictive legal climate.

Our solution was to recruit people who were already experimenting with microdosing and to track their experiences carefully over time, using well validated and reliable psychometric measures.

Microdosers go about their lives without any typical drug effects.
Parker Byrd

Each day we asked participants to complete some brief ratings, telling us whether they had microdosed that day and describing their overall experience. This let us track the immediate effects of microdosing.

At the beginning and end of the study participants completed a detailed battery of psychological measures. This let us track the longer-term effects of microdosing.

In a separate sample, we explored the beliefs and expectations of people who are interested in microdosing. This let us track whether any changes in our main sample were aligned with what people generally predict will happen when microdosing.
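The two-level design described above – brief daily ratings plus a pre/post battery – suggests a straightforward way to summarise the diary data: compare ratings on dosing days, the day immediately after a dose, and all other days. The sketch below is illustrative only; the field names and diary data are invented, not the study’s actual dataset or analysis.

```python
# Illustrative summary of microdosing-style daily diary data:
# mean rating on dose days vs the day after a dose vs baseline days.
# Field names and example data are invented for this sketch.

from statistics import mean

def summarise(days):
    """days: chronological list of dicts with keys 'dosed' (bool)
    and 'focus' (a 0-10 daily rating). Returns mean focus for dose
    days, days immediately following a dose, and all other days."""
    dose, after, other = [], [], []
    for i, d in enumerate(days):
        if d["dosed"]:
            dose.append(d["focus"])
        elif i > 0 and days[i - 1]["dosed"]:
            after.append(d["focus"])
        else:
            other.append(d["focus"])
    return {
        "dose days": mean(dose) if dose else None,
        "day after": mean(after) if after else None,
        "other days": mean(other) if other else None,
    }

# A six-day toy diary following the 'dose every three days' pattern:
diary = [
    {"dosed": True, "focus": 7}, {"dosed": False, "focus": 5},
    {"dosed": False, "focus": 4}, {"dosed": True, "focus": 8},
    {"dosed": False, "focus": 5}, {"dosed": False, "focus": 4},
]
```

A residual effect, as microdosing folklore predicts, would show up as elevated “day after” means; the study’s finding of a boost mostly confined to dosing days corresponds to “dose days” standing clearly above the other two groups.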

What we found

There are five key findings from our study.

1. A general positive boost on microdosing days, but limited residual effects of each dose.

Many online accounts of microdosing suggest people microdose every three or four days. The thinking is that each microdose supposedly has a residual effect that lasts for a few days.

The daily ratings from participants in our study do not support this idea. Participants reported an immediate boost in all measures (connectedness, contemplation, creativity, focus, happiness, productiveness and wellness) on dosing days. But this was mostly not maintained on the following days.

However, there was some indication of a slight rebound in feelings of focus and productivity two days after dosing.

Microdosers experienced increased focus.

2. Some indications of improvements in mental health

We also looked at cumulative effects of longer term microdosing. We found that after six weeks, participants reported lower levels of depression and stress.

We recruited people who were not experiencing any kind of mental illness for the study, so levels of depression and stress were relatively low to begin with. Nevertheless, ratings on these measures did drop.

This is an intriguing finding but it’s not clear from this result whether microdosing would have any effect on more significant levels of mood disturbance.

3. Shifts in attention

The microdosers in our study reported reduced mind wandering, meaning they were less likely to be distracted by unwanted thoughts.

They also reported an increase in absorption, meaning they were more likely to experience intense focused attention on imaginative experiences. Absorption has been linked to strong engagement with art and nature.

4. Increases in neuroticism and some challenging experiences

Not everyone had a good time microdosing. Some participants reported unpleasant and difficult experiences. In some cases, participants tried microdosing just once or twice, then didn’t want to continue.

Overall, participants reported a small increase in neuroticism after six weeks of microdosing, indicating an increase in the frequency of unpleasant emotions.

5. Changes do not entirely match people’s expectations

People have strong expectations about the effects of microdosing. But when we looked at the specific variables participants most expected would change, these didn’t match up with the changes actually reported by our microdosers.

Two of the biggest changes microdosers expected were increases in creativity and life satisfaction, but we found no evidence of shifts in these areas. This suggests the changes we found were not simply due to people’s expectations.

What does it all mean?

This complex set of findings is not what’s typically reported in media stories and online discussions of microdosing. There are promising indications of possible benefits of microdosing here, but also indications of some potential negative impacts, which should be taken seriously.

Read more:
Opening up the future of psychedelic science

It’s important to remember this was an observational study that relied heavily on the accuracy and honesty of participants in their reports. As such, these results need to be treated cautiously.

It’s early days for microdosing research and this work shows that we need to look more carefully at the effects of low dose psychedelics on mental health, attention, and neuroticism.

Vince Polito, Postdoctoral Research Fellow in Cognitive Science, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Seeing Snakes in the Grass Helped Primates to Evolve


Phrynonax poecilonotus, Wikipedia

Lynne A Isbell | Aeon Ideas

Evolution has favoured the modification and expansion of primate vision. Compared with other mammals, primates have, for example, greater depth perception from having forward-facing eyes with extensively overlapping visual fields, sharper visual acuity, more areas in the brain that are involved with vision, and, in some primates, trichromatic colour vision, which enables them to distinguish red from green hues. In fact, what separates primates from other mammals most is their much greater reliance on vision as the main sensory interface with the environment.

Vision is a window onto the world, its qualities determined by natural selection and the constraints of both animals’ bodies and the environments in which they live. Despite their long, shared evolutionary history, mammals don’t all see the world in the same way because they inhabit a variety of niches with different selective pressures. What were those selective pressures for primates, our lineage, that led to their having visual systems more expansive and more complex than those of other mammals?

In 2006, I published a new idea that could answer that question and more: the ‘snake detection theory’. I hypothesised that when large-gaped constricting snakes appeared about 100 million years ago and began eating mammals, their predatory behaviour favoured the evolution of changes in the vision of one kind of prey, the lineage that was to become primates. In other words, the ability to see immobile predatory snakes before getting too close became a highly beneficial trait for them to have and pass on to their descendants. Then, about 60 million years ago, venomous snakes appeared in Africa or Asia, adding more pressure on primates to detect and avoid them. This has also had repercussions on their visual systems.

There is a consistency between the degree of complexity in primate visual systems and the length of evolutionary time that primates have spent with venomous snakes. At one extreme, the lineage that comprises Old World monkeys, apes and humans has the best vision of all primates, including excellent visual acuity and fully trichromatic colour vision. Having evolved roughly at the same time and in the same place as venomous snakes, these primates have had continuous coexistence with them. They are also uniformly wary of snakes.

At the opposite end of the spectrum, Malagasy primates have the simplest visual systems. Among other things, they have low visual acuity because the fovea, a depression in the retina that is responsible for our visual acuity wherever we focus our eyes, is poorly developed (when it’s present at all). Although Madagascar has constricting snakes, it has no venomous snakes, so primates on that island never had to face that particular selective pressure. Behavioural evidence also reveals that they don’t all react fearfully toward snakes. Some can even walk on snakes or snake models, treating them as if they’re just another branch.

The visual systems of New World monkeys are in the middle. They have better visual acuity than Malagasy primates but more variability in their visual systems than Old World monkeys. For example, New World howler monkeys are all trichromatic, but in other New World primate species, only some individuals are able to distinguish red from green hues. New World primates were originally part of the anthropoid primate lineage in Africa that also includes Old World monkeys and apes, and so had to deal with venomous snakes for about 20-25 million years, but then, some 36 million years ago, they left Africa and arrived in South America where venomous snakes were not present until roughly 15 million years later. By then, New World monkeys had begun to diversify into different genera, and so each genus evolved separate solutions to the renewed problem caused by the arrival again of venomous snakes. As far as I know, no other explanation for the variation in their visual systems exists.

Since I proposed the snake detection theory, several studies have shown that nonhuman and human primates, including young children and snake-naive infants, have a visual bias toward snakes compared with other animate objects, such as lizards, spiders, worms, birds and flowers. Psychologists have discovered that we pick out images of snakes faster or more accurately than other objects, especially under cluttered or obscuring conditions that resemble the sorts of environments in which snakes are typically found. Snakes also distract us from finding other objects as quickly. Our ability to detect snakes faster is also more pronounced when we have less time to detect them and when they are in our periphery. Moreover, our ‘primary visual area’ in the back of the brain shows stronger electrophysiological responses to images of snakes than of lizards 150-300 milliseconds after people see the images, providing a measurable physical correlate of our greater visual bias toward them.

Since vision is mostly in the brain, we need to turn to neuroscience to understand the mechanisms for our visual bias toward snakes. All vertebrates have a visual system that allows them to distinguish potential predators from potential prey. This is a nonconscious visual system that involves only subcortical structures, including those that in mammals are called the superior colliculus and the pulvinar, and it allows for very fast visual detection and response. When an animal sees a predator, this nonconscious visual system also taps directly into motor responses such as freezing and darting.

As vertebrates, mammals have this nonconscious visual system, but they have also incorporated vision into the neocortex. No other animals have a neocortex. This somewhat slower, conscious visual system allows mammals to become cognizant of objects for what they really are. The first neocortical stop is the primary visual area, which is particularly sensitive to edges and lines of different orientations.

In a breakthrough study, a team of neuroscientists probed the responses of individual neurons in the pulvinar of Japanese macaques as they were shown images of snakes, faces of monkeys, hands of monkeys, and simple geometric shapes. Sure enough, many pulvinar neurons responded more strongly and more quickly to snakes than to the other images. The snake-sensitive neurons were found in a subsection of the pulvinar that is connected to a part of the superior colliculus involved in defensive motor behaviour such as freezing and darting, and to the amygdala, a subcortical structure involved in mediating fear responses. Among all mammals, the lineage with the greatest evolutionary exposure to venomous snakes, the anthropoid monkeys, apes and humans, also have the largest pulvinar. This makes perfect sense in the context of the snake detection theory.

What is it about snakes that makes them so attention-grabbing to us? Naturally, we use all the cues available (such as body shape and leglessness) but it’s their scales that should be the most reliable, because a little patch of snake might be all we have to go on. Indeed, wild vervet monkeys in Africa, for instance, are able with their superb visual acuity to detect just an inch of snake skin within a minute of coming near it. In people, electrophysiological responses in the primary visual area reveal greater early visual attention to snake scales compared with lizard skins and bird feathers. Again, the primary visual area is highly sensitive to edges and lines of different orientations, and snake skins, with their repeating scale patterns, offer these visual cues in spades.

The snake detection theory takes our seemingly contradictory attitudes about snakes and makes sense of them as a cohesive whole. Our long evolutionary exposure to snakes explains why ophiophobia is humanity’s most-reported phobia but also why our attraction and attention to snakes is so strong that we have even included them prominently in our religions and folklore. Most importantly, by recognising that our vision and our behaviour have been shaped by millions of years of interactions with another type of animal, we admit our close relationship with nature. We have not been above or outside nature as we might like to think, but have always been fully a part of it.

Lynne A Isbell is professor of anthropology at the University of California, Davis. She is the author of The Fruit, the Tree, and the Serpent: Why We See So Well (2009). She is interested in primate behaviour and ecology.

This article was originally published at Aeon and has been republished under Creative Commons. Visit the original article here.

Psychology’s Five Revelations for Finding Your True Calling

Christian Jarrett | Aeon Ideas

Look. You can’t plan out your life. What you have to do is first discover your passion – what you really care about.
Barack Obama

If, like many, you are searching for your calling in life – perhaps you are still unsure which profession aligns with what you most care about – here are five recent research findings worth taking into consideration.

First, there’s a difference between having a harmonious passion and an obsessive passion. If you can find a career path or occupational goal that fires you up, you are more likely to succeed and find happiness through your work – that much we know from an extensive research literature. But beware – since a seminal paper published in 2003 by the Canadian psychologist Robert Vallerand and colleagues, researchers have made an important distinction between having a harmonious passion and an obsessive one. If you feel that your passion or calling is out of control, and that your mood and self-esteem depend on it, then this is the obsessive variety, and such passions, while energising, are also associated with negative outcomes such as burnout and anxiety. In contrast, if your passion feels under your control, reflects qualities that you like about yourself, and complements other important activities in your life, then this is the harmonious version, which is associated with positive outcomes such as vitality, better work performance, experiencing flow, and positive mood.

Secondly, having an unanswered calling in life is worse than having no calling at all. If you already have a burning ambition or purpose, do not leave it to languish. A few years ago, researchers at the University of South Florida surveyed hundreds of people and grouped them according to whether they felt they had no calling in life, had a calling they’d answered, or had a calling they’d never acted on. In terms of their work engagement, career commitment, life satisfaction, health and stress, the stand-out finding was that the participants who had a calling they hadn’t answered scored the worst across all these measures. The researchers said that this puts a different spin on the presumed benefits of having a calling in life. They concluded: ‘having a calling is only a benefit if it is met, but can be a detriment when it is not as compared to having no calling at all’.

The third finding to bear in mind is that, without passion, grit is ‘merely a grind’. The idea that ‘grit’ is vital for career success was advanced by the psychologist Angela Duckworth of the University of Pennsylvania, who argued that highly successful, ‘gritty’ people have impressive persistence. ‘To be gritty,’ Duckworth writes in her 2016 book on the subject, ‘is to fall down seven times, and rise eight.’ Many studies certainly show that being more conscientious – more self-disciplined and industrious – is associated with more career success. But is that all that being gritty means? Duckworth has always emphasised that it has another vital component that brings us back to passion again – alongside persistence, she says that gritty people also have an ‘ultimate concern’ (another way of describing having a passion or calling).

However, according to a paper published last year, the standard measure of grit has failed to assess passion (or more specifically, ‘passion attainment’) – and Jon Jachimowicz at Columbia Business School in New York and colleagues believe this could explain why the research on grit has been so inconsistent (leading to claims that it is an overhyped concept and simply conscientiousness repackaged). Jachimowicz’s team found that when they explicitly measured passion attainment (how much people feel they have adequate passion for their work) and combined this with a measure of perseverance (a consistency of interests and the ability to overcome setbacks), then the two together did predict superior performance among tech-company employees and university students. ‘Our findings suggest that perseverance without passion attainment is mere drudgery, but perseverance with passion attainment propels individuals forward,’ they said.

Another finding is that, when you invest enough effort, you might find that your work becomes your passion. It’s all very well reading about the benefits of having a passion or calling in life but, if you haven’t got one, where to find it? Duckworth says it’s a mistake to think that in a moment of revelation one will land in your lap, or simply occur to you through quiet contemplation – rather, you need to explore different activities and pursuits, and expose yourself to the different challenges and needs confronting society. If you still draw a blank, then perhaps it’s worth heeding the advice of others who say that it is not always the case that energy and determination flow from finding your passion – sometimes it can be the other way around and, if you put enough energy into your work, then passion will follow. Consider, for instance, an eight-week repeated survey of German entrepreneurs published in 2014 that found a clear pattern – their passion for their ventures increased after they’d invested more effort into them the week before. A follow-up study qualified this, suggesting that the energising effect of investing effort arises only when the project is freely chosen and there is a sense of progress. ‘Entrepreneurs increase their passion when they make significant progress in their venture and when they invest effort out of their own free choice,’ the researchers said.

Finally, if you think that passion comes from doing a job you enjoy, you’re likely to be disappointed. Consider where you think passion comes from. In a preprint paper released at PsyArXiv, Jachimowicz and his team draw a distinction between people who believe that passion comes from doing what you enjoy (which they say is encapsulated by Oprah Winfrey’s commencement address in 2008 in which she said passions ‘bloom when we’re doing what we love’), and those who see it as arising from doing what you believe in or value in life (as reflected in the words of former Mexican president Felipe Calderón who in his own commencement address in 2011 said ‘you have to embrace with passion the things that you believe in, and that you are fighting for’).

The researchers found that people who believe that passion comes from pleasurable work were less likely to feel that they had found their passion (and were more likely to want to leave their job) as compared with people who believe that passion comes from doing what you feel matters. Perhaps this is because there is a superficiality and ephemerality to working for sheer pleasure – what fits the bill one month or year might not do so for long – whereas working towards what you care about is a timeless endeavour that is likely to stretch and sustain you indefinitely. The researchers conclude that their results show ‘the extent to which individuals attain their desired level of work passion may have less to do with their actual jobs and more to do with their beliefs about how work passion is pursued’.

This is an adaptation of an article originally published by The British Psychological Society’s Research Digest.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons.