The Meaning to Life? A Darwinian Existentialist has his Answers


Michael Ruse | Aeon Ideas

I was raised as a Quaker, but around the age of 20 my faith faded. It would be easiest to say that this was because I took up philosophy – my lifelong occupation as a teacher and scholar. This is not true. More accurately, I joke that having had one headmaster in this life, I’ll be damned if I want another in the next. I was convinced back then that, by the age of 70, I would be getting back onside with the Powers That Be. But faith did not then return and, as I approach 80, is nowhere on the horizon. I feel more at peace with myself than ever before. It’s not that I don’t care about the meaning or purpose of life – I am a philosopher! Nor does my sense of peace mean that I am complacent or that I have delusions about my achievements and successes. Rather, I feel that deep contentment that religious people tell us is the gift or reward for proper living.

I come to my present state for two separate reasons. As a student of Charles Darwin, I am totally convinced – God or no God – that we are (as the 19th-century biologist Thomas Henry Huxley used to say) modified monkeys rather than modified mud. Culture is hugely important, but to ignore our biology is just wrong. Second, I am drawn, philosophically, to existentialism. A century after Darwin, Jean-Paul Sartre said that we are condemned to freedom, and I think he is right. Even if God does exist, He or She is irrelevant. The choices are ours.

Sartre denied that there is any such thing as human nature. Coming from this quintessential Frenchman, I take that with a pinch of salt: we are free, within the context of our Darwinian-created human nature. What am I talking about? A lot of philosophers today are uncomfortable even raising the idea of ‘human nature’. They feel that, too quickly, it is used against minorities – gay people, the disabled, and others – to suggest that they are not really human. This is a challenge, not a refutation. If a definition of human nature cannot take account of the fact that up to 10 per cent of us have a same-sex orientation, then the problem is not with human nature but with the definition.

What, then, is human nature? In the middle of the 20th century, it was popular to suggest that we are killer apes: we can and do make weapons, and we use them. But modern primatologists have little time for this. Their findings suggest that most apes would far rather fornicate than fight. In making war we are really not doing what comes naturally. I don’t deny that humans are violent; however, our essence goes the other way. It is one of sociability. We are not that fast, we are not that strong, we are hopeless in bad weather; but we succeed because we work together. Indeed, our lack of natural weapons points that way. We cannot get all we want through violence. We must cooperate.

Darwinians did not discover this fact about our nature. Listen to the metaphysical poet John Donne in 1624:

No man is an island,
Entire of itself,
Every man is a piece of the continent,
A part of the main.
If a clod be washed away by the sea,
Europe is the less.
As well as if a promontory were.
As well as if a manor of thy friend’s
Or of thine own were:
Any man’s death diminishes me,
Because I am involved in mankind,
And therefore never send to know for whom the bell tolls;
It tolls for thee.

Darwinian evolutionary theory shows how this all came about, historically, through the forces of nature. It suggests that there is no eternal future or, if there is, it is not relevant for the here and now. Rather, we must live life to the full, within the context of – liberated by – our Darwinian-created human nature. I see three basic ways in which this occurs.

First, family. Humans are not like male orangutans, whose home life is made up mainly of one-night stands. A male turns up, does his business, and then, sexually sated, vanishes. The impregnated female births and raises the children by herself. This is possible simply because she can do the job alone. If she couldn’t, then biologically it would be in the interests of the males to lend a hand. Male birds help at the nest because, exposed as they are up trees, the chicks need to grow as quickly as possible. Humans face different challenges, but with the same end. We have big brains that need time to develop. Our young cannot fend for themselves within weeks or days. Humans therefore need lots of parental care, and our biology fits us for home life, as it were: spouses, offspring, parents, and more. Men don’t push the pram just by chance, nor do they boast to their co-workers about their kid getting into Harvard.

Second, society. Co-workers, shop attendants, teachers, doctors, hotel clerks – the list is endless. Our evolutionary strength is that we work together, helping and expecting help. I am a teacher, not just of my children, but of yours (and others) too. You are a doctor: you give medical care not just to your children, but to mine (and others) too. In this way, we all benefit. As Adam Smith pointed out in 1776, none of this happens by chance or because nature has suddenly become soft: ‘It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest.’ Smith invoked the ‘invisible hand’. The Darwinian puts it down to evolution through natural selection.

Though life can be a drag sometimes, biology ensures that we generally get on with the job, and do it as part of our fulfilled lives. John Stuart Mill had it exactly right in 1863: ‘When people who are fairly fortunate in their material circumstances don’t find sufficient enjoyment to make life valuable to them, this is usually because they care for nobody but themselves.’

Third, culture. Works of art and entertainment, TV, movies, plays, novels, paintings and sport. Note how social it all is. Romeo and Juliet, about two kids in ill-fated love. The Sopranos, about a mob family. A Roy Lichtenstein faux-comic painting; a girl on the phone: ‘Oh, Jeff… I love you, too… but…’ England beating Australia at cricket. There are evolutionists who doubt that culture is so tightly bound to biology, and who are inclined to see it as a side-product of evolution, what Stephen Jay Gould in 1982 called an ‘exaptation’. This is surely true in part. But probably only in part. Darwin thought that culture might have something to do with sexual selection: protohumans using songs and melodies, say, to attract mates. Sherlock Holmes agreed; in A Study in Scarlet (1887), he tells Watson that musical ability predates speech, according to Darwin: ‘Perhaps that is why we are so subtly influenced by it. There are vague memories in our souls of those misty centuries when the world was in its childhood.’

Draw it together. I have had a full family life, a loving spouse and children. I even liked teenagers. I have been a college professor for 55 years. I have not always done the job as well as I could, but I am not lying when I say that Monday morning is my favourite time of the week. I’m not much of a creative artist, and I’m hopeless at sports. But I have done my scholarship and shared with others. Why else am I writing this? And I have enjoyed the work of fellow humans. A great performance of Mozart’s opera The Marriage of Figaro is heaven. I speak literally.

This is my meaning to life. When I meet my nonexistent God, I shall say to Him: ‘God, you gave me talents and it’s been a hell of a lot of fun using them. Thank you.’ I need no more. As George Meredith wrote in his poem ‘In the Woods’ (1870):

The lover of life knows his labour divine,
And therein is at peace.


A Meaning to Life (2019) by Michael Ruse is published via Princeton University Press.

Michael Ruse is the Lucyle T Werkmeister Professor of Philosophy and director of the history and philosophy of science programme at Florida State University. He has written or edited more than 50 books, including most recently On Purpose (2017), Darwinism as Religion (2016), The Problem of War (2018) and A Meaning to Life (2019).

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Richard Feynman was Wrong about Beauty and Truth in Science


Spaceborne Imaging Radar photo of the autonomous republic of Tuva, the subject of Richard Feynman’s intense interest during the latter part of his life, as documented in Tuva or Bust! by Ralph Leighton. Photo taken from Space Shuttle Endeavour in 1994. Photo courtesy NASA/JPL

Massimo Pigliucci | Aeon Ideas

Edited by Nigel Warburton

The American physicist Richard Feynman is often quoted as saying: ‘You can recognise truth by its beauty and simplicity.’ The phrase appears in the work of the American science writer K C Cole – in her Sympathetic Vibrations: Reflections on Physics as a Way of Life (1985) – although I could not find other records of Feynman writing or saying it. We do know, however, that Feynman had great respect for the English physicist Paul Dirac, who believed that theories in physics should be both simple and beautiful.

Feynman was unquestionably one of the outstanding physicists of the 20th century. To his contributions to the Manhattan Project and the solution of the mystery surrounding the explosion of the Space Shuttle Challenger in 1986, add a Nobel Prize in 1965 shared with Julian Schwinger and Shin’ichirō Tomonaga ‘for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles’. And he played the bongos too!

In the area of philosophy of science, though, like many physicists of his and the subsequent generation (and unlike those belonging to the previous one, including Albert Einstein and Niels Bohr), Feynman didn’t really shine – to put it mildly. He might have said that philosophy of science is as helpful to science as ornithology is to birds (a lot of quotations attributed to him are next to impossible to source). This has prompted countless responses from philosophers of science, including that birds are too stupid to do ornithology, or that without ornithology many bird species would be extinct.

The problem is that it’s difficult to defend the notion that the truth is recognisable by its beauty and simplicity, and it’s an idea that has contributed to getting fundamental physics into its current mess; for more on the latter topic, check out The Trouble with Physics (2006) by Lee Smolin, or Farewell to Reality (2013) by Jim Baggott, or subscribe to Peter Woit’s blog. To be clear, when discussing the simplicity and beauty of theories, we are not talking about Ockham’s razor (about which my colleague Elliott Sober has written for Aeon). Ockham’s razor is a prudent heuristic, providing us with an intuitive guide to the comparison of different hypotheses. Other things being equal, we should prefer simpler ones. More specifically, the English monk William of Ockham (1287-1347) is said to have held that ‘[hypothetical] entities are not to be multiplied without necessity’ (a formulation actually due to the 17th-century Irish Franciscan philosopher John Punch). Thus, Ockham’s razor is an epistemological, not a metaphysical, principle. It’s about how we know things, whereas Feynman’s and Dirac’s statements seem to be about the fundamental nature of reality.

But as the German theoretical physicist Sabine Hossenfelder has pointed out (also in Aeon), there is absolutely no reason to think that simplicity and beauty are reliable guides to physical reality. She is right for a number of reasons.

To begin with, the history of physics (alas, seldom studied by physicists) clearly shows that many simple theories have had to be abandoned in favour of more complex and ‘ugly’ ones. The notion that the Universe is in a steady state is simpler than one requiring an ongoing expansion; and yet scientists do now think that the Universe has been expanding for almost 14 billion years. In the 17th century Johannes Kepler realised that Copernicus’ theory was too beautiful to be true, since, as it turns out, planets don’t go around the Sun in perfect (according to human aesthetics!) circles, but rather following somewhat uglier ellipses.

And of course, beauty is, notoriously, in the eye of the beholder. What struck Feynman as beautiful might not be beautiful to other physicists or mathematicians. Beauty is a human value, not something out there in the cosmos. Biologists here know better. The capacity for aesthetic appreciation in our species is the result of a process of biological evolution, possibly involving natural selection. And there is absolutely no reason to think that we evolved an aesthetic sense that somehow happens to be tailored for the discovery of the ultimate theory of everything.

The moral of the story is that physicists should leave philosophy of science to the pros, and stick to what they know best. Better yet: this is an area where fruitful interdisciplinary dialogue is not just a possibility, but arguably a necessity. As Einstein wrote in a letter to his fellow physicist Robert Thornton in 1944:

I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.

Ironically, it was Plato – a philosopher – who argued that beauty is a guide to truth (and goodness), apparently never having met an untruthful member of the opposite (or same, as the case might be) sex. He wrote about that in the Symposium, the dialogue featuring, among other things, sex education from Socrates. But philosophy has made much progress since Plato, and so has science. It is therefore a good idea for scientists and philosophers alike to check with each other before uttering notions that might be hard to defend, especially when it comes to figures who are influential with the public. To quote another philosopher, Ludwig Wittgenstein, in a different context: ‘Whereof one cannot speak, thereof one must be silent.’


Massimo Pigliucci is professor of philosophy at City College and at the Graduate Center of the City University of New York. He is the author of How to Be a Stoic: Ancient Wisdom for Modern Living (2017) and his most recent book is A Handbook for New Stoics: How to Thrive in a World Out of Your Control (2019), co-authored with Gregory Lopez.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How the Dualism of Descartes Ruined our Mental Health


Yard with Lunatics 1794, (detail) by Francisco José de Goya y Lucientes. Courtesy Wikimedia/Meadows Museum, Dallas

James Barnes | Aeon Ideas

Toward the end of the Renaissance period, a radical epistemological and metaphysical shift overcame the Western psyche. The advances of Nicolaus Copernicus, Galileo Galilei and Francis Bacon posed a serious problem for Christian dogma and its dominion over the natural world. Following Bacon’s arguments, the natural world was now to be understood solely in terms of efficient causes (ie, external effects). Any inherent meaning or purpose to the natural world (ie, its ‘formal’ or ‘final’ causes) was deemed surplus to requirements. Insofar as it could be predicted and controlled in terms of efficient causes, not only was any notion of nature beyond this conception redundant, but God too could be effectively dispensed with.

In the 17th century, René Descartes’s dualism of matter and mind was an ingenious solution to the problem this created. ‘The ideas’ that had hitherto been understood as inhering in nature as ‘God’s thoughts’ were rescued from the advancing army of empirical science and withdrawn into the safety of a separate domain, ‘the mind’. On the one hand, this maintained a dimension proper to God, and on the other, served to ‘make the intellectual world safe for Copernicus and Galileo’, as the American philosopher Richard Rorty put it in Philosophy and the Mirror of Nature (1979). In one fell swoop, God’s substance-divinity was protected, while empirical science was given reign over nature-as-mechanism – something ungodly and therefore fair game.

Nature was thereby drained of her inner life, rendered a deaf and blind apparatus of indifferent and value-free law, and humankind was faced with a world of inanimate, meaningless matter, upon which it projected its psyche – its aliveness, meaning and purpose – only in fantasy. It was this disenchanted vision of the world, at the dawn of the industrial revolution that followed, that the Romantics found so revolting, and feverishly revolted against.

The French philosopher Michel Foucault in The Order of Things (1966) termed it a shift in ‘episteme’ (roughly, a system of knowledge). The Western psyche, Foucault argued, had once been typified by ‘resemblance and similitude’. In this episteme, knowledge of the world was derived from participation and analogy (the ‘prose of the world’, as he called it), and the psyche was essentially extroverted and world-involved. But after the bifurcation of mind and nature, an episteme structured around ‘identity and difference’ came to possess the Western psyche. The episteme that now prevailed was, in Rorty’s terms, solely concerned with ‘truth as correspondence’ and ‘knowledge as accuracy of representations’. Psyche, as such, became essentially introverted and untangled from the world.

Foucault argued, however, that this move was not a supersession per se, but rather constituted an ‘othering’ of the prior experiential mode. As a result, its experiential and epistemological dimensions were not only denied validity as an experience, but became the ‘occasion of error’. Irrational experience (ie, experience inaccurately corresponding to the ‘objective’ world) then became a meaningless mistake – and disorder the perpetuation of that mistake. This is where Foucault located the beginning of the modern conception of ‘madness’.

Although Descartes’s dualism did not win the philosophical day, we in the West are still very much the children of the disenchanted bifurcation it ushered in. Our experience remains characterised by the separation of ‘mind’ and ‘nature’ instantiated by Descartes. Its present incarnation – what we might call the empiricist-materialist position – predominates not only in academia but also in our everyday assumptions about ourselves and the world. This is particularly clear in the case of mental disorder.

Common notions of mental disorder remain only elaborations of ‘error’, conceived of in the language of ‘internal dysfunction’ relative to a mechanistic world devoid of any meaning and influence. These dysfunctions are either to be cured by psychopharmacology, or remedied by therapy meant to lead the patient to rediscover the ‘objective truth’ of the world. To conceive of it in this way is not only simplistic, but highly biased.

While it is true that there is value in ‘normalising’ irrational experiences like this, it comes at a great cost. These interventions work (to the extent that they do) by emptying our irrational experiences of their intrinsic value or meaning. In doing so, not only are these experiences cut off from any world-meaning they might harbour, but so too from any agency and responsibility we or those around us have – they are only errors to be corrected.

In the previous episteme, before the bifurcation of mind and nature, irrational experiences were not just ‘error’ – they were speaking a language as meaningful as rational experiences, perhaps even more so. Imbued with the meaning and rhyme of nature herself, they were themselves pregnant with the amelioration of the suffering they brought. Within the world experienced this way, we had a ground, guide and container for our ‘irrationality’, but these crucial psychic presences vanished along with the withdrawal of nature’s inner life and the move to ‘identity and difference’.

In the face of an indifferent and unresponsive world that neglects to render our experience meaningful outside of our own minds – for nature-as-mechanism is powerless to do this – our minds have been left fixated on empty representations of a world that was once their source and being. All we have, if we are lucky enough to have them, are therapists and parents who try to take on what is, in reality, and given the magnitude of the loss, an impossible task.

But I’m not going to argue that we just need to ‘go back’ somehow. On the contrary, the bifurcation of mind and nature was at the root of immeasurable secular progress – medical and technological advance, the rise of individual rights and social justice, to name just a few. It also protected us all from being bound up in the inherent uncertainty and flux of nature. It gave us a certain omnipotence – just as it gave science empirical control over nature – and most of us readily accept, and willingly spend, the inheritance bequeathed by it, and rightly so.

It cannot be emphasised enough, however, that this history is much less a ‘linear progress’ and much more a dialectic. Just as unified psyche-nature stunted material progress, material progress has now degenerated psyche. Perhaps, then, we might argue for a new swing in this pendulum. Given the dramatic increase in substance-use issues and recent reports of a teenage ‘mental health crisis’ and rising teen suicide rates in the US, the UK and elsewhere – to name only the most conspicuous examples – perhaps the time is in fact overripe.

However, one might ask, by what means? There has been a resurgence of ‘pan-experiential’ and idealist-leaning theories in several disciplines, largely concerned with undoing the very knot of bifurcation and the excommunication of a living nature, and creating in its wake something afresh. This is because attempts at explaining subjective experience in empiricist-materialist terms have all but failed (principally due to what the Australian philosopher David Chalmers in 1995 termed ‘the hard problem’ of consciousness). The notion that metaphysics is ‘dead’ would in fact be met with very significant qualification in certain quarters – indeed, the Canadian philosopher Evan Thompson et al argued along the same lines in a recent essay in Aeon.

It must be remembered that mental disorder as ‘error’ rises and falls with the empiricist-materialist metaphysics and the episteme it is a product of. Therefore, we might also think it justified to begin to reconceptualise the notion of mental disorder in the same terms as these theories. There has been a decisive shift in psychotherapeutic theory and practice away from the changing of parts or structures of the individual, and towards the idea that it is the very process of the therapeutic encounter itself that is ameliorative. Here, correct or incorrect judgments about ‘objective reality’ start to lose meaning, and psyche as open and organic starts to come back into focus, but the metaphysics remains. We ultimately need to be thinking about mental disorder on a metaphysical level, and not just within the confines of the status quo.

James Barnes

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Boost your Self-esteem, Write about Chapters of your Life


New car, 1980s. Photo by Don Pugh/Flickr

Christian Jarrett | Aeon Ideas

In truth, so much of what happens to us in life is random – we are pawns at the mercy of Lady Luck. To take ownership of our experiences and exert a feeling of control over our future, we tell stories about ourselves that weave meaning and continuity into our personal identity. Writing in the 1950s, the psychologist Erik Erikson put it this way:

To be adult means among other things to see one’s own life in continuous perspective, both in retrospect and in prospect … to selectively reconstruct his past in such a way that, step for step, it seems to have planned him, or better, he seems to have planned it.

Alongside your chosen values and goals in life, and your personality traits – how sociable you are, how much of a worrier and so on – your life story as you tell it makes up the final part of what in 2015 the personality psychologist Dan P McAdams at Northwestern University in Illinois called the ‘personological trinity’.

Of course, some of us tell these stories more explicitly than others – one person’s narrative identity might be a barely formed story at the edge of their consciousness, whereas another person might literally write out their past and future in a diary or memoir.

Intriguingly, there’s some evidence that prompting people to reflect on and tell their life stories – a process called ‘life review therapy’ – could be psychologically beneficial. However, most of this work has been on older adults and people with pre-existing problems such as depression or chronic physical illnesses. It remains to be established through careful experimentation whether prompting otherwise healthy people to reflect on their lives will have any immediate benefits.

A relevant factor in this regard is the tone, complexity and mood of the stories that people tell themselves. For instance, it’s been shown that people who tell more positive stories, including referring to more instances of personal redemption, tend to enjoy higher self-esteem and greater ‘self-concept clarity’ (the confidence and lucidity in how you see yourself). Perhaps engaging in writing or talking about one’s past will have immediate benefits only for people whose stories are more positive.

In a recent paper in the Journal of Personality, Kristina L Steiner at Denison University in Ohio and her colleagues looked into these questions and reported that writing about chapters in your life does indeed lead to a modest, temporary self-esteem boost, and that in fact this benefit arises regardless of how positive your stories are. However, there were no effects on self-concept clarity, and many questions on this topic remain for future study.

Steiner’s team tested three groups of healthy American participants across three studies. The first two groups – involving more than 300 people between them – were young undergraduates, most of them female. The final group, a balanced mix of 101 men and women, was recruited from the community, and they were older, with an average age of 62.

The format was essentially the same for each study. The participants were asked to complete various questionnaires measuring their mood, self-esteem and self-concept clarity, among other things. Then half of them were allocated to write about four chapters in their lives, spending 10 minutes on each. They were instructed to be as specific and detailed as possible, and to reflect on main themes, how each chapter related to their lives as a whole, and to think about any causes and effects of the chapter on them and their lives. The other half of the participants, who acted as a control group, spent the same time writing about four famous Americans of their choosing (to make this task more intellectually comparable, they were also instructed to reflect on the links between the individuals they chose, how they became famous, and other similar questions). After the writing tasks, all the participants retook the same psychological measures they’d completed at the start.

The participants who wrote about chapters in their lives displayed small, but statistically significant, increases to their self-esteem, whereas the control-group participants did not. This self-esteem boost wasn’t explained by any changes to their mood, and – to the researchers’ surprise – it didn’t matter whether the participants rated their chapters as mostly positive or negative, nor did it depend on whether they featured themes of agency (that is, being in control) and communion (pertaining to meaningful relationships). Disappointingly, there was no effect of the life-chapter task on self-concept clarity, nor on meaning and identity.

How long do the self-esteem benefits of the life-chapter task last, and might they accumulate by repeating the exercise? Clues come from the second of the studies, which involved two life chapter-writing tasks (and two tasks writing about famous Americans for the control group), with the second task coming 48 hours after the first. The researchers wanted to see if the self-esteem boost arising from the first life-chapter task would still be apparent at the start of the second task two days later – but it wasn’t. They also wanted to see if the self-esteem benefits might accumulate over the two tasks – they didn’t (the second life-chapter task had its own self-esteem benefit, but it wasn’t cumulative with the benefits of the first).

It remains unclear exactly why the life-chapter task had the self-esteem benefits that it did. It’s possible that the task led participants to consider how they had changed in positive ways. They might also have benefited from expressing and confronting their emotional reactions to these periods of their lives – this would certainly be consistent with the well-documented benefits of expressive writing and ‘affect labelling’ (the calming effect of putting our emotions into words). Future research will need to compare different life chapter-writing instructions to tease apart these different potential beneficial mechanisms. It would also be helpful to test more diverse groups of participants and different ‘dosages’ of the writing task to see if it is at all possible for the benefits to accrue over time.

The researchers said: ‘Our findings suggest that the experience of systematically reviewing one’s life and identifying, describing and conceptually linking life chapters may serve to enhance the self, even in the absence of increased self-concept clarity and meaning.’ If you are currently lacking much confidence and feel like you could benefit from an ego boost, it could be worth giving the life-chapter task a go. It’s true that the self-esteem benefits of the exercise were small, but as Steiner’s team noted, ‘the costs are low’ too.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Is Consciousness a Battle between your Beliefs and Perceptions?


Now you see it… Magician Harry Houdini moments before ‘disappearing’ Jennie the 10,000lb elephant at the Hippodrome, New York, in 1918. Photo courtesy Library of Congress

Hakwan Lau | Aeon Ideas

Imagine you’re at a magic show, in which the performer suddenly vanishes. Of course, you ultimately know that the person is probably just hiding somewhere. Yet it continues to look as if the person has disappeared. We can’t reason away that appearance, no matter what logic dictates. Why are our conscious experiences so stubborn?

The fact that our perception of the world appears to be so intransigent, however much we might reflect on it, tells us something unique about how our brains are wired. Compare the magician scenario with how we usually process information. Say you have five friends who tell you it’s raining outside, and one weather website indicating that it isn’t. You’d probably just consider the website to be wrong and write it off. But when it comes to conscious perception, there seems to be something strangely persistent about what we see, hear and feel. Even when a perceptual experience is clearly ‘wrong’, we can’t just mute it.

Why is that so? Recent advances in artificial intelligence (AI) shed new light on this puzzle. In computer science, we know that neural networks for pattern-recognition – so-called deep learning models – can benefit from a process known as predictive coding. Instead of just taking in information passively, from the bottom up, networks can make top-down hypotheses about the world, to be tested against observations. They generally work better this way. When a neural network identifies a cat, for example, it first develops a model that allows it to predict or imagine what a cat looks like. It can then examine incoming data to see whether or not it fits that expectation.

The trouble is, while these generative models can be super efficient once they’re up and running, they usually demand huge amounts of time and information to train. One solution is to use generative adversarial networks (GANs) – hailed as the ‘coolest idea in deep learning in the last 20 years’ by Facebook’s head of AI research Yann LeCun. In GANs, we might train one network (the generator) to create pictures of cats, mimicking real cats as closely as it can. And we train another network (the discriminator) to distinguish between the manufactured cat images and the real ones. We can then pit the two networks against each other, such that the discriminator is rewarded for catching fakes, while the generator is rewarded for getting away with them. When they are set up to compete, the networks grow together in prowess, not unlike an arch art-forger trying to outwit an art expert. This makes learning very efficient for each of them.
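
To make that division of labour concrete, here is a minimal sketch of such an adversarial training loop in PyTorch. The library, the toy one-dimensional data standing in for cat photographs, and every name below are assumptions chosen for illustration, not details given in the essay:

```python
# A toy adversarial pair, assuming PyTorch. The "real" data are samples from a
# 1-D Gaussian (a stand-in for cat photographs); the generator learns to mimic
# them, while the discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

def real_samples(n):
    return torch.randn(n, 1) * 1.5 + 4.0  # "real cats": mean 4, spread 1.5

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Reward the discriminator for labelling real samples 1 and fakes 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()  # detached: don't update G here
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Reward the generator when its fakes are classified as real.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# The generator's output distribution should drift toward the "real" one (mean ~4).
print(generator(torch.randn(1000, 8)).mean().item())
```

Each network improves only because the other keeps raising the bar – which is the feature the analogy that follows rests on.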

As well as being a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. You have to be able to smell something before you can go about deciding whether or not the perception is real or fake; so why not just build in a check to the detecting mechanism itself?

In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.

If this account is right, it’s fair to say that conscious experience is probably akin to a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.

This perceptual stubbornness is not just a feature of humans. Some primates have it too, as shown by their capacity to be amazed and amused by magic tricks. That is, they seem to understand that there’s a tension between what they’re seeing and what they know to be true. Given what we understand about their brains – specifically, that their perceptual neurons are also ‘recyclable’ for top-down functioning – the GAN theory suggests that these nonhuman animals probably have conscious experiences not dissimilar to ours.

The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either. At least by fleshing out some conjectures about the machinery of consciousness, we can begin to test them against our intuitions – and, more importantly, in experiments. What we do know is that a model of the mind involving an inner mechanism of doubt – a nit-picking system that’s constantly on the lookout for fakes and forgeries in perception – is one of the most promising ideas we’ve come up with so far.

Hakwan Lau

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Matrix 20 Years On: How a Sci-fi Film Tackled Big Philosophical Questions


The Matrix was a box office hit, but it also explored some of western philosophy’s most interesting themes.
HD Wallpapers Desktop/Warner Bros

Richard Colledge, Australian Catholic University

Incredible as it may seem, the end of March marks 20 years since the release of the first film in the Matrix franchise, directed by the Wachowski siblings. This “cyberpunk” sci-fi movie was a box office hit with its dystopian futuristic vision, distinctive fashion sense, and slick, innovative action sequences. But it was also a catalyst for popular discussion around some very big philosophical themes.

The film centres on a computer hacker, “Neo” (played by Keanu Reeves), who learns that his whole life has been lived within an elaborate, simulated reality. This computer-generated dream world was designed by an artificial intelligence of human creation, which industrially farms human bodies for energy while distracting them via a relatively pleasant parallel reality called the “matrix”.

‘Have you ever had a dream, Neo, that you were so sure was real?’

This scenario recalls one of western philosophy’s most enduring thought experiments. In a famous passage from the Republic (ca 380 BCE), Plato has us imagine the human condition as being like a group of prisoners who have lived their lives underground and shackled, so that their experience of reality is limited to shadows projected onto their cave wall.


A freed prisoner, Plato suggests, would be startled to discover the truth about reality, and blinded by the brilliance of the sun. Should he return below, his companions would have no means to understand what he has experienced and surely think him mad. Leaving the captivity of ignorance is difficult.

In The Matrix, Neo is freed by rebel leader Morpheus (ironically, the name of the Greek god of dreams) by being awoken to real life for the first time. But unlike Plato’s prisoner, who discovers the “higher” reality beyond his cave, the world that awaits Neo is both desolate and horrifying.

Our Fallible Senses

The Matrix also trades on more recent philosophical questions famously posed by the 17th-century Frenchman René Descartes, concerning our inability to be certain about the evidence of our senses, and our capacity to know anything definite about the world as it really is.

Descartes even noted the difficulty of being certain that human experience is not the result of either a dream or a malevolent systematic deception.

The latter scenario was updated in philosopher Hilary Putnam’s 1981 “brain in a vat” thought experiment, which imagines a scientist electrically manipulating a brain to induce sensations of normal life.


So ultimately, then, what is reality? The late 20th century French thinker Jean Baudrillard, whose book appears briefly (with an ironic touch) early in the film, wrote extensively on the ways in which contemporary mass society generates sophisticated imitations of reality that become so realistic they are mistaken for reality itself (like mistaking the map for the landscape, or the portrait for the person).

Of course, there is no need for a matrix-like AI conspiracy to achieve this. We see it now, perhaps even more intensely than 20 years ago, in the dominance of “reality TV” and curated identities of social media.

In some respects, the film appears to be reaching for a view close to that of the 18th century German philosopher, Immanuel Kant, who insisted that our senses do not simply copy the world; rather, reality conforms to the terms of our perception. We only ever experience the world as it is available through the partial spectrum of our senses.

The Ethics of Freedom

Ultimately, the Matrix trilogy proclaims that free individuals can change the future. But how should that freedom be exercised?

This dilemma is unfolded in the first film’s increasingly notorious red/blue pill scene, which raises the ethics of belief. Neo’s choice is to embrace either the “really real” (as exemplified by the red pill he is offered by Morpheus) or to return to his more normal “reality” (via the blue one).

This quandary was captured in a 1974 thought experiment by the American philosopher Robert Nozick. Given an “experience machine” capable of providing whatever experiences we desire, in a way indistinguishable from “real” ones, should we stubbornly prefer the truth of reality? Or can we feel free to reside within comfortable illusion?


In The Matrix we see the rebels resolutely rejecting the comforts of the matrix, preferring grim reality. But we also see the rebel traitor Cypher (Joe Pantoliano) desperately seeking reinsertion into pleasant simulated reality. “Ignorance is bliss,” he affirms.

The film’s chief villain, Agent Smith (Hugo Weaving), darkly notes that unlike other mammals, (western) humanity insatiably consumes natural resources. The matrix, he suggests, is a “cure” for this human “contagion”.

We have heard much about the potential perils of AI, but perhaps there is something in Agent Smith’s accusation. In raising this tension, The Matrix still strikes a nerve – especially after 20 further years of insatiable consumption.

Richard Colledge, Senior Lecturer & Head of School of Philosophy, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Climate Strikes: Researcher explains how Young People can Keep up the Momentum

Harriet Thew, University of Leeds

As part of one of the largest environmental protests ever seen, over a million young people went on strike on Friday March 15 2019, calling for more ambitious action on climate change. Inspired by Greta Thunberg, a Swedish schoolgirl who protested outside the Swedish parliament every Friday throughout 2018, young people in over 100 countries left their classrooms and took to the streets.

The previous #YouthStrike4Climate on February 15 2019 mobilised over 10,000 young people in over 40 locations in the UK alone. Their marches, chants and signs captured attention and prompted debates regarding the motivations and methods of young strikers. Many were criticised by those in the government and the media for simply wanting an opportunity to miss school.

My PhD research explores youth participation in climate change governance, focusing on the UN climate negotiations. Between 2015 and 2018 I closely studied the UK Youth Climate Coalition (UKYCC) – a UK-based, voluntary, youth-led group of 18- to 29-year-olds – which attends the international negotiations and coordinates local and national climate change campaigns.

Members of the UK Youth Climate Coalition protest in London.
Harriet Thew, Author provided

My research shows that young people are mobilised by concern for people and wildlife, fears for the future and anger that climate action is neither sufficiently rapid nor ambitious. Young people need to feel as though they are “doing something” about climate change while politicians dither and scientists release increasingly alarming projections of future climate conditions.

The strikes have helped young activists find like-minded peers and new opportunities to engage. They articulate a collective youth voice, wielding the moral power of young people – a group which society agrees it is supposed to protect. All the same, there are threats to sustaining the movement’s momentum which need to be recognised now.

Challenge misplaced paternalism

The paternalism that gives youth a moral platform is a double-edged sword. Patronising responses from adults in positions of authority, from head teachers to the prime minister, dismiss their scientifically informed concerns and attack the messenger, rather than dealing with the message itself.

You’re too young to understand the complexity of this.

You’ll grow out of these beliefs.

You just want to skip school.

Stay in school and wait your turn to make a difference.

Striking may hurt your future job prospects.

The list goes on …

This frightens some children and young people into silence, but doesn’t address the factors which mobilised them in the first place. These threats are also largely unfounded.


To any young person reading this, I want to reassure you, as a university educator, that critical thinking, proactivity and an interest in current affairs are qualities that universities encourage. Over 200 academics signed this open letter – myself included – showing our support for the school strikes.

Don’t ‘grow up’

Growing up is inevitable, but it can cause problems for youth movements. As young people gain experience of climate action and expand their professional networks, they “grow out of” being able to represent youth, often getting jobs to advocate for other groups or causes. While this can be positive for individuals, institutional memory is lost when experienced advocates move on to do other things. This puts youth at a disadvantage in relation to other groups who are better resourced and don’t have a “time limit” in how long they can represent their cause.

Well-established youth organisations, such as Guides and Scouts, whom I have worked with in the past, can use their large networks and professional experience to sustain youth advocacy on climate change, though they lack the resources to do so alone. It would also help for other campaigners to show solidarity with the young strikers, and to recognise youth as an important group in climate change debates. This will give people more opportunity to keep supporting the youth climate movement as they get older.

Grow the climate justice movement

Researching the same group of young people for three years, I have identified a shift in their attitudes over time. As young participants become more involved in the movement, they encounter different types of injustices voiced by other groups. They hear activists sharing stories of the devastating climate impacts already experienced by communities, in places where sea level rise is inundating homes and droughts are killing livestock and causing starvation.

The climate justice movement emphasises how climate change exacerbates racial and economic inequality but frequently overlooks the ways these inequalities intersect with age-based disadvantages. Forgetting that frontline communities contain young people, youth movements in developed countries like the UK begin to question the validity of their intergenerational injustice claims.

Indigenous people often inhabit the frontline of impacts from pollution and climate change.
Rainforest Action Network/Flickr, CC BY-NC

Many feel ashamed for having claimed vulnerability, given their relatively privileged position. Over time, they lose faith in their right to be heard. It would strengthen the entire climate movement if other climate justice campaigners more vocally acknowledged young people as a vulnerable group and shared their platform so that these important voices could better amplify one another.

With my own platform, I would like to say this to the thousands who went on strike. You matter. You have a right to be heard and you shouldn’t be embarrassed to speak out. Have confidence in your message, engage with others but stay true to your principles. Stick together and remember that even when you leave school and enter work – you’re never too old to be a youth advocate.


Harriet Thew, PhD Researcher in Climate Change Governance, University of Leeds

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Do you have a Self-Actualised Personality? Maslow Revisited


View of the second Pyramid from the top of the Great Pyramid. Photo courtesy of the Library of Congress

Christian Jarrett | Aeon Ideas

Abraham Maslow was the 20th-century American psychologist best-known for explaining motivation through his hierarchy of needs, which he represented in a pyramid. At the base, our physiological needs include food, water, warmth and rest. Moving up the pyramid, the needs become safety, then love and belonging, then self-esteem and accomplishment. But after all those have been satisfied, the motivating factor at the top of the pyramid involves striving to achieve our full potential and satisfy creative goals. As one of the founders of humanistic psychology, Maslow proposed that the path to self-transcendence and, ultimately, greater compassion for all of humanity requires the ‘self-actualisation’ at the top of his pyramid – fulfilling your true potential, and becoming your authentic self.

Now Scott Barry Kaufman, a psychologist at Barnard College, Columbia University, believes it is time to revive the concept, and link it with contemporary psychological theory. ‘We live in times of increasing divides, selfish concerns, and individualistic pursuits of power,’ Kaufman wrote recently in a blog in Scientific American introducing his new research. He hopes that rediscovering the principles of self-actualisation might be just the tonic that the modern world is crying out for. To this end, he’s used modern statistical methods to create a test of self-actualisation or, more specifically, of the 10 characteristics exhibited by self-actualised people, and it was recently published in the Journal of Humanistic Psychology.

Kaufman first surveyed online participants using 17 characteristics Maslow believed were shared by self-actualised people. Kaufman found that seven of these were redundant or irrelevant and did not correlate with others, leaving 10 key characteristics of self-actualisation.

Next, he reworded some of Maslow’s original language and labelling to compile a modern 30-item questionnaire featuring three items tapping each of these 10 remaining characteristics: continued freshness of appreciation; acceptance; authenticity; equanimity; purpose; efficient perception of reality; humanitarianism; peak experiences; good moral intuition; and creative spirit (see the full questionnaire below, and take the test on Kaufman’s website).

So what did Kaufman report? In a survey of more than 500 people on Amazon’s Mechanical Turk website, Kaufman found that scores on each of these 10 characteristics tended to correlate, but also that they each made a unique contribution to a unifying factor of self-actualisation – suggesting that this is a valid concept comprised of 10 subtraits.

Participants’ total scores on the test also correlated with their scores on the main five personality traits (that is, with higher extraversion, agreeableness, emotional stability, openness and conscientiousness) and with the metatrait of ‘stability’, indicative of an ability to avoid impulses in the pursuit of one’s goals. That the new test corresponded in this way with established personality measures provides further evidence of its validity.

Next, Kaufman turned to modern theories of wellbeing, such as self-determination theory, to see if people’s scores on his self-actualisation scale correlated with these contemporary measures. Sure enough, he found that people with more characteristics of self-actualisation also tended to score higher on curiosity, life-satisfaction, self-acceptance, personal growth and autonomy, among other factors – just as Maslow would have predicted.

‘Taken together, this total pattern of data supports Maslow’s contention that self-actualised individuals are more motivated by growth and exploration than by fulfilling deficiencies in basic needs,’ Kaufman writes. He adds that the new empirical support for Maslow’s ideas is ‘quite remarkable’ given that Maslow put them together with ‘a paucity of actual evidence’.

A criticism often levelled at Maslow’s notion of self-actualisation is that its pursuit encourages an egocentric focus on one’s own goals and needs. However, Maslow always contended that it is only through becoming our true, authentic selves that we can transcend the self and look outward with compassion to the rest of humanity. Kaufman explored this too, and found that higher scorers on his self-actualisation scale tended also to score higher on feelings of oneness with the world, but not on decreased self-salience, a sense of independence and bias toward information relevant to oneself. (These are the two main factors in a modern measure of self-transcendence developed by the psychologist David Yaden at the University of Pennsylvania.)

Kaufman said that this last finding supports ‘Maslow’s contention that self-actualising individuals are able to paradoxically merge with a common humanity while at the same time able to maintain a strong identity and sense of self’.

Where the new data contradicts Maslow is on the demographic factors that correlate with characteristics of self-actualisation – he thought that self-actualisation was rare and almost impossible for young people. Kaufman, by contrast, found scores on his new scale to be normally distributed through his sample (that is, spread evenly like height or weight) and unrelated to factors such as age, gender and educational attainment (although, in personal correspondence, Kaufman informs me that newer data – more than 3,000 people have since taken the new test – is showing a small, but statistically significant association between older age and having more characteristics of self-actualisation).

In conclusion, Kaufman writes that: ‘[H]opefully the current study … brings Maslow’s motivational framework and the central personality characteristics described by the founding humanistic psychologists, into the 21st century.’

The new test is sure to reinvigorate Maslow’s ideas, but if this is to help heal our divided world, then the characteristics required for self-actualisation, rather than being a permanent feature of our personalities, must be something we can develop deliberately. I put this point to Kaufman and he is optimistic. ‘I think there is significant room to develop these characteristics [by changing your habits],’ he told me. ‘A good way to start with that,’ he added, ‘is by first identifying where you stand on those characteristics and assessing your weakest links. Capitalise on your highest characteristics but also don’t forget to intentionally be mindful about what might be blocking your self-actualisation … Identify your patterns and make a concerted effort to change. I do think it’s possible with conscientiousness and willpower.’

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Concept of Probability is not as Simple as You Think

probability

Phil Long/Flickr

Nevin Climenhaga | Aeon Ideas

The gambler, the quantum physicist and the juror all reason about probabilities: the probability of winning, of a radioactive atom decaying, of a defendant’s guilt. But despite their ubiquity, experts dispute just what probabilities are. This leads to disagreements on how to reason about, and with, probabilities – disagreements that our cognitive biases can exacerbate, such as our tendency to ignore evidence that runs counter to a hypothesis we favour. Clarifying the nature of probability, then, can help to improve our reasoning.

Three popular theories analyse probabilities as either frequencies, propensities or degrees of belief. Suppose I tell you that a coin has a 50 per cent probability of landing heads up. These theories, respectively, say that this is:

  • The frequency with which that coin lands heads;
  • The propensity, or tendency, that the coin’s physical characteristics give it to land heads;
  • How confident I am that it lands heads.

But each of these interpretations faces problems. Consider the following case:

Adam flips a fair coin that self-destructs after being tossed four times. Adam’s friends Beth, Charles and Dave are present, but blindfolded. After the fourth flip, Beth says: ‘The probability that the coin landed heads the first time is 50 per cent.’

Adam then tells his friends that the coin landed heads three times out of four. Charles says: ‘The probability that the coin landed heads the first time is 75 per cent.’

Dave, despite having the same information as Charles, says: ‘I disagree. The probability that the coin landed heads the first time is 60 per cent.’

The frequency interpretation struggles with Beth’s assertion. The frequency with which the coin lands heads is three out of four, and it can never be tossed again. Still, it seems that Beth was right: the probability that the coin landed heads the first time is 50 per cent.

Meanwhile, the propensity interpretation falters on Charles’s assertion. Since the coin is fair, it had an equal propensity to land heads or tails. Yet Charles also seems right to say that the probability that the coin landed heads the first time is 75 per cent.

The confidence interpretation makes sense of the first two assertions, holding that they express Beth and Charles’s confidence that the coin landed heads. But consider Dave’s assertion. When Dave says that the probability that the coin landed heads is 60 per cent, he says something false. But if Dave really is 60 per cent confident that the coin landed heads, then on the confidence interpretation, he has said something true – he has truly reported how certain he is.

Some philosophers think that such cases support a pluralistic approach in which there are multiple kinds of probabilities. My own view is that we should adopt a fourth interpretation – a degree-of-support interpretation.

Here, probabilities are understood as relations of evidential support between propositions. ‘The probability of X given Y’ is the degree to which Y supports the truth of X. When we speak of ‘the probability of X’ on its own, this is shorthand for the probability of X conditional on our background information. When Beth says that there is a 50 per cent probability that the coin landed heads, she means that this is the probability that it landed heads conditional on the information that it was tossed and some information about its construction (for example, that it is symmetrical).

Relative to different information, however, the proposition that the coin landed heads has a different probability. When Charles says that there is a 75 per cent probability that the coin landed heads, he means this is the probability that it landed heads relative to the information that three of four tosses landed heads. Meanwhile, Dave says there is a 60 per cent probability that the coin landed heads, relative to this same information – but since this information in fact supports heads more strongly than 60 per cent, what Dave says is false.
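
To see where Charles’s 75 per cent comes from, it helps to enumerate the possibilities. The following minimal Python sketch is not part of the original argument; it simply counts the equally likely four-toss sequences of a fair coin that contain exactly three heads, and asks what fraction of them begin with heads.

```python
from itertools import product
from fractions import Fraction

# All equally likely outcomes of four tosses of a fair coin.
outcomes = list(product("HT", repeat=4))

# Keep only the outcomes consistent with Adam's report: exactly three heads.
consistent = [seq for seq in outcomes if seq.count("H") == 3]

# Among those, count the outcomes in which the first toss landed heads.
heads_first = [seq for seq in consistent if seq[0] == "H"]

print(Fraction(len(heads_first), len(consistent)))  # 3/4, i.e. 75 per cent
```

Of the four sequences with exactly three heads, three begin with heads, so relative to Adam’s report the degree of support for heads on the first toss is 3/4.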

The degree-of-support interpretation incorporates what’s right about each of our first three approaches while correcting their problems. It captures the connection between probabilities and degrees of confidence. It does this not by identifying them – instead, it takes degrees of belief to be rationally constrained by degrees of support. The reason I should be 50 per cent confident that a coin lands heads, if all I know about it is that it is symmetrical, is that this is the degree to which my evidence supports this hypothesis.

Similarly, the degree-of-support interpretation allows the information that the coin landed heads with a 75 per cent frequency to make it 75 per cent probable that it landed heads on any particular toss. It captures the connection between frequencies and probabilities but, unlike the frequency interpretation, it denies that frequencies and probabilities are the same thing. Instead, probabilities sometimes relate claims about frequencies to claims about specific individuals.

Finally, the degree-of-support interpretation analyses the propensity of the coin to land heads as a relation between, on the one hand, propositions about the construction of the coin and, on the other, the proposition that it lands heads. That is, it concerns the degree to which the coin’s construction predicts the coin’s behaviour. More generally, propensities link claims about causes and claims about effects – eg, a description of an atom’s intrinsic characteristics and the hypothesis that it decays.

Because they turn probabilities into different kinds of entities, our four theories offer divergent advice on how to figure out the values of probabilities. The first three interpretations (frequency, propensity and confidence) try to make probabilities things we can observe – through counting, experimentation or introspection. By contrast, degrees of support seem to be what philosophers call ‘abstract entities’ – neither in the world nor in our minds. While we know that a coin is symmetrical by observation, we know that the proposition ‘this coin is symmetrical’ supports the propositions ‘this coin lands heads’ and ‘this coin lands tails’ to equal degrees in the same way we know that ‘this coin lands heads’ entails ‘this coin lands heads or tails’: by thinking.

But a sceptic might point out that coin tosses are easy. Suppose we’re on a jury. How are we supposed to figure out the probability that the defendant committed the murder, so as to see whether there can be reasonable doubt about his guilt?

Answer: think more. First, ask: what is our evidence? What we want to figure out is how strongly this evidence supports the hypothesis that the defendant is guilty. Perhaps our salient evidence is that the defendant’s fingerprints are on the gun used to kill the victim.

Then, ask: can we use the mathematical rules of probability to break down the probability of our hypothesis in light of the evidence into more tractable probabilities? Here we are concerned with the probability of a cause (the defendant committing the murder) given an effect (his fingerprints being on the murder weapon). Bayes’s theorem lets us calculate this as a function of three further probabilities: the prior probability of the cause, the probability of the effect given this cause, and the probability of the effect without this cause.

Since this is all relative to any background information we have, the first probability (of the cause) is informed by what we know about the defendant’s motives, means and opportunity. We can get a handle on the third probability (of the effect without the cause) by breaking down the possibility that the defendant is innocent into other possible causes of the victim’s death, and asking how probable each is, and how probable they make it that the defendant’s fingerprints would be on the gun. We will eventually reach probabilities that we cannot break down any further. At this point, we might search for general principles to guide our assignments of probabilities, or we might rely on intuitive judgments, as we do in the coin cases.
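
As a rough illustration of how the three probabilities in Bayes’s theorem combine, here is a minimal Python sketch; the numbers are entirely hypothetical, chosen only to show the arithmetic, not to model any real case.

```python
def bayes(prior_cause, effect_given_cause, effect_without_cause):
    """Probability of the cause given the effect, via Bayes's theorem."""
    # Total probability of the effect, whether or not the cause obtained.
    effect = (prior_cause * effect_given_cause
              + (1 - prior_cause) * effect_without_cause)
    return prior_cause * effect_given_cause / effect

# Hypothetical numbers, for illustration only:
#   prior probability the defendant committed the murder: 0.30
#   probability his fingerprints are on the gun if he did: 0.90
#   probability his fingerprints are on the gun if he did not: 0.05
print(round(bayes(0.30, 0.90, 0.05), 3))  # roughly 0.885
```

With these made-up inputs, the fingerprint evidence raises the probability of guilt from 0.30 to roughly 0.89; whether that clears the bar of reasonable doubt is still a judgment the jury must make.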

When we are reasoning about criminals rather than coins, this process is unlikely to lead to convergence on precise probabilities. But there’s no alternative. We can’t resolve disagreements about how much the information we possess supports a hypothesis just by gathering more information. Instead, we can make progress only by way of philosophical reflection on the space of possibilities, the information we have, and how strongly it supports some possibilities over others.

Nevin Climenhaga

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Between Gods and Animals: Becoming Human in the Gilgamesh Epic

Tablet_V_of_the_Epic_of_Gilgamesh

A newly discovered, partially broken tablet V of the Epic of Gilgamesh. The tablet dates back to the Old Babylonian period, 2003-1595 BCE. From Mesopotamia, Iraq. The Sulaymaniyah Museum, Iraq. Photograph by Osama Shukir Muhammed Amin. Wikimedia.


Sophus Helle | Aeon Ideas

The Epic of Gilgamesh is a Babylonian poem composed in ancient Iraq, millennia before Homer. It tells the story of Gilgamesh, king of the city of Uruk. To curb his restless and destructive energy, the gods create a friend for him, Enkidu, who grows up among the animals of the steppe. When Gilgamesh hears about this wild man, he orders that a woman named Shamhat be brought out to find him. Shamhat seduces Enkidu, and the two make love for six days and seven nights, transforming Enkidu from beast to man. His strength is diminished, but his intellect is expanded, and he becomes able to think and speak like a human being. Shamhat and Enkidu travel together to a camp of shepherds, where Enkidu learns the ways of humanity. Eventually, Enkidu goes to Uruk to confront Gilgamesh’s abuse of power, and the two heroes wrestle with one another, only to form a passionate friendship.

This, at least, is one version of Gilgamesh’s beginning, but in fact the epic went through a number of different editions. It began as a cycle of stories in the Sumerian language, which were then collected and translated into a single epic in the Akkadian language. The earliest version of the epic was written in a dialect called Old Babylonian, and this version was later revised and updated to create another version, in the Standard Babylonian dialect, which is the one that most readers will encounter today.

Not only does Gilgamesh exist in a number of different versions, each version is in turn made up of many different fragments. There is no single manuscript that carries the entire story from beginning to end. Rather, Gilgamesh has to be recreated from hundreds of clay tablets that have become fragmentary over millennia. The story comes to us as a tapestry of shards, pieced together by philologists to create a roughly coherent narrative (about four-fifths of the text has been recovered). The fragmentary state of the epic also means that it is constantly being updated, as archaeological excavations – or, all too often, illegal lootings – bring new tablets to light, making us reconsider our understanding of the text. Despite being more than 4,000 years old, the text remains in flux, changing and expanding with each new finding.

The newest discovery is a tiny fragment that had lain overlooked in the museum archive of Cornell University in New York, identified by Alexandra Kleinerman and Alhena Gadotti and published by Andrew George in 2018. At first, the fragment does not look like much: 16 broken lines, most of them already known from other manuscripts. But working on the text, George noticed something strange. The tablet seemed to preserve parts of both the Old Babylonian and the Standard Babylonian version, but in a sequence that didn’t fit the structure of the story as it had been understood until then.

The fragment is from the scene where Shamhat seduces Enkidu and has sex with him for a week. Before 2018, scholars believed that the scene existed in both an Old Babylonian and a Standard Babylonian version, which gave slightly different accounts of the same episode: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to Uruk. The two scenes are not identical, but the differences could be explained as a result of the editorial changes that led from the Old Babylonian to the Standard Babylonian version. However, the new fragment challenges this interpretation. One side of the tablet overlaps with the Standard Babylonian version, the other with the Old Babylonian version. In short, the two scenes cannot be different versions of the same episode: the story included two very similar episodes, one after the other.

According to George, both the Old Babylonian and the Standard Babylonian versions ran thus: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to come to Uruk. The two of them then talk about Gilgamesh and his prophetic dreams. Then, it turns out, they have sex for another week, and Shamhat again invites Enkidu to Uruk.

Suddenly, Shamhat and Enkidu’s marathon of love had been doubled, a discovery that The Times publicised under the racy headline ‘Ancient Sex Saga Now Twice As Epic’. But in fact, there is a deeper significance to this discovery. The difference between the episodes can now be understood, not as editorial changes, but as psychological changes that Enkidu undergoes as he becomes human. The episodes represent two stages of the same narrative arc, giving us a surprising insight into what it meant to become human in the ancient world.

The first time that Shamhat invites Enkidu to Uruk, she describes Gilgamesh as a hero of great strength, comparing him to a wild bull. Enkidu replies that he will indeed come to Uruk, but not to befriend Gilgamesh: he will challenge him and usurp his power. Shamhat is dismayed, urging Enkidu to forget his plan, and instead describes the pleasures of city life: music, parties and beautiful women.

After they have sex for a second week, Shamhat invites Enkidu to Uruk again, but with a different emphasis. This time she dwells not on the king’s bullish strength, but on Uruk’s civic life: ‘Where men are engaged in labours of skill, you, too, like a true man, will make a place for yourself.’ Shamhat tells Enkidu that he is to integrate himself in society and find his place within a wider social fabric. Enkidu agrees: ‘the woman’s counsel struck home in his heart’.

It is clear that Enkidu has changed between the two scenes. The first week of sex might have given him the intellect to converse with Shamhat, but he still thinks in animal terms: he sees Gilgamesh as an alpha male to be challenged. After the second week, he has become ready to accept a different vision of society. Social life is not only about raw strength and assertions of power, but also about communal duties and responsibility.

Placed in this gradual development, Enkidu’s first reaction becomes all the more interesting, as a kind of intermediary step on the way to humanity. In a nutshell, what we see here is a Babylonian poet looking at society through Enkidu’s still-feral eyes. It is a not-fully-human perspective on city life, which is seen as a place of power and pride rather than skill and cooperation.

What does this tell us? We learn two main things. First, that humanity for the Babylonians was defined through society. To be human was a distinctly social affair. And not just any kind of society: it was the social life of cities that made you a ‘true man’. Babylonian culture was, at heart, an urban culture. Cities such as Uruk, Babylon or Ur were the building blocks of civilisation, and the world outside the city walls was seen as a dangerous and uncultured wasteland.

Second, we learn that humanity is a sliding scale. After a week of sex, Enkidu has not become fully human. There is an intermediary stage, where he speaks like a human but thinks like an animal. Even after the second week, he still has to learn how to eat bread, drink beer and put on clothes. In short, becoming human is a step-by-step process, not an either/or binary.

In her second invitation to Uruk, Shamhat says: ‘I look at you, Enkidu, you are like a god, why with the animals do you range through the wild?’ Gods are here depicted as the opposite of animals: they are omnipotent and immortal, whereas animals are oblivious and destined to die. To be human is to be placed somewhere in the middle: not omnipotent, but capable of skilled labour; not immortal, but aware of one’s mortality.

In short, the new fragment reveals a vision of humanity as a process of maturation that unfolds between the animal and the divine. One is not simply born human: to be human, for the ancient Babylonians, involved finding a place for oneself within a wider field defined by society, gods and the animal world.

Sophus Helle

This article was originally published at Aeon and has been republished under Creative Commons.