Between Gods and Animals: Becoming Human in the Gilgamesh Epic


A newly discovered, partially broken Tablet V of the Epic of Gilgamesh. The tablet dates to the Old Babylonian period, 2003-1595 BCE. From Mesopotamia, Iraq. The Sulaymaniyah Museum, Iraq. Photograph by Osama Shukir Muhammed Amin. Wikimedia.


Sophus Helle | Aeon Ideas

The Epic of Gilgamesh is a Babylonian poem composed in ancient Iraq, millennia before Homer. It tells the story of Gilgamesh, king of the city of Uruk. To curb his restless and destructive energy, the gods create a friend for him, Enkidu, who grows up among the animals of the steppe. When Gilgamesh hears about this wild man, he orders that a woman named Shamhat be brought out to find him. Shamhat seduces Enkidu, and the two make love for six days and seven nights, transforming Enkidu from beast to man. His strength is diminished, but his intellect is expanded, and he becomes able to think and speak like a human being. Shamhat and Enkidu travel together to a camp of shepherds, where Enkidu learns the ways of humanity. Eventually, Enkidu goes to Uruk to confront Gilgamesh’s abuse of power, and the two heroes wrestle with one another, only to form a passionate friendship.

This, at least, is one version of Gilgamesh’s beginning, but in fact the epic went through a number of different editions. It began as a cycle of stories in the Sumerian language, which were then collected and translated into a single epic in the Akkadian language. The earliest version of the epic was written in a dialect called Old Babylonian, and this version was later revised and updated to create another version, in the Standard Babylonian dialect, which is the one that most readers will encounter today.

Not only does Gilgamesh exist in a number of different versions, but each version is in turn made up of many different fragments. There is no single manuscript that carries the entire story from beginning to end. Rather, Gilgamesh has to be recreated from hundreds of clay tablets that have become fragmentary over millennia. The story comes to us as a tapestry of shards, pieced together by philologists to create a roughly coherent narrative (about four-fifths of the text has been recovered). The fragmentary state of the epic also means that it is constantly being updated, as archaeological excavations – or, all too often, illegal lootings – bring new tablets to light, making us reconsider our understanding of the text. Despite being more than 4,000 years old, the text remains in flux, changing and expanding with each new finding.

The newest discovery is a tiny fragment that had lain overlooked in the museum archive of Cornell University in New York, identified by Alexandra Kleinerman and Alhena Gadotti and published by Andrew George in 2018. At first, the fragment does not look like much: 16 broken lines, most of them already known from other manuscripts. But working on the text, George noticed something strange. The tablet seemed to preserve parts of both the Old Babylonian and the Standard Babylonian version, but in a sequence that didn’t fit the structure of the story as it had been understood until then.

The fragment is from the scene where Shamhat seduces Enkidu and has sex with him for a week. Before 2018, scholars believed that the scene existed in both an Old Babylonian and a Standard Babylonian version, which gave slightly different accounts of the same episode: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to Uruk. The two scenes are not identical, but the differences could be explained as a result of the editorial changes that led from the Old Babylonian to the Standard Babylonian version. However, the new fragment challenges this interpretation. One side of the tablet overlaps with the Standard Babylonian version, the other with the Old Babylonian version. In short, the two scenes cannot be different versions of the same episode: the story included two very similar episodes, one after the other.

According to George, both the Old Babylonian and the Standard Babylonian versions ran thus: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to come to Uruk. The two of them then talk about Gilgamesh and his prophetic dreams. Then, it turns out, they have sex for another week, and Shamhat again invites Enkidu to Uruk.

Suddenly, Shamhat and Enkidu’s marathon of love had been doubled, a discovery that The Times publicised under the racy headline ‘Ancient Sex Saga Now Twice As Epic’. But in fact, there is a deeper significance to this discovery. The difference between the episodes can now be understood, not as editorial changes, but as psychological changes that Enkidu undergoes as he becomes human. The episodes represent two stages of the same narrative arc, giving us a surprising insight into what it meant to become human in the ancient world.

The first time that Shamhat invites Enkidu to Uruk, she describes Gilgamesh as a hero of great strength, comparing him to a wild bull. Enkidu replies that he will indeed come to Uruk, but not to befriend Gilgamesh: he will challenge him and usurp his power. Shamhat is dismayed, urging Enkidu to forget his plan, and instead describes the pleasures of city life: music, parties and beautiful women.

After they have sex for a second week, Shamhat invites Enkidu to Uruk again, but with a different emphasis. This time she dwells not on the king’s bullish strength, but on Uruk’s civic life: ‘Where men are engaged in labours of skill, you, too, like a true man, will make a place for yourself.’ Shamhat tells Enkidu that he is to integrate himself in society and find his place within a wider social fabric. Enkidu agrees: ‘the woman’s counsel struck home in his heart’.

It is clear that Enkidu has changed between the two scenes. The first week of sex might have given him the intellect to converse with Shamhat, but he still thinks in animal terms: he sees Gilgamesh as an alpha male to be challenged. After the second week, he has become ready to accept a different vision of society. Social life is not just about raw strength and assertions of power, but also about communal duties and responsibility.

Placed in this gradual development, Enkidu’s first reaction becomes all the more interesting, as a kind of intermediary step on the way to humanity. In a nutshell, what we see here is a Babylonian poet looking at society through Enkidu’s still-feral eyes. It is a not-fully-human perspective on city life, which is seen as a place of power and pride rather than skill and cooperation.

What does this tell us? We learn two main things. First, that humanity for the Babylonians was defined through society. To be human was a distinctly social affair. And not just any kind of society: it was the social life of cities that made you a ‘true man’. Babylonian culture was, at heart, an urban culture. Cities such as Uruk, Babylon or Ur were the building blocks of civilisation, and the world outside the city walls was seen as a dangerous and uncultured wasteland.

Second, we learn that humanity is a sliding scale. After a week of sex, Enkidu has not become fully human. There is an intermediary stage, where he speaks like a human but thinks like an animal. Even after the second week, he still has to learn how to eat bread, drink beer and put on clothes. In short, becoming human is a step-by-step process, not an either/or binary.

In her second invitation to Uruk, Shamhat says: ‘I look at you, Enkidu, you are like a god, why with the animals do you range through the wild?’ Gods are here depicted as the opposite of animals: they are omnipotent and immortal, whereas animals are oblivious and destined to die. To be human is to be placed somewhere in the middle: not omnipotent, but capable of skilled labour; not immortal, but aware of one’s mortality.

In short, the new fragment reveals a vision of humanity as a process of maturation that unfolds between the animal and the divine. One is not simply born human: to be human, for the ancient Babylonians, involved finding a place for oneself within a wider field defined by society, gods and the animal world.

Sophus Helle

This article was originally published at Aeon and has been republished under Creative Commons.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out…

Read the rest at Handling Ideas, “a blog on (writing) philosophy”

Don’t let the rise of Europe steal World History



The first 10 volumes of The Harvard Classics, Wikipedia


Peter Frankopan | Aeon Ideas

The centre of a map tells you much, as does the choice of where to begin a story, or a history. Arab geographers used to place the Caspian Sea at the centre of world maps. On a medieval Turkish map, one that transfixed me long ago, we find the city of Balasaghun at the heart of the world. How to teach world history today is a question that is only going to grow more important.

Last summer in the United States, a debate flared when the College Board, which administers the influential Advanced Placement (AP) programme, announced that its world history course would begin in 1450. In practice, beginning world history in 1450 becomes a story about how Europeans came to dominate not one but all the continents, and excludes the origins of alphabets, agriculture, cities and civilisation. Before the 1400s, it was others who did the empire-building, drove sciences, medicine and philosophy, and sought to capitalise on and extend the trading networks that facilitated the flow and exchange of goods, ideas, faiths and people.

Under pressure, the AP College Board retreated. ‘We’ve received thoughtful, principled feedback from AP teachers, students and college faculty,’ said a statement. As a result, the start date for the course has been nudged back 250 years to 1200. Consequently, said the board, ‘teachers and students can begin the course with a study of the civilisations in Africa, the Americas and Asia that are foundational to the modern era’.

Where that leaves Plato and Aristotle, or ancient Greece and Rome, is unclear – but presumably none are ‘foundational to the modern era’. That in itself is strange given that so many of the most famous buildings of Washington, DC (for example) are designed in classical style to deliberately evoke the world of 2,000 years ago; or that Mark Zuckerberg, a posterboy for new technologies and the 21st century, cites the Emperor Augustus as his role model.

Gone too is China of the Han dynasty (206 BCE-220 CE) and the networks that linked the Pacific with the Indian Ocean and the Mediterranean 2,000 years ago, and that allow us to understand that Asia, Africa and Europe were connected many centuries prior in a world that was effectively ‘globalised’. No space for the Maya civilisation and culture in Central America or for the kingdom of Igodomigodo in West Africa, whose economic, cultural, military and political achievements have been discarded as irrelevant to the ‘modern era’. Who cares about the Indian emperor Ashoka, or the Chola dynasty of Southern India that spread eastwards into South East Asia in the 10th and 11th centuries? The connections between Scandinavia and Central Asia that helped to bring all of northern Europe out of what used to be called ‘the Dark Ages’ don’t get a look-in either. And too bad for climate change and the ways in which changes in global temperatures 1,500 years ago led to the collapse of cities, the dispersal of populations and the spread of pandemics.

History is at its most exciting and stimulating for students and teachers alike when there is scope to look at connectivity, to identify and work through deep rhythms and trends, and to explore the past by challenging assumptions that the story of the world can be charted through a linear progression – as the AP College Board seems to think with its statement linking 1200 with the ‘modern era’.

If you really want to see how foolish this view is – and how unfortunate it is to narrow down the scope of the World History course – then take a look at the front pages in just about any country in the world today. In China, news is dominated by the Belt and Road Initiative, the Chinese-led plan to regalvanise the ancient networks of the past into the modern-day Silk Roads: there are many and sharply divergent views about the aims, motivations and likely outcomes of the Belt and Road Initiative. This is far and away the single most important geopolitical development in the modern world today. Understanding why Beijing is trying to return to the glory years of the Silk Roads (which date back 2,000 years) would seem to be both interesting and important – and largely to be bypassed by the new World History scope.

We can look to the other end of Asia, to Istanbul, where every year hundreds of thousands of people take to the streets to commemorate the Battle of Manzikert, which was fought in 1071. It might be useful to know why. Assessing the relationship between Russia and Ukraine might also be of some value in a period when the former has annexed part of the territory of the latter. A major spat broke out last summer between the two countries over whether Anne of Kiev was Russian or Ukrainian. She died in 1075.

It does not take an expert to see the resonance of the 7th century across the Middle East – where fundamentalists attempted to build an ‘Islamic State’ based on their model of the early Muslim world, destroying not only lives and the region in the process, but deliberately destroying history itself in places such as Palmyra. It does, though, take an expert to work out why they are trying to turn back the clock 1,400 years and what their utopian world looks like. It matters because there are plenty of others who want to do the same thing: Imran Khan, the new Prime Minister of Pakistan, for example, has said that he wants to turn his country, with its population of almost 200 million people, into ‘an ideal welfare state’ on the model that Muhammad set in Medina in the 620s and 630s – a model that set up one of the world’s ‘greatest civilisations’.

Students taking world history courses that begin in 1200 will not learn about any of these topics, even though their peers in colleges and schools around the world will. Education should expand horizons and open minds. What a shame that, in this case, they are being narrowed and shuttered. And what a shame too that this is happening at a time of such profound global change – when understanding the depth of our interconnected world is more important than ever. That, for me anyway, is the most valuable conclusion that is ‘foundational to the modern era’.

Peter Frankopan

This article was originally published at Aeon and has been republished under Creative Commons.

The Empathetic Humanities have much to teach our Adversarial Culture



Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, in which he expressed this humane sentiment, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who cannot be present hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Why Amartya Sen Remains the Century’s Great Critic of Capitalism


Nobel laureate Amartya Kumar Sen in 2000, Wikipedia


Tim Rogan | Aeon Ideas

Critiques of capitalism come in two varieties. First, there is the moral or spiritual critique. This critique rejects Homo economicus as the organising heuristic of human affairs. Human beings, it says, need more than material things to prosper. Calculating power is only a small part of what makes us who we are. Moral and spiritual relationships are first-order concerns. Material fixes such as a universal basic income will make no difference to societies in which the basic relationships are felt to be unjust.

Then there is the material critique of capitalism. The economists who lead discussions of inequality now are its leading exponents. Homo economicus is the right starting point for social thought. We are poor calculators and single-minded, failing to see our advantage in the rational distribution of prosperity across societies. Hence inequality, the wages of ungoverned growth. But we are calculators all the same, and what we need above all is material plenty, thus the focus on the redress of material inequality. From good material outcomes, the rest follows.

The first kind of argument for capitalism’s reform seems recessive now. The material critique predominates. Ideas emerge in numbers and figures. Talk of non-material values in political economy is muted. The Christians and Marxists who once made the moral critique of capitalism their own are marginal. Utilitarianism grows ubiquitous and compulsory.

But then there is Amartya Sen.

Every major work on material inequality in the 21st century owes a debt to Sen. But his own writings treat material inequality as though the moral frameworks and social relationships that mediate economic exchanges matter. Famine is the nadir of material deprivation. But it seldom occurs – Sen argues – for lack of food. To understand why a people goes hungry, look not for catastrophic crop failure; look rather for malfunctions of the moral economy that moderates competing demands upon a scarce commodity. Material inequality of the most egregious kind is the problem here. But piecemeal modifications to the machinery of production and distribution will not solve it. The relationships between different members of the economy must be put right. Only then will there be enough to go around.

In Sen’s work, the two critiques of capitalism cooperate. We move from moral concerns to material outcomes and back again with no sense of a threshold separating the two. Sen disentangles moral and material issues without favouring one or the other, keeping both in focus. The separation between the two critiques of capitalism is real, but transcending the divide is possible, and not only at some esoteric remove. Sen’s is a singular mind, but his work has a widespread following, not least in provinces of modern life where the predominance of utilitarian thinking is most pronounced. In economics curricula and in the schools of public policy, in internationalist secretariats and in humanitarian NGOs, there too Sen has created a niche for thinking that crosses boundaries otherwise rigidly observed.

This was no feat of lonely genius or freakish charisma. It was an effort of ordinary human innovation, putting old ideas together in new combinations to tackle emerging problems. Formal training in economics, mathematics and moral philosophy supplied the tools Sen has used to construct his critical system. But the influence of Rabindranath Tagore sensitised Sen to the subtle interrelation between our moral lives and our material needs. And a profound historical sensibility has enabled him to see the sharp separation of the two domains as transient.

Tagore’s school at Santiniketan in West Bengal was Sen’s birthplace. Tagore’s pedagogy emphasised articulate relations between a person’s material and spiritual existences. Both were essential – biological necessity, self-creating freedom – but modern societies tended to confuse the proper relation between them. In Santiniketan, pupils played at unstructured exploration of the natural world between brief forays into the arts, learning to understand their sensory and spiritual selves as at once distinct and unified.

Sen left Santiniketan in the late 1940s as a young adult to study economics in Calcutta and Cambridge. The major contemporary controversy in economics was the theory of welfare, and debate was affected by Cold War contention between market- and state-based models of economic order. Sen’s sympathies were social democratic but anti-authoritarian. Welfare economists of the 1930s and 1940s sought to split the difference, insisting that states could legitimate programmes of redistribution by appeal to rigid utilitarian principles: a pound in a poor man’s pocket adds more to overall utility than the same pound in the rich man’s pile. Here was the material critique of capitalism in its infancy, and here is Sen’s response: maximising utility is not everyone’s abiding concern – saying so and then making policy accordingly is a form of tyranny – and in any case using government to move money around in pursuit of some notional optimum is a flawed means to that end.

Economic rationality harbours a hidden politics whose implementation damaged the moral economies that groups of people built up to govern their own lives, frustrating the achievement of its stated aims. In commercial societies, individuals pursue economic ends within agreed social and moral frameworks. The social and moral frameworks are neither superfluous nor inhibiting. They are the coefficients of durable growth.

Moral economies are not neutral, given, unvarying or universal. They are contested and evolving. Each person is more than a cold calculator of rational utility. Societies aren’t just engines of prosperity. The challenge is to make non-economic norms affecting market conduct legible, to bring the moral economies amid which market economies and administrative states function into focus. Thinking that bifurcates moral on the one hand and material on the other is inhibiting. But such thinking is not natural and inevitable, it is mutable and contingent – learned and apt to be unlearned.

Sen was not alone in seeing this. The American economist Kenneth Arrow was his most important interlocutor, connecting Sen in turn with the tradition of moral critique associated with R H Tawney and Karl Polanyi. Each was determined to re-integrate economics into frameworks of moral relationship and social choice. But Sen saw more clearly than any of them how this could be achieved. He realised that at earlier moments in modern political economy this separation of our moral lives from our material concerns had been inconceivable. Utilitarianism had blown in like a weather front around 1800, trailing extremes of moral fervour and calculating zeal in its wake. Sen sensed this climate of opinion changing, and set about cultivating once again the ameliorative ideas and approaches that its onset had eradicated.

There have been two critiques of capitalism, but there should be only one. Amartya Sen is the new century’s first great critic of capitalism because he has made that clear.

Tim Rogan

This article was originally published at Aeon and has been republished under Creative Commons.

How Al-Farabi drew on Plato to argue for censorship in Islam


Andrew Shiva / Wikipedia

Rashmee Roshan Lall | Aeon Ideas

You might not be familiar with the name Al-Farabi, a 10th-century thinker from Baghdad, but you know his work, or at least its results. Al-Farabi was, by all accounts, a man of steadfast Sufi persuasion and unvaryingly simple tastes. As a labourer in a Damascus vineyard before settling in Baghdad, he favoured a frugal diet of lambs’ hearts and water mixed with sweet basil juice. But in his political philosophy, Al-Farabi drew on a rich variety of Hellenic ideas, notably from Plato and Aristotle, adapting and extending them in order to respond to the flux of his times.

The situation in the mighty Abbasid empire in which Al-Farabi lived demanded a delicate balancing of conservatism with radical adaptation. Against the backdrop of growing dysfunction as the empire became a shrunken version of itself, Al-Farabi formulated a political philosophy conducive to civic virtue, justice, human happiness and social order.

But his real legacy might be the philosophical rationale that Al-Farabi provided for controlling creative expression in the Muslim world. In so doing, he completed the aniconic (or antirepresentational) project begun in the late seventh century by a caliph of the Umayyads, the first Muslim dynasty. Caliph Abd al-Malik did it with nonfigurative images on coins and calligraphic inscriptions on the Dome of the Rock in Jerusalem, the first monument of the new Muslim faith. This heralded Islamic art’s break from the Greco-Roman representational tradition. A few centuries later, Al-Farabi took the notion of creative control to new heights by arguing for restrictions on representation through the word. He did it using solidly Platonic concepts, and can justifiably be said to have helped concretise the way Islam understands and responds to creative expression.

Word portrayals of Islam and its prophet can be deemed sacrilegious just as much as representational art. The consequences of Al-Farabi’s rationalisation of representational taboos are apparent in our times. In 1989, Iran’s Ayatollah Khomeini issued a fatwa sentencing Salman Rushdie to death for writing The Satanic Verses (1988). The book outraged Muslims for its fictionalised account of Prophet Muhammad’s life. In 2001, the Taliban blew up the sixth-century Bamiyan Buddhas in Afghanistan. In 2005, controversy erupted over the publication by the Danish newspaper Jyllands-Posten of cartoons depicting the Prophet. The cartoons continued to ignite fury in some way or other for at least a decade. There were protests across the Middle East, attacks on Western embassies after several European papers reprinted the cartoons, and in 2008 Osama bin Laden issued an incendiary warning to Europe of ‘grave punishment’ for its ‘new Crusade’ against Islam. In 2015, the offices of Charlie Hebdo, a satirical magazine in Paris that habitually offended Muslim sensibilities, were attacked by armed gunmen who killed 12 people. The magazine had featured Michel Houellebecq’s novel Submission (2015), a futuristic vision of France under Islamic rule.

In a sense, the destruction of the Bamiyan Buddhas was no different from the Rushdie fatwa, which was like the Danish cartoons fallout and the violence wreaked on Charlie Hebdo’s editorial staff. All are linked by the desire to control representation, be it through imagery or the word.

Control of the word was something that Al-Farabi appeared to judge necessary if Islam’s biggest project – the multiethnic commonwealth that was the Abbasid empire – was to be preserved. Figural representation was pretty much settled as an issue for Muslims when Al-Farabi would have been pondering some of his key theories. Within 30 years of the Prophet’s death in 632, art and creative expression took two parallel paths depending on the context for which they were intended. There was art for the secular space, such as the palaces and bathhouses of the Umayyads (661-750). And there was the art considered appropriate for religious spaces – mosques and shrines such as the Dome of the Rock (completed in 691). Caliph Abd al-Malik had already engaged in what has been called a ‘polemic of images’ on coinage with his Byzantine counterpart, Emperor Justinian II. Ultimately, Abd al-Malik issued coins inscribed with the phrases ‘ruler of the orthodox’ and ‘representative [caliph] of Allah’ rather than his portrait. And the Dome of the Rock had script rather than representations of living creatures as a decoration. The lack of image had become an image. In fact, the word was now the image. That is why calligraphy became the greatest of Muslim art forms. The importance of the written word – its absorption and its meaning – was also exemplified by the Abbasids’ investment in the Greek-to-Arabic translation movement from the eighth to the 10th centuries.

Consequently, in Al-Farabi’s time, what was most important for Muslims was to control representation through the word. Christian iconophiles made their case for devotional images with the argument that words have the same representative power as paintings. Words are like icons, declared the iconophile Christian priest Theodore Abu Qurrah, who lived in dar al-Islam and wrote in Arabic in the ninth century. And images, he said, are the writing of the illiterate.

Al-Farabi was concerned about the power – for good or ill – of writings at a time when the Abbasid empire was in decline. He held creative individuals responsible for what they produced. Abbasid caliphs increasingly faced a crisis of authority, both moral and political. This led Al-Farabi – one of the Arab world’s most original thinkers – to extrapolate from topical temporal matters the key issues confronting Islam and its expanding and diverse dominions.

Al-Farabi fashioned a political philosophy that naturalised Plato’s imaginary ideal state for the world to which he belonged. He tackled the obvious issue of leadership, reminding Muslim readers of the need for a philosopher-king, a ‘virtuous ruler’ to preside over a ‘virtuous city’, which would be run on the principles of ‘virtuous religion’.

Like Plato, Al-Farabi suggested creative expression should support the ideal ruler, thus shoring up the virtuous city and the status quo. Just as Plato in the Republic demanded that poets in the ideal state tell stories of unvarying good, especially about the gods, Al-Farabi’s treatises mention ‘praiseworthy’ poems, melodies and songs for the virtuous city. Al-Farabi commended as ‘most venerable’ for the virtuous city the sorts of writing ‘used in the service of the supreme ruler and the virtuous king.’

It is this idea of writers following the approved narrative that most clearly joins Al-Farabi’s political philosophy to that of the man he called Plato the ‘Divine’. When Al-Farabi seized on Plato’s argument for ‘a censorship of the writers’ as a social good for Muslim society, he was making a case for managing the narrative by controlling the word. It would be important to the next phase of Islamic image-building.

Some of Al-Farabi’s ideas might have influenced other prominent Muslim thinkers, including the Persian polymath Ibn Sina, or Avicenna (c980-1037), and the Persian theologian Al-Ghazali (c1058-1111). Certainly, his rationalisation for controlling creative writing enabled a further move to deny legitimacy to new interpretation.

Rashmee Roshan Lall

This article was originally published at Aeon and has been republished under Creative Commons.

What Einstein Meant by ‘God Does Not Play Dice’

Einstein with his second wife Elsa, 1921. Wikipedia.

Jim Baggott | Aeon Ideas

‘The theory produces a good deal but hardly brings us closer to the secret of the Old One,’ wrote Albert Einstein in December 1926. ‘I am at all events convinced that He does not play dice.’

Einstein was responding to a letter from the German physicist Max Born. The heart of the new theory of quantum mechanics, Born had argued, beats randomly and uncertainly, as though suffering from arrhythmia. Whereas physics before the quantum had always been about doing this and getting that, the new quantum mechanics appeared to say that when we do this, we get that only with a certain probability. And in some circumstances we might get the other.

Einstein was having none of it, and his insistence that God does not play dice with the Universe has echoed down the decades, as familiar and yet as elusive in its meaning as E = mc2. What did Einstein mean by it? And how did Einstein conceive of God?

Hermann and Pauline Einstein were nonobservant Ashkenazi Jews. Despite his parents’ secularism, the nine-year-old Albert discovered and embraced Judaism with some considerable passion, and for a time he was a dutiful, observant Jew. Following Jewish custom, his parents would invite a poor scholar to share a meal with them each week, and from the impoverished medical student Max Talmud (later Talmey) the young and impressionable Einstein learned about mathematics and science. He consumed all 21 volumes of Aaron Bernstein’s joyful Popular Books on Natural Science (1880). Talmud then steered him in the direction of Immanuel Kant’s Critique of Pure Reason (1781), from which he migrated to the philosophy of David Hume. From Hume, it was a relatively short step to the Austrian physicist Ernst Mach, whose stridently empiricist, seeing-is-believing brand of philosophy demanded a complete rejection of metaphysics, including notions of absolute space and time, and the existence of atoms.

But this intellectual journey had mercilessly exposed the conflict between science and scripture. The now 12-year-old Einstein rebelled. He developed a deep aversion to the dogma of organised religion that would last for his lifetime, an aversion that extended to all forms of authoritarianism, including any kind of dogmatic atheism.

This youthful, heavy diet of empiricist philosophy would serve Einstein well some 14 years later. Mach’s rejection of absolute space and time helped to shape Einstein’s special theory of relativity (including the iconic equation E = mc2), which he formulated in 1905 while working as a ‘technical expert, third class’ at the Swiss Patent Office in Bern. Ten years later, Einstein would complete the transformation of our understanding of space and time with the formulation of his general theory of relativity, in which the force of gravity is replaced by curved spacetime. But as he grew older (and wiser), he came to reject Mach’s aggressive empiricism, and once declared that ‘Mach was as good at mechanics as he was wretched at philosophy.’

Over time, Einstein evolved a much more realist position. He preferred to accept the content of a scientific theory realistically, as a contingently ‘true’ representation of an objective physical reality. And, although he wanted no part of religion, the belief in God that he had carried with him from his brief flirtation with Judaism became the foundation on which he constructed his philosophy. When asked about the basis for his realist stance, he explained: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, at least to some extent, to human reason.’

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

Earlier in 1926, the Austrian physicist Erwin Schrödinger had radically transformed the theory by formulating it in terms of rather obscure ‘wavefunctions’. Schrödinger himself preferred to interpret these realistically, as descriptive of ‘matter waves’. But a consensus was growing, strongly promoted by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, that the new quantum representation shouldn’t be taken too literally.

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wavefunction represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wavefunction is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’. He could not accept that his God would allow the ‘lawful harmony’ to unravel so completely at the atomic scale, bringing lawless indeterminism and uncertainty, with effects that can’t be entirely and unambiguously predicted from their causes.

The stage was thus set for one of the most remarkable debates in the entire history of science, as Bohr and Einstein went head-to-head on the interpretation of quantum mechanics. It was a clash of two philosophies, two conflicting sets of metaphysical preconceptions about the nature of reality and what we might expect from a scientific representation of this. The debate began in 1927, and although the protagonists are no longer with us, the debate is still very much alive.

And unresolved.

I don’t think Einstein would have been particularly surprised by this. In February 1954, just 14 months before he died, he wrote in a letter to the American physicist David Bohm: ‘If God created the world, his primary concern was certainly not to make its understanding easy for us.’


Jim Baggott

This article was originally published at Aeon and has been republished under Creative Commons.

Interview with Simone de Beauvoir (1959)

Simone de Beauvoir was a French writer, intellectual, existentialist philosopher, political activist, feminist and social theorist. Though she did not consider herself a philosopher, she had a significant influence on both feminist existentialism and feminist theory.

De Beauvoir wrote novels, essays, biographies, autobiography and monographs on philosophy, politics and social issues. She was known for her 1949 treatise The Second Sex, a detailed analysis of women’s oppression and a foundational tract of contemporary feminism; and for her novels, including She Came to Stay and The Mandarins. She was also known for her lifelong relationship with French philosopher Jean-Paul Sartre.


You may find two of de Beauvoir’s works, namely, The Second Sex (PDF) and The Ethics of Ambiguity (PDF), in the Political & Cultural and 20th-Century Philosophy sections of the Bookshelf.

How Camus and Sartre Split Up Over the Question of How to be Free


Albert Camus by Cecil Beaton for Vogue in 1946. Photo by Getty

Sam Dresser | Aeon Ideas

They were an odd pair. Albert Camus was French Algerian, a pied-noir born into poverty who effortlessly charmed with his Bogart-esque features. Jean-Paul Sartre, from the upper reaches of French society, was never mistaken for a handsome man. They met in Paris during the Occupation and grew closer after the Second World War. In those days, when the lights of the city were slowly turning back on, Camus was Sartre’s closest friend. ‘How we loved you then,’ Sartre later wrote.

They were gleaming icons of the era. Newspapers reported on their daily movements: Sartre holed up at Les Deux Magots, Camus the peripatetic of Paris. As the city began to rebuild, Sartre and Camus gave voice to the mood of the day. Europe had been immolated, but the ashes left by war created the space to imagine a new world. Readers looked to Sartre and Camus to articulate what that new world might look like. ‘We were,’ remembered the fellow philosopher Simone de Beauvoir, ‘to provide the postwar era with its ideology.’

It came in the form of existentialism. Sartre, Camus and their intellectual companions rejected religion, staged new and unnerving plays, challenged readers to live authentically, and wrote about the absurdity of the world – a world without purpose and without value. ‘[There are] only stones, flesh, stars, and those truths the hand can touch,’ Camus wrote. We must choose to live in this world and to project our own meaning and value onto it in order to make sense of it. This means that people are free and burdened by it, since with freedom there is a terrible, even debilitating, responsibility to live and act authentically.

If the idea of freedom bound Camus and Sartre philosophically, then the fight for justice united them politically. They were committed to confronting and curing injustice, and, in their eyes, no group of people was more unjustly treated than the workers, the proletariat. Camus and Sartre thought of them as shackled to their labour and shorn of their humanity. In order to free them, new political systems must be constructed.

In October 1951, Camus published The Rebel. In it, he gave voice to a roughly drawn ‘philosophy of revolt’. This wasn’t a philosophical system per se, but an amalgamation of philosophical and political ideas: every human is free, but freedom itself is relative; one must embrace limits, moderation, ‘calculated risk’; absolutes are anti-human. Most of all, Camus condemned revolutionary violence. Violence might be used in extreme circumstances (he supported the French war effort, after all) but the use of revolutionary violence to nudge history in the direction you desire is utopian, absolutist, and a betrayal of yourself.

‘Absolute freedom is the right of the strongest to dominate,’ Camus wrote, while ‘absolute justice is achieved by the suppression of all contradiction: therefore it destroys freedom.’ The conflict between justice and freedom required constant re-balancing, political moderation, an acceptance and celebration of that which limits the most: our humanity. ‘To live and let live,’ he said, ‘in order to create what we are.’

Sartre read The Rebel with disgust. As far as he was concerned, it was possible to achieve perfect justice and freedom – that described the achievement of communism. Under capitalism, and in poverty, workers could not be free. Their options were unpalatable and inhumane: to work a pitiless and alienating job, or to die. But by removing the oppressors and broadly returning autonomy to the workers, communism allows each individual to live without material want, and therefore to choose how best they can realise themselves. This makes them free, and through this unbending equality, it is also just.

The problem is that, for Sartre and many others on the Left, communism required revolutionary violence to achieve because the existing order must be smashed. Not all leftists, of course, endorsed such violence. This division between hardline and moderate leftists – broadly, between communists and socialists – was nothing new. The 1930s and early ’40s, however, had seen the Left temporarily united against fascism. With the destruction of fascism, the rupture between hardline leftists willing to condone violence and moderates who condemned it returned. This split was made all the more dramatic by the practical disappearance of the Right and the ascendancy of the Soviet Union – which empowered hardliners throughout Europe, but raised disquieting questions for communists as the horrors of gulags, terror and show trials came to light. The question for every leftist of the postwar era was simple: which side are you on?

With the publication of The Rebel, Camus declared for a peaceful socialism that would not resort to revolutionary violence. He was appalled by the stories emerging from the USSR: it was not a country of hand-in-hand communists, living freely, but a country with no freedom at all. Sartre, meanwhile, would fight for communism, and he was prepared to endorse violence to do so.

The split between the two friends was a media sensation. Les Temps Modernes – the journal edited by Sartre, which published a critical review of The Rebel – sold out three times over. Le Monde and L’Observateur both breathlessly covered the falling out. It’s hard to imagine an intellectual feud capturing that degree of public attention today, but, in this disagreement, many readers saw the political crises of the times reflected back at them. It was a way of seeing politics played out in the world of ideas, and a measure of the worth of ideas. If you are thoroughly committed to an idea, are you compelled to kill for it? What price for justice? What price for freedom?

Sartre’s position was shot through with contradiction, with which he struggled for the remainder of his life. Sartre, the existentialist, who said that humans are condemned to be free, was also Sartre, the Marxist, who thought that history does not allow much space for true freedom in the existential sense. Though he never actually joined the French Communist Party, he would continue to defend communism throughout Europe until 1956, when the Soviet tanks in Budapest convinced him, finally, that the USSR did not hold the way forward. (Indeed, he was dismayed by the Soviets in Hungary because they were acting like Americans, he said.) Sartre would remain a powerful voice on the Left throughout his life, and chose the French president Charles de Gaulle as his favourite whipping boy. (After one particularly vicious attack, de Gaulle was asked to arrest Sartre. ‘One does not imprison Voltaire,’ he responded.) Sartre remained unpredictable, however, and was engaged in a long, bizarre dalliance with hardline Maoism when he died in 1980. Though Sartre moved away from the USSR, he never completely abandoned the idea that revolutionary violence might be warranted.

Philosophy Feud: Sartre vs Camus from Aeon Video on Vimeo

The violence of communism sent Camus on a different trajectory. ‘Finally,’ he wrote in The Rebel, ‘I choose freedom. For even if justice is not realised, freedom maintains the power of protest against injustice and keeps communication open.’ From the other side of the Cold War, it is hard not to sympathise with Camus, and to wonder at the fervour with which Sartre remained a loyal communist. Camus’s embrace of sober political reality, of moral humility, of limits and fallible humanity, remains a message well-heeded today. Even the most venerable and worthy ideas need to be balanced against one another. Absolutism, and the impossible idealism it inspires, is a dangerous path forward – and the reason Europe lay in ashes, as Camus and Sartre struggled to envision a fairer and freer world.

Sam Dresser

This article was originally published at Aeon and has been republished under Creative Commons.

Pragmatism & Postmodernism


To my best belief: just what is the pragmatic theory of truth?


Cheryl Misak | Aeon Ideas

What is it for something to be true? One might think that the answer is obvious. A true belief gets reality right: our words correspond to objects and relations in the world. But making sense of that idea involves one in ever more difficult workarounds to intractable problems. For instance, how do we account for the statement ‘It did not rain in Toronto on 20 May 2018’? There don’t seem to be negative facts in the world that might correspond to the belief. What about ‘Every human is mortal’? There are more humans – past, present and future – than the individual people now existing in the world. (That is, a generalisation like ‘All Fs’ goes beyond the existing world of Fs, because ‘All Fs’ stretches into the future.) What about ‘Torture is wrong’? What are the objects in the world that might correspond to that? And what good is it explaining truth in terms of independently existing objects and facts, since we have access only to our interpretations of them?

Pragmatism can help us with some of these issues. The 19th-century American philosopher Charles Peirce, one of the founders of pragmatism, explained the core of this tradition beautifully: ‘We must not begin by talking of pure ideas, – vagabond thoughts that tramp the public roads without any human habitation, – but must begin with men and their conversation.’ Truth is a property of our beliefs. It is what we aim at, and is essentially connected to our practices of enquiry, action and evaluation. Truth, in other words, is the best that we could do.

The pragmatic theory of truth arose in Cambridge, Massachusetts in the 1870s, in a discussion group that included Peirce and William James. They called themselves the Metaphysical Club, with intentional irony. Though they shared the same broad outlook on truth, there was immediate disagreement about how to unpack the idea of the ‘best belief’. The debate stemmed from the different temperaments of Peirce and James.

Philosophy, James said, ‘is at once the most sublime and the most trivial of human pursuits. It works in the minutest crannies and it opens out the widest vistas.’ He was more a vista than a crannies man, dead set against technical philosophy. At the beginning of his book Pragmatism (1907), he said: ‘the philosophy which is so important to each of us is not a technical matter; it is our more or less dumb sense of what life honestly and deeply means.’ He wanted to write accessible philosophy for the public, and did so admirably. He became the most famous living academic in the United States.

The version of the pragmatist theory of truth made famous (or perhaps infamous) by James held that ‘Any idea upon which we can ride … any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, simplifying, saving labour, is … true INSTRUMENTALLY.’

‘Satisfactorily’ for James meant ‘more satisfactorily to ourselves, and individuals will emphasise their points of satisfaction differently. To a certain degree, therefore, everything here is plastic.’ He argued that if the available evidence underdetermines a matter, and if there are non-epistemic reasons for believing something (my people have always believed it, believing it would make me happier), then it is rational to believe it. He argued that if a belief in God has a positive impact on someone’s life, then it is true for that person. If it does not have a good impact on someone else’s life, it is not true for them.

Peirce, a crackerjack logician, was perfectly happy working in the crannies as well as opening out the vistas. He wrote much, but published little. A cantankerous man, Peirce described the difference in personality with his friend James thus: ‘He so concrete, so living; I a mere table of contents, so abstract, a very snarl of twine.’

Peirce said that James’s version of the pragmatic theory of truth was ‘a very exaggerated utterance, such as injures a serious man very much’. It amounted to: ‘Oh, I could not believe so-and-so, because I should be wretched if I did.’ Peirce’s worries, in these days of fake news, are more pressing than ever.

On Peirce’s account, a belief is true if it would be ‘indefeasible’ or would not in the end be defeated by reasons, argument, evidence and the actions that ensue from it. A true belief is the belief that we would come to, were we to enquire as far as we could on a matter. He added an important rider: a true belief must be put in place in a manner ‘not extraneous to the facts’. We cannot believe something because we would like it to be true. The brute impinging of experience cannot be ignored.

The disagreement continues to this day. James influenced John Dewey (who, when a student at Johns Hopkins, avoided Peirce and his technical philosophy like the plague) and later Richard Rorty. Dewey argued that truth (although he tended to stay away from the word) is nothing more than a resolution of a problematic situation. Rorty, at his most extreme, held that truth is nothing more than what our peers will let us get away with saying. This radically subjective or plastic theory of truth is what is usually thought of as pragmatism.

Peirce, however, managed to influence a few people himself, despite being virtually unknown in his lifetime. One was the Harvard logician and Kant scholar C I Lewis. He argued for a position remarkably similar to what his student W V O Quine would take over (and fail to acknowledge as Lewis’s). Reality cannot be ‘alien’, wrote Lewis – ‘the only reality there for us is one delimited in concepts of the results of our own ways of acting’. We have something given to us in brute experience, which we then interpret. With all pragmatists, Lewis was set against conceptions of truth in which ‘the mind approaches the flux of immediacy with some godlike foreknowledge of principles’. There is no ‘natural light’, no ‘self-illuminating propositions’, no ‘innate ideas’ from which other certainties can be deduced. Our body of knowledge is a pyramid, with the most general beliefs, such as the laws of logic, at the top, and the least general, such as ‘all swans are birds’, at the bottom. When faced with recalcitrant experience, we make adjustments in this complex system of interrelated concepts. ‘The higher up a concept stands in our pyramid, the more reluctant we are to disturb it, because the more radical and far-reaching the results will be…’ But all beliefs are fallible, and we can indeed disturb any of them. A true belief would be one that survives this process of enquiry.

Lewis saw that the pragmatist theory of truth deals nicely with those beliefs that the correspondence theory stumbles over. For instance, there is no automatic bar to ethical beliefs being true. Beliefs about what is right and wrong might well be evaluable in ways similar to how other kinds of beliefs are evaluable – in terms of whether they fit with experience and survive scrutiny.

Cheryl Misak

This article was originally published at Aeon and has been republished under Creative Commons.