The Empathetic Humanities have much to teach our Adversarial Culture

Books


Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, in which he made this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be with us hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Why Amartya Sen Remains the Century’s Great Critic of Capitalism


Nobel laureate Amartya Kumar Sen in 2000, Wikipedia


Tim Rogan | Aeon Ideas

Critiques of capitalism come in two varieties. First, there is the moral or spiritual critique. This critique rejects Homo economicus as the organising heuristic of human affairs. Human beings, it says, need more than material things to prosper. Calculating power is only a small part of what makes us who we are. Moral and spiritual relationships are first-order concerns. Material fixes such as a universal basic income will make no difference to societies in which the basic relationships are felt to be unjust.

Then there is the material critique of capitalism. The economists who lead discussions of inequality now are its leading exponents. Homo economicus is the right starting point for social thought. We are poor calculators and single-minded, failing to see our advantage in the rational distribution of prosperity across societies. Hence inequality, the wages of ungoverned growth. But we are calculators all the same, and what we need above all is material plenty, thus the focus on the redress of material inequality. From good material outcomes, the rest follows.

The first kind of argument for capitalism’s reform seems recessive now. The material critique predominates. Ideas emerge in numbers and figures. Talk of non-material values in political economy is muted. The Christians and Marxists who once made the moral critique of capitalism their own are marginal. Utilitarianism grows ubiquitous and compulsory.

But then there is Amartya Sen.

Every major work on material inequality in the 21st century owes a debt to Sen. But his own writings treat material inequality as though the moral frameworks and social relationships that mediate economic exchanges matter. Famine is the nadir of material deprivation. But it seldom occurs – Sen argues – for lack of food. To understand why a people goes hungry, look not for catastrophic crop failure; look rather for malfunctions of the moral economy that moderates competing demands upon a scarce commodity. Material inequality of the most egregious kind is the problem here. But piecemeal modifications to the machinery of production and distribution will not solve it. The relationships between different members of the economy must be put right. Only then will there be enough to go around.

In Sen’s work, the two critiques of capitalism cooperate. We move from moral concerns to material outcomes and back again with no sense of a threshold separating the two. Sen disentangles moral and material issues without favouring one or the other, keeping both in focus. The separation between the two critiques of capitalism is real, but transcending the divide is possible, and not only at some esoteric remove. Sen’s is a singular mind, but his work has a widespread following, not least in provinces of modern life where the predominance of utilitarian thinking is most pronounced. In economics curricula and in the schools of public policy, in internationalist secretariats and in humanitarian NGOs, there too Sen has created a niche for thinking that crosses boundaries otherwise rigidly observed.

This was no feat of lonely genius or freakish charisma. It was an effort of ordinary human innovation, putting old ideas together in new combinations to tackle emerging problems. Formal training in economics, mathematics and moral philosophy supplied the tools Sen has used to construct his critical system. But the influence of Rabindranath Tagore sensitised Sen to the subtle interrelation between our moral lives and our material needs. And a profound historical sensibility has enabled him to see the sharp separation of the two domains as transient.

Tagore’s school at Santiniketan in West Bengal was Sen’s birthplace. Tagore’s pedagogy emphasised articulate relations between a person’s material and spiritual existences. Both were essential – biological necessity, self-creating freedom – but modern societies tended to confuse the proper relation between them. In Santiniketan, pupils played at unstructured exploration of the natural world between brief forays into the arts, learning to understand their sensory and spiritual selves as at once distinct and unified.

Sen left Santiniketan in the late 1940s as a young adult to study economics in Calcutta and Cambridge. The major contemporary controversy in economics was the theory of welfare, and debate was affected by Cold War contention between market- and state-based models of economic order. Sen’s sympathies were social democratic but anti-authoritarian. Welfare economists of the 1930s and 1940s sought to split the difference, insisting that states could legitimate programmes of redistribution by appeal to rigid utilitarian principles: a pound in a poor man’s pocket adds more to overall utility than the same pound in the rich man’s pile. Here was the material critique of capitalism in its infancy, and here is Sen’s response: maximising utility is not everyone’s abiding concern – saying so and then making policy accordingly is a form of tyranny – and in any case using government to move money around in pursuit of some notional optimum is a flawed means to that end.

Economic rationality harbours a hidden politics whose implementation damaged the moral economies that groups of people built up to govern their own lives, frustrating the achievement of its stated aims. In commercial societies, individuals pursue economic ends within agreed social and moral frameworks. The social and moral frameworks are neither superfluous nor inhibiting. They are the coefficients of durable growth.

Moral economies are not neutral, given, unvarying or universal. They are contested and evolving. Each person is more than a cold calculator of rational utility. Societies aren’t just engines of prosperity. The challenge is to make non-economic norms affecting market conduct legible, to bring the moral economies amid which market economies and administrative states function into focus. Thinking that bifurcates the moral on the one hand and the material on the other is inhibiting. But such thinking is not natural or inevitable; it is mutable and contingent – learned and apt to be unlearned.

Sen was not alone in seeing this. The American economist Kenneth Arrow was his most important interlocutor, connecting Sen in turn with the tradition of moral critique associated with R H Tawney and Karl Polanyi. Each was determined to re-integrate economics into frameworks of moral relationship and social choice. But Sen saw more clearly than any of them how this could be achieved. He realised that at earlier moments in modern political economy this separation of our moral lives from our material concerns had been inconceivable. Utilitarianism had blown in like a weather front around 1800, trailing extremes of moral fervour and calculating zeal in its wake. Sen sensed this climate of opinion changing, and set about cultivating once again the ameliorative ideas and approaches that its onset had eradicated.

There have been two critiques of capitalism, but there should be only one. Amartya Sen is the new century’s first great critic of capitalism because he has made that clear.

Tim Rogan

This article was originally published at Aeon and has been republished under Creative Commons.

How Al-Farabi drew on Plato to argue for censorship in Islam

The Dome of the Rock, Jerusalem. Photo by Andrew Shiva / Wikipedia

Rashmee Roshan Lall | Aeon Ideas

You might not be familiar with the name Al-Farabi, a 10th-century thinker from Baghdad, but you know his work, or at least its results. Al-Farabi was, by all accounts, a man of steadfast Sufi persuasion and unvaryingly simple tastes. As a labourer in a Damascus vineyard before settling in Baghdad, he favoured a frugal diet of lambs’ hearts and water mixed with sweet basil juice. But in his political philosophy, Al-Farabi drew on a rich variety of Hellenic ideas, notably from Plato and Aristotle, adapting and extending them in order to respond to the flux of his times.

The situation in the mighty Abbasid empire in which Al-Farabi lived demanded a delicate balancing of conservatism with radical adaptation. Against the backdrop of growing dysfunction as the empire became a shrunken version of itself, Al-Farabi formulated a political philosophy conducive to civic virtue, justice, human happiness and social order.

But his real legacy might be the philosophical rationale that Al-Farabi provided for controlling creative expression in the Muslim world. In so doing, he completed the aniconic (or antirepresentational) project begun in the late seventh century by a caliph of the Umayyads, the first Muslim dynasty. Caliph Abd al-Malik did it with nonfigurative images on coins and calligraphic inscriptions on the Dome of the Rock in Jerusalem, the first monument of the new Muslim faith. This heralded Islamic art’s break from the Greco-Roman representational tradition. A few centuries later, Al-Farabi took the notion of creative control to new heights by arguing for restrictions on representation through the word. He did it using solidly Platonic concepts, and can justifiably be said to have helped concretise the way Islam understands and responds to creative expression.

Word portrayals of Islam and its prophet can be deemed sacrilegious just as much as representational art. The consequences of Al-Farabi’s rationalisation of representational taboos are apparent in our times. In 1989, Iran’s Ayatollah Khomeini issued a fatwa sentencing Salman Rushdie to death for writing The Satanic Verses (1988). The book outraged Muslims for its fictionalised account of Prophet Muhammad’s life. In 2001, the Taliban blew up the sixth-century Bamiyan Buddhas in Afghanistan. In 2005, controversy erupted over the publication by the Danish newspaper Jyllands-Posten of cartoons depicting the Prophet. The cartoons continued to ignite fury in some way or other for at least a decade. There were protests across the Middle East, attacks on Western embassies after several European papers reprinted the cartoons, and in 2008 Osama bin Laden issued an incendiary warning to Europe of ‘grave punishment’ for its ‘new Crusade’ against Islam. In 2015, the offices of Charlie Hebdo, a satirical magazine in Paris that habitually offended Muslim sensibilities, were attacked by armed gunmen who killed 12 people. The magazine had featured Michel Houellebecq’s novel Submission (2015), a futuristic vision of France under Islamic rule.

In a sense, the destruction of the Bamiyan Buddhas was no different from the Rushdie fatwa, which was like the Danish cartoons fallout and the violence wreaked on Charlie Hebdo’s editorial staff. All are linked by the desire to control representation, be it through imagery or the word.

Control of the word was something that Al-Farabi appeared to judge necessary if Islam’s biggest project – the multiethnic commonwealth that was the Abbasid empire – was to be preserved. Figural representation was pretty much settled as an issue for Muslims when Al-Farabi would have been pondering some of his key theories. Within 30 years of the Prophet’s death in 632, art and creative expression took two parallel paths depending on the context for which they were intended. There was art for the secular space, such as the palaces and bathhouses of the Umayyads (661-750). And there was the art considered appropriate for religious spaces – mosques and shrines such as the Dome of the Rock (completed in 691). Caliph Abd al-Malik had already engaged in what has been called a ‘polemic of images’ on coinage with his Byzantine counterpart, Emperor Justinian II. Ultimately, Abd al-Malik issued coins inscribed with the phrases ‘ruler of the orthodox’ and ‘representative [caliph] of Allah’ rather than his portrait. And the Dome of the Rock had script rather than representations of living creatures as decoration. The lack of image had become an image. In fact, the word was now the image. That is why calligraphy became the greatest of Muslim art forms. The importance of the written word – its absorption and its meaning – was also exemplified by the Abbasids’ investment in the Greek-to-Arabic translation movement from the eighth to the 10th centuries.

Consequently, in Al-Farabi’s time, what was most important for Muslims was to control representation through the word. Christian iconophiles made their case for devotional images with the argument that words have the same representative power as paintings. Words are like icons, declared the iconophile Christian priest Theodore Abu Qurrah, who lived in dar-al Islam and wrote in Arabic in the ninth century. And images, he said, are the writing of the illiterate.

Al-Farabi was concerned about the power – for good or ill – of writings at a time when the Abbasid empire was in decline. He held creative individuals responsible for what they produced. Abbasid caliphs increasingly faced a crisis of authority, both moral and political. This led Al-Farabi – one of the Arab world’s most original thinkers – to extrapolate from topical temporal matters the key issues confronting Islam and its expanding and diverse dominions.

Al-Farabi fashioned a political philosophy that naturalised Plato’s imaginary ideal state for the world to which he belonged. He tackled the obvious issue of leadership, reminding Muslim readers of the need for a philosopher-king, a ‘virtuous ruler’ to preside over a ‘virtuous city’, which would be run on the principles of ‘virtuous religion’.

Like Plato, Al-Farabi suggested creative expression should support the ideal ruler, thus shoring up the virtuous city and the status quo. Just as Plato in the Republic demanded that poets in the ideal state tell stories of unvarying good, especially about the gods, Al-Farabi’s treatises mention ‘praiseworthy’ poems, melodies and songs for the virtuous city. Al-Farabi commended as ‘most venerable’ for the virtuous city the sorts of writing ‘used in the service of the supreme ruler and the virtuous king.’

It is this idea of writers following the approved narrative that most clearly joins Al-Farabi’s political philosophy to that of the man he called Plato the ‘Divine’. When Al-Farabi seized on Plato’s argument for ‘a censorship of the writers’ as a social good for Muslim society, he was making a case for managing the narrative by controlling the word. It would be important to the next phase of Islamic image-building.

Some of Al-Farabi’s ideas might have influenced other prominent Muslim thinkers, including the Persian polymath Ibn Sina, or Avicenna (c980-1037), and the Persian theologian Al-Ghazali (c1058-1111). Certainly, his rationalisation for controlling creative writing enabled a further move to deny legitimacy to new interpretation.

Rashmee Roshan Lall

This article was originally published at Aeon and has been republished under Creative Commons.

What Einstein Meant by ‘God Does Not Play Dice’

Einstein with his second wife Elsa, 1921. Wikipedia.

Jim Baggott | Aeon Ideas

‘The theory produces a good deal but hardly brings us closer to the secret of the Old One,’ wrote Albert Einstein in December 1926. ‘I am at all events convinced that He does not play dice.’

Einstein was responding to a letter from the German physicist Max Born. The heart of the new theory of quantum mechanics, Born had argued, beats randomly and uncertainly, as though suffering from arrhythmia. Whereas physics before the quantum had always been about doing this and getting that, the new quantum mechanics appeared to say that when we do this, we get that only with a certain probability. And in some circumstances we might get the other.

Einstein was having none of it, and his insistence that God does not play dice with the Universe has echoed down the decades, as familiar and yet as elusive in its meaning as E = mc². What did Einstein mean by it? And how did Einstein conceive of God?

Hermann and Pauline Einstein were nonobservant Ashkenazi Jews. Despite his parents’ secularism, the nine-year-old Albert discovered and embraced Judaism with some considerable passion, and for a time he was a dutiful, observant Jew. Following Jewish custom, his parents would invite a poor scholar to share a meal with them each week, and from the impoverished medical student Max Talmud (later Talmey) the young and impressionable Einstein learned about mathematics and science. He consumed all 21 volumes of Aaron Bernstein’s joyful Popular Books on Natural Science (1880). Talmud then steered him in the direction of Immanuel Kant’s Critique of Pure Reason (1781), from which he migrated to the philosophy of David Hume. From Hume, it was a relatively short step to the Austrian physicist Ernst Mach, whose stridently empiricist, seeing-is-believing brand of philosophy demanded a complete rejection of metaphysics, including notions of absolute space and time, and the existence of atoms.

But this intellectual journey had mercilessly exposed the conflict between science and scripture. The now 12-year-old Einstein rebelled. He developed a deep aversion to the dogma of organised religion that would last for his lifetime, an aversion that extended to all forms of authoritarianism, including any kind of dogmatic atheism.

This youthful, heavy diet of empiricist philosophy would serve Einstein well some 14 years later. Mach’s rejection of absolute space and time helped to shape Einstein’s special theory of relativity (including the iconic equation E = mc²), which he formulated in 1905 while working as a ‘technical expert, third class’ at the Swiss Patent Office in Bern. Ten years later, Einstein would complete the transformation of our understanding of space and time with the formulation of his general theory of relativity, in which the force of gravity is replaced by curved spacetime. But as he grew older (and wiser), he came to reject Mach’s aggressive empiricism, and once declared that ‘Mach was as good at mechanics as he was wretched at philosophy.’

Over time, Einstein evolved a much more realist position. He preferred to accept the content of a scientific theory realistically, as a contingently ‘true’ representation of an objective physical reality. And, although he wanted no part of religion, the belief in God that he had carried with him from his brief flirtation with Judaism became the foundation on which he constructed his philosophy. When asked about the basis for his realist stance, he explained: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, at least to some extent, to human reason.’

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

Earlier in 1926, the Austrian physicist Erwin Schrödinger had radically transformed the theory by formulating it in terms of rather obscure ‘wavefunctions’. Schrödinger himself preferred to interpret these realistically, as descriptive of ‘matter waves’. But a consensus was growing, strongly promoted by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, that the new quantum representation shouldn’t be taken too literally.

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wavefunction represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wavefunction is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’. He could not accept that his God would allow the ‘lawful harmony’ to unravel so completely at the atomic scale, bringing lawless indeterminism and uncertainty, with effects that can’t be entirely and unambiguously predicted from their causes.

The stage was thus set for one of the most remarkable debates in the entire history of science, as Bohr and Einstein went head-to-head on the interpretation of quantum mechanics. It was a clash of two philosophies, two conflicting sets of metaphysical preconceptions about the nature of reality and what we might expect from a scientific representation of this. The debate began in 1927, and although the protagonists are no longer with us, the debate is still very much alive.

And unresolved.

I don’t think Einstein would have been particularly surprised by this. In February 1954, just 14 months before he died, he wrote in a letter to the American physicist David Bohm: ‘If God created the world, his primary concern was certainly not to make its understanding easy for us.’


Jim Baggott

This article was originally published at Aeon and has been republished under Creative Commons.

Interview with Simone de Beauvoir (1959)

Simone de Beauvoir was a French writer, intellectual, existentialist philosopher, political activist, feminist and social theorist. Though she did not consider herself a philosopher, she had a significant influence on both feminist existentialism and feminist theory.

De Beauvoir wrote novels, essays, biographies, autobiography and monographs on philosophy, politics and social issues. She was known for her 1949 treatise The Second Sex, a detailed analysis of women’s oppression and a foundational tract of contemporary feminism; and for her novels, including She Came to Stay and The Mandarins. She was also known for her lifelong relationship with French philosopher Jean-Paul Sartre.


You may find two of de Beauvoir’s works, namely, The Second Sex (PDF) and The Ethics of Ambiguity (PDF), in the Political & Cultural and 20th-Century Philosophy sections of the Bookshelf.

How Camus and Sartre Split Up Over the Question of How to be Free


Albert Camus by Cecil Beaton for Vogue in 1946. Photo by Getty

Sam Dresser | Aeon Ideas

They were an odd pair. Albert Camus was French Algerian, a pied-noir born into poverty who effortlessly charmed with his Bogart-esque features. Jean-Paul Sartre, from the upper reaches of French society, was never mistaken for a handsome man. They met in Paris during the Occupation and grew closer after the Second World War. In those days, when the lights of the city were slowly turning back on, Camus was Sartre’s closest friend. ‘How we loved you then,’ Sartre later wrote.

They were gleaming icons of the era. Newspapers reported on their daily movements: Sartre holed up at Les Deux Magots, Camus the peripatetic of Paris. As the city began to rebuild, Sartre and Camus gave voice to the mood of the day. Europe had been immolated, but the ashes left by war created the space to imagine a new world. Readers looked to Sartre and Camus to articulate what that new world might look like. ‘We were,’ remembered the fellow philosopher Simone de Beauvoir, ‘to provide the postwar era with its ideology.’

It came in the form of existentialism. Sartre, Camus and their intellectual companions rejected religion, staged new and unnerving plays, challenged readers to live authentically, and wrote about the absurdity of the world – a world without purpose and without value. ‘[There are] only stones, flesh, stars, and those truths the hand can touch,’ Camus wrote. We must choose to live in this world and to project our own meaning and value onto it in order to make sense of it. This means that people are free and burdened by it, since with freedom there is a terrible, even debilitating, responsibility to live and act authentically.

If the idea of freedom bound Camus and Sartre philosophically, then the fight for justice united them politically. They were committed to confronting and curing injustice, and, in their eyes, no group of people was more unjustly treated than the workers, the proletariat. Camus and Sartre thought of them as shackled to their labour and shorn of their humanity. In order to free them, new political systems must be constructed.

In October 1951, Camus published The Rebel. In it, he gave voice to a roughly drawn ‘philosophy of revolt’. This wasn’t a philosophical system per se, but an amalgamation of philosophical and political ideas: every human is free, but freedom itself is relative; one must embrace limits, moderation, ‘calculated risk’; absolutes are anti-human. Most of all, Camus condemned revolutionary violence. Violence might be used in extreme circumstances (he supported the French war effort, after all) but the use of revolutionary violence to nudge history in the direction you desire is utopian, absolutist, and a betrayal of yourself.

‘Absolute freedom is the right of the strongest to dominate,’ Camus wrote, while ‘absolute justice is achieved by the suppression of all contradiction: therefore it destroys freedom.’ The conflict between justice and freedom required constant re-balancing, political moderation, an acceptance and celebration of that which limits the most: our humanity. ‘To live and let live,’ he said, ‘in order to create what we are.’

Sartre read The Rebel with disgust. As far as he was concerned, it was possible to achieve perfect justice and freedom – that described the achievement of communism. Under capitalism, and in poverty, workers could not be free. Their options were unpalatable and inhumane: to work a pitiless and alienating job, or to die. But by removing the oppressors and broadly returning autonomy to the workers, communism allows each individual to live without material want, and therefore to choose how best they can realise themselves. This makes them free, and through this unbending equality, it is also just.

The problem is that, for Sartre and many others on the Left, communism required revolutionary violence to achieve because the existing order must be smashed. Not all leftists, of course, endorsed such violence. This division between hardline and moderate leftists – broadly, between communists and socialists – was nothing new. The 1930s and early ’40s, however, had seen the Left temporarily united against fascism. With the destruction of fascism, the rupture between hardline leftists willing to condone violence and moderates who condemned it returned. This split was made all the more dramatic by the practical disappearance of the Right and the ascendancy of the Soviet Union – which empowered hardliners throughout Europe, but raised disquieting questions for communists as the horrors of gulags, terror and show trials came to light. The question for every leftist of the postwar era was simple: which side are you on?

With the publication of The Rebel, Camus declared for a peaceful socialism that would not resort to revolutionary violence. He was appalled by the stories emerging from the USSR: it was not a country of hand-in-hand communists, living freely, but a country with no freedom at all. Sartre, meanwhile, would fight for communism, and he was prepared to endorse violence to do so.

The split between the two friends was a media sensation. Les Temps Modernes – the journal edited by Sartre, which published a critical review of The Rebel – sold out three times over. Le Monde and L’Observateur both breathlessly covered the falling out. It’s hard to imagine an intellectual feud capturing that degree of public attention today, but, in this disagreement, many readers saw the political crises of the times reflected back at them. It was a way of seeing politics played out in the world of ideas, and a measure of the worth of ideas. If you are thoroughly committed to an idea, are you compelled to kill for it? What price for justice? What price for freedom?

Sartre’s position was shot through with contradiction, with which he struggled for the remainder of his life. Sartre, the existentialist, who said that humans are condemned to be free, was also Sartre, the Marxist, who thought that history does not allow much space for true freedom in the existential sense. Though he never actually joined the French Communist Party, he would continue to defend communism throughout Europe until 1956, when the Soviet tanks in Budapest convinced him, finally, that the USSR did not hold the way forward. (Indeed, he was dismayed by the Soviets in Hungary because they were acting like Americans, he said.) Sartre would remain a powerful voice on the Left throughout his life, and chose the French president Charles de Gaulle as his favourite whipping boy. (After one particularly vicious attack, de Gaulle was asked to arrest Sartre. ‘One does not imprison Voltaire,’ he responded.) Sartre remained unpredictable, however, and was engaged in a long, bizarre dalliance with hardline Maoism when he died in 1980. Though Sartre moved away from the USSR, he never completely abandoned the idea that revolutionary violence might be warranted.

Video: ‘Philosophy Feud: Sartre vs Camus’, from Aeon Video on Vimeo

The violence of communism sent Camus on a different trajectory. ‘Finally,’ he wrote in The Rebel, ‘I choose freedom. For even if justice is not realised, freedom maintains the power of protest against injustice and keeps communication open.’ From the other side of the Cold War, it is hard not to sympathise with Camus, and to wonder at the fervour with which Sartre remained a loyal communist. Camus’s embrace of sober political reality, of moral humility, of limits and fallible humanity, remains a message worth heeding today. Even the most venerable and worthy ideas need to be balanced against one another. Absolutism, and the impossible idealism it inspires, is a dangerous path forward – and the reason Europe lay in ashes, as Camus and Sartre struggled to envision a fairer and freer world.

Sam Dresser

This article was originally published at Aeon and has been republished under Creative Commons.

Pragmatism & Postmodernism


To my best belief: just what is the pragmatic theory of truth?


Cheryl Misak | Aeon Ideas

What is it for something to be true? One might think that the answer is obvious. A true belief gets reality right: our words correspond to objects and relations in the world. But making sense of that idea involves one in ever more difficult workarounds to intractable problems. For instance, how do we account for the statement ‘It did not rain in Toronto on 20 May 2018’? There don’t seem to be negative facts in the world that might correspond to the belief. What about ‘Every human is mortal’? A generalisation about every human – past, present and future – outruns the individuals who actually exist in the world. (That is, a generalisation like ‘All Fs’ goes beyond the existing world of Fs, because ‘All Fs’ stretches into the future.) What about ‘Torture is wrong’? What are the objects in the world that might correspond to that? And what good is it explaining truth in terms of independently existing objects and facts, since we have access only to our interpretations of them?

Pragmatism can help us with some of these issues. The 19th-century American philosopher Charles Peirce, one of the founders of pragmatism, explained the core of this tradition beautifully: ‘We must not begin by talking of pure ideas, – vagabond thoughts that tramp the public roads without any human habitation, – but must begin with men and their conversation.’ Truth is a property of our beliefs. It is what we aim at, and is essentially connected to our practices of enquiry, action and evaluation. Truth, in other words, is the best that we could do.

The pragmatic theory of truth arose in Cambridge, Massachusetts in the 1870s, in a discussion group that included Peirce and William James. They called themselves the Metaphysical Club, with intentional irony. Though they shared the same broad outlook on truth, there was immediate disagreement about how to unpack the idea of the ‘best belief’. The debate stemmed from the different temperaments of Peirce and James.

Philosophy, James said, ‘is at once the most sublime and the most trivial of human pursuits. It works in the minutest crannies and it opens out the widest vistas.’ He was more a vista than a crannies man, dead set against technical philosophy. At the beginning of his book Pragmatism (1907), he said: ‘the philosophy which is so important to each of us is not a technical matter; it is our more or less dumb sense of what life honestly and deeply means.’ He wanted to write accessible philosophy for the public, and did so admirably. He became the most famous living academic in the United States.

The version of the pragmatist theory of truth made famous (or perhaps infamous) by James held that ‘Any idea upon which we can ride … any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, simplifying, saving labour, is … true INSTRUMENTALLY.’

‘Satisfactorily’ for James meant ‘more satisfactorily to ourselves, and individuals will emphasise their points of satisfaction differently. To a certain degree, therefore, everything here is plastic.’ He argued that if the available evidence underdetermines a matter, and if there are non-epistemic reasons for believing something (my people have always believed it, believing it would make me happier), then it is rational to believe it. He argued that if a belief in God has a positive impact on someone’s life, then it is true for that person. If it does not have a good impact on someone else’s life, it is not true for them.

Peirce, a crackerjack logician, was perfectly happy working in the crannies as well as opening out the vistas. He wrote much, but published little. A cantankerous man, Peirce described the difference in personality with his friend James thus: ‘He so concrete, so living; I a mere table of contents, so abstract, a very snarl of twine.’

Peirce said that James’s version of the pragmatic theory of truth was ‘a very exaggerated utterance, such as injures a serious man very much’. It amounted to: ‘Oh, I could not believe so-and-so, because I should be wretched if I did.’ Peirce’s worries, in these days of fake news, are more pressing than ever.

On Peirce’s account, a belief is true if it would be ‘indefeasible’ or would not in the end be defeated by reasons, argument, evidence and the actions that ensue from it. A true belief is the belief that we would come to, were we to enquire as far as we could on a matter. He added an important rider: a true belief must be put in place in a manner ‘not extraneous to the facts’. We cannot believe something because we would like it to be true. The brute impinging of experience cannot be ignored.

The disagreement continues to this day. James influenced John Dewey (who, when a student at Johns Hopkins, avoided Peirce and his technical philosophy like the plague) and later Richard Rorty. Dewey argued that truth (although he tended to stay away from the word) is nothing more than a resolution of a problematic situation. Rorty, at his most extreme, held that truth is nothing more than what our peers will let us get away with saying. This radically subjective or plastic theory of truth is what is usually thought of as pragmatism.

Peirce, however, managed to influence a few people himself, despite being virtually unknown in his lifetime. One was the Harvard logician and Kant scholar C I Lewis. He argued for a position remarkably similar to what his student W V O Quine would take over (and fail to acknowledge as Lewis’s). Reality cannot be ‘alien’, wrote Lewis – ‘the only reality there for us is one delimited in concepts of the results of our own ways of acting’. We have something given to us in brute experience, which we then interpret. With all pragmatists, Lewis was set against conceptions of truth in which ‘the mind approaches the flux of immediacy with some godlike foreknowledge of principles’. There is no ‘natural light’, no ‘self-illuminating propositions’, no ‘innate ideas’ from which other certainties can be deduced. Our body of knowledge is a pyramid, with the most general beliefs, such as the laws of logic, at the top, and the least general, such as ‘all swans are birds’, at the bottom. When faced with recalcitrant experience, we make adjustments in this complex system of interrelated concepts. ‘The higher up a concept stands in our pyramid, the more reluctant we are to disturb it, because the more radical and far-reaching the results will be…’ But all beliefs are fallible, and we can indeed disturb any of them. A true belief would be one that survives this process of enquiry.

Lewis saw that the pragmatist theory of truth deals nicely with those beliefs that the correspondence theory stumbles over. For instance, there is no automatic bar to ethical beliefs being true. Beliefs about what is right and wrong might well be evaluable in ways similar to how other kinds of beliefs are evaluable – in terms of whether they fit with experience and survive scrutiny.

Cheryl Misak

This article was originally published at Aeon and has been republished under Creative Commons.

What did Max Weber mean by the ‘Spirit’ of Capitalism?


The BASF factory at Ludwigshafen, Germany, pictured on a postcard in 1881. Courtesy Wikipedia

Peter Ghosh | Aeon Ideas

Max Weber’s famous text The Protestant Ethic and the Spirit of Capitalism (1905) is surely one of the most misunderstood of all the canonical works regularly taught, mangled and revered in universities across the globe. This is not to say that teachers and students are stupid, but that this is an exceptionally compact text that ranges across a very broad subject area, written by an out-and-out intellectual at the top of his game. He would have been dumbfounded to find that it was being used as an elementary introduction to sociology for undergraduate students, or even schoolchildren.

We use the word ‘capitalism’ today as if its meaning were self-evident, or else as if it came from Marx, but this casualness must be set aside. ‘Capitalism’ was Weber’s own word and he defined it as he saw fit. Its most general meaning was quite simply modernity itself: capitalism was ‘the most fateful power in our modern life’. More specifically, it controlled and generated ‘modern Kultur’, the code of values by which people lived in the 20th-century West, and now live, we may add, in much of the 21st-century globe. So the ‘spirit’ of capitalism is also an ‘ethic’, though no doubt the title would have sounded a bit flat if it had been called The Protestant Ethic and the Ethic of Capitalism.

This modern ‘ethic’ or code of values was unlike any other that had gone before. Weber supposed that all previous ethics – that is, socially accepted codes of behaviour rather than the more abstract propositions made by theologians and philosophers – were religious. Religions supplied clear messages about how to behave in society in straightforward human terms, messages that were taken to be moral absolutes binding on all people. In the West this meant Christianity, and its most important social and ethical prescription came out of the Bible: ‘Love thy neighbour.’ Weber was not against love, but his idea of love was a private one – a realm of intimacy and sexuality. As a guide to social behaviour in public places ‘love thy neighbour’ was obviously nonsense, and this was a principal reason why the claims of churches to speak to modern society in authentically religious terms were marginal. He would not have been surprised at the long innings enjoyed by the slogan ‘God is love’ in the 20th-century West – its career was already launched in his own day – nor that its social consequences should have been so limited.

The ethic or code that dominated public life in the modern world was very different. Above all it was impersonal rather than personal: by Weber’s day, agreement on what was right and wrong for the individual was breaking down. The truths of religion – the basis of ethics – were now contested, and other time-honoured norms – such as those pertaining to sexuality, marriage and beauty – were also breaking down. (Here is a blast from the past: who today would think to uphold a binding idea of beauty?) Values were increasingly the property of the individual, not society. So instead of humanly warm contact, based on a shared, intuitively obvious understanding of right and wrong, public behaviour was cool, reserved, hard and sober, governed by strict personal self-control. Correct behaviour lay in the observance of correct procedures. Most obviously, it obeyed the letter of the law (for who could say what its spirit was?) and it was rational. It was logical, consistent, and coherent; or else it obeyed unquestioned modern realities such as the power of numbers, market forces and technology.

There was another kind of disintegration besides that of traditional ethics. The proliferation of knowledge and reflection on knowledge had made it impossible for any one person to know and survey it all. In a world which could not be grasped as a whole, and where there were no universally shared values, most people clung to the particular niche to which they were most committed: their job or profession. They treated their work as a post-religious calling, ‘an absolute end in itself’, and if the modern ‘ethic’ or ‘spirit’ had an ultimate foundation, this was it. One of the most widespread clichés about Weber’s thought is to say that he preached a work ethic. This is a mistake. He personally saw no particular virtue in sweat – he thought his best ideas came to him when relaxing on a sofa with a cigar – and had he known he would be misunderstood in this way, he would have pointed out that a capacity for hard work was something that did not distinguish the modern West from previous societies and their value systems. However, the idea that people were being ever more defined by the blinkered focus of their employment was one he regarded as profoundly modern and characteristic.

The blinkered professional ethic was common to entrepreneurs and an increasingly high-wage, skilled labour force, and it was this combination that produced a situation where the ‘highest good’ was the making of money and ever more money, without any limit. This is what is most readily recognisable as the ‘spirit’ of capitalism, but it should be stressed that it was not a simple ethic of greed which, as Weber recognised, was age-old and eternal. In fact there are two sets of ideas here, though they overlap. There is one about potentially universal rational procedures – specialisation, logic, and formally consistent behaviour – and another that is closer to the modern economy, of which the central part is the professional ethic. The modern situation was the product of narrow-minded adhesion to one’s particular function under a set of conditions where the attempt to understand modernity as a whole had been abandoned by most people. As a result they were not in control of their own destiny, but were governed by the set of rational and impersonal procedures which he likened to an iron cage, or ‘steel housing’. Given its rational and impersonal foundations, the housing fell far short of any human ideal of warmth, spontaneity or breadth of outlook; yet rationality, technology and legality also produced material goods for mass consumption in unprecedented amounts. For this reason, though they could always do so if they chose to, people were unlikely to leave the housing ‘until the last hundredweight of fossil fuel is burned up’.

It is an extremely powerful analysis, which tells us a great deal about the 20th-century West and a set of Western ideas and priorities that the rest of the world has been increasingly happy to take up since 1945. It derives its power not simply from what it says, but because Weber sought to place understanding before judgment, and to see the world as a whole. If we wish to go beyond him, we must do the same.

Peter Ghosh

This article was originally published at Aeon and has been republished under Creative Commons.

The Problem of Atheism


Illustration by artist Hugh Lieber from Human Values and Science, Art and Mathematics by mathematician Lillian Lieber


Excerpts from Keiji Nishitani (1900-1990), The Self-Overcoming of Nihilism (Appendix)

Marxist Humanism

As is commonly known, Marxism looks on religion as a way for those unable to come to terms with the frustrations of life to find satisfaction at the ideal level by imagining a world beyond. In so doing, the argument goes, they nullify the self and transpose the essence of their humanity into the image of “God” in the other world. In this act of religious “self-alienation” both nature and humanity become nonessential, void, and without substance. Atheism consists in the negation of this nonessentiality. By denying God it affirms the essence of the human. This emancipation of the human in turn is of a single root with human freedom.

This variety of atheism is connected with Marx’s characterization of the essence of the human individual as worker: humanity is achieved by remaking the world through work. The process of self-creation by which one gradually makes oneself human through work is what constitutes history. Seen from such a perspective, atheism is unavoidable. For since the source of religious self-alienation lies in economic self-alienation (the condition of being deprived of one’s humanity economically), once the latter is overcome, the former will fall away as a matter of course. According to Marx, then, atheism is a humanism wrought through the negation of religion.

Now insofar as Marx’s atheistic humanism is a humanism that has become self-conscious dialectically – its affirmation rests on the negation of religion – it clearly strikes at the very heart of religion. In it we find a clear and pointed expression of the general indifference, if not outright antagonism, to religion in the modern mind. From its very beginning, modern humanism has combined the two facets of maintaining ties to religion and gradually breaking away from it. In a sense, the history of modern philosophy can be read as a struggle among approaches to humanism based on one or the other of these aspects. At present the debate over humanism – what it is that constitutes the essence of the human – has become completely polarized. The responses provided by the various religious traditions show no signs of being able to allay the situation. Questions such as freedom, history, and labor, in the sense in which Marx discusses them in relation to the essence of humanity, paint a picture of the modern individual that had until recently escaped the notice of religion. To come to grips with such questions, religion will have to open up a new horizon.

Even if we grant that Marx’s thought touches the problem of religion at some depth, it is hard to sustain the claim that he understood its true foundations correctly. Matters like the meaning of life and death, or the impermanence of all things, simply cannot be reduced without remainder to a matter of economic self-alienation. These are questions of much broader and deeper reach, indeed questions essential for human being.

The problem expressed in the term “all is suffering” is a good example. It is clearly much more than a matter of the socio-historical suffering of human individuals; it belongs essentially to the way of being of all things in the world. The problem of human suffering is a problem of the suffering of the human being as “being-in-the-world,” too profound a matter to be alleviated merely by removing socio-historical suffering. It has to do with a basic mode of human being that also serves as the foundation for the pleasure, or the freedom from suffering and pleasure, that we oppose to suffering.

Or again, we might say that the issue of “the non-self nature of all dharmas” refers to “the nonessentiality of nature and humanity,” but this does not mean that we can reduce the claim to a self-alienating gesture of projecting the essence of our humanity on to “God.” It refers to the essential way that all things in the world are: depending on each other and existing only in interdependency. It is meant to point to the essential “non-essentiality” of all beings, and hence to a domain that no society can alter, however far it may progress. It is, in short, the very domain of religion that remains untouched by Marx’s critique. Marx argues emphatically that through work human beings conquer nature, change the world, and give the self its human face. But deep in the recesses behind the world of work lies a world whose depth and vastness are beyond our ken, a world in which everything arises only by depending on everything else, in which no single thing exists through the power of a “self” (or what is called “self-power”). This is the world of human beings who exist as “being-in-the-world.”

As for religion itself, whose maxim all along has been “all is suffering,” the idea that this has to do with “historical” suffering has not often come to the fore. (In this regard, Christianity represents an exception.) The idea of “karma” is supposed to relate concretely to the historicity of human existence, but even this viewpoint has not been forthcoming. The human activities of producing and using various things through “self-power,” of changing nature and society and creating a “human” self – in short, the emancipation of the human and the freedom of the human individual – would seem to be the most concrete “karma” of humanity and therefore profoundly connected with modern atheism. But none of these ideas has been forthcoming from the traditional religions. Even though for Christianity the fact that we must labor by the sweat of our brows is related to original sin, the germ of this idea has not, to my knowledge, been developed anywhere in modern theology.


Sartrean Existentialism

Modern atheism also appears in the form of existentialism. The same sharp and total opposition that separates existentialism and Marxism in general applies also to their respective forms of atheism. Unlike Marxism, which understands the human being as an essentially social being, existentialism thinks of the human being essentially as an individual; that is, it defines the human as a way of being in which each individual relates to itself. Marx’s critique of religion begins from the self-alienation of human beings in religion, redefines it as an economic self-alienation, and then deals with religion in terms of its social functions. In contrast, the existentialist Sartre, for example, understands the relationship between God and humanity as a problem of each individual’s relating to the essence of “self”-being itself. In other words, he begins from something like an ontological self-alienation implied in seeing human beings as creatures of God. For all the differences between the standpoints, they share the basic tenet that it is only by denying God that we can regain our own humanity. As is the case with Marx’s socialist individual, for Sartre’s existentialist individual humanism is viable only as an atheism – which is the force of Sartre’s referring to existentialism as a humanism.

According to Sartre, if God existed and had indeed created us, there would be basically no human freedom. If human existence derived from God and the essence of human existence consisted in this derivation, the individual’s every action and situation would be determined by this essential fact. In traditional terms, “essential being” precedes “actual being” and continually determines it. This means that the whole of actual human being is essentially contained within the “Providence” of God and is necessarily predetermined by God’s will. Such predestination amounts to a radical negation of human freedom. If we grant the existence of God we must admit God’s creation; and if we grant God’s creation, we must also allow for God’s predestination – in other words, we are forced to deny that there is any such thing as human freedom. If human freedom is to be affirmed, the existence of God must be denied.

Human “existence” (a temporal and “phenomenal” way of being) does not have behind it any essential being (a supratemporal and “noumenal” way of being) that would constitute its ground. There is nothing at all at the ground of existence. And it is from this ground of “nothing” where there is simply nothing at all that existence must continually determine itself. We must create ourselves anew ever and again out of nothing. Only in this way can one secure the being of a self – and exist. To be a human being is to humanize the self constantly, to create, indeed to have no choice other than to create, a “human being.” This self-being as continued self-creation out of nothing is what Sartre calls freedom. Insofar as one actually creates the self as human, actual existence precedes essence in the human being. In essence, the human individual is existence itself. This way of being human is “Existence,” and Existence can stand only on an atheism.

Of late we are beginning to see a turn in the standpoint of Heidegger, in that he no longer refers to his thought as an “existentialism.” Still, it seems important to point out what his thinking up until now has shared in common with the existentialism of Sartre. That human beings continually create themselves out of nothing is meant to supplant the Christian notion of God’s creatio ex nihilo. To this extent it is not the standpoint of “self-power” in the ordinary sense. Self-creation out of nothing is not brought about simply by the inner power of a being called human and hence is not a power contained within the framework of human being. This “being” is continually stepping beyond the framework of “being.” Nothingness means transcendence, but since this transcendence does not mean that there is some transcendent “other” apart from self-being, it implies a standpoint of “self-power,” not of “other-power.” In contrast to Christianity, it is a view in which nothingness becomes the ground of the subject and thereby becomes subjective nothing – a self-power based on nothing. Here the consciousness of freedom in the modern mind finds a powerful expression and amounts to what is, at least in the West, an entirely new standpoint. It seems doubtful that this standpoint can be confronted from within the traditional horizons that have defined Christianity so far. It is quite different with Buddhism.

From the perspective of Buddhism, Sartre’s notion of Existence, according to which one must create oneself continually in order to maintain oneself within nothing, remains a standpoint of attachment to the self – indeed, the most profound form of this attachment – and as such is caught in the self-contradiction this implies. It is not simply a question here of a standpoint of ordinary self-love in which the self is willfully attached to itself. It is rather a question of the self being compelled to be attached to itself willfully. To step out of the framework of being and into nothing is only to enter into a new framework of being once again. This self-contradiction constitutes a way of being in which the self is its own “prison,” which amounts to a form of karma. Self-creation, or freedom, may be self-aware, but only because, as Sartre himself says, we are “condemned to be free.” Such a freedom is not true freedom. Again, it may represent an exhaustive account of what we normally take freedom to be, but this only means that our usual idea of freedom is basically a kind of karma. Karma manifests itself in the way modern men and women ground themselves on an absolute affirmation of their freedom. As Sartre himself says, his standpoint of Existence is a radical carrying out of the cogito, ergo sum of Descartes, for the Cartesian ego shows us what the modern mode of being is.

That Sartre’s “Existence” retains a sense of attachment to the self implies, if we can get behind the idea, that the “nothingness” of which he speaks remains a nothingness to which the self is attached. It was remarked earlier that in existentialism nothingness became subjective nothingness, which means that, as in the case of Greek philosophy or Christianity, it is still bound to the human individual. Again looked at from behind, we find that human subjectivity is bound up inextricably with nothingness and that at the ground of human existence there is nothing, albeit a nothing of which there is still consciousness at the ground of the self. No matter how “pre-reflective” this consciousness is, it is not the point at which the being of the self is transformed existentially into absolute nothingness. Sartre’s nothingness is unable to make the being of the self (Existence) sufficiently “ek-static,” and to this extent it differs radically from Buddhist “emptiness.” The standpoint of emptiness appears when Sartrean Existence is overturned one more time. The question is whether Buddhism, in its traditional form, is equal to the confrontation with existentialism.

Sartre thinks that to be a human being is to “human-ize” the self continually and to create the self as human out of nothing. Pushing this idea to the extreme, and speaking from the standpoint of emptiness in Buddhism, it is a matter of continually assuming human form from a point where this form has been left behind and absolutely negated. It is, as it were, a matter of continued creative “accommodation,” a never-ending “return” to being a new “human.” Taken in the context of Buddhist thought as a whole, there is some question as to whether this idea of “accommodation” really carries such an actual and existential sense. Does it really, as Sartre’s idea of continual humanization does, have to do with our actual being at each moment?

When Sartre speaks of ceaseless self-creation out of nothing, he refers to an Existence that is temporal through and through. It does not admit of any separate realm of being, such as a supratemporal (or “eternal”) essence, but is simply based on “nothing.” But for Sartre Existence is self-created within a socio-historical situation, which demonstrates his profound appreciation of the social and historical dimensions of the human way of being. In the case of the standpoint of Buddhist emptiness, in which human being is understood as arising out of emptiness and existing in emptiness, we need to ask how far the actual Existence of the human being at each moment is included. How much of the Existence within the actual socio-historical situation, and completely temporalized in this actuality, is comprehended? To the extent that the comprehension is inadequate, the standpoint of Buddhism has become detached from our actuality, and that means that we have failed to take the standpoint of emptiness seriously enough and to make it existential. In this case, talk of “accommodation” is merely a kind of mythologizing.


Atheism in the World of Today

A crisis is taking place in the contemporary world in a variety of forms, cutting across the realms of culture, ethics, politics, and so forth. At the ground of these problems is the fact that the essence of being human has turned into a question mark for humanity itself. This means that a crisis has also struck in the field of religion, and that this crisis is the root of the problems that have arisen in other areas. We see evidence of this state of affairs in the fact that the most recent trends of thought in contemporary philosophy which are having a great influence – directly and indirectly – on culture, ethics, politics, and so on, are all based on a standpoint of atheism. This applies not only to Marxism and existentialism, especially as represented by Sartre, but also to logical positivism and numerous other currents of thought.

Involved in the problem of the essence of human being are the questions, “What is a human being?” and “By what values should one live?” These are questions that need to be thought through in terms of the totality of beings, the “myriad things” of which human beings are only one part. It is a question, too, of the place of human beings in the order of the totality of beings, and of how to accommodate to this position (that is, how to be truly human). For the order of being implies a ranking of values.

For example, even if “man” is said to be the lord of creation, this places him in a certain “locus” within the totality of things, and therefore refers to how one ought to live as a human being. In the Western tradition the locus of human being has been defined in relation to God. While we are said to have been created from nothing, our soul contains the imago dei. This divine image was shattered through original sin, to be restored only through the atonement of God’s Son, Jesus, and our faith in him as the Christ. Here the locus of human beings in the order of being and ranking of value takes a different form from the straightforward characterization of man as lord of creation, a form consisting of a complex interplay of negation and affirmation. This locus of human being is well expressed in Augustine’s saying: “Oh God, you have created us for you, and our hearts are restless until they rest in you.” Needless to say, the basic dynamism behind the forming of this locus came from Greek philosophy and Christianity.

Modern atheism, Marxism, and existentialism share in common the attempt to repudiate this traditional location of the human in order to restore human nature and freedom. The seriousness of this new humanism is that such a restoration is possible only through a denial of God. At the same time, the new humanism harbors a schism in its ranks between the standpoints of Marxism and existentialism. The axis of the existentialist standpoint is a subjectivity in which the self becomes truly itself, while Marxism, for all its talk of human beings as subjects of praxis, does not go beyond a view of the human being as an objective factor in the objective world of nature or society. Each of them comprehends human being from a locus different from the other.

In the Western tradition the objective world and subjective being – the natural and social orders on the one hand, the “soul” with its innate orientation to God on the other – were united within a single system. The two main currents in modern atheism correspond respectively to these two coordinates, the soul and the world, but there is little hope of their uniting given the current confrontation. There is no way for modern men and women simply to return to the old locus, and the new atheism offers only a locus split into two. Confusion reigns in today’s world at the most basic level concerning what human beings are and how they are to live.

Each of these two standpoints seeks to ground itself from start to finish in actual being. This is related to the denial of God, in that full engagement of the self in actual being requires a denial of having already been determined within the world-order established by God, as well as a denial of having been fitted out in advance with an orientation to God in one’s very soul. Both standpoints stress the importance of not becoming detached from the locus in which one “actually” is, of remaining firmly grounded in one’s actual socio-historical situation, or more fundamentally, in actual “time” and “space.” But do these standpoints really engage actual being to the full?

Earlier on I suggested that as long as Marxism and existentialism continue to hold to the standpoint of the “human,” they will never be able to give a full account of actual human being. These new forms of humanism try to restore human beings to actual being by eliminating from the world and the soul the element of divine “predetermination.” The result is that they leave a gaping void at the foundations, as is evidenced by the lack of a locus from which to address the problem of life and death. Since the human mode of being consists in life and death, we must pass beyond the human standpoint to face the problem of life and death squarely. But to overcome the human standpoint does not necessarily mean that one merely returns to the “predetermination” of God, nor that one simply extinguishes freedom or actual being. It is rather a matter of opening up the horizon in which the question can be engaged truly and to its outermost limits.

Earlier I also proposed consideration of the locus of Buddhist “emptiness” in this regard. In the locus of emptiness, beyond the human standpoint, a world of “dependent origination” is opened up in which everything is related to everything else. Seen in this light there is nothing in the world that arises from “self-power” and yet all “self-powered” workings arise from the world. Existence at each instant, Sartre’s self-creation as “human,” the humanization in which the self becomes human – all these can be said to arise ceaselessly as new accommodations from a locus of emptiness that absolutely negates the human standpoint. From the standpoint of emptiness, it is at least possible to see the actuality of human being in its socio-historical situation in such a way that one does not take leave of “actual” time and space. In the words of the Zen master Musō:

When acting apprehend the place of acting, when sitting apprehend the place of sitting, when lying apprehend the place of lying, when seeing and hearing apprehend the place of seeing and hearing, and when experiencing and knowing apprehend the place of experiencing and knowing.


Further Reading

The Self-Overcoming of Nihilism by Keiji Nishitani (PDF)

On Buddhism by Keiji Nishitani (PDF)

The Kyoto School (SEP)

What is Christianity?

This is an excellent discussion between Sam Harris and Bart Ehrman about Christianity and its history and theology. There is a remarkable resemblance between Ehrman’s life and periods of my own, in which I pursued truth and my faith was gradually lost. The problems with interpreting the scriptures too literally are laid out clearly. Anyone who honestly believes herself to be a Christian should carefully examine the things discussed here. Sapere aude!

inri

Waking Up Podcast with Sam Harris #125: What is Christianity?


In this episode of the Waking Up podcast, Sam Harris speaks to Bart Ehrman about his experience of being a born-again Christian, his academic training in New Testament scholarship, his loss of faith, the most convincing argument in defense of Christianity, the status of miracles, the composition of the New Testament, the resurrection of Jesus, the nature of heaven and hell, the book of Revelation, the End Times, self-contradictions in the Bible, the concept of a messiah, whether Jesus actually existed, Christianity as a cult of human sacrifice, the conversion of Constantine, and other topics.

Bart D. Ehrman is the author or editor of more than thirty books, including the New York Times bestsellers Misquoting Jesus and How Jesus Became God. Ehrman is a professor of religious studies at the University of North Carolina, Chapel Hill, and a leading authority on the New Testament and the history of early Christianity. He has been featured in Time, The New Yorker, and The Washington Post, and has appeared on NBC, CNN, The Daily Show with Jon Stewart, The History Channel, National Geographic, BBC, major NPR shows, and other top print and broadcast media outlets. His most recent book is The Triumph of Christianity.

Twitter: @BartEhrman