Philosophy Can Make the Previously Unthinkable Thinkable


Detail from Woman at a Window (1822) by Caspar David Friedrich. Courtesy Alte Nationalgalerie, Berlin


Rebecca Brown | Aeon Ideas

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

Overton was concerned with the activities of think tanks, but philosophers and practical ethicists might gain something from considering the Overton window. By its nature, practical ethics typically addresses controversial, politically sensitive topics. It is the job of philosophers to engage in ‘conceptual hygiene’ or, as the late British philosopher Mary Midgley described it, ‘philosophical plumbing’: clarifying and streamlining, diagnosing unjustified assertions and pointing out circularities.

Hence, philosophers can be eager to apply their skills to new subjects. This can provoke frustration from those embedded within a particular subject. Sometimes, this is deserved: philosophers can be naive in contributing their thoughts to complex areas with which they lack the kind of familiarity that comes only with time and immersion. But such an outside perspective can also be useful. Although such contributions will rarely get everything right, getting everything right is too demanding a standard in areas of great division and debate (such as practical ethics). Instead, we should expect philosophers to offer a counterpoint to received wisdom, established norms and doctrinal prejudice.

Ethicists, at least within their academic work, are encouraged to be sceptical of intuition and wary of the naturalistic fallacy (the idea that values can be derived simply from facts). Philosophers are also familiar with tools such as thought experiments: hypothetical and contrived descriptions of events that can be useful for clarifying particular intuitions or the implications of a philosophical claim. These two factors make it unsurprising that philosophers often publicly adopt positions that are unintuitive and outside mainstream thought, and that they might not personally endorse.

This can serve to shift, and perhaps widen, the Overton window. Is this a good thing? Sometimes philosophers argue for conclusions far outside the domain of ‘respectable’ positions; conclusions that could be hijacked by those with intolerant, racist, sexist or fundamentalist beliefs to support their stance. It is understandable that those who are threatened by such beliefs want any argument that might conceivably support them to be absent from the debate, off the table, and ignored.

However, the freedom to test the limits of argumentation and intuition is vital to philosophical practice. There are sufficient and familiar examples of historical orthodoxies that have been overturned – women’s right to vote; the abolition of slavery; the decriminalisation of same-sex relationships – to establish that the strength and pervasiveness of a belief indicate neither its truth nor its immutability.

It can be tedious to repeatedly debate women’s role in the workforce, abortion, animals’ capacity to feel pain and so on, but to silence discussion would be far worse. Genuine attempts to resolve difficult ethical dilemmas must recognise that understanding develops by getting things wrong and having this pointed out. Most (arguably, all) science fails to describe or predict how the world works with perfect accuracy. But as a collective enterprise, it can identify errors and gradually approximate ‘truth’. Ethical truths are less easy to come by, and a different methodology is required in seeking out satisfactory approximations. But part of this model requires allowing plenty of room to get things wrong.

It is unfortunate but true that bad ideas are sometimes undermined by bad reasoning, and also that sometimes those who espouse offensive and largely false views can say true things. Consider the ‘born this way’ argument, which endorses the flawed assumption that a genetic basis for homosexuality indicates the permissibility of same-sex relationships. While this might win over some individuals, it could cause problems down the line if it turns out that homosexuality isn’t genetically determined. Debates relating to the ‘culture wars’ on college campuses have attracted many ad hominem criticisms that set out to discredit the authors’ position by pointing to the fact that they fit a certain demographic (white, middle-class, male) or share some view with a villainous figure, and thus are not fit to contribute. The point of philosophy is to identify such illegitimate moves, and to keep the argument on topic; sometimes, this requires coming to the defence of bad ideas or villainous characters.

Participation in this process can be daunting. Defending an unpopular position can make one a target both for well-directed, thoughtful criticisms, and for emotional, sweeping attacks. Controversial positions on contentious topics attract far more scrutiny than abstract philosophical contributions to niche subjects. This means that, in effect, the former are required to be more rigorous than the latter, and to foresee and head off more potential misappropriations, misinterpretations and misunderstandings – all while contributing to an interdisciplinary area, which requires some understanding not only of philosophical theory but perhaps also medicine, law, natural and social science, politics and various other disciplines.

This can be challenging, though I do not mean to be an apologist for thoughtless, sensationalist provocation and controversy-courting, whether delivered by philosophers or others. We should see one important social function of practical ethicists as widening the Overton window and pushing the public and political debate towards reasoned deliberation and respectful disagreement. Widening the Overton window can yield opportunities for ideas that many find offensive, and straightforwardly mistaken, as well as for ideas that are well-defended and reasonable. It is understandable that those with deep personal involvement in these debates often want to narrow the window and push it in the direction of those views they find unthreatening. But philosophers have a professional duty, as conceptual plumbers, to keep the whole system in good working order. This depends upon philosophical contributors upholding the disciplinary standards of academic rigour and intellectual honesty that are essential to ethical reflection, and trusting that this will gradually, collectively lead us in the right direction.

Rebecca Brown

This article was originally published at Aeon and has been republished under Creative Commons.

The Empathetic Humanities have much to teach our Adversarial Culture



Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who cannot be with us hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Slaying the Snark: What Nonsense Verse tells us about Reality


Eighth of Henry Holiday’s original illustrations to “The Hunting of the Snark” by Lewis Carroll, Wikipedia

Nina Lyon | Aeon Ideas

The English writer Lewis Carroll’s nonsense poem The Hunting of the Snark (1876) is an exceptionally difficult read. In it, a crew of improbable characters boards a ship to hunt a Snark, which might sound like a plot were it not for the fact that nobody knows what a Snark actually is. It doesn’t help that any attempt to describe a Snark turns into a pile-up of increasingly incoherent attributes: it is said to taste ‘meagre and hollow, but crisp: / Like a coat that is rather too tight in the waist’.

The only significant piece of information we have about the Snark’s identity is that it might be a Boojum. Unfortunately nobody knows what that is either, apart from the fact that anyone who encounters a Boojum will ‘softly and suddenly vanish away’ into nothingness.

Nothingness also characterises the crew’s map: a ‘perfect and absolute blank!’

‘What’s the good of Mercator’s North Poles and Equators,
Tropics, Zones and Meridian Lines?’
So the Bellman would cry: and the crew would reply,
‘They are merely conventional signs!’

Nonsense such as this might get tiresome to read, but it can make for a useful thought-experiment – particularly about language. In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?

Language can’t always convey meaning alone – it might need sense, the governing context that frames it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is, until it goes missing. In 1892, the German logician Gottlob Frege used sense to describe a proposition’s meaning, as something distinct from what it denoted. Sense therefore appears to be a mental entity, resistant to fixed definition.

Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logicians such as Bertrand Russell sought to deploy logic and mathematics in order to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.

Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game: an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.

In 1901, the pragmatist philosopher and provocateur F C S Schiller created a parody Christmas edition of the philosophical journal Mind called Mind!. The frontispiece was a ‘Portrait of Its Immanence the Absolute’, which, Schiller noted, was ‘very like the Bellman’s map in the Hunting of the Snark’: completely blank.

The Absolute – or the Infinite or Ultimate Reality, among other grand aliases – was the sum of all experience and being, and inconceivable to the human mind. It was monistic, consuming all into the One. If it sounded like something you’d struggle to get your head around, that was pretty much the point. The Absolute was an emblem of metaphysical idealism, the doctrine that truth could exist only within the domain of thought. Idealism had dominated the academy for the entirety of Carroll’s career, and it was beginning to come under attack. The realist mission, headed by Russell, was to clean up philosophy’s act with the sound application of mathematics and objective facts, and it felt like a breath of fresh air.

Schiller delighted in trolling absolute idealists in general and the English idealist philosopher F H Bradley in particular. In Mind!, Schiller claimed that the Snark was a satire on the Absolute, whose notorious ineffability drove its seekers to derangement. But this was disingenuous. Bradley’s major work, Appearance and Reality (1893), mirrors the point, insofar as there is one, of the Snark. When you home in on a thing and try to pin it down by describing its attributes, and then try to pin down what those are too – Bradley uses the example of a lump of sugar – it all begins to crumble, and must be something other instead. What appeared to be there was only ever an idea. Carroll was, contrariwise, in line with idealist thinking.

A passionate logician, Carroll had been working on a three-part book on symbolic logic that remained unfinished at his death. Two logical paradoxes that he posed in Mind and shared privately with friends and colleagues, such as Bradley, hint at a troublemaking sentiment regarding where logic might be headed. ‘A Logical Paradox’ (1894) resulted in two contradictory statements being simultaneously true; ‘What the Tortoise Said to Achilles’ (1895) set up a predicament in which each proposition requires an additional supporting proposition, creating an infinite regress.
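
To see the shape of the regress (a compressed paraphrase of Carroll’s dialogue, not a quotation from it): the Tortoise grants two premises, A and B, but refuses to draw the conclusion Z until Achilles writes down a further premise licensing the inference – and then another, and another:

\[
\begin{aligned}
C_1 &: (A \wedge B) \to Z\\
C_2 &: (A \wedge B \wedge C_1) \to Z\\
C_3 &: (A \wedge B \wedge C_1 \wedge C_2) \to Z\\
&\;\;\vdots
\end{aligned}
\]

Each new conditional must itself be granted as a premise before Z can be inferred, so the chain of justification never ends.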

A few years after Carroll’s death, Russell began to flex logic as a tool for denoting the world and testing the validity of propositions about it. Carroll’s paradoxes were problematic and demanded a solution. Russell’s response to ‘A Logical Paradox’ was to legislate nonsense away into a ‘null-class’ – a set of nonexistent propositions that, because it had no real members, didn’t exist either.

Russell’s solution to ‘What the Tortoise Said to Achilles’, tucked away in a footnote to the Principles of Mathematics (1903), entailed a recourse to sense in order to determine whether or not a proposition should be asserted in the first place, teetering into the mind-dependent realm of idealism. Mentally determining meaning is a bit like mentally determining reality, and it wasn’t a neat win for logic’s role as objective sword of truth.

In the Snark, the principles of narrative self-immolate, so that the story, rather than describing things and events in the world, undoes them into something other. It ends like this:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away –
For the Snark was a Boojum, you see.

Strip the plot down to those eight final words, and it is all there. The thing sought turned out, upon examination, to be something else entirely. Beyond the flimsy veil of appearance, formed from words and riddled with holes, lies an inexpressible reality.

By the late 20th century, when Russell had won the battle of ideas and commonsense realism prevailed, critics such as Martin Gardner, author of The Annotated Hunting of the Snark (2006), were rattled by Carroll’s antirealism. If the reality we perceive is all there is, and it falls apart, we are left with nothing.

Carroll’s attacks on realism might look nihilistic or radical to a postwar mind steeped in atheist scientism, but they were neither. Carroll was a man of his time, taking a philosophically conservative party line on absolute idealism and its theistic implications. But he was also prophetic, seeing conflict at the limits of language, logic and reality, and laying a series of conceptual traps that continue to provoke it.

The Snark is one such trap. Carroll rejected his illustrator Henry Holiday’s image of the Boojum on the basis that it needed to remain unimaginable, for, after all, how can you illustrate the incomprehensible nature of ultimate reality? It is a task as doomed as saying the unsayable – which, paradoxically, was a task Carroll himself couldn’t quite resist.

Nina Lyon

This article was originally published at Aeon and has been republished under Creative Commons.

Modern Technology is akin to the Metaphysics of Vedanta


Akhandadhi Das | Aeon Ideas

You might think that digital technologies, often considered a product of ‘the West’, would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend the quantum physics of the 20th century.

The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural nets and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.

Vedanta offers a model to integrate subjective consciousness and the information-processing systems of our bodies and brains. Its theory separates the brain and the senses from the mind. But it also distinguishes the mind from the function of consciousness, which it defines as the ability to experience mental output. We’re familiar with this notion from our digital devices. A camera, microphone or other sensors linked to a computer gather information about the world, and convert the various forms of physical energy – light waves, air pressure-waves and so forth – into digital data, just as our bodily senses do. The central processing unit processes this data and produces relevant outputs. The same is true of our brain. In both contexts, there seems to be little scope for subjective experience to play a role within these mechanisms.

While computers can handle all sorts of processing without our help, we furnish them with a screen as an interface between the machine and ourselves. Similarly, Vedanta postulates that the conscious entity – something it terms the atma – is the observer of the output of the mind. The atma possesses, and is said to be composed of, the fundamental property of consciousness. The concept is explored in many of the meditative practices of Eastern traditions.

You might think of the atma like this. Imagine you’re watching a film in the cinema. It’s a thriller, and you’re anxious about the lead character, trapped in a room. Suddenly, the door in the movie crashes open and there stands… You jump, as if startled. But what is the real threat to you, other than maybe spilling your popcorn? By suspending awareness of your body in the cinema, and identifying with the character on the screen, you allow your emotional state to be manipulated. Vedanta suggests that the atma, the conscious self, identifies with the physical world in a similar fashion.

This idea can also be explored in the all-consuming realm of VR. On entering a game, we might be asked to choose our character or avatar – originally a Sanskrit word, aptly enough, meaning ‘one who descends from a higher dimension’. In older texts, the term often refers to divine incarnations. However, the etymology suits the gamer, as he or she chooses to descend from ‘normal’ reality and enter the VR world. Having specified our avatar’s gender, bodily features, attributes and skills, next we learn how to control its limbs and tools. Soon, our awareness diverts from our physical self to the VR capabilities of the avatar.

In Vedanta psychology, this is akin to the atma adopting the psychological persona-self it calls the ahankara, or the ‘pseudo-ego’. Instead of a detached conscious observer, we choose to define ourselves in terms of our social connections and the physical characteristics of the body. Thus, I come to believe in myself with reference to my gender, race, size, age and so forth, along with the roles and responsibilities of family, work and community. Conditioned by such identification, I indulge in the relevant emotions – some happy, some challenging or distressing – produced by the circumstances I witness myself undergoing.

Within a VR game, our avatar represents a pale imitation of our actual self and its entanglements. In our interactions with the avatar-selves of others, we might reveal little about our true personality or feelings, and know correspondingly little about others’. Indeed, encounters among avatars – particularly when competitive or combative – are often vitriolic, seemingly unrestrained by concern for the feelings of the people behind the avatars. Connections made through online gaming aren’t a substitute for other relationships. Rather, as researchers at Johns Hopkins University have noted, gamers with strong real-world social lives are less likely to fall prey to gaming addiction and depression.

These observations mirror the Vedantic claim that our ability to form meaningful relationships is diminished by absorption in the ahankara, the pseudo-ego. The more I regard myself as a physical entity requiring various forms of sensual gratification, the more likely I am to objectify those who can satisfy my desires, and to forge relationships based on mutual selfishness. But Vedanta suggests that love should emanate from the deepest part of the self, not its assumed persona. Love, it claims, is soul-to-soul experience. Interactions with others on the basis of the ahankara offer only a parody of affection.

As the atma, we remain the same subjective self throughout the whole of our life. Our body, mentality and personality change dramatically – but throughout it all, we know ourselves to be the constant observer. However, seeing everything shift and give way around us, we suspect that we’re also subject to change, ageing and heading for annihilation. Yoga, as systematised by Patanjali – an author or authors, like ‘Homer’, who lived in the 2nd century BCE – is intended to be a practical method for freeing the atma from relentless mental tribulation, and to be properly situated in the reality of pure consciousness.

In VR, we’re often called upon to do battle with evil forces, confronting jeopardy and virtual mortality along the way. Despite our efforts, the inevitable almost always happens: our avatar is killed. Game over. Gamers, especially pathological gamers, are known to become deeply attached to their avatars, and can suffer distress when their avatars are harmed. Fortunately, we’re usually offered another chance: Do you want to play again? Sure enough, we do. Perhaps we create a new avatar, someone more adept, based on the lessons learned last time around. This mirrors the Vedantic concept of reincarnation, specifically in its form of metempsychosis: the transmigration of the conscious self into a new physical vehicle.

Some commentators interpret Vedanta as suggesting that there is no real world, and that all that exists is conscious awareness. However, a broader take on Vedantic texts is more akin to VR. The VR world is wholly data, but it becomes ‘real’ when that information manifests itself to our senses as imagery and sounds on the screen or through a headset. Similarly, for Vedanta, it is the external world’s transitory manifestation as observable objects that makes it less ‘real’ than the perpetual, unchanging nature of the consciousness that observes it.

To the sages of old, immersing ourselves in the ephemeral world means allowing the atma to succumb to an illusion: the illusion that our consciousness is somehow part of an external scene, and must suffer or enjoy along with it. It’s amusing to think what Patanjali and the Vedantic fathers would make of VR: an illusion within an illusion, perhaps, but one that might help us to grasp the potency of their message.

Akhandadhi Das

This article was originally published at Aeon and has been republished under Creative Commons.

 

Reach out, listen, be patient. Good arguments can stop extremism


Walter Sinnott-Armstrong | Aeon Ideas

Many of my best friends think that some of my deeply held beliefs about important issues are obviously false or even nonsense. Sometimes, they tell me so to my face. How can we still be friends? Part of the answer is that these friends and I are philosophers, and philosophers learn how to deal with positions on the edge of sanity. In addition, I explain and give arguments for my claims, and they patiently listen and reply with arguments of their own against my – and for their – stances. By exchanging reasons in the form of arguments, we show each other respect and come to understand each other better.

Philosophers are weird, so this kind of civil disagreement still might seem impossible among ordinary folk. However, some stories give hope and show how to overcome high barriers.

One famous example involved Ann Atwater and C P Ellis in my home town of Durham, North Carolina; it is described in Osha Gray Davidson’s book The Best of Enemies (1996) and a forthcoming movie. Atwater was a single, poor, black parent who led Operation Breakthrough, which tried to improve local black neighbourhoods. Ellis was an equally poor but white parent who was proud to be Exalted Cyclops of the local Ku Klux Klan. They could not have started further apart. At first, Ellis brought a gun and henchmen to town meetings in black neighbourhoods. Atwater once lurched toward Ellis with a knife and had to be held back by her friends.

Despite their mutual hatred, when courts ordered Durham to integrate their public schools, Atwater and Ellis were pressured into co-chairing a charrette – a series of public discussions that lasted eight hours per day for 10 days in July 1971 – about how to implement integration. To plan their ordeal, they met and began by asking questions, answering with reasons, and listening to each other. Atwater asked Ellis why he opposed integration. He replied that mainly he wanted his children to get a good education, but integration would ruin their schools. Atwater was probably tempted to scream at him, call him a racist, and walk off in a huff. But she didn’t. Instead, she listened and said that she also wanted his children – as well as hers – to get a good education. Then Ellis asked Atwater why she worked so hard to improve housing for blacks. She replied that she wanted her friends to have better homes and better lives. He wanted the same for his friends.

When each listened to the other’s reasons, they realised that they shared the same basic values. Both loved their children and wanted decent lives for their communities. As Ellis later put it: ‘I used to think that Ann Atwater was the meanest black woman I’d ever seen in my life … But, you know, her and I got together one day for an hour or two and talked. And she is trying to help her people like I’m trying to help my people.’ After realising their common ground, they were able to work together to integrate Durham schools peacefully. In large part, they succeeded.

None of this happened quickly or easily. Their heated discussions lasted 10 long days in the charrette. They could not have afforded to leave their jobs for so long if their employers (including Duke University, where Ellis worked in maintenance) had not granted them time off with pay. They were also exceptional individuals who had strong incentives to work together as well as many personal virtues, including intelligence and patience. Still, such cases prove that sometimes sworn enemies can become close friends and can accomplish a great deal for their communities.

Why can’t liberals and conservatives do the same today? Admittedly, extremists on both sides of the current political scene often hide in their echo chambers and homogeneous neighbourhoods. They never listen to the other side. When they do venture out, the level of rhetoric on the internet is abysmal. Trolls resort to slogans, name-calling and jokes. When they do bother to give arguments, their arguments often simply justify what suits their feelings and signals tribal alliances.

The spread of bad arguments is undeniable but not inevitable. Rare but valuable examples such as Atwater and Ellis show us how we can use philosophical tools to reduce political polarisation.

The first step is to reach out. Philosophers go to conferences to find critics who can help them improve their theories. Similarly, Atwater and Ellis arranged meetings with each other in order to figure out how to work together in the charrette. All of us need to recognise the value of listening carefully and charitably to opponents. Then we need to go to the trouble of talking with those opponents, even if it means leaving our comfortable neighbourhoods or favourite websites.

Second, we need to ask questions. Since Socrates, philosophers have been known as much for their questions as for their answers. And if Atwater and Ellis had not asked each other questions, they never would have learned that what they both cared about the most was their children and alleviating the frustrations of poverty. By asking the right questions in the right way, we can often discover shared values or at least avoid misunderstanding opponents.

Third, we need to be patient. Philosophers teach courses for months on a single issue. Similarly, Atwater and Ellis spent 10 days in a public charrette before they finally came to understand and appreciate each other. They also welcomed other members of the community to talk as long as they wanted, just as good teachers include conflicting perspectives and bring all students into the conversation. Today, we need to slow down and fight the tendency to exclude competing views or to interrupt and retort with quick quips and slogans that demean opponents.

Fourth, we need to give arguments. Philosophers typically recognise that they owe reasons for their claims. Similarly, Atwater and Ellis did not simply announce their positions. They referred to the concrete needs of their children and their communities in order to explain why they held their positions. On controversial issues, neither side is obvious enough to escape demands for evidence and reasons, which are presented in the form of arguments.

None of these steps is easy or quick, but books and online courses on reasoning – especially in philosophy – are available to teach us how to appreciate and develop arguments. We can also learn through practice by reaching out, asking questions, being patient, and giving arguments in our everyday lives.

We still cannot reach everyone. Even the best arguments sometimes fall on deaf ears. But we should not generalise hastily to the conclusion that arguments always fail. Moderates are often open to reason on both sides. So are those all-too-rare exemplars who admit that they (like most of us) do not know which position to hold on complex moral and political issues.

Two lessons emerge. First, we should not give up on trying to reach extremists, such as Atwater and Ellis, despite how hard it is. Second, it is easier to reach moderates, so it usually makes sense to try reasoning with them first. Practising on more receptive audiences can help us improve our arguments as well as our skills in presenting arguments. These lessons will enable us to do our part to shrink the polarisation that stunts our societies and our lives.

Walter Sinnott-Armstrong

This article was originally published at Aeon and has been republished under Creative Commons.

Subjectivity as Truth


A Selected Passage


When subjectivity, inwardness, is truth, then objectively truth is the paradox; and the fact that truth is objectively the paradox is just what proves subjectivity to be truth, since the objective situation proves repellent, and this resistance on the part of objectivity, or its expression, is the resilience of inwardness and the gauge of its strength. The paradox is the objective uncertainty that is the expression for the passion of inwardness, which is just what truth is. So much for the Socratic. Eternal, essential truth, i.e., truth that relates essentially to someone existing through essentially concerning what it is to exist (all other knowledge being from the Socratic point of view accidental, its scope and degree a matter of indifference), is the paradox. Yet the eternal, essential truth is by no means itself the paradox; it is so by relating to someone existing. Socratic ignorance is the expression of the objective uncertainty, the inwardness of the one who exists is truth. Just to anticipate here, note the following: Socratic ignorance is an analogue to the category of the absurd, except that in the repellency of the absurd there is even less objective certainty, since there is only the certainty that it is absurd. And just for that reason is the resilience of the inwardness even greater. Socratic inwardness in existing is an analogue of faith, except that the inwardness of faith, corresponding as it does to the resistance not of ignorance but of the absurd, is infinitely more profound.

Socratically, the eternal essential truth is by no means in itself paradoxical; it is so only by relating to someone existing. This is expressed in another Socratic proposition, namely, that all knowing is recollecting. That proposition foreshadows the beginning of speculative thought, which is also the reason why Socrates did not pursue it. Essentially it became Platonic. Here is where the path branches off and Socrates essentially accentuates existing, while Plato, forgetting the latter, loses himself in speculation. The infinite merit of Socrates is precisely to be an existing thinker, not a speculator who forgets what it is to exist. For Socrates, therefore, the proposition that all knowing is recollecting has, at the moment of his leave-taking and as the suspended possibility of speculating, a two-fold significance: (1) that the knower is essentially integer and that there is no other anomaly concerning knowledge confronting him than that he exists, which anomaly, however, is so essential and decisive for him that it means that existing, the inward absorption in and through existing, is truth; (2) that existence in temporality has no decisive importance, since the possibility of taking oneself back into eternity through recollection is always there, even though this possibility is constantly cancelled by the time taken in inner absorption in existing.

The unending merit of the Socratic was precisely to accentuate the fact that the knower is someone existing and that existing is what is essential. Going further through failing to understand this is but a mediocre merit. The Socratic is therefore something we must bear in mind and then see whether the formula might not be altered so as to make a real advance on the Socratic.

Subjectivity, inwardness, accordingly, is truth. Is there now a more inward expression of this? Yes, indeed; when talk of ‘subjectivity, inwardness, is truth’ begins as follows: ‘Subjectivity is untruth.’ But let us not be in a hurry. Speculation also says that subjectivity is untruth, but says this in exactly the opposite direction; namely, that objectivity is truth. Speculation defines subjectivity negatively in the direction of objectivity. This other definition, on the contrary, gets in its own way from the start, which is just what makes the inwardness so much more inward. Socratically, subjectivity is untruth if it refuses to grasp that subjectivity is truth but, for example, wants to become objective. Here, however, in setting about becoming truth by becoming subjective, subjectivity is in the difficult position of being untruth. The work thus goes backwards, that is, back into inwardness. Far from the path leading in the direction of the objective, the beginning itself lies only even deeper in subjectivity.

But the subject cannot be untruth eternally, or be presupposed eternally to have been so; he must have become that in time, or becomes that in time. The Socratic paradox lay in the eternal truth relating to someone existing. But now existence has put its mark a second time on the one who exists. A change so essential has occurred in him that now he cannot possibly take himself back into the eternal through Socratic recollection. To do that is to speculate; the Socratic is to be able to do it but to cancel the possibility by grasping the inward absorption in existence. But now the difficulty is this, that what followed Socrates as a cancelled possibility has become an impossibility. If, in relation to Socrates, speculating was already a dubious merit, now it is only confusion.

The paradox emerges when the eternal truth and existence are put together; but every time existence is marked out, the paradox becomes ever clearer. Socratically, the knower was someone who existed, but now someone who exists has been marked in such a way that existence has undertaken an essential change in him.

How Al-Farabi drew on Plato to argue for censorship in Islam


Andrew Shiva / Wikipedia

Rashmee Roshan Lall | Aeon Ideas

You might not be familiar with the name Al-Farabi, a 10th-century thinker from Baghdad, but you know his work, or at least its results. Al-Farabi was, by all accounts, a man of steadfast Sufi persuasion and unvaryingly simple tastes. As a labourer in a Damascus vineyard before settling in Baghdad, he favoured a frugal diet of lambs’ hearts and water mixed with sweet basil juice. But in his political philosophy, Al-Farabi drew on a rich variety of Hellenic ideas, notably from Plato and Aristotle, adapting and extending them in order to respond to the flux of his times.

The situation in the mighty Abbasid empire in which Al-Farabi lived demanded a delicate balancing of conservatism with radical adaptation. Against the backdrop of growing dysfunction as the empire became a shrunken version of itself, Al-Farabi formulated a political philosophy conducive to civic virtue, justice, human happiness and social order.

But his real legacy might be the philosophical rationale that Al-Farabi provided for controlling creative expression in the Muslim world. In so doing, he completed the aniconic (or antirepresentational) project begun in the late seventh century by a caliph of the Umayyads, the first Muslim dynasty. Caliph Abd al-Malik did it with nonfigurative images on coins and calligraphic inscriptions on the Dome of the Rock in Jerusalem, the first monument of the new Muslim faith. This heralded Islamic art’s break from the Greco-Roman representative tradition. A few centuries later, Al-Farabi took the notion of creative control to new heights by arguing for restrictions on representation through the word. He did it using solidly Platonic concepts, and can justifiably be said to have helped concretise the way Islam understands and responds to creative expression.

Word portrayals of Islam and its prophet can be deemed sacrilegious just as much as representational art. The consequences of Al-Farabi’s rationalisation of representational taboos are apparent in our times. In 1989, Iran’s Ayatollah Khomeini issued a fatwa sentencing Salman Rushdie to death for writing The Satanic Verses (1988). The book outraged Muslims for its fictionalised account of Prophet Muhammad’s life. In 2001, the Taliban blew up the sixth-century Bamiyan Buddhas in Afghanistan. In 2005, controversy erupted over the publication by the Danish newspaper Jyllands-Posten of cartoons depicting the Prophet. The cartoons continued to ignite fury in some way or other for at least a decade. There were protests across the Middle East, attacks on Western embassies after several European papers reprinted the cartoons, and in 2008 Osama bin Laden issued an incendiary warning to Europe of ‘grave punishment’ for its ‘new Crusade’ against Islam. In 2015, the offices of Charlie Hebdo, a satirical magazine in Paris that habitually offended Muslim sensibilities, were attacked by armed gunmen, killing 12. The magazine had featured Michel Houellebecq’s novel Submission (2015), a futuristic vision of France under Islamic rule.

In a sense, the destruction of the Bamiyan Buddhas was no different from the Rushdie fatwa, which was like the Danish cartoons fallout and the violence wreaked on Charlie Hebdo’s editorial staff. All are linked by the desire to control representation, be it through imagery or the word.

Control of the word was something that Al-Farabi appeared to judge necessary if Islam’s biggest project – the multiethnic commonwealth that was the Abbasid empire – was to be preserved. Figural representation was pretty much settled as an issue for Muslims when Al-Farabi would have been pondering some of his key theories. Within 30 years of the Prophet’s death in 632, art and creative expression took two parallel paths depending on the context for which it was intended. There was art for the secular space, such as the palaces and bathhouses of the Umayyads (661-750). And there was the art considered appropriate for religious spaces – mosques and shrines such as the Dome of the Rock (completed in 691). Caliph Abd al-Malik had already engaged in what has been called a ‘polemic of images’ on coinage with his Byzantine counterpart, Emperor Justinian II. Ultimately, Abd al-Malik issued coins inscribed with the phrases ‘ruler of the orthodox’ and ‘representative [caliph] of Allah’ rather than his portrait. And the Dome of the Rock had script rather than representations of living creatures as a decoration. The lack of image had become an image. In fact, the word was now the image. That is why calligraphy became the greatest of Muslim art forms. The importance of the written word – its absorption and its meaning – was also exemplified by the Abbasids’ investment in the Greek-to-Arabic translation movement from the eighth to the 10th centuries.

Consequently, in Al-Farabi’s time, what was most important for Muslims was to control representation through the word. Christian iconophiles made their case for devotional images with the argument that words have the same representative power as paintings. Words are like icons, declared the iconophile Christian priest Theodore Abu Qurrah, who lived in dar al-Islam and wrote in Arabic in the ninth century. And images, he said, are the writing of the illiterate.

Al-Farabi was concerned about the power – for good or ill – of writings at a time when the Abbasid empire was in decline. He held creative individuals responsible for what they produced. Abbasid caliphs increasingly faced a crisis of authority, both moral and political. This led Al-Farabi – one of the Arab world’s most original thinkers – to extrapolate from topical temporal matters the key issues confronting Islam and its expanding and diverse dominions.

Al-Farabi fashioned a political philosophy that naturalised Plato’s imaginary ideal state for the world to which he belonged. He tackled the obvious issue of leadership, reminding Muslim readers of the need for a philosopher-king, a ‘virtuous ruler’ to preside over a ‘virtuous city’, which would be run on the principles of ‘virtuous religion’.

Like Plato, Al-Farabi suggested creative expression should support the ideal ruler, thus shoring up the virtuous city and the status quo. Just as Plato in the Republic demanded that poets in the ideal state tell stories of unvarying good, especially about the gods, Al-Farabi’s treatises mention ‘praiseworthy’ poems, melodies and songs for the virtuous city. Al-Farabi commended as ‘most venerable’ for the virtuous city the sorts of writing ‘used in the service of the supreme ruler and the virtuous king.’

It is this idea of writers following the approved narrative that most clearly joins Al-Farabi’s political philosophy to that of the man he called Plato the ‘Divine’. When Al-Farabi seized on Plato’s argument for ‘a censorship of the writers’ as a social good for Muslim society, he was making a case for managing the narrative by controlling the word. It would be important to the next phase of Islamic image-building.

Some of Al-Farabi’s ideas might have influenced other prominent Muslim thinkers, including the Persian polymath Ibn Sina, or Avicenna (c980-1037), and the Persian theologian Al-Ghazali (c1058-1111). Certainly, his rationalisation for controlling creative writing enabled a further move to deny legitimacy to new interpretation.

Rashmee Roshan Lall

This article was originally published at Aeon and has been republished under Creative Commons.

Possibility and Necessity: An Introduction to Modality

1000-Word Philosophy: An Introductory Anthology

Author: Andre Leo Rusavuk
Category: Metaphysics
Word count: 991

We frequently say things like, ‘This seems possible,’ ‘That can’t be done,’ ‘This must happen,’ ‘She might be able to…,’ ‘This is necessary for…’ and so on.[1]

Claims like these are modal claims. They involve the modal concepts of actuality, possibility, and necessity. Modality concerns the mode or way in which a claim is true or false, and how something exists or does not exist.
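
In the standard notation of modal logic (not used in this excerpt, but a common way of making these ideas precise), possibility and necessity are written with the operators ◇ and □, each definable in terms of the other:

\[
\Diamond p \equiv \neg\Box\neg p \qquad \Box p \equiv \neg\Diamond\neg p
\]

That is, a claim is possibly true just when its negation is not necessarily true, and necessarily true just when its negation is not possibly true.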

This essay explains basic modal concepts, illustrates some different kinds of possibility and necessity, and briefly explains how we try to identify whether a modal claim is true or false.

“Imagine The Possibilities” by Carol Groenen.

1. Modal Concepts

Modal concepts apply to claims and beings, at least.[2] Here are some basic definitions concerning claims, beliefs or sentences:

  • a claim is possibly true if it could…

View original post 1,943 more words

What Einstein Meant by ‘God Does Not Play Dice’

Einstein with his second wife Elsa, 1921. Wikipedia.

Jim Baggott | Aeon Ideas

‘The theory produces a good deal but hardly brings us closer to the secret of the Old One,’ wrote Albert Einstein in December 1926. ‘I am at all events convinced that He does not play dice.’

Einstein was responding to a letter from the German physicist Max Born. The heart of the new theory of quantum mechanics, Born had argued, beats randomly and uncertainly, as though suffering from arrhythmia. Whereas physics before the quantum had always been about doing this and getting that, the new quantum mechanics appeared to say that when we do this, we get that only with a certain probability. And in some circumstances we might get the other.

Einstein was having none of it, and his insistence that God does not play dice with the Universe has echoed down the decades, as familiar and yet as elusive in its meaning as E = mc². What did Einstein mean by it? And how did Einstein conceive of God?

Hermann and Pauline Einstein were nonobservant Ashkenazi Jews. Despite his parents’ secularism, the nine-year-old Albert discovered and embraced Judaism with some considerable passion, and for a time he was a dutiful, observant Jew. Following Jewish custom, his parents would invite a poor scholar to share a meal with them each week, and from the impoverished medical student Max Talmud (later Talmey) the young and impressionable Einstein learned about mathematics and science. He consumed all 21 volumes of Aaron Bernstein’s joyful Popular Books on Natural Science (1880). Talmud then steered him in the direction of Immanuel Kant’s Critique of Pure Reason (1781), from which he migrated to the philosophy of David Hume. From Hume, it was a relatively short step to the Austrian physicist Ernst Mach, whose stridently empiricist, seeing-is-believing brand of philosophy demanded a complete rejection of metaphysics, including notions of absolute space and time, and the existence of atoms.

But this intellectual journey had mercilessly exposed the conflict between science and scripture. The now 12-year-old Einstein rebelled. He developed a deep aversion to the dogma of organised religion that would last for his lifetime, an aversion that extended to all forms of authoritarianism, including any kind of dogmatic atheism.

This youthful, heavy diet of empiricist philosophy would serve Einstein well some 14 years later. Mach’s rejection of absolute space and time helped to shape Einstein’s special theory of relativity (including the iconic equation E = mc²), which he formulated in 1905 while working as a ‘technical expert, third class’ at the Swiss Patent Office in Bern. Ten years later, Einstein would complete the transformation of our understanding of space and time with the formulation of his general theory of relativity, in which the force of gravity is replaced by curved spacetime. But as he grew older (and wiser), he came to reject Mach’s aggressive empiricism, and once declared that ‘Mach was as good at mechanics as he was wretched at philosophy.’

Over time, Einstein evolved a much more realist position. He preferred to accept the content of a scientific theory realistically, as a contingently ‘true’ representation of an objective physical reality. And, although he wanted no part of religion, the belief in God that he had carried with him from his brief flirtation with Judaism became the foundation on which he constructed his philosophy. When asked about the basis for his realist stance, he explained: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, at least to some extent, to human reason.’

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

Earlier in 1926, the Austrian physicist Erwin Schrödinger had radically transformed the theory by formulating it in terms of rather obscure ‘wavefunctions’. Schrödinger himself preferred to interpret these realistically, as descriptive of ‘matter waves’. But a consensus was growing, strongly promoted by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, that the new quantum representation shouldn’t be taken too literally.

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wavefunction represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wavefunction is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’. He could not accept that his God would allow the ‘lawful harmony’ to unravel so completely at the atomic scale, bringing lawless indeterminism and uncertainty, with effects that can’t be entirely and unambiguously predicted from their causes.

The stage was thus set for one of the most remarkable debates in the entire history of science, as Bohr and Einstein went head-to-head on the interpretation of quantum mechanics. It was a clash of two philosophies, two conflicting sets of metaphysical preconceptions about the nature of reality and what we might expect from a scientific representation of this. The debate began in 1927, and although the protagonists are no longer with us, the debate is still very much alive.

And unresolved.

I don’t think Einstein would have been particularly surprised by this. In February 1954, just 14 months before he died, he wrote in a letter to the American physicist David Bohm: ‘If God created the world, his primary concern was certainly not to make its understanding easy for us.’


Jim Baggott

This article was originally published at Aeon and has been republished under Creative Commons.

Interview with Simone de Beauvoir (1959)

Simone de Beauvoir was a French writer, intellectual, existentialist philosopher, political activist, feminist and social theorist. Though she did not consider herself a philosopher, she had a significant influence on both feminist existentialism and feminist theory.

De Beauvoir wrote novels, essays, biographies, autobiography and monographs on philosophy, politics and social issues. She was known for her 1949 treatise The Second Sex, a detailed analysis of women’s oppression and a foundational tract of contemporary feminism; and for her novels, including She Came to Stay and The Mandarins. She was also known for her lifelong relationship with French philosopher Jean-Paul Sartre.


You may find two of de Beauvoir’s works, namely, The Second Sex (PDF) and The Ethics of Ambiguity (PDF), in the Political & Cultural and 20th-Century Philosophy sections of the Bookshelf.