Philosophy Can Make the Previously Unthinkable Thinkable


Detail from Woman at a Window (1822) by Caspar David Friedrich. Courtesy Alte Nationalgalerie, Berlin


Rebecca Brown | Aeon Ideas

In the mid-1990s, Joseph Overton, a researcher at the US think tank the Mackinac Center for Public Policy, proposed the idea of a ‘window’ of socially acceptable policies within any given domain. This came to be known as the Overton window of political possibilities. The job of think tanks, Overton proposed, was not directly to advocate particular policies, but to shift the window of possibilities so that previously unthinkable policy ideas – those shocking to the sensibilities of the time – become mainstream and part of the debate.

Overton’s insight was that there is little point advocating policies that are publicly unacceptable, since (almost) no politician will support them. Efforts are better spent, he argued, in shifting the debate so that such policies seem less radical and become more likely to receive support from sympathetic politicians. For instance, working to increase awareness of climate change might make future proposals to restrict the use of diesel cars more palatable, and ultimately more effective, than directly lobbying for a ban on such vehicles.

Overton was concerned with the activities of think tanks, but philosophers and practical ethicists might gain something from considering the Overton window. By its nature, practical ethics typically addresses controversial, politically sensitive topics. It is the job of philosophers to engage in ‘conceptual hygiene’ or, as the late British philosopher Mary Midgley described it, ‘philosophical plumbing’: clarifying and streamlining, diagnosing unjustified assertions and pointing out circularities.

Hence, philosophers can be eager to apply their skills to new subjects. This can provoke frustration from those embedded within a particular subject. Sometimes, this is deserved: philosophers can be naive in contributing their thoughts to complex areas with which they lack the kind of familiarity that requires time and immersion. But such an outside perspective can also be useful. Such contributions will rarely get everything right, but demanding that they do is too high a standard in areas of great division and debate (such as practical ethics). Instead, we should expect philosophers to offer a counterpoint to received wisdom, established norms and doctrinal prejudice.

Ethicists, at least within their academic work, are encouraged to be sceptical of intuition and the naturalistic fallacy (the idea that values can be derived simply from facts). Philosophers are also familiar with tools such as thought experiments: hypothetical and contrived descriptions of events that can be useful for clarifying particular intuitions or the implications of a philosophical claim. These two factors make it unsurprising that philosophers often publicly adopt positions that are unintuitive and outside mainstream thought, and that they might not personally endorse.

This can serve to shift, and perhaps widen, the Overton window. Is this a good thing? Sometimes philosophers argue for conclusions far outside the domain of ‘respectable’ positions; conclusions that could be hijacked by those with intolerant, racist, sexist or fundamentalist beliefs to support their stance. It is understandable that those who are threatened by such beliefs want any argument that might conceivably support them to be absent from the debate, off the table, and ignored.

However, the freedom to test the limits of argumentation and intuition is vital to philosophical practice. There are sufficient and familiar examples of historical orthodoxies that have been overturned – women’s right to vote; the abolition of slavery; the decriminalisation of same-sex relationships – to establish that the strength and pervasiveness of a belief indicate neither truth nor immutability.

It can be tedious to repeatedly debate women’s role in the workforce, abortion, animals’ capacity to feel pain and so on, but to silence discussion would be far worse. Genuine attempts to resolve difficult ethical dilemmas must recognise that understanding develops by getting things wrong and having this pointed out. Most (arguably, all) science fails to describe or predict how the world works with perfect accuracy. But as a collective enterprise, it can identify errors and gradually approximate ‘truth’. Ethical truths are less easy to come by, and a different methodology is required in seeking out satisfactory approximations. But part of this model requires allowing plenty of room to get things wrong.

It is unfortunate but true that bad ideas are sometimes undermined by bad reasoning, and also that sometimes those who espouse offensive and largely false views can say true things. Consider the ‘born this way’ argument, which endorses the flawed assumption that a genetic basis for homosexuality indicates the permissibility of same-sex relationships. While this might win over some individuals, it could cause problems down the line if it turns out that homosexuality isn’t genetically determined. Debates relating to the ‘culture wars’ on college campuses have attracted many ad hominem criticisms that set out to discredit the authors’ position by pointing to the fact that they fit a certain demographic (white, middle-class, male) or share some view with a villainous figure, and thus are not fit to contribute. The point of philosophy is to identify such illegitimate moves, and to keep the argument on topic; sometimes, this requires coming to the defence of bad ideas or villainous characters.

Participation in this process can be daunting. Defending an unpopular position can make one a target both for well-directed, thoughtful criticisms, and for emotional, sweeping attacks. Controversial positions on contentious topics attract far more scrutiny than abstract philosophical contributions to niche subjects. This means that, in effect, the former are required to be more rigorous than the latter, and to foresee and head off more potential misappropriations, misinterpretations and misunderstandings – all while contributing to an interdisciplinary area, which requires some understanding not only of philosophical theory but perhaps also medicine, law, natural and social science, politics and various other disciplines.

This can be challenging, though I do not mean to be an apologist for thoughtless, sensationalist provocation and controversy-courting, whether delivered by philosophers or others. We should see one important social function of practical ethicists as widening the Overton window and pushing the public and political debate towards reasoned deliberation and respectful disagreement. Widening the Overton window can yield opportunities for ideas that many find offensive, and straightforwardly mistaken, as well as for ideas that are well-defended and reasonable. It is understandable that those with deep personal involvement in these debates often want to narrow the window and push it in the direction of those views they find unthreatening. But philosophers have a professional duty, as conceptual plumbers, to keep the whole system in good working order. This depends upon philosophical contributors upholding the disciplinary standards of academic rigour and intellectual honesty that are essential to ethical reflection, and trusting that this will gradually, collectively lead us in the right direction.

Rebecca Brown

This article was originally published at Aeon and has been republished under Creative Commons.

The Existentialist Tradition



This recently arrived in the mail: The Existentialist Tradition: Selected Writings, edited by Nino Langiulli. I’m very happy to have found this book in good condition. It was my first introduction to existentialism around 10 years ago: I originally found it at the university library, and the ideas contained within are thought-provoking and sometimes even profound. Very glad to have a copy of my own all these years later. Highly recommended as an introduction to existentialism and a guide to which authors you may wish to pursue further.

The Empathetic Humanities have much to teach our Adversarial Culture



Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be with us hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Passion and Grit Create a Work of Art


John Rodney Mullen (born August 17, 1966) is an American professional skateboarder, entrepreneur, inventor, and public speaker who practices freestyle and street skateboarding. He is widely considered the most influential street skater in the history of the sport, being credited for inventing numerous tricks, including the flatground ollie, kickflip, heelflip, impossible, and 360-flip. As a result, he has been called the “Godfather of Street Skateboarding.”

Wikipedia

Psychology’s Five Revelations for Finding Your True Calling


Christian Jarrett | Aeon Ideas

Look. You can’t plan out your life. What you have to do is first discover your passion – what you really care about.
Barack Obama

If, like many, you are searching for your calling in life – perhaps you are still unsure which profession aligns with what you most care about – here are five recent research findings worth taking into consideration.

First, there’s a difference between having a harmonious passion and an obsessive passion. If you can find a career path or occupational goal that fires you up, you are more likely to succeed and find happiness through your work – that much we know from the deep research literature. But beware – since a seminal paper published in 2003 by the Canadian psychologist Robert Vallerand and colleagues, researchers have made an important distinction between having a harmonious passion and an obsessive one. If you feel that your passion or calling is out of control, and that your mood and self-esteem depend on it, then this is the obsessive variety, and such passions, while they are energising, are also associated with negative outcomes such as burnout and anxiety. In contrast, if your passion feels in control, reflects qualities that you like about yourself, and complements other important activities in your life, then this is the harmonious version, which is associated with positive outcomes, such as vitality, better work performance, experiencing flow, and positive mood.

Secondly, having an unanswered calling in life is worse than having no calling at all. If you already have a burning ambition or purpose, do not leave it to languish. A few years ago, researchers at the University of South Florida surveyed hundreds of people and grouped them according to whether they felt they had no calling in life, had a calling they’d answered, or had a calling they’d never done anything about. In terms of their work engagement, career commitment, life satisfaction, health and stress, the stand-out finding was that the participants who had a calling they hadn’t answered scored the worst across all these measures. The researchers said that this puts a different spin on the presumed benefits of having a calling in life. They concluded: ‘having a calling is only a benefit if it is met, but can be a detriment when it is not as compared to having no calling at all’.

The third finding to bear in mind is that, without passion, grit is ‘merely a grind’. The idea that ‘grit’ is vital for career success was advanced by the psychologist Angela Duckworth of the University of Pennsylvania, who argued that highly successful, ‘gritty’ people have impressive persistence. ‘To be gritty,’ Duckworth writes in her 2016 book on the subject, ‘is to fall down seven times, and rise eight.’ Many studies certainly show that being more conscientious – more self-disciplined and industrious – is associated with more career success. But is that all that being gritty means? Duckworth has always emphasised that it has another vital component that brings us back to passion again – alongside persistence, she says that gritty people also have an ‘ultimate concern’ (another way of describing having a passion or calling).

However, according to a paper published last year, the standard measure of grit has failed to assess passion (or more specifically, ‘passion attainment’) – and Jon Jachimowicz at Columbia Business School in New York and colleagues believe this could explain why the research on grit has been so inconsistent (leading to claims that it is an overhyped concept and simply conscientiousness repackaged). Jachimowicz’s team found that when they explicitly measured passion attainment (how much people feel they have adequate passion for their work) and combined this with a measure of perseverance (a consistency of interests and the ability to overcome setbacks), then the two together did predict superior performance among tech-company employees and university students. ‘Our findings suggest that perseverance without passion attainment is mere drudgery, but perseverance with passion attainment propels individuals forward,’ they said.

Another finding is that, when you invest enough effort, you might find that your work becomes your passion. It’s all very well reading about the benefits of having a passion or calling in life but, if you haven’t got one, where to find it? Duckworth says it’s a mistake to think that in a moment of revelation one will land in your lap, or simply occur to you through quiet contemplation – rather, you need to explore different activities and pursuits, and expose yourself to the different challenges and needs confronting society. If you still draw a blank, then perhaps it’s worth heeding the advice of others who say that it is not always the case that energy and determination flow from finding your passion – sometimes it can be the other way around and, if you put enough energy into your work, then passion will follow. Consider, for instance, an eight-week repeated survey of German entrepreneurs published in 2014 that found a clear pattern – their passion for their ventures increased after they’d invested more effort into them the week before. A follow-up study qualified this, suggesting that the energising effect of investing effort arises only when the project is freely chosen and there is a sense of progress. ‘Entrepreneurs increase their passion when they make significant progress in their venture and when they invest effort out of their own free choice,’ the researchers said.

Finally, if you think that passion comes from doing a job you enjoy, you’re likely to be disappointed. Consider where you think passion comes from. In a preprint paper released on PsyArXiv, Jachimowicz and his team draw a distinction between people who believe that passion comes from doing what you enjoy (which they say is encapsulated by Oprah Winfrey’s commencement address in 2008 in which she said passions ‘bloom when we’re doing what we love’), and those who see it as arising from doing what you believe in or value in life (as reflected in the words of former Mexican president Felipe Calderón who in his own commencement address in 2011 said ‘you have to embrace with passion the things that you believe in, and that you are fighting for’).

The researchers found that people who believe that passion comes from pleasurable work were less likely to feel that they had found their passion (and were more likely to want to leave their job) as compared with people who believe that passion comes from doing what you feel matters. Perhaps this is because there is a superficiality and ephemerality to working for sheer pleasure – what fits the bill one month or year might not do so for long – whereas working towards what you care about is a timeless endeavour that is likely to stretch and sustain you indefinitely. The researchers conclude that their results show ‘the extent to which individuals attain their desired level of work passion may have less to do with their actual jobs and more to do with their beliefs about how work passion is pursued’.

This is an adaptation of an article originally published by The British Psychological Society’s Research Digest.

Christian Jarrett

This article was originally published at Aeon and has been republished under Creative Commons.

Slaying the Snark: What Nonsense Verse tells us about Reality


Eighth of Henry Holiday’s original illustrations to “The Hunting of the Snark” by Lewis Carroll, Wikipedia

Nina Lyon | Aeon Ideas

The English writer Lewis Carroll’s nonsense poem The Hunting of the Snark (1876) is an exceptionally difficult read. In it, a crew of improbable characters boards a ship to hunt a Snark, which might sound like a plot were it not for the fact that nobody knows what a Snark actually is. It doesn’t help that any attempt to describe a Snark turns into a pile-up of increasingly incoherent attributes: it is said to taste ‘meagre and hollow, but crisp: / Like a coat that is rather too tight in the waist’.

The only significant piece of information we have about the Snark’s identity is that it might be a Boojum. Unfortunately nobody knows what that is either, apart from the fact that anyone who encounters a Boojum will ‘softly and suddenly vanish away’ into nothingness.

Nothingness also characterises the crew’s map: a ‘perfect and absolute blank!’

‘What’s the good of Mercator’s North Poles and Equators,
Tropics, Zones and Meridian Lines?’
So the Bellman would cry: and the crew would reply,
‘They are merely conventional signs!’

Nonsense such as this might get tiresome to read, but it can make for a useful thought-experiment – particularly about language. In the Snark, as in the Alice books of 1865 and 1871, the commonsense assumptions that usually govern language and meaning are turned upside down. It makes us wonder what all of those assumptions are up to, and how they work. How do we know that this sentence is trying to say something serious, or that where we are now is not a dream?

Language can’t always convey meaning alone – it might need sense, the governing context that frames it. We talk about ‘common sense’, or whether something ‘makes sense’, or dismiss things as ‘nonsense’, but we rarely think about what sense itself is, until it goes missing. The German logician Gottlob Frege in 1892 used sense to describe a proposition’s meaning, as something distinct from what it denoted. Sense therefore appears to be a mental entity, resistant to fixed definition.

Shortly after Carroll’s death in 1898, a seismic turn took place in both logic and metaphysics. Building on Frege, logicians such as Bertrand Russell sought to deploy logic and mathematics in order to establish unconditional truths. A logical truth was, like mathematics, true whether or not people changed their minds about it. Realism, the belief in a mind-independent reality, began to assert itself afresh after a long spell in the philosophical wilderness.

Sense and nonsense would therefore become landmines in a battle over logic’s ability to untether truth from thought. If an issue over meaning seeks recourse in sense, it seeks recourse in thought too. Carroll anticipated where logic was headed, and the strangest of his creations was more than a game, an experiment conceived, as the English author G K Chesterton once wrote of his work, ‘in order to study that darkest problem of metaphysics’.

In 1901, the pragmatist philosopher and provocateur F C S Schiller created a parody Christmas edition of the philosophical journal Mind called Mind!. The frontispiece was a ‘Portrait of Its Immanence the Absolute’, which, Schiller noted, was ‘very like the Bellman’s map in the Hunting of the Snark’: completely blank.

The Absolute – or the Infinite or Ultimate Reality, among other grand aliases – was the sum of all experience and being, and inconceivable to the human mind. It was monistic, consuming all into the One. If it sounded like something you’d struggle to get your head around, that was pretty much the point. The Absolute was an emblem of metaphysical idealism, the doctrine that truth could exist only within the domain of thought. Idealism had dominated the academy for the entirety of Carroll’s career, and it was beginning to come under attack. The realist mission, headed by Russell, was to clean up philosophy’s act with the sound application of mathematics and objective facts, and it felt like a breath of fresh air.

Schiller delighted in trolling absolute idealists in general and the English idealist philosopher F H Bradley in particular. In Mind!, Schiller claimed that the Snark was a satire on the Absolute, whose notorious ineffability drove its seekers to derangement. But this was disingenuous. Bradley’s major work, Appearance and Reality (1893), mirrors the point, insofar as there is one, of the Snark. When you home in on a thing and try to pin it down by describing its attributes, and then try to pin down what those are too – Bradley uses the example of a lump of sugar – it all begins to crumble, and must be something other instead. What appeared to be there was only ever an idea. Carroll was, contrariwise, in line with idealist thinking.

A passionate logician, Carroll had been working on a three-part book on symbolic logic that remained unfinished at his death. Two logical paradoxes that he posed in Mind and shared privately with friends and colleagues, such as Bradley, hint at a troublemaking sentiment regarding where logic might be headed. ‘A Logical Paradox’ (1894) resulted in two contradictory statements being simultaneously true; ‘What the Tortoise Said to Achilles’ (1895) set up a predicament in which each proposition requires an additional supporting proposition, creating an infinite regress.

A few years after Carroll’s death, Russell began to flex logic as a tool for denoting the world and testing the validity of propositions about it. Carroll’s paradoxes were problematic and demanded a solution. Russell’s response to ‘A Logical Paradox’ was to legislate nonsense away into a ‘null-class’ – a set of nonexistent propositions that, because it had no real members, didn’t exist either.

Russell’s solution to ‘What the Tortoise Said to Achilles’, tucked away in a footnote to the Principles of Mathematics (1903), entailed a recourse to sense in order to determine whether or not a proposition should be asserted in the first place, teetering into the mind-dependent realm of idealism. Mentally determining meaning is a bit like mentally determining reality, and it wasn’t a neat win for logic’s role as objective sword of truth.

In the Snark, the principles of narrative self-immolate, so that the story, rather than describing things and events in the world, undoes them into something other. It ends like this:

In the midst of the word he was trying to say,
In the midst of his laughter and glee,
He had softly and suddenly vanished away –
For the Snark was a Boojum, you see.

Strip the plot down to those eight final words, and it is all there. The thing sought turned out, upon examination, to be something else entirely. Beyond the flimsy veil of appearance, formed from words and riddled with holes, lies an inexpressible reality.

By the late 20th century, when Russell had won the battle of ideas and commonsense realism prevailed, critics such as Martin Gardner, author of The Annotated Hunting of the Snark (2006), were rattled by Carroll’s antirealism. If the reality we perceive is all there is, and it falls apart, we are left with nothing.

Carroll’s attacks on realism might look nihilistic or radical to a postwar mind steeped in atheist scientism, but they were neither. Carroll was a man of his time, taking a philosophically conservative party line on absolute idealism and its theistic implications. But he was also prophetic, seeing conflict at the limits of language, logic and reality, and laying a series of conceptual traps that continue to provoke it.

The Snark is one such trap. Carroll rejected his illustrator Henry Holiday’s image of the Boojum on the basis that it needed to remain unimaginable, for, after all, how can you illustrate the incomprehensible nature of ultimate reality? It is a task as doomed as saying the unsayable – which, paradoxically, was a task Carroll himself couldn’t quite resist.

Nina Lyon

This article was originally published at Aeon and has been republished under Creative Commons.

Modern Technology is akin to the Metaphysics of Vedanta

whitehead-vedanta

Akhandadhi Das | Aeon Ideas

You might think that digital technologies, often considered a product of ‘the West’, would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend the quantum physics of the 20th century.

The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural nets and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.

Vedanta offers a model to integrate subjective consciousness and the information-processing systems of our body and brains. Its theory separates the brain and the senses from the mind. But it also distinguishes the mind from the function of consciousness, which it defines as the ability to experience mental output. We’re familiar with this notion from our digital devices. A camera, microphone or other sensors linked to a computer gather information about the world, and convert the various forms of physical energy – light waves, air pressure-waves and so forth – into digital data, just as our bodily senses do. The central processing unit processes this data and produces relevant outputs. The same is true of our brain. In both contexts, there seems to be little scope for subjective experience to play a role within these mechanisms.

While computers can handle all sorts of processing without our help, we furnish them with a screen as an interface between the machine and ourselves. Similarly, Vedanta postulates that the conscious entity – something it terms the atma – is the observer of the output of the mind. The atma possesses, and is said to be composed of, the fundamental property of consciousness. The concept is explored in many of the meditative practices of Eastern traditions.

You might think of the atma like this. Imagine you’re watching a film in the cinema. It’s a thriller, and you’re anxious about the lead character, trapped in a room. Suddenly, the door in the movie crashes open and there stands… You jump, as if startled. But what is the real threat to you, other than maybe spilling your popcorn? By suspending awareness of your body in the cinema and identifying with the character on the screen, you allow your emotional state to be manipulated. Vedanta suggests that the atma, the conscious self, identifies with the physical world in a similar fashion.

This idea can also be explored in the all-consuming realm of VR. On entering a game, we might be asked to choose our character or avatar – originally a Sanskrit word, aptly enough, meaning ‘one who descends from a higher dimension’. In older texts, the term often refers to divine incarnations. However, the etymology suits the gamer, as he or she chooses to descend from ‘normal’ reality and enter the VR world. Having specified our avatar’s gender, bodily features, attributes and skills, next we learn how to control its limbs and tools. Soon, our awareness diverts from our physical self to the VR capabilities of the avatar.

In Vedanta psychology, this is akin to the atma adopting the psychological persona-self it calls the ahankara, or the ‘pseudo-ego’. Instead of a detached conscious observer, we choose to define ourselves in terms of our social connections and the physical characteristics of the body. Thus, I come to believe in myself with reference to my gender, race, size, age and so forth, along with the roles and responsibilities of family, work and community. Conditioned by such identification, I indulge in the relevant emotions – some happy, some challenging or distressing – produced by the circumstances I witness myself undergoing.

Within a VR game, our avatar represents a pale imitation of our actual self and its entanglements. In our interactions with the avatar-selves of others, we might reveal little about our true personality or feelings, and know correspondingly little about others’. Indeed, encounters among avatars – particularly when competitive or combative – are often vitriolic, seemingly unrestrained by concern for the feelings of the people behind the avatars. Connections made through online gaming aren’t a substitute for other relationships. Rather, as researchers at Johns Hopkins University have noted, gamers with strong real-world social lives are less likely to fall prey to gaming addiction and depression.

These observations mirror the Vedantic claim that our ability to form meaningful relationships is diminished by absorption in the ahankara, the pseudo-ego. The more I regard myself as a physical entity requiring various forms of sensual gratification, the more likely I am to objectify those who can satisfy my desires, and to forge relationships based on mutual selfishness. But Vedanta suggests that love should emanate from the deepest part of the self, not its assumed persona. Love, it claims, is soul-to-soul experience. Interactions with others on the basis of the ahankara offer only a parody of affection.

As the atma, we remain the same subjective self throughout the whole of our life. Our body, mentality and personality change dramatically – but throughout it all, we know ourselves to be the constant observer. However, seeing everything shift and give way around us, we suspect that we’re also subject to change, ageing and heading for annihilation. Yoga, as systematised by Patanjali – an author or authors, like ‘Homer’, who lived in the 2nd century BCE – is intended as a practical method for freeing the atma from relentless mental tribulation and situating it properly in the reality of pure consciousness.

In VR, we’re often called upon to do battle with evil forces, confronting jeopardy and virtual mortality along the way. Despite our efforts, the inevitable almost always happens: our avatar is killed. Game over. Gamers, especially pathological gamers, are known to become deeply attached to their avatars, and can suffer distress when their avatars are harmed. Fortunately, we’re usually offered another chance: Do you want to play again? Sure enough, we do. Perhaps we create a new avatar, someone more adept, based on the lessons learned last time around. This mirrors the Vedantic concept of reincarnation, specifically in its form of metempsychosis: the transmigration of the conscious self into a new physical vehicle.

Some commentators interpret Vedanta as suggesting that there is no real world, and that all that exists is conscious awareness. However, a broader take on Vedantic texts is more akin to VR. The VR world is wholly data, but it becomes ‘real’ when that information manifests itself to our senses as imagery and sounds on the screen or through a headset. Similarly, for Vedanta, it is the external world’s transitory manifestation as observable objects that makes it less ‘real’ than the perpetual, unchanging nature of the consciousness that observes it.

To the sages of old, immersing ourselves in the ephemeral world means allowing the atma to succumb to an illusion: the illusion that our consciousness is somehow part of an external scene, and must suffer or enjoy along with it. It’s amusing to think what Patanjali and the Vedantic fathers would make of VR: an illusion within an illusion, perhaps, but one that might help us to grasp the potency of their message.

Akhandadhi Das

This article was originally published at Aeon and has been republished under Creative Commons.


Why Amartya Sen Remains the Century’s Great Critic of Capitalism

amartya-sen

Nobel laureate Amartya Kumar Sen in 2000, Wikipedia


Tim Rogan | Aeon Ideas

Critiques of capitalism come in two varieties. First, there is the moral or spiritual critique. This critique rejects Homo economicus as the organising heuristic of human affairs. Human beings, it says, need more than material things to prosper. Calculating power is only a small part of what makes us who we are. Moral and spiritual relationships are first-order concerns. Material fixes such as a universal basic income will make no difference to societies in which the basic relationships are felt to be unjust.

Then there is the material critique of capitalism. The economists who lead discussions of inequality now are its leading exponents. Homo economicus is the right starting point for social thought. We are poor calculators and single-minded, failing to see our advantage in the rational distribution of prosperity across societies. Hence inequality, the wages of ungoverned growth. But we are calculators all the same, and what we need above all is material plenty, thus the focus on the redress of material inequality. From good material outcomes, the rest follows.

The first kind of argument for capitalism’s reform seems recessive now. The material critique predominates. Ideas emerge in numbers and figures. Talk of non-material values in political economy is muted. The Christians and Marxists who once made the moral critique of capitalism their own are marginal. Utilitarianism grows ubiquitous and compulsory.

But then there is Amartya Sen.

Every major work on material inequality in the 21st century owes a debt to Sen. But his own writings treat material inequality as though the moral frameworks and social relationships that mediate economic exchanges matter. Famine is the nadir of material deprivation. But it seldom occurs – Sen argues – for lack of food. To understand why a people goes hungry, look not for catastrophic crop failure; look rather for malfunctions of the moral economy that moderates competing demands upon a scarce commodity. Material inequality of the most egregious kind is the problem here. But piecemeal modifications to the machinery of production and distribution will not solve it. The relationships between different members of the economy must be put right. Only then will there be enough to go around.

In Sen’s work, the two critiques of capitalism cooperate. We move from moral concerns to material outcomes and back again with no sense of a threshold separating the two. Sen disentangles moral and material issues without favouring one or the other, keeping both in focus. The separation between the two critiques of capitalism is real, but transcending the divide is possible, and not only at some esoteric remove. Sen’s is a singular mind, but his work has a widespread following, not least in provinces of modern life where the predominance of utilitarian thinking is most pronounced. In economics curricula and in the schools of public policy, in internationalist secretariats and in humanitarian NGOs, there too Sen has created a niche for thinking that crosses boundaries otherwise rigidly observed.

This was no feat of lonely genius or freakish charisma. It was an effort of ordinary human innovation, putting old ideas together in new combinations to tackle emerging problems. Formal training in economics, mathematics and moral philosophy supplied the tools Sen has used to construct his critical system. But the influence of Rabindranath Tagore sensitised Sen to the subtle interrelation between our moral lives and our material needs. And a profound historical sensibility has enabled him to see the sharp separation of the two domains as transient.

Tagore’s school at Santiniketan in West Bengal was Sen’s birthplace. Tagore’s pedagogy emphasised articulate relations between a person’s material and spiritual existences. Both were essential – biological necessity, self-creating freedom – but modern societies tended to confuse the proper relation between them. In Santiniketan, pupils played at unstructured exploration of the natural world between brief forays into the arts, learning to understand their sensory and spiritual selves as at once distinct and unified.

Sen left Santiniketan in the late 1940s as a young adult to study economics in Calcutta and Cambridge. The major contemporary controversy in economics was the theory of welfare, and debate was affected by Cold War contention between market- and state-based models of economic order. Sen’s sympathies were social democratic but anti-authoritarian. Welfare economists of the 1930s and 1940s sought to split the difference, insisting that states could legitimate programmes of redistribution by appeal to rigid utilitarian principles: a pound in a poor man’s pocket adds more to overall utility than the same pound in the rich man’s pile. Here was the material critique of capitalism in its infancy, and here is Sen’s response: maximising utility is not everyone’s abiding concern – saying so and then making policy accordingly is a form of tyranny – and in any case using government to move money around in pursuit of some notional optimum is a flawed means to that end.

Economic rationality harbours a hidden politics whose implementation damaged the moral economies that groups of people built up to govern their own lives, frustrating the achievement of its stated aims. In commercial societies, individuals pursue economic ends within agreed social and moral frameworks. The social and moral frameworks are neither superfluous nor inhibiting. They are the coefficients of durable growth.

Moral economies are not neutral, given, unvarying or universal. They are contested and evolving. Each person is more than a cold calculator of rational utility. Societies aren’t just engines of prosperity. The challenge is to make non-economic norms affecting market conduct legible, to bring the moral economies amid which market economies and administrative states function into focus. Thinking that bifurcates moral on the one hand and material on the other is inhibiting. But such thinking is not natural and inevitable, it is mutable and contingent – learned and apt to be unlearned.

Sen was not alone in seeing this. The American economist Kenneth Arrow was his most important interlocutor, connecting Sen in turn with the tradition of moral critique associated with R H Tawney and Karl Polanyi. Each was determined to re-integrate economics into frameworks of moral relationship and social choice. But Sen saw more clearly than any of them how this could be achieved. He realised that at earlier moments in modern political economy this separation of our moral lives from our material concerns had been inconceivable. Utilitarianism had blown in like a weather front around 1800, trailing extremes of moral fervour and calculating zeal in its wake. Sen sensed this climate of opinion changing, and set about cultivating once again the ameliorative ideas and approaches that its onset had eradicated.

There have been two critiques of capitalism, but there should be only one. Amartya Sen is the new century’s first great critic of capitalism because he has made that clear.

Tim Rogan

This article was originally published at Aeon and has been republished under Creative Commons.

Reach out, listen, be patient. Good arguments can stop extremism

coming-together

Walter Sinnott-Armstrong | Aeon Ideas

Many of my best friends think that some of my deeply held beliefs about important issues are obviously false or even nonsense. Sometimes, they tell me so to my face. How can we still be friends? Part of the answer is that these friends and I are philosophers, and philosophers learn how to deal with positions on the edge of sanity. In addition, I explain and give arguments for my claims, and they patiently listen and reply with arguments of their own against my – and for their – stances. By exchanging reasons in the form of arguments, we show each other respect and come to understand each other better.

Philosophers are weird, so this kind of civil disagreement still might seem impossible among ordinary folk. However, some stories give hope and show how to overcome high barriers.

One famous example involved Ann Atwater and C P Ellis in my home town of Durham, North Carolina; it is described in Osha Gray Davidson’s book The Best of Enemies (1996) and a forthcoming movie. Atwater was a single, poor, black parent who led Operation Breakthrough, which tried to improve local black neighbourhoods. Ellis was an equally poor but white parent who was proud to be Exalted Cyclops of the local Ku Klux Klan. They could not have started further apart. At first, Ellis brought a gun and henchmen to town meetings in black neighbourhoods. Atwater once lurched toward Ellis with a knife and had to be held back by her friends.

Despite their mutual hatred, when courts ordered Durham to integrate their public schools, Atwater and Ellis were pressured into co-chairing a charrette – a series of public discussions that lasted eight hours per day for 10 days in July 1971 – about how to implement integration. To plan their ordeal, they met and began by asking questions, answering with reasons, and listening to each other. Atwater asked Ellis why he opposed integration. He replied that mainly he wanted his children to get a good education, but integration would ruin their schools. Atwater was probably tempted to scream at him, call him a racist, and walk off in a huff. But she didn’t. Instead, she listened and said that she also wanted his children – as well as hers – to get a good education. Then Ellis asked Atwater why she worked so hard to improve housing for blacks. She replied that she wanted her friends to have better homes and better lives. He wanted the same for his friends.

When each listened to the other’s reasons, they realised that they shared the same basic values. Both loved their children and wanted decent lives for their communities. As Ellis later put it: ‘I used to think that Ann Atwater was the meanest black woman I’d ever seen in my life … But, you know, her and I got together one day for an hour or two and talked. And she is trying to help her people like I’m trying to help my people.’ After realising their common ground, they were able to work together to integrate Durham schools peacefully. In large part, they succeeded.

None of this happened quickly or easily. Their heated discussions lasted 10 long days in the charrette. They could not have afforded to leave their jobs for so long if their employers (including Duke University, where Ellis worked in maintenance) had not granted them time off with pay. They were also exceptional individuals who had strong incentives to work together as well as many personal virtues, including intelligence and patience. Still, such cases prove that sometimes sworn enemies can become close friends and can accomplish a great deal for their communities.

Why can’t liberals and conservatives do the same today? Admittedly, extremists on both sides of the current political scene often hide in their echo chambers and homogeneous neighbourhoods. They never listen to the other side. When they do venture out, the level of rhetoric on the internet is abysmal. Trolls resort to slogans, name-calling and jokes. When they do bother to give arguments, their arguments often simply justify what suits their feelings and signals tribal alliances.

The spread of bad arguments is undeniable but not inevitable. Rare but valuable examples such as Atwater and Ellis show us how we can use philosophical tools to reduce political polarisation.

The first step is to reach out. Philosophers go to conferences to find critics who can help them improve their theories. Similarly, Atwater and Ellis arranged meetings with each other in order to figure out how to work together in the charrette. All of us need to recognise the value of listening carefully and charitably to opponents. Then we need to go to the trouble of talking with those opponents, even if it means leaving our comfortable neighbourhoods or favourite websites.

Second, we need to ask questions. Since Socrates, philosophers have been known as much for their questions as for their answers. And if Atwater and Ellis had not asked each other questions, they never would have learned that what they both cared about the most was their children and alleviating the frustrations of poverty. By asking the right questions in the right way, we can often discover shared values or at least avoid misunderstanding opponents.

Third, we need to be patient. Philosophers teach courses for months on a single issue. Similarly, Atwater and Ellis spent 10 days in a public charrette before they finally came to understand and appreciate each other. They also welcomed other members of the community to talk as long as they wanted, just as good teachers include conflicting perspectives and bring all students into the conversation. Today, we need to slow down and fight the tendency to exclude competing views or to interrupt and retort with quick quips and slogans that demean opponents.

Fourth, we need to give arguments. Philosophers typically recognise that they owe reasons for their claims. Similarly, Atwater and Ellis did not simply announce their positions. They referred to the concrete needs of their children and their communities in order to explain why they held their positions. On controversial issues, neither side is obvious enough to escape demands for evidence and reasons, which are presented in the form of arguments.

None of these steps is easy or quick, but books and online courses on reasoning – especially in philosophy – are available to teach us how to appreciate and develop arguments. We can also learn through practice by reaching out, asking questions, being patient, and giving arguments in our everyday lives.

We still cannot reach everyone. Even the best arguments sometimes fall on deaf ears. But we should not generalise hastily to the conclusion that arguments always fail. Moderates are often open to reason on both sides. So are those all-too-rare exemplars who admit that they (like most of us) do not know which position to hold on complex moral and political issues.

Two lessons emerge. First, we should not give up on trying to reach extremists, such as Atwater and Ellis, despite how hard it is. Second, it is easier to reach moderates, so it usually makes sense to try reasoning with them first. Practising on more receptive audiences can help us improve our arguments as well as our skills in presenting arguments. These lessons will enable us to do our part to shrink the polarisation that stunts our societies and our lives.

Walter Sinnott-Armstrong

This article was originally published at Aeon and has been republished under Creative Commons.

Subjectivity as Truth

conc-sci-post

A Selected Passage


When subjectivity, inwardness, is truth, then objectively truth is the paradox; and the fact that truth is objectively the paradox is just what proves subjectivity to be truth, since the objective situation proves repellent, and this resistance on the part of objectivity, or its expression, is the resilience of inwardness and the gauge of its strength. The paradox is the objective uncertainty that is the expression for the passion of inwardness, which is just what truth is. So much for the Socratic. Eternal, essential truth, i.e., truth that relates essentially to someone existing through essentially concerning what it is to exist (all other knowledge being from the Socratic point of view accidental, its scope and degree a matter of indifference), is the paradox. Yet the eternal, essential truth is by no means itself the paradox; it is so by relating to someone existing. Socratic ignorance is the expression of the objective uncertainty, the inwardness of the one who exists is truth. Just to anticipate here, note the following: Socratic ignorance is an analogue to the category of the absurd, except that in the repellency of the absurd there is even less objective certainty, since there is only the certainty that it is absurd. And just for that reason is the resilience of the inwardness even greater. Socratic inwardness in existing is an analogue of faith, except that the inwardness of faith, corresponding as it does to the resistance not of ignorance but of the absurd, is infinitely more profound.

Socratically, the eternal essential truth is by no means in itself paradoxical; it is so only by relating to someone existing. This is expressed in another Socratic proposition, namely, that all knowing is recollecting. That proposition foreshadows the beginning of speculative thought, which is also the reason why Socrates did not pursue it. Essentially it became Platonic. Here is where the path branches off and Socrates essentially accentuates existing, while Plato, forgetting the latter, loses himself in speculation. The infinite merit of Socrates is precisely to be an existing thinker, not a speculator who forgets what it is to exist. For Socrates, therefore, the proposition that all knowing is recollecting has, at the moment of his leave-taking and as the suspended possibility of speculating, a two-fold significance: (1) that the knower is essentially integer and that there is no other anomaly concerning knowledge confronting him than that he exists, which anomaly, however, is so essential and decisive for him that it means that existing, the inward absorption in and through existing, is truth; (2) that existence in temporality has no decisive importance, since the possibility of taking oneself back into eternity through recollection is always there, even though this possibility is constantly cancelled by the time taken in inner absorption in existing.

The unending merit of the Socratic was precisely to accentuate the fact that the knower is someone existing and that existing is what is essential. Going further through failing to understand this is but a mediocre merit. The Socratic is therefore something we must bear in mind and then see whether the formula might not be altered so as to make a real advance on the Socratic.

Subjectivity, inwardness, accordingly, is truth. Is there now a more inward expression of this? Yes, indeed; when talk of ‘subjectivity, inwardness, is truth’ begins as follows: ‘Subjectivity is untruth.’ But let us not be in a hurry. Speculation also says that subjectivity is untruth, but says this in exactly the opposite direction; namely, that objectivity is truth. Speculation defines subjectivity negatively in the direction of objectivity. This other definition, on the contrary, gets in its own way from the start, which is just what makes the inwardness so much more inward. Socratically, subjectivity is untruth if it refuses to grasp that subjectivity is truth but, for example, wants to become objective. Here, however, in setting about becoming truth by becoming subjective, subjectivity is in the difficult position of being untruth. The work thus goes backwards, that is, back into inwardness. Far from the path leading in the direction of the objective, the beginning itself lies only even deeper in subjectivity.

But the subject cannot be untruth eternally, or be presupposed eternally to have been so; he must have become that in time, or becomes that in time. The Socratic paradox lay in the eternal truth relating to someone existing. But now existence has put its mark a second time on the one who exists. A change so essential has occurred in him that now he cannot possibly take himself back into the eternal through Socratic recollection. To do that is to speculate; the Socratic is to be able to do it but to cancel the possibility by grasping the inward absorption in existence. But now the difficulty is this, that what followed Socrates as a cancelled possibility has become an impossibility. If, in relation to Socrates, speculating was already a dubious merit, now it is only confusion.

The paradox emerges when the eternal truth and existence are put together; but every time existence is marked out, the paradox becomes ever clearer. Socratically, the knower was someone who existed, but now someone who exists has been marked in such a way that existence has undertaken an essential change in him.