To Avoid Moral Failure, Don’t See People as Sherlock Does


Suspicious minds; William Gillette as Sherlock Holmes (right) and Bruce McRae as Dr John Watson in the play Sherlock Holmes (c1900). Courtesy Wikimedia

Rima Basu | Aeon Ideas

If we’re the kind of people who care both about not being racist, and also about basing our beliefs on the evidence that we have, then the world presents us with a challenge. The world is pretty racist. It shouldn’t be surprising then that sometimes it seems as if the evidence is stacked in favour of some racist belief. For example, it’s racist to assume that someone’s a staff member on the basis of his skin colour. But what if it’s the case that, because of historical patterns of discrimination, the members of staff with whom you interact are predominantly of one race? When the late John Hope Franklin, professor of history at Duke University in North Carolina, hosted a dinner party at his private club in Washington, DC in 1995, he was mistaken for a member of staff. Did the woman who made this mistake do something wrong? Yes. It was indeed racist of her, even though Franklin had been, since 1962, that club’s first black member.

To begin with, we don’t relate to people in the same way that we relate to objects. Human beings are different in an important way. In the world, there are things – tables, chairs, desks and other objects that aren’t furniture – and we try our best to understand how this world works. We ask why plants grow when watered, why dogs give birth to dogs and never to cats, and so on. But when it comes to people, ‘we have a different way of going on, though it is hard to capture just what that is’, as Rae Langton, now professor of philosophy at the University of Cambridge, put it so nicely in 1991.

Once you accept this general intuition, you might begin to wonder how we can capture that different way in which we ought to relate to others. To do this, first we must recognise that, as Langton goes on to write, ‘we don’t simply observe people as we might observe planets, we don’t simply treat them as things to be sought out when they can be of use to us, and avoid when they are a nuisance. We are, as [the British philosopher P F] Strawson says, involved.’

This way of being involved has been played out in many different ways, but here’s the basic thought: being involved is thinking that others’ attitudes and intentions towards us are important in a special way, and that our treatment of others should reflect that importance. We are, each of us, in virtue of being social beings, vulnerable. We depend upon others for our self-esteem and self-respect.

For example, we each think of ourselves as having a variety of more or less stable characteristics, from marginal ones such as being born on a Friday to central ones such as being a philosopher or a spouse. The more central self-descriptions are important to our sense of self-worth, to our self-understanding, and they constitute our sense of identity. When these central self-descriptions are ignored by others in favour of expectations on the basis of our race, gender or sexual orientation, we’re wronged. Perhaps our self-worth shouldn’t be based on something so fragile, but not only are we all-too-human, these self-descriptions also allow us to understand who we are and where we stand in the world.

This thought is echoed in the American sociologist and civil rights activist W E B DuBois’s concept of double consciousness. In The Souls of Black Folk (1903), DuBois notes a common feeling: ‘this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity’.

When you believe that John Hope Franklin must be a staff member rather than a club member, you’ve made predictions about him and observed him in the same way that one might observe the planets. Our private thoughts can wrong other people. When someone forms beliefs about you in this predictive way, they fail to see you, they fail to interact with you as a person. This is not only upsetting. It is a moral failing.

The English philosopher W K Clifford argued in 1877 that we were morally criticisable if our beliefs weren’t formed in the right way. He warned that we have a duty to humanity to never believe on the basis of insufficient evidence because to do so would be to put society at risk. As we look at the world around us and the epistemic crisis in which we find ourselves, we see what happens when Clifford’s imperative is ignored. And if we combine Clifford’s warning with DuBois’s and Langton’s observations, it becomes clear that, for our belief-forming practices, the stakes aren’t just high because we depend on one another for knowledge – the stakes are also high because we depend on one another for respect and dignity.

Consider how upset Arthur Conan Doyle’s characters get with Sherlock Holmes for the beliefs this fictional detective forms about them. Without fail, the people whom Holmes encounters find the way he forms beliefs about others to be insulting. Sometimes it’s because it is a negative belief. Often, however, the belief is mundane: eg, what they ate on the train or which shoe they put on first in the morning. There’s something improper about the way that Holmes relates to other human beings. Holmes’s failure to relate is not just a matter of his actions or his words (though sometimes it is also that), but what really rubs us up the wrong way is that Holmes observes us all as objects to be studied, predicted and managed. He doesn’t relate to us as human beings.

Maybe in an ideal world, what goes on inside our heads wouldn’t matter. But just as the personal is the political, our private thoughts aren’t really only our own. If a man believes of every woman he meets: ‘She’s someone I can sleep with,’ it’s no excuse that he never acts on the belief or reveals the belief to others. He has objectified her and failed to relate to her as a human being, and he has done so in a world in which women are routinely objectified and made to feel less-than.

This kind of indifference to the effect one has on others is morally criticisable. It has always struck me as odd that everyone grants that our actions and words are apt for moral critique, but once we enter the realm of thought we’re off the hook. Our beliefs about others matter. We care what others think of us.

When we mistake a person of colour for a staff member, that challenges this person’s central self-descriptions, the descriptions from which he draws his sense of self-worth. This is not to say that there is anything wrong with being a staff member, but if your reason for thinking that someone is staff is tied not only to something he has no control over (his skin colour) but also to a history of oppression (being denied access to more prestigious forms of employment), then that should give you pause.

The facts might not be racist, but the facts that we often rely on can be the result of racism, including racist institutions and policies. So when forming beliefs using evidence that is a result of racist history, we are accountable for failing to show more care and for believing so easily that someone is a staff member. Precisely what is owed can vary along a number of dimensions, but nonetheless we can recognise that some extra care with our beliefs is owed along these lines. We owe each other not only better actions and better words, but also better thoughts.

Rima Basu is an assistant professor of philosophy at Claremont McKenna College in California. Her work has been published in Philosophical Studies, among others.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How do we Pry Apart the True and Compelling from the False and Toxic?


Stack of CPUs. Shawn Stutzman, Pexels

David V Johnson | Aeon Ideas

When false and malicious speech roils the body politic, when racism and violence surge, the right and role of freedom of speech in society comes into crisis. People rightly begin to wonder what the limits are and what the rules should be. It is a complicated issue, and resolving it requires care about the exact problems targeted and the solutions proposed. Otherwise the risk to free speech is real.

Propaganda from Russian-funded troll farms (boosted by Facebook data breaches) might have contributed to the United Kingdom’s vote to exit the European Union and to the United States’ election of Donald Trump as president. Conspiracy theories spread by alternative news outlets or over social media sometimes lead to outbreaks of violence. Politicians exploit the mainstream news media’s commitment to balance, its coverage of newsworthy public statements and its need for viewers or readers by making baseless, sensational claims.

In On Liberty (1859), John Stuart Mill offers the most compelling defence of freedom of speech, conscience and autonomy ever written. Mill argues that the only reason to restrict speech is to prevent harm to others, such as with hate speech and incitement to violence. Otherwise, all speech must be protected. Even if we know a view is false, Mill says, it is wrong to suppress it. We avoid prejudice and dogmatism, and achieve understanding, through freely discussing and defending what we believe against contrary claims.

Today, a growing number of people see these views as naive. Mill’s arguments are better suited to those who still believe in the open marketplace of ideas, where free and rational debate is the best way to settle all disputes about truth and falsity. Who could possibly believe we live in such a world anymore? Instead, what we have is a Wild West of partisanship and manipulation, where social media gurus exploit research in behavioural psychology to compel users to affirm and echo absurd claims. We have a world where people live in cognitive bubbles of the like-minded and share one another’s biases and prejudices. According to this savvy view, our brave new world is too prone to propaganda and conspiracy-mongering to rely on Mill’s optimism about free speech. To do so is to risk abetting the rise of fascist and absolutist tendencies.

In his book How Fascism Works (2018), the American philosopher Jason Stanley cites the Russian television network RT, which presents all sorts of misleading and slanted views. If Mill is right, claims Stanley, then RT and such propaganda outfits ‘should be the paradigm of knowledge production’ because they force us to scrutinise their claims. But this is a reductio ad absurdum of Mill’s argument. Similarly, Alexis Papazoglou in The New Republic questions whether Nick Clegg, the former British deputy prime minister turned Facebook’s new vice president of global affairs and communication, will be led astray by his appreciation of Mill’s On Liberty. ‘Mill seemed to believe that an open, free debate meant the truth would usually prevail, whereas under censorship, truth could end up being accidentally suppressed, along with falsehood,’ writes Papazoglou. ‘It’s a view that seems a bit archaic in the age of an online marketplace of memes and clickbait, where false stories tend to spread faster and wider than their true counterpoints.’

When false but influential beliefs and theories gain traction in public conversation, Mill’s protection of speech can be frustrating. But there is nothing new about ‘fake news’, whether in Mill’s age of sensationalist newspapers or in our age of digital media. Nonetheless to seek a solution in restricting speech is foolish and counterproductive – it lends credibility to the illiberal forces you, paradoxically, seek to silence. It also betrays an elitism about engaging with those of different opinions and a cynicism about affording your fellow citizens the freedom to muddle through the morass on their own. If we want to live in a liberal democratic society, rational engagement is the only solution on offer. Rather than restricting speech, we should look to supplement Mill’s view with effective tools for dealing with bad actors and with beliefs that, although false, seem compelling to some.

Fake news and propaganda are certainly problems, as they were in Mill’s day, but the problems they raise are more serious than the falsity of their claims. After all, they are not unique in saying false things, as the latest newspaper corrections will tell you. More importantly, they involve bad actors: people and organisations who intentionally pass off false views as the truth, and hide their nature and motives. (Think Russian troll farms.) Anyone who knows that they are dealing with bad actors – people trying to mislead – ignores them, and justifiably so. It’s not worth your time to consider the claim of someone you know is trying to deceive you.

There is nothing in Mill that demands that we engage any and all false views. After all, there are too many out there and so people have to be selective. Transparency is key, helping people know with whom, or what, they are dealing. Transparency helps filter out noise and fosters accountability, so that bad actors – those who hide their identity for the purpose of misleading others – are eliminated.

Mill’s critics fail to see the truth that is mixed in with the false views that they wish to restrict, and that makes those views compelling. RT, for instance, has covered many issues, such as the US financial crisis, economic inequality and imperialism more accurately than mainstream news channels. RT also includes informed sources who are ignored by other outlets. The channel might be biased toward demeaning the US and fomenting division, but it often pursues this agenda by speaking truths that are not covered in mainstream US media. Informed news-watchers know to view RT and all news sources with scepticism, and there is no reason not to extend the same respect to the entire viewing public, unless you presume you are a better judge of what to believe than your fellow citizens.

Mill rightly thought that the typical case wasn’t one of views that are false, but views that have a mixture of true and false. It would be far more effective to try to engage with the truth in views we despise than to try to ban them for their alleged falsity. The Canadian psychologist and YouTube sensation Jordan Peterson, for example, says things that are false, misogynistic and illiberal, but one possible reason for his following is that he recognises and speaks to a deficit of meaning and values in many young men’s lives. Here, the right approach is to pry apart the true and compelling from the false and toxic, through reasoned consideration. This way, following Mill’s path, presents a better chance of winning over those who are lost to views we despise. It also helps us improve our own understanding, as Mill wisely suggests.

David V Johnson

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Climate Strikes: Researcher explains how Young People can Keep up the Momentum

Harriet Thew, University of Leeds

As part of one of the largest environmental protests ever seen, over a million young people went on strike on Friday March 15 2019, calling for more ambitious action on climate change. Inspired by Greta Thunberg, a Swedish schoolgirl who protested outside the Swedish parliament every Friday throughout 2018, young people in over 100 countries left their classrooms and took to the streets.

The previous #YouthStrike4Climate on February 15 2019 mobilised over 10,000 young people in over 40 locations in the UK alone. Their marches, chants and signs captured attention and prompted debates regarding the motivations and methods of young strikers. Many were criticised by those in the government and the media for simply wanting an opportunity to miss school.

My PhD research explores youth participation in climate change governance, focusing on the UN climate negotiations. Between 2015 and 2018 I closely studied the UK Youth Climate Coalition (UKYCC) – a voluntary, youth-led group of 18- to 29-year-olds – which attends the international negotiations and coordinates local and national climate change campaigns.

Members of the UK Youth Climate Coalition protest in London.
Harriet Thew, Author provided

My research shows that young people are mobilised by concern for people and wildlife, fears for the future and anger that climate action is neither sufficiently rapid nor ambitious. Young people need to feel as though they are “doing something” about climate change while politicians dither and scientists release increasingly alarming projections of future climate conditions.

The strikes have helped young activists find like-minded peers and new opportunities to engage. They articulate a collective youth voice, wielding the moral power of young people – a group which society agrees it is supposed to protect. All the same, there are threats to sustaining the movement’s momentum which need to be recognised now.

Challenge misplaced paternalism

The paternalism that gives youth a moral platform is a double-edged sword. Patronising responses from adults in positions of authority, from head teachers to the prime minister, dismiss their scientifically informed concerns and attack the messenger, rather than dealing with the message itself.

You’re too young to understand the complexity of this.

You’ll grow out of these beliefs.

You just want to skip school.

Stay in school and wait your turn to make a difference.

Striking may hurt your future job prospects.

The list goes on …

This frightens some children and young people into silence, but doesn’t address the factors which mobilised them in the first place. These threats are also largely unfounded.


To any young person reading this, I want to reassure you, as a university educator, that critical thinking, proactivity and an interest in current affairs are qualities that universities encourage. Over 200 academics signed this open letter – myself included – showing our support for the school strikes.

Don’t ‘grow up’

Growing up is inevitable, but it can cause problems for youth movements. As young people gain experience of climate action and expand their professional networks, they “grow out of” being able to represent youth, often getting jobs to advocate for other groups or causes. While this can be positive for individuals, institutional memory is lost when experienced advocates move on to do other things. This puts youth at a disadvantage in relation to other groups who are better resourced and don’t have a “time limit” on how long they can represent their cause.

Well-established youth organisations, such as Guides and Scouts, whom I have worked with in the past, can use their large networks and professional experience to sustain youth advocacy on climate change, though they lack the resources to do so alone. It would also help for other campaigners to show solidarity with the young strikers, and to recognise youth as an important group in climate change debates. This will give people more opportunity to keep supporting the youth climate movement as they get older.

Grow the climate justice movement

Researching the same group of young people for three years, I have identified a shift in their attitudes over time. As young participants become more involved in the movement, they encounter different types of injustices voiced by other groups. They hear activists sharing stories of the devastating climate impacts already experienced by communities, in places where sea level rise is inundating homes and droughts are killing livestock and causing starvation.

The climate justice movement emphasises how climate change exacerbates racial and economic inequality but frequently overlooks the ways these inequalities intersect with age-based disadvantages. When campaigners forget that frontline communities also contain young people, youth movements in developed countries like the UK begin to question the validity of their own intergenerational injustice claims.

Indigenous people often inhabit the frontline of impacts from pollution and climate change.
Rainforest Action Network/Flickr, CC BY-NC

Many feel ashamed for having claimed vulnerability, given their relatively privileged position. Over time, they lose faith in their right to be heard. It would strengthen the entire climate movement if other climate justice campaigners more vocally acknowledged young people as a vulnerable group and shared their platform so that these important voices could better amplify one another.

With my own platform, I would like to say this to the thousands who went on strike. You matter. You have a right to be heard and you shouldn’t be embarrassed to speak out. Have confidence in your message, engage with others but stay true to your principles. Stick together and remember that even when you leave school and enter work – you’re never too old to be a youth advocate.


Harriet Thew, PhD Researcher in Climate Change Governance, University of Leeds

This article is republished from The Conversation under a Creative Commons license. Read the original article.

African Art in Western Museums: It’s Patrimony not Heritage


Detail from a 16th-century bronze plaque from Benin, West Africa, held at the British Museum, London. Courtesy the Trustees of the British Museum

Charlotte Joy | Aeon Ideas

Museums with colonial-era collections have always known about the brutal parts of their biographies. But, through acts of purification via historical distance, they have chosen to ignore them. Museum directors now have to re-think their position as defenders of their collections in light of a different political agenda that locates people and their patrimony in a precolonial, yet radically altered, landscape.

When learning about cultural heritage, you will be directed to the etymology of the words ‘heritage’ and ‘patrimony’. Whereas ‘heritage’ invokes inheritance, ‘patrimony’ leads us to patriarchy. In French, patrie refers to the homeland, the fatherland, and during colonialism vast swathes of West Africa were brought under this French conceptual model in the 19th and early 20th centuries. Objects taken from West Africa (the periphery) and brought back to the centre/metropole were therefore conceptualised as part of the coloniser’s national identity. They were used in a series of Great Exhibitions and expos to gain support for the colonial project before entering national and private collections throughout Europe.

The immediate paradox here is that, whereas objects from the periphery were welcome in the centre, people were very much not. Since the independence of West African countries throughout the late 1950s and early ’60s, the retention of objects and the simultaneous rejection of people has become ever more fraught. Young undocumented migrants from former French colonies stand metres away from the Musée du quai Branly – Jacques Chirac, a museum in Paris full of their inaccessible patrimony. The migrants are treated with contempt while the objects from their homelands are cared for in museums and treated with great reverence. The migrants will be deported but the objects will not be repatriated. The homeland is therefore only home to objects, not people.

Sub-Saharan Africa has a unique demographic profile. By 2050, it is projected that the region will be home to the 10 countries with the youngest populations in the world. Most Western leaders would like to see strong and stable states in West Africa, states that can provide their citizens with jobs, cultural pride and a reason for staying in their countries and building new futures. The return of objects from museums could become central to this nation-building, undoing some of the harm of the colonial project and supporting emerging creative economies.

The objects taken from West Africa during the colonial period indexed many things, most of them problematic and racist. Some objects acted as a catalyst for the creative work of Western artists, and consequently entered the artistic canon as prompts and props (seen in the background of artists’ studios such as that of Pablo Picasso). The objects that Picasso encountered at the Palais du Trocadéro in Paris were the impetus for his ‘African period’ at the beginning of the 20th century, which produced one of his most famous works, Les Demoiselles d’Avignon (1907).

Beyond the influence that non-European art had on many Western artists, some objects, such as the Benin Bronzes (looted by the British in 1897 from the Kingdom of Benin, in current-day Nigeria) entered global art history on their own merit, as unrivalled technological and artistic accomplishments. This recognition came about only after a difficult period of scepticism, when art historians expressed doubt that African artists could produce work of such sophistication.

Thus, the way in which African objects are held and displayed in Western museums can tell us a lot about the legacy of colonialism and the West’s ambivalent relationship towards its former colonies. But it cannot be said to provide generations of young people in sub-Saharan Africa with a rich cultural repository from which to draw.

Regardless of the politics of return, over the next few decades people born in sub-Saharan Africa will be brought up within a vibrant cultural milieu of art, photography, music and film. However, as colonialism was a humiliating experience for many formerly colonised people, it is not hard to see why regaining control over their patrimony would be a step towards the beginning of healing. The return of cultural objects would allow meaningful access to art and cultural knowledge that could fuel the creative economies of these young nations.

The acts of return in themselves are a symbol of strong contrition, re-opening the dialogue on past wrongs to better establish relationships for the future. It seems that behind proclamations of the complicated nature of the process of return lies this more difficult truth. Human remains have been returned from museums to be reburied with dignity. Nazi-looted art has been seized from unsuspecting collectors and returned to Jewish families. Now is the time for colonial patrimony to be reckoned with because patrimony indexes the biographies of those who made and acquired the objects, drawing their descendants into moral relationships in the present. It is now not a matter of if but when objects will be returned, and whether this happens with good grace or through a fractious period of resistance.

The museums’ ‘cosmopolitan’ defence, made for example by Tiffany Jenkins in Keeping Their Marbles (2016), is that only by juxtaposition in global centres can we truly make sense of global art and the experience of being human. This might be true to some extent but the juxtapositions in themselves are problematic: for example, the British Museum houses its Africa collections in the basement. Museums are also bursting at the seams, and what isn’t displayed is housed in vast stores. To date, the logic of the museum is not one of access and display but of acquisition and retention. The defenders of the museum’s patrimony, the trustees, are appointed on the understanding that their primary role is to protect collections for future generations, narrowly defined within the model of nation states. Perhaps if trustees of museums could rethink their role to include descendants of the colonised, as well as the colonisers, they could help reshape a heritage ethic that is alive to the challenges of global demographics.

Charlotte Joy

This article was originally published at Aeon and has been republished under Creative Commons.

Tools for Thinking: Isaiah Berlin’s Two Concepts of Freedom


Maria Kasmirli | Aeon Ideas

‘Freedom’ is a powerful word. We all respond positively to it, and under its banner revolutions have been started, wars have been fought, and political campaigns are continually being waged. But what exactly do we mean by ‘freedom’? The fact that politicians of all parties claim to believe in freedom suggests that people don’t always have the same thing in mind when they talk about it. Might there be different kinds of freedom and, if so, could the different kinds conflict with each other? Could the promotion of one kind of freedom limit another kind? Could people even be coerced in the name of freedom?

The 20th-century political philosopher Isaiah Berlin (1909-97) thought that the answer to all of these questions was ‘Yes’, and in his essay ‘Two Concepts of Liberty’ (1958) he distinguished two kinds of freedom (or liberty; Berlin used the words interchangeably), which he called negative freedom and positive freedom.

Negative freedom is freedom from interference. You are negatively free to the extent that other people do not restrict what you can do. If other people prevent you from doing something, either directly by what they do, or indirectly by supporting social and economic arrangements that disadvantage you, then to that extent they restrict your negative freedom. Berlin stresses that it is only restrictions imposed by other people that count as limitations of one’s freedom. Restrictions due to natural causes do not count. The fact that I cannot levitate is a physical limitation but not a limitation of my freedom.

Virtually everyone agrees that we must accept some restrictions on our negative freedom if we are to avoid chaos. All states require their citizens to follow laws and regulations designed to help them live together and make society function smoothly. We accept these restrictions on our freedom as a trade-off for other benefits, such as peace, security and prosperity. At the same time, most of us would insist that there are some areas of life that should not be regulated, and where individuals should have considerable, if not complete, freedom. A major debate in political philosophy concerns the boundaries of this area of personal negative freedom. For example, should the state place restrictions on what we may say or read, or on what sexual activities we may engage in?

Whereas negative freedom is freedom from control by others, positive freedom is freedom to control oneself. To be positively free is to be one’s own master, acting rationally and choosing responsibly in line with one’s interests. This might seem to be simply the counterpart of negative freedom; I control myself to the extent that no one else controls me. However, a gap can open between positive and negative freedom, since a person might be lacking in self-control even when he is not restrained by others. Think, for example, of a drug addict who cannot kick the habit that is killing him. He is not positively free (that is, acting rationally in his own best interests) even though his negative freedom is not being limited (no one is forcing him to take the drug).

In such cases, Berlin notes, it is natural to talk of something like two selves: a lower self, which is irrational and impulsive, and a higher self, which is rational and far-sighted. And the suggestion is that a person is positively free only if his higher self is dominant. If this is right, then we might be able to make a person more free by coercing him. If we prevent the addict from taking the drug, we might help his higher self to gain control. By limiting his negative freedom, we would increase his positive freedom. It is easy to see how this view could be abused to justify interventions that are misguided or malign.

Berlin argued that the gap between positive and negative freedom, and the risk of abuse, increases further if we identify the higher, or ‘real’, self, with a social group (‘a tribe, a race, a church, a state’). For we might then conclude that individuals are free only when the group suppresses individual desires (which stem from lower, nonsocial selves) and imposes its will upon them. What particularly worried Berlin about this move was that it justifies the coercion of individuals, not merely as a means of securing social benefits, such as security and cooperation, but as a way of freeing the individuals themselves. The coercion is not seen as coercion at all, but as liberation, and protests against it can be dismissed as expressions of the lower self, like the addict’s craving for his fix. Berlin called this a ‘monstrous impersonation’, which allows those in power ‘to ignore the actual wishes of men or societies, to bully, oppress, torture them in the name, and on behalf, of their “real” selves’. (The reader might be reminded of George Orwell’s novel Nineteen Eighty-Four (1949), which shows how a Stalinist political party imposes its conception of truth on an individual, ‘freeing’ him to love the Party leader.)

Berlin was thinking of how ideas of freedom had been abused by the totalitarian regimes of Nazi Germany and Stalinist Russia, and he was right to highlight the dangers of this kind of thinking. But it does not follow that it is always wrong to promote positive freedom. (Berlin does not claim that it is, and he notes that the notion of negative freedom can be abused in a similar way.) Some people might need help to understand their best interests and achieve their full potential, and we could believe that the state has a responsibility to help them do so. Indeed, this is the main rationale for compulsory education. We require children to attend school (severely limiting their negative freedom) because we believe it is in their own best interests. To leave children free to do whatever they like would, arguably, amount to neglect or abuse. In the case of adults, too, it is arguable that the state has a responsibility to help its citizens live rich and fulfilling lives, through cultural, educational and health programmes. (The need for such help might be especially pressing in free-market societies, where advertisers continually tempt us to indulge our ‘lower’ appetites.) It might be, too, that some people find meaning and purpose through identification with a wider social or political movement, such as feminism, and that in helping them to do so we are helping to liberate them.

Of course, this raises many further questions. Does our current education system really work in children’s best interests, or does it just mould them into a form that is socially and economically useful? Who decides what counts as a rich and fulfilling life? What means can the state legitimately use to help people live well? Is coercion ever acceptable? These are questions about what kind of society we want to live in, and they have no easy answers. But in giving us the distinction between negative and positive freedom, Berlin has given us a powerful tool for thinking about them.

Maria Kasmirli

This article was originally published at Aeon and has been republished under Creative Commons.

Introduction to Deontology: Kantian Ethics

One popular moral theory that denies that morality is solely about the consequences of our actions is known as Deontology. The most influential and widely adhered to version of Deontology was extensively laid out by Immanuel Kant (1724–1804). Kant’s ethics, as well as the overall philosophical system in which it is embedded, is vast and incredibly difficult. However, one relatively simple concept lies at the center of his ethical system: The Categorical Imperative.

via Introduction to Deontology: Kantian Ethics (1000-Word Philosophy)

Author: Andrew Chapman
Category: Ethics
Word Count: 1000

The Empathetic Humanities have much to teach our Adversarial Culture


Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be present hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Reach out, listen, be patient. Good arguments can stop extremism


Walter Sinnott-Armstrong | Aeon Ideas

Many of my best friends think that some of my deeply held beliefs about important issues are obviously false or even nonsense. Sometimes, they tell me so to my face. How can we still be friends? Part of the answer is that these friends and I are philosophers, and philosophers learn how to deal with positions on the edge of sanity. In addition, I explain and give arguments for my claims, and they patiently listen and reply with arguments of their own against my – and for their – stances. By exchanging reasons in the form of arguments, we show each other respect and come to understand each other better.

Philosophers are weird, so this kind of civil disagreement still might seem impossible among ordinary folk. However, some stories give hope and show how to overcome high barriers.

One famous example involved Ann Atwater and C P Ellis in my home town of Durham, North Carolina; it is described in Osha Gray Davidson’s book The Best of Enemies (1996) and a forthcoming movie. Atwater was a single, poor, black parent who led Operation Breakthrough, which tried to improve local black neighbourhoods. Ellis was an equally poor but white parent who was proud to be Exalted Cyclops of the local Ku Klux Klan. They could not have started further apart. At first, Ellis brought a gun and henchmen to town meetings in black neighbourhoods. Atwater once lurched toward Ellis with a knife and had to be held back by her friends.

Despite their mutual hatred, when courts ordered Durham to integrate their public schools, Atwater and Ellis were pressured into co-chairing a charrette – a series of public discussions that lasted eight hours per day for 10 days in July 1971 – about how to implement integration. To plan their ordeal, they met and began by asking questions, answering with reasons, and listening to each other. Atwater asked Ellis why he opposed integration. He replied that mainly he wanted his children to get a good education, but integration would ruin their schools. Atwater was probably tempted to scream at him, call him a racist, and walk off in a huff. But she didn’t. Instead, she listened and said that she also wanted his children – as well as hers – to get a good education. Then Ellis asked Atwater why she worked so hard to improve housing for blacks. She replied that she wanted her friends to have better homes and better lives. He wanted the same for his friends.

When each listened to the other’s reasons, they realised that they shared the same basic values. Both loved their children and wanted decent lives for their communities. As Ellis later put it: ‘I used to think that Ann Atwater was the meanest black woman I’d ever seen in my life … But, you know, her and I got together one day for an hour or two and talked. And she is trying to help her people like I’m trying to help my people.’ After realising their common ground, they were able to work together to integrate Durham schools peacefully. In large part, they succeeded.

None of this happened quickly or easily. Their heated discussions lasted 10 long days in the charrette. They could not have afforded to leave their jobs for so long if their employers (including Duke University, where Ellis worked in maintenance) had not granted them time off with pay. They were also exceptional individuals who had strong incentives to work together as well as many personal virtues, including intelligence and patience. Still, such cases prove that sometimes sworn enemies can become close friends and can accomplish a great deal for their communities.

Why can’t liberals and conservatives do the same today? Admittedly, extremists on both sides of the current political scene often hide in their echo chambers and homogeneous neighbourhoods. They never listen to the other side. When they do venture out, the level of rhetoric on the internet is abysmal. Trolls resort to slogans, name-calling and jokes. When they do bother to give arguments, their arguments often simply justify what suits their feelings and signals tribal alliances.

The spread of bad arguments is undeniable but not inevitable. Rare but valuable examples such as Atwater and Ellis show us how we can use philosophical tools to reduce political polarisation.

The first step is to reach out. Philosophers go to conferences to find critics who can help them improve their theories. Similarly, Atwater and Ellis arranged meetings with each other in order to figure out how to work together in the charrette. All of us need to recognise the value of listening carefully and charitably to opponents. Then we need to go to the trouble of talking with those opponents, even if it means leaving our comfortable neighbourhoods or favourite websites.

Second, we need to ask questions. Since Socrates, philosophers have been known as much for their questions as for their answers. And if Atwater and Ellis had not asked each other questions, they never would have learned that what they both cared about the most was their children and alleviating the frustrations of poverty. By asking the right questions in the right way, we can often discover shared values or at least avoid misunderstanding opponents.

Third, we need to be patient. Philosophers teach courses for months on a single issue. Similarly, Atwater and Ellis spent 10 days in a public charrette before they finally came to understand and appreciate each other. They also welcomed other members of the community to talk as long as they wanted, just as good teachers include conflicting perspectives and bring all students into the conversation. Today, we need to slow down and fight the tendency to exclude competing views or to interrupt and retort with quick quips and slogans that demean opponents.

Fourth, we need to give arguments. Philosophers typically recognise that they owe reasons for their claims. Similarly, Atwater and Ellis did not simply announce their positions. They referred to the concrete needs of their children and their communities in order to explain why they held their positions. On controversial issues, neither side is obvious enough to escape demands for evidence and reasons, which are presented in the form of arguments.

None of these steps is easy or quick, but books and online courses on reasoning – especially in philosophy – are available to teach us how to appreciate and develop arguments. We can also learn through practice by reaching out, asking questions, being patient, and giving arguments in our everyday lives.

We still cannot reach everyone. Even the best arguments sometimes fall on deaf ears. But we should not generalise hastily to the conclusion that arguments always fail. Moderates are often open to reason on both sides. So are those all-too-rare exemplars who admit that they (like most of us) do not know which position to hold on complex moral and political issues.

Two lessons emerge. First, we should not give up on trying to reach extremists, such as Atwater and Ellis, despite how hard it is. Second, it is easier to reach moderates, so it usually makes sense to try reasoning with them first. Practising on more receptive audiences can help us improve our arguments as well as our skills in presenting arguments. These lessons will enable us to do our part to shrink the polarisation that stunts our societies and our lives.

Walter Sinnott-Armstrong

This article was originally published at Aeon and has been republished under Creative Commons.

Emptiness, form, and Dzogchen ethics

For a hundred years, the West has wrestled with the problem of ethical nihilism. God’s commands once provided a firm foundation for morality; but then he died. All attempts to find an alternative foundation have failed. Why, then, should we be moral? How can we be sure what is moral? No one has satisfactory answers, despite many ingenious attempts by brilliant philosophers…

Read the rest at Vividness.

What did Max Weber mean by the ‘Spirit’ of Capitalism?


The BASF factory at Ludwigshafen, Germany, pictured on a postcard in 1881. Courtesy Wikipedia

Peter Ghosh | Aeon Ideas

Max Weber’s famous text The Protestant Ethic and the Spirit of Capitalism (1905) is surely one of the most misunderstood of all the canonical works regularly taught, mangled and revered in universities across the globe. This is not to say that teachers and students are stupid, but that this is an exceptionally compact text that ranges across a very broad subject area, written by an out-and-out intellectual at the top of his game. He would have been dumbfounded to find that it was being used as an elementary introduction to sociology for undergraduate students, or even schoolchildren.

We use the word ‘capitalism’ today as if its meaning were self-evident, or else as if it came from Marx, but this casualness must be set aside. ‘Capitalism’ was Weber’s own word and he defined it as he saw fit. Its most general meaning was quite simply modernity itself: capitalism was ‘the most fateful power in our modern life’. More specifically, it controlled and generated ‘modern Kultur’, the code of values by which people lived in the 20th-century West, and now live, we may add, in much of the 21st-century globe. So the ‘spirit’ of capitalism is also an ‘ethic’, though no doubt the title would have sounded a bit flat if it had been called The Protestant Ethic and the Ethic of Capitalism.

This modern ‘ethic’ or code of values was unlike any other that had gone before. Weber supposed that all previous ethics – that is, socially accepted codes of behaviour rather than the more abstract propositions made by theologians and philosophers – were religious. Religions supplied clear messages about how to behave in society in straightforward human terms, messages that were taken to be moral absolutes binding on all people. In the West this meant Christianity, and its most important social and ethical prescription came out of the Bible: ‘Love thy neighbour.’ Weber was not against love, but his idea of love was a private one – a realm of intimacy and sexuality. As a guide to social behaviour in public places ‘love thy neighbour’ was obviously nonsense, and this was a principal reason why the claims of churches to speak to modern society in authentically religious terms were marginal. He would not have been surprised at the long innings enjoyed by the slogan ‘God is love’ in the 20th-century West – its career was already launched in his own day – nor that its social consequences should have been so limited.

The ethic or code that dominated public life in the modern world was very different. Above all it was impersonal rather than personal: by Weber’s day, agreement on what was right and wrong for the individual was breaking down. The truths of religion – the basis of ethics – were now contested, and other time-honoured norms – such as those pertaining to sexuality, marriage and beauty – were also breaking down. (Here is a blast from the past: who today would think to uphold a binding idea of beauty?) Values were increasingly the property of the individual, not society. So instead of humanly warm contact, based on a shared, intuitively obvious understanding of right and wrong, public behaviour was cool, reserved, hard and sober, governed by strict personal self-control. Correct behaviour lay in the observance of correct procedures. Most obviously, it obeyed the letter of the law (for who could say what its spirit was?) and it was rational. It was logical, consistent, and coherent; or else it obeyed unquestioned modern realities such as the power of numbers, market forces and technology.

There was another kind of disintegration besides that of traditional ethics. The proliferation of knowledge and reflection on knowledge had made it impossible for any one person to know and survey it all. In a world which could not be grasped as a whole, and where there were no universally shared values, most people clung to the particular niche to which they were most committed: their job or profession. They treated their work as a post-religious calling, ‘an absolute end in itself’, and if the modern ‘ethic’ or ‘spirit’ had an ultimate foundation, this was it. One of the most widespread clichés about Weber’s thought is to say that he preached a work ethic. This is a mistake. He personally saw no particular virtue in sweat – he thought his best ideas came to him when relaxing on a sofa with a cigar – and had he known he would be misunderstood in this way, he would have pointed out that a capacity for hard work was something that did not distinguish the modern West from previous societies and their value systems. However, the idea that people were being ever more defined by the blinkered focus of their employment was one he regarded as profoundly modern and characteristic.

The blinkered professional ethic was common to entrepreneurs and an increasingly high-wage, skilled labour force, and it was this combination that produced a situation where the ‘highest good’ was the making of money and ever more money, without any limit. This is what is most readily recognisable as the ‘spirit’ of capitalism, but it should be stressed that it was not a simple ethic of greed which, as Weber recognised, was age-old and eternal. In fact there are two sets of ideas here, though they overlap. There is one about potentially universal rational procedures – specialisation, logic, and formally consistent behaviour – and another that is closer to the modern economy, of which the central part is the professional ethic. The modern situation was the product of narrow-minded adhesion to one’s particular function under a set of conditions where the attempt to understand modernity as a whole had been abandoned by most people. As a result they were not in control of their own destiny, but were governed by the set of rational and impersonal procedures which he likened to an iron cage, or ‘steel housing’. Given its rational and impersonal foundations, the housing fell far short of any human ideal of warmth, spontaneity or breadth of outlook; yet rationality, technology and legality also produced material goods for mass consumption in unprecedented amounts. For this reason, though they could always do so if they chose to, people were unlikely to leave the housing ‘until the last hundredweight of fossil fuel is burned up’.

It is an extremely powerful analysis, which tells us a great deal about the 20th-century West and a set of Western ideas and priorities that the rest of the world has been increasingly happy to take up since 1945. It derives its power not simply from what it says, but because Weber sought to place understanding before judgment, and to see the world as a whole. If we wish to go beyond him, we must do the same.

Peter Ghosh

This article was originally published at Aeon and has been republished under Creative Commons.