What did Max Weber mean by the ‘Spirit’ of Capitalism?


The BASF factory at Ludwigshafen, Germany, pictured on a postcard in 1881. Courtesy Wikipedia

Peter Ghosh | Aeon Ideas

Max Weber’s famous text The Protestant Ethic and the Spirit of Capitalism (1905) is surely one of the most misunderstood of all the canonical works regularly taught, mangled and revered in universities across the globe. This is not to say that teachers and students are stupid, but that this is an exceptionally compact text that ranges across a very broad subject area, written by an out-and-out intellectual at the top of his game. He would have been dumbfounded to find that it was being used as an elementary introduction to sociology for undergraduate students, or even schoolchildren.

We use the word ‘capitalism’ today as if its meaning were self-evident, or else as if it came from Marx, but this casualness must be set aside. ‘Capitalism’ was Weber’s own word and he defined it as he saw fit. Its most general meaning was quite simply modernity itself: capitalism was ‘the most fateful power in our modern life’. More specifically, it controlled and generated ‘modern Kultur’, the code of values by which people lived in the 20th-century West, and now live, we may add, in much of the 21st-century globe. So the ‘spirit’ of capitalism is also an ‘ethic’, though no doubt the title would have sounded a bit flat if it had been called The Protestant Ethic and the Ethic of Capitalism.

This modern ‘ethic’ or code of values was unlike any other that had gone before. Weber supposed that all previous ethics – that is, socially accepted codes of behaviour rather than the more abstract propositions made by theologians and philosophers – were religious. Religions supplied clear messages about how to behave in society in straightforward human terms, messages that were taken to be moral absolutes binding on all people. In the West this meant Christianity, and its most important social and ethical prescription came out of the Bible: ‘Love thy neighbour.’ Weber was not against love, but his idea of love was a private one – a realm of intimacy and sexuality. As a guide to social behaviour in public places ‘love thy neighbour’ was obviously nonsense, and this was a principal reason why the claims of churches to speak to modern society in authentically religious terms were marginal. He would not have been surprised at the long innings enjoyed by the slogan ‘God is love’ in the 20th-century West – its career was already launched in his own day – nor that its social consequences should have been so limited.

The ethic or code that dominated public life in the modern world was very different. Above all it was impersonal rather than personal: by Weber’s day, agreement on what was right and wrong for the individual was breaking down. The truths of religion – the basis of ethics – were now contested, and other time-honoured norms – such as those pertaining to sexuality, marriage and beauty – were also breaking down. (Here is a blast from the past: who today would think to uphold a binding idea of beauty?) Values were increasingly the property of the individual, not society. So instead of humanly warm contact, based on a shared, intuitively obvious understanding of right and wrong, public behaviour was cool, reserved, hard and sober, governed by strict personal self-control. Correct behaviour lay in the observance of correct procedures. Most obviously, it obeyed the letter of the law (for who could say what its spirit was?) and it was rational. It was logical, consistent, and coherent; or else it obeyed unquestioned modern realities such as the power of numbers, market forces and technology.

There was another kind of disintegration besides that of traditional ethics. The proliferation of knowledge and reflection on knowledge had made it impossible for any one person to know and survey it all. In a world which could not be grasped as a whole, and where there were no universally shared values, most people clung to the particular niche to which they were most committed: their job or profession. They treated their work as a post-religious calling, ‘an absolute end in itself’, and if the modern ‘ethic’ or ‘spirit’ had an ultimate foundation, this was it. One of the most widespread clichés about Weber’s thought is to say that he preached a work ethic. This is a mistake. He personally saw no particular virtue in sweat – he thought his best ideas came to him when relaxing on a sofa with a cigar – and had he known he would be misunderstood in this way, he would have pointed out that a capacity for hard work was something that did not distinguish the modern West from previous societies and their value systems. However, the idea that people were being ever more defined by the blinkered focus of their employment was one he regarded as profoundly modern and characteristic.

The blinkered professional ethic was common to entrepreneurs and an increasingly high-wage, skilled labour force, and it was this combination that produced a situation where the ‘highest good’ was the making of money and ever more money, without any limit. This is what is most readily recognisable as the ‘spirit’ of capitalism, but it should be stressed that it was not a simple ethic of greed which, as Weber recognised, was age-old and eternal. In fact there are two sets of ideas here, though they overlap. There is one about potentially universal rational procedures – specialisation, logic, and formally consistent behaviour – and another that is closer to the modern economy, of which the central part is the professional ethic. The modern situation was the product of narrow-minded adhesion to one’s particular function under a set of conditions where the attempt to understand modernity as a whole had been abandoned by most people. As a result they were not in control of their own destiny, but were governed by the set of rational and impersonal procedures which he likened to an iron cage, or ‘steel housing’. Given its rational and impersonal foundations, the housing fell far short of any human ideal of warmth, spontaneity or breadth of outlook; yet rationality, technology and legality also produced material goods for mass consumption in unprecedented amounts. For this reason, though they could always do so if they chose to, people were unlikely to leave the housing ‘until the last hundredweight of fossil fuel is burned up’.

It is an extremely powerful analysis, which tells us a great deal about the 20th-century West and a set of Western ideas and priorities that the rest of the world has been increasingly happy to take up since 1945. It derives its power not simply from what it says, but because Weber sought to place understanding before judgment, and to see the world as a whole. If we wish to go beyond him, we must do the same.

Peter Ghosh

This article was originally published at Aeon and has been republished under Creative Commons.

What Makes People Distrust Science?


A Map of the Square and Stationary Earth by Professor Orlando Ferguson, South Dakota, 1893. Photo courtesy Wikipedia

Bastiaan T Rutjens | Aeon Ideas

Today, there is a crisis of trust in science. Many people – including politicians and, yes, even presidents – publicly express doubts about the validity of scientific findings. Meanwhile, scientific institutions and journals express their concerns about the public’s increasing distrust in science. How is it possible that science, the products of which permeate our everyday lives, making them in many ways more comfortable, elicits such negative attitudes among a substantial part of the population? Understanding why people distrust science will go a long way towards understanding what needs to be done for people to take science seriously.

Political ideology is seen by many researchers as the main culprit of science skepticism. The sociologist Gordon Gauchat has shown that political conservatives in the United States have become more distrusting of science, a trend that started in the 1970s. And a swath of recent research conducted by social and political psychologists has consistently shown that climate-change skepticism in particular is typically found among those on the conservative side of the political spectrum. However, there is more to science skepticism than just political ideology.

The same research that has observed the effects of political ideology on attitudes towards climate change has also found that political ideology is not that predictive of skepticism about other controversial research topics. Work by the cognitive scientist Stephan Lewandowsky, as well as research led by the psychologist Sydney Scott, observed no relation between political ideology and attitudes toward genetic modification. Lewandowsky also found no clear relation between political conservatism and vaccine skepticism.

So there is more that underlies science skepticism than just political conservatism. But what? It is important to systematically map which factors do and do not contribute to science skepticism and science (dis)trust in order to provide more precise explanations for why a growing number of individuals reject the notion of anthropogenic climate change, or fear that eating genetically modified products is dangerous, or believe that vaccines cause autism.

My colleagues and I recently published a set of studies that investigated science trust and science skepticism. One of the take-home messages of our research is that it is crucial not to lump various forms of science skepticism together. And although we were certainly not the first to look beyond political ideology, we did note two important lacunae in the literature. First, religiosity has so far been curiously under-researched as a precursor to science skepticism, perhaps because political ideology commanded so much attention. Second, current research lacks a systematic investigation into various forms of skepticism, alongside more general measures of trust in science. We attempted to correct both oversights.

People can be skeptical or distrusting of science for different reasons, whether it is about one specific finding from one discipline (for example, ‘The climate is not warming, but I believe in evolution’), or about science in general (‘Science is just one of many opinions’). We identified four major predictors of science acceptance and science skepticism: political ideology; religiosity; morality; and knowledge about science. These variables tend to intercorrelate – in some cases quite strongly – which means that they are potentially confounded. To illustrate, an observed relation between political conservatism and trust in science might in reality be caused by another variable, for example religiosity. Unless all of these constructs are measured simultaneously, it is hard to properly assess the predictive value of each one.

So, we investigated the heterogeneity of science skepticism among samples of North American participants (a large-scale cross-national study of science skepticism in Europe and beyond will follow). We provided participants with statements about climate change (eg, ‘Human CO2 emissions cause climate change’), genetic modification (eg, ‘GM of foods is a safe and reliable technology’), and vaccination (eg, ‘I believe that vaccines have negative side effects that outweigh the benefits of vaccination for children’). Participants could indicate to what extent they agreed or disagreed with these statements. We also measured participants’ general faith in science, and included a task in which they could indicate how much federal money should be spent on science, compared with various other domains. We assessed the impact of political ideology, religiosity, moral concerns and science knowledge (measured with a science literacy test, consisting of true or false items such as ‘All radioactivity is made by humans’, and ‘The centre of the Earth is very hot’) on participants’ responses to these various measures.

Political ideology did not play a meaningful role when it came to most of our measures. The only form of science skepticism that was consistently more pronounced among the politically conservative respondents in our studies was, not surprisingly, climate-change skepticism. But what about the other forms of skepticism, or skepticism of science generally?

Skepticism about genetic modification was not related to political ideology or religious beliefs, though it did correlate with science knowledge: the worse people did on the scientific literacy test, the more skeptical they were about the safety of genetically modified food. Vaccine skepticism also had no relation to political ideology, but it was strongest among religious participants, with a particular relation to moral concerns about the naturalness of vaccination.

Moving beyond domain-specific skepticism, what did we observe about a general trust in science, and the willingness to support science more broadly? The results were quite clear: trust in science was by far the lowest among the religious. In particular, religious orthodoxy was a strong negative predictor of faith in science, and the orthodox participants were also the least positive about investing federal money in science. But notice that here, again, political ideology did not contribute any meaningful variance over and above religiosity.

From these studies there are a couple of lessons to be learned about the current crisis of faith that plagues science. Science skepticism is quite diverse. Further, distrust of science is not really that much about political ideology, with the exception of climate-change skepticism, which is consistently found to be politically driven. Additionally, these results suggest that science skepticism cannot simply be remedied by increasing people’s knowledge about science. The impact of scientific literacy on science skepticism, trust in science, and willingness to support science was minor, save for the case of genetic modification. Some people are reluctant to accept particular scientific findings, for various reasons. When the aim is to combat skepticism and increase trust in science, a good starting point is to acknowledge that science skepticism comes in many forms.

Bastiaan T Rutjens

This article was originally published at Aeon and has been republished under Creative Commons.

 

Is Philosophy Absurd? Only When You’re Doing it Right


Helena de Bres | Aeon Ideas

Last semester, halfway through a meeting of my ‘Meaning of Life’ seminar, I found myself lying on a window seat along the eastern wall of the classroom. I was scheduled for spinal surgery in a few months, and sitting and standing were tough. I needed a break.

‘It was the Romantics,’ I intoned, adjusting the pillow under my head, ‘who first argued that living “authentically” is an end in itself. For some, authenticity overtook morality as the ultimate ideal. As Ralph Waldo Emerson put it [here I began to gesticulate energetically]: “The only right is what is after my constitution, the only wrong what is against it!”’ I whacked my elbow involuntarily against the wall. ‘“Nothing is at last sacred but the integrity of your own mind!”’

I glanced up at my students and faltered. It had occurred to me, and perhaps to them, that I was being absurd.

I had this thought, and then, because overthinking is my profession, I analysed it. Why absurd, exactly? On one account, absurdity springs from a noticeable gap between expectation and reality, aim and outcome, or means and end. Sometimes the discrepancy is amusing. Imagine an artist-in-residence’s end-of-year exhibition involving only a tiny makeshift diorama depicting the artist sleeping. Other times, the discrepancy is terrifying, as when a darling of the fossil-fuel industry is appointed to lead the Environmental Protection Agency. In my case, the mismatch was between the command and authority that a professor is expected to display and the fact that I was lying below eye level on a puffy log-shaped pillow.

My horizontal lecture wouldn’t have been quite as absurd, though, if I were, say, an economist or historian. There’s something especially absurd about philosophers, supine or not. The explanation for this might lie in the best-known philosophical account of absurdity, offered by Thomas Nagel in 1971. Nagel argued that when we sense that something – or everything – in life is absurd, we’re experiencing the clash of two perspectives from which to view the world. One is that of the engaged agent, seeing her life from the inside, with her heart vibrating in her chest. The other is that of the detached spectator, watching human activity coolly, as if from the distance of another planet. Nagel notes that it’s our nature to flip between these points of view. One moment we’re fully caught up in our mushroom-cultivation class, our infatuation with our sister’s husband or our intractable power struggle with Terri in accounting. The next moment, our mental tectonics shift and we see ourselves from an emotional remove, like a spirit hovering over its own body. It becomes evident to us that, ‘from the point of view of the Universe’, to use the 19th-century utilitarian Henry Sidgwick’s phrase, none of these things matter.

Our sense of absurdity kicks in when we snap between these two perspectives rapidly, in a kind of duck-rabbit movement of the soul. The sense of absurdity depends on this instability. If we could retain the internal perspective forever, we’d never experience the shock of doubt about whether what we were doing was ultimately worthwhile or made any kind of sense. If, alternatively, we could permanently view all human affairs, our own included, from the perspective of the Universe, we’d never find ourselves eagerly attempting to adhere fungi to a damp log. We’d be full-time ascetics, to whom nothing human mattered at all, people who couldn’t be caught red-handed caring about something small.

Though Nagel says that we all adopt both the internal and external perspectives on our lives, some people clearly identify more with one than the other. And some of these people cluster in professions where one perspective is disproportionately valued. Academic philosophy is one such profession. When people say: ‘Let’s be philosophical about this,’ they mean: ‘Let’s calm down, step back, detach.’ The philosopher, in the public imagination, is set apart from the mundane concerns and fiery attachments that govern the rest of humanity. He or she takes the external perspective on pretty much everything. When Søren Kierkegaard collapsed at a party and people tried to help him up, he allegedly said: ‘Oh, leave it. Let the maid sweep it up in the morning.’

If this image is accurate, and if Nagel’s account is right, philosophers, parked forever in only one of Nagel’s perspectives, will escape the absurdity of the human condition. We philosophers, however, are among the most absurd people I’ve ever met. The reason for this has a whiff of paradox. Abstraction and detachment might be a philosopher’s stock-in-trade, but philosophers are often fiercely attached to those very things: passionate about dispassion, abstract in the most concrete of ways. They spend years working obsessively on papers with titles such as ‘Nonreducible Supervenient Causation’ and then have public brawls about them at conferences. This is part of philosophy’s charm for me. There’s something especially absurd, yes, but also endearing, about people who are so serious about their core life endeavour that they regularly forget its ridiculous aspects, even though the endeavour itself is meant to serve as a perpetual reminder.

So I was both abstract and fervent down there on my log pillow. But what does this really have to do with the absurd? Many of us associate the concept not with simple discrepancy, nor with Nagel’s more complex perspectival clash, but with futility. A nice illustration of this is the video of a Japanese game show named ‘Slippery Stairs’ that went viral last year. The show requires its contestants – barefoot, in skin-tight onesies – to scramble to the top of a staircase coated with what looks like tepid ice. The video portrays six people painstakingly, desperately, attempting to do this, and repeatedly sliding dramatically back down the stairs, often taking the other five with them. ‘Life,’ someone wrote in the comments.

What attitude should we take to our situation or ourselves, once we recognise that they’re absurd, in any of these ways? One option is to shake our noble fists at the cosmos, cursing its silent coldness and slippery stairs. This stance appeals to a certain kind of guy in college. But some of us – women, the disabled, ethnic and gender minorities, etc – got the memo pretty early on that we weren’t plausibly the centre of the Universe. So when our adolescent attention was directed to life’s disappointments and farcicality, we were more inclined to shrug and get back to what we were doing than get theatrical about it.

Nagel recommends something like this approach. He writes: ‘If sub specie aeternitatis [viewed in relation to the eternal; in a universal perspective] there is no reason to believe that anything matters, then that doesn’t matter either, and we can approach our absurd lives with irony instead of heroism or despair.’ But irony might be less attractive in 2018 than it was in 1971. There’s something about seeing everything you value under constant attack that increases your sense that some things do matter.

My preferred take is this. The absurdity of our situation is only troubling if it implies that nothing really matters and that all human pursuits are inherently meaningless. But none of the accounts of absurdity canvassed above have that implication. If you love what you’re doing, and if what you love has genuine human-sized value (roughly, the moral philosopher Susan Wolf’s definition of meaningfulness), your life can have depth and purpose even if it involves incongruity and failure, and even if the Universe cares naught for it, or for you. Talking seriously about philosophy with teenagers, while your back collapses, their hearts break, their parents struggle, and the country falls apart – you could call it absurd. But you could also look up from your window seat, catch yourself in the thick of it, and, after a twinge of embarrassment, call it beautiful. Then get back to work.

Helena de Bres

This article was originally published at Aeon and has been republished under Creative Commons.

Philosophers Should be Keener to Talk about the Meaning of Life


Detail from Portrait of Dr Gachet (1890), by Vincent van Gogh. Private collection. Photo courtesy Wikipedia

Kieran Setiya | Aeon Ideas

Philosophers ponder the meaning of life. At least, that is the stereotype. When I risk admitting to a stranger that I teach philosophy for a living and face the question ‘What is the meaning of life?’, I have a ready response: we figured that out in the 1980s, but we have to keep it secret or we’d be out of a job; I could tell you, but then I’d have to kill you. In fact, professional philosophers rarely ask the question and, when they do, they often dismiss it as nonsense.

The phrase itself is of relatively recent origin. Its first use in English is in Thomas Carlyle’s parodic novel Sartor Resartus (1836), where it appears in the mouth of a comic German philosopher, Diogenes Teufelsdröckh (‘God-born devil-dung’), noted for his treatise on clothes. The question of life’s meaning remains both easy to mock and paradigmatically obscure.

What is the meaning of ‘meaning’ in ‘the meaning of life’? We talk about the meaning of words, or linguistic meaning, the meaning of an utterance or of writing in a book. When we ask if human life has meaning, are we asking whether it has meaning in this semantic sense? Could human history be a sentence in some cosmic language? The answer is that it could, in principle, but that this isn’t what we want when we search for the meaning of life. If we are unwitting ink in some alien script, it would be interesting to know what we spell out, but the answer would not have authority over us, as befits the meaning of life.

‘Meaning’ could mean purpose or function in a larger system. Could human life play that role? Again, it could, but yet again, this seems irrelevant. In Douglas Adams’s Hitchhiker’s books, the Earth is part of a galactic computer, designed (ironically) to reveal the meaning of life. Whatever that meaning might be, our role in the computer program is not it. To discover that we are cogs in some cosmic machine is not to discover the meaning of life. It leaves our existential maladies untouched.

Seeing no other way to interpret the question, many philosophers conclude that the question is confused. If they go on to talk about meaning in life, they have in mind the meaning of individual lives, the question of whether this life or that life is meaningful for the person who is living it. But the meaning of life is not an individual possession. If life has meaning, it has a meaning that applies to us all. Does this idea make sense?

I think it does. We can make progress if we turn from the words that make up the question – ‘meaning’ in particular – to the contexts in which we feel compelled to ask it. We raise the question ‘Does life have meaning?’ in times of anguish, or despair, or emptiness. We ask it when we confront mortality and loss, the pervasiveness of suffering and injustice, the facts of life from which we recoil and which we cannot accept. Life seems profoundly flawed. Is there meaning to it all? Historically, the question of life’s meaning comes into focus through the anxiety of early existentialist philosophers, such as Søren Kierkegaard and Friedrich Nietzsche, who worried that it has none.

On the interpretation that this context suggests, the meaning of life would be a truth about us and about the world that makes sense of the worst. It would be something we could know about life, the Universe and everything, that should reconcile us to mortality and loss, suffering and injustice. Knowledge of this truth would make it irrational not to affirm life as it is, not to accept things as they are. It would show that despair, or angst, is a mistake.

The idea that life has meaning is the idea that there is a truth of this extraordinary kind. Whether or not there is, the suggestion is not nonsense. It is a hope that animates the great religions. Whatever else they do, religions offer metaphysical pictures whose acceptance is meant to bestow salvation, to reconcile us to the seeming faults of life. Or if they do not supply the truth, if they do not claim to convey the meaning of life, they offer the conviction that there is one, however hard to grasp or articulate it might be.

The meaning of life might be theistic, involving God or gods, or it might be non-theistic, as in one form of Buddhism. What distinguishes Buddhist meditation from mindfulness-based stress-reduction is the aim of ending suffering through metaphysical revelation. The emotional solace of Buddhism is meant to derive from insight into how things are – in particular, into the non-existence of the self – an insight that should move anyone. To come to terms with life through meditation for serenity, or through talk therapy, is not to discover the meaning of life, since it is not to discover any such truth.

Albert Einstein wrote that to know an answer to the question ‘What is the meaning of human life?’ means to be religious. But there is in principle room for non-religious accounts of meaning, ones that do not appeal to anything beyond the given world or the world revealed to us by science. Religion has no monopoly on meaning, even if it is hard to see how a non-transcendent truth could meet our definition: to know the meaning of life is to be reconciled to all that is wrong with the world. At the same time, it is hard to prove a negative, to show that nothing short of religion could play this role.

Philosophers are prone to see confusion in the question ‘What is the meaning of life?’ They have replaced it with questions about meaningful lives. But the search for life’s meaning will not go away and it is perfectly intelligible. I cannot tell you the meaning of life or give assurance that it has one. But I can say that it is not a mistake to ask the question. Does life have meaning? The answer is: it might.

Kieran Setiya

This article was originally published at Aeon and has been republished under Creative Commons.

What do you really believe?


Portrait of a Man with a Quilted Sleeve, Titian, c1509. Courtesy Wikipedia/National Gallery, London.

Keith Frankish | Aeon Ideas

Edited by Nigel Warburton

Most of us have views on politics, current events, religion, society, morality and sport, and we spend a lot of time expressing these views, whether in conversation or on social media. We argue for our positions, and get annoyed if they are challenged. Why do we do this? The obvious answer is that we believe the views we express (ie, we think they are true), and we want to get others to believe them too, because they are true. We want the truth to prevail. That’s how it seems. But do we really believe everything we say? Are you always trying to establish the truth when you argue, or might there be other motives at work?

These questions might seem strange, offensive even. Am I suggesting that you are insincere or hypocritical in your views? No – at least I’m not suggesting that you are consciously so. But you might be unconsciously influenced by concerns other than truth. Nowadays, most psychologists agree that rapid, unconscious mental processes (sometimes called ‘System 1’ processes) play a huge role in guiding our behaviour. These processes are not thought of as Freudian ones, involving repressed memories and desires, but as ordinary, everyday judgments, motives and feelings that operate without conscious awareness, like a mental autopilot.

It seems plausible that such processes guide much of our speech. After all, we rarely give conscious thought to our reasons for saying what we do; the words just come to our lips. But if the motives behind our words are unconscious, then we must infer them from our behaviour, and might be mistaken about what they are. Again, this isn’t a revolutionary idea; for centuries, dramatists and novelists have depicted people deceived about their own motives. (For more on the nature and limits of self-knowledge, see my earlier Aeon article.)

It’s easy to think of motives that might prompt us to express a view we don’t really believe. We might want it to be true, and feel reassurance when we argue for it (think of the parents who insist that their missing child is still alive, despite the lack of evidence). We might associate it with people we admire, and assert it so as to be like them (think of how people are influenced by the views of celebrities). We might think that it will get us attention, and make us seem interesting (think of teenagers who adopt provocative views). We might profess it to fit in and gain social acceptance (think of a university student from a conservative background). Or we might feel that we have a duty to defend it because of our commitment to some creed or ideology (we sometimes call this attitude faith – belief in the religious sense).

Such motives might also be reinforced by other factors. As a society, we tend to admire people who know their own minds and stick to their principles. So, once we have expressed a view, for whatever reason, we might feel (again, unconsciously) that we are now committed to it, and should stick with it as a matter of integrity. At the same time, we might develop an emotional attachment to the view, a bit like an attachment to a sports team. It is now our view, the one we have publicly endorsed, and we want it to win out over its rivals just because it is ours. In this way, we might come to have a strong personal commitment to a claim, even if we don’t really believe it.

I am not suggesting that we are never guided by concerns for truth and knowledge (what philosophers call epistemic concerns), but I suspect that these sorts of emotional and social factors play a much larger role than we like to think. How else can we explain the vehemence with which people defend their views, and the hurt they feel when their views are challenged?

Is it bad if we sometimes say things we don’t believe? It might seem not. The aims I’ve mentioned – seeking social acceptance, for example, or cultivating a self-image – are not necessarily bad ones, and since they are unconscious it is arguable that we shouldn’t be held responsible for them anyway. There are dangers, however. For in order to achieve these aims we must convince our audience that we genuinely believe what we say. If they thought we were saying something merely in order to create an impression on them, then we wouldn’t succeed in creating that impression. And when our aim is to make some impression on ourselves – like the parents who insist that their child is still alive – we must convince ourselves that we believe it too. As a consequence, we might need to back up our words with deeds, acting as if we believe what we say. If there were a glaring disparity between what we said and did, our insincerity would be obvious. In this way, unconscious desires for acceptance, approval and reassurance can lead us to make choices on the basis of claims for which we have no good evidence, with obvious risks of frustration and failure.

Is there, then, any way of telling whether you really believe a claim? It might seem that conscious reflection would settle it. If you consciously entertain the claim, do you think it is true? Even this process might be unreliable, however. Many theorists hold that conscious thinking is simply talking to oneself in inner speech, in which case it can be guided by unconscious motives, just like outer speech. And, as I mentioned, unconscious desires can prompt us to deceive ourselves, telling ourselves that a claim is true even though we don’t really believe it.

Despite this, a thought experiment might help us detect what we genuinely believe to be true. In real life, there might be few contexts where truth really is our dominant concern: maintaining a comforting view or upholding a cherished ideology or self-image might almost always be more important to us than truth. But suppose you were being questioned by the Truth Demon – a super-powerful being who knows the truth on every topic, and will punish you horribly if you give a wrong answer or fail to answer at all. If you continue to assert a claim when the Truth Demon asks you if it is true, then you do really believe it, really think it is true. But if you give a different answer when under threat of torture by the all-knowing demon, then you don’t really believe the claim. This gives us a practical test for belief: imagine the situation just described as vividly as you can, and see what you would say about any of your views. But do be careful not to give too much conscious thought to the matter in case you start telling yourself what you want to hear.


Keith Frankish is an English philosopher and writer. He is a visiting research fellow with the Open University in the UK and an adjunct professor with the Brain and Mind Programme at the University of Crete. He lives in Greece.

This article was originally published at Aeon and has been republished under Creative Commons.


Commentary

I like the gist of this article (along with the author’s previous article), but the Truth Demon (a.k.a. God) thought experiment could use some work. It’s too easy to deceive and delude oneself, even under imagined duress. It’s also unclear if many people would be “punished horribly” while expressing their false beliefs in good conscience. As mentioned by the author, a good judge of honest belief is one’s actions, but there is a difference between honest belief and truth itself (or is there?). Although the author’s final word is on point, I prefer Nietzsche’s thought experiment: “What, if some day or night a demon were to steal after you into your loneliest loneliness and say to you: ‘This life as you now live it and have lived it, you will have to live once more and innumerable times more’ … Would you not throw yourself down and gnash your teeth and curse the demon who spoke thus? Or have you once experienced a tremendous moment when you would have answered him: ‘You are a god and never have I heard anything more divine.’”


We don’t know ourselves, we knowledgeable people—we are personally ignorant about ourselves. And there’s good reason for that. We’ve never tried to find out who we are. How could it ever happen that one day we’d discover our own selves? With justice it’s been said that “Where your treasure is, there shall your heart be also.” Our treasure lies where the beehives of our knowledge stand. We are always busy with our knowledge, as if we were born winged creatures—collectors of intellectual honey. In our hearts we are basically concerned with only one thing, to “bring something home.” As far as the rest of life is concerned, what people call “experience”—which of us is serious enough for that? Who has enough time? In these matters, I fear, we’ve been “missing the point.”

Our hearts have not even been engaged—nor, for that matter, have our ears! We’ve been much more like someone divinely distracted and self-absorbed into whose ear the clock has just pealed the twelve strokes of noon with all its force and who all at once wakes up and asks himself “What exactly did that clock strike?”—so we rub ourselves behind the ears afterwards and ask, totally surprised and embarrassed, “What have we really just experienced?” And more: “Who are we really?” Then, as I’ve mentioned, we count—after the fact—all the twelve trembling strokes of the clock of our experience, our lives, our being—alas! in the process we keep losing the count. So we remain necessarily strangers to ourselves, we do not understand ourselves, we have to keep ourselves confused. For us this law holds for all eternity: “Each man is furthest from himself.” Where we ourselves are concerned, we are not “knowledgeable people.”

― Friedrich Nietzsche, On the Genealogy of Morals/Ecce Homo

When subjectivity, inwardness, is the truth, the truth becomes objectively determined as a paradox, and that it is paradoxical is made clear by the fact that subjectivity is truth, for it repels objectivity, and the expression for the objective repulsion is the intensity and measure of inwardness. The paradox is the objective uncertainty, which is the expression for the passion of inwardness, which is precisely the truth. This is the Socratic principle. The eternal, essential truth, that is, that which relates itself essentially to the individual because it concerns his existence (all other knowledge is, Socratically speaking, accidental, its degree and scope being indifferent), is a paradox. Nevertheless, the eternal truth is not essentially in itself paradoxical, but it becomes so by relating itself to an existing individual. Socratic ignorance is the expression of this objective uncertainty, the inwardness of the existential subject is the truth. To anticipate what I will develop later, Socratic ignorance is an analogy to the category of the absurd, only that there is still less objective certainty in the absurd, and therefore infinitely greater tension in its inwardness. The Socratic inwardness that involves existence is an analogy to faith, except that this inwardness is repulsed not by ignorance but by the absurd, which is infinitely deeper. Socratically the eternal, essential truth is by no means paradoxical in itself, but only by virtue of its relation to an existing individual.

― Søren Kierkegaard, Concluding Unscientific Postscript

Lost in Translation


Detail from The Apostle Paul by Rembrandt van Rijn (c1675). Courtesy National Gallery of Art/Wikipedia

Everything You Know about the Gospel of Paul is Likely Wrong

By David Bentley Hart

This past year, I burdened the English-speaking world with my very own translation of the New Testament – a project that I undertook at the behest of my editor at Yale University Press, but that I agreed to almost in the instant that it was proposed. I had long contemplated attempting a ‘subversively literal’ rendering of the text. Over the years, I had become disenchanted with almost all the standard translations available, and especially with modern versions produced by large committees of scholars, many of whom (it seems to me) have been predisposed by inherited theological habits to see things in the text that are not really there, and to fail to notice other things that most definitely are. Committees are bland affairs, and tend to reinforce our expectations; but the world of late antiquity is so remote from our own that it is almost never what we expect.

Ask, for instance, the average American Christian – say, some genial Presbyterian who attends church regularly and owns a New International Version of the Bible – what gospel the Apostle Paul preached. The reply will fall along predictable lines: human beings, bearing the guilt of original sin and destined for eternal hell, cannot save themselves through good deeds, or make themselves acceptable to God; yet God, in his mercy, sent the eternal Son to offer himself up for our sins, and the righteousness of Christ has been graciously imputed or imparted to all who have faith.

Some details might vary, but not the basic story. And, admittedly, much of the tale’s language is reminiscent of terms used by Paul, at least as filtered through certain conventional translations; but it is a fantasy. It presumes elements of later Christian belief absent from Paul’s own writings. Some of these (like the idea that humans are born damnably guilty in God’s eyes, or that good deeds are not required for salvation) arise from a history of misleading translations. Others (like the concept of an eternal hell of conscious torment) are entirely imagined, attributed to Paul on the basis of some mistaken picture of what the New Testament as a whole teaches.

Paul’s actual teachings, however, as taken directly from the Greek of his letters, emphasise neither original guilt nor imputed righteousness (he believed in neither), but rather the overthrow of bad angels. A certain long history of misreadings – especially of the Letter to the Romans – has created an impression of Paul’s theological concerns so entirely alien to his conceptual world that the real Paul occupies scarcely any place at all in Christian memory. It is true that he addresses issues of ‘righteousness’ or ‘justice’, and asserts that this is available to us only through the virtue of pistis – ‘faith’ or ‘trust’ or even ‘fidelity’. But for Paul, pistis largely consists in works of obedience to God and love of others. The only erga, ‘works’, which he is anxious to claim make no contribution to personal sanctity, are certain ‘ritual observances’ of the Law of Moses, such as circumcision or kosher dietary laws. This, though, means that the separation between Jews and gentiles has been annulled in Christ, opening salvation to all peoples; it does not mean (as Paul fears some might imagine) that God has abandoned his covenant with Israel.

Questions of law and righteousness, however, are secondary concerns. The essence of Paul’s theology is something far stranger, and unfolds on a far vaster scale. For Paul, the present world-age is rapidly passing, while another world-age differing from the former in every dimension – heavenly or terrestrial, spiritual or physical – is already dawning. The story of salvation concerns the entire cosmos; and it is a story of invasion, conquest, spoliation and triumph. For Paul, the cosmos has been enslaved to death, both by our sin and by the malign governance of those ‘angelic’ or ‘daemonian’ agencies who reign over the earth from the heavens, and who hold spirits in thrall below the earth. These angelic beings, these Archons, whom Paul calls Thrones and Powers and Dominations and Spiritual Forces of Evil in the High Places, are the gods of the nations. In the Letter to the Galatians, he even hints that the angel of the Lord who rules over Israel might be one of their number. Whether fallen, or mutinous, or merely incompetent, these beings stand intractably between us and God. But Christ has conquered them all.

In descending to Hades and ascending again through the heavens, Christ has vanquished all the Powers below and above that separate us from the love of God, taking them captive in a kind of triumphal procession. All that now remains is the final consummation of the present age, when Christ will appear in his full glory as cosmic conqueror, having ‘subordinated’ (hypetaxen) all the cosmic powers to himself – literally, having properly ‘ordered’ them ‘under’ himself – and will then return this whole reclaimed empire to his Father. God himself, rather than wicked or inept spiritual intermediaries, will rule the cosmos directly. Sometimes, Paul speaks as if some human beings will perish along with the present age, and sometimes as if all human beings will finally be saved. He never speaks of some hell for the torment of unregenerate souls.

The new age, moreover – when creation will be glorified and transformed into God’s kingdom – will be an age of ‘spirit’ rather than ‘flesh’. For Paul, these are two antithetical principles of creaturely existence, though most translations misrepresent the antithesis as a mere contrast between God’s ‘spirit’ and human perversity. But Paul is quite explicit: ‘Flesh and blood cannot inherit the Kingdom.’ Neither can psychē, ‘soul’, the life-principle or anima that gives life to perishable flesh. In the age to come, the ‘psychical body’, the ‘ensouled’ or ‘animal’ way of life, will be replaced by a ‘spiritual body’, beyond the reach of death – though, again, conventional translations usually obscure this by speaking of the former, vaguely, as a ‘natural body’.

Paul’s voice, I hasten to add, is hardly an eccentric one. John’s Gospel too, for instance, tells of the divine saviour who comes ‘from above’, descending from God’s realm into this cosmos, overthrowing its reigning Archon, bringing God’s light into the darkness of our captivity, and ‘dragging’ everyone to himself. And, in varying registers, so do most of the texts of the New Testament. As I say, it is a conceptual world very remote from our own.

And yet it would be foolish to try to judge the gospel’s spiritual claims by how plausible we find the cosmology that accompanies them. For one thing, the ancient picture of reality might be in many significant respects more accurate than ours. And it would surely be a category error to assume that the story of Christ’s overthrow of death and sin cannot express a truth that transcends the historical and cultural conditions in which it was first told. But, before we decide anything at all about that story, we must first recover it from the very different stories that we so frequently tell in its place.

This Idea was made possible through the support of a grant from Templeton Religion Trust. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of Templeton Religion Trust.

David Bentley Hart

This article was originally published at Aeon and has been republished under Creative Commons.

‘Know thyself’ is not just silly advice: it’s actively dangerous


Detail from Madame Jeantaud au miroir by Edgar Degas c1875. Courtesy Wikipedia

By Bence Nanay

There is a phrase you are as likely to find in a serious philosophy text as you are in the wackiest self-help book: ‘Know thyself!’ The phrase has serious philosophical pedigree: by Socrates’ time, it was more or less received wisdom (apparently chiselled into the forecourt of the Temple of Apollo at Delphi) though a form of the phrase reaches back to Ancient Egypt. And ever since, the majority of philosophers have had something to say about it.

But ‘Know thyself!’ also has self-help appeal. Is your aim to accept yourself? Well, you need to know thyself for that first. Or is it to make good decisions – decisions that are right for you? Again, this would be difficult unless you knew thyself. The problem is that none of this is based on a realistic picture of the self and of how we make decisions. This whole ‘knowing thyself’ business is not as simple as it seems. In fact, it might be a serious philosophical muddle – not to say bad advice.

Let’s take an everyday example. You go to the local cafe and order an espresso. Why? Just a momentary whim? Trying something new? Maybe you know that the owner is Italian and she would judge you if you ordered a cappuccino after 11am? Or are you just an espresso kind of person?

I suspect that the last of these options best reflects your choices. You do much of what you do because you think it meshes with the kind of person you think you are. You order eggs Benedict because you’re an eggs Benedict kind of person. It’s part of who you are. And this goes for many of our daily choices. You go to the philosophy section of the bookshop and the fair-trade section at the grocer’s shop because you are a philosopher who cares about global justice, and that’s what philosophers who care about global justice do.

We all have fairly stable ideas about what kind of people we are. And that’s all for the best – we don’t have to think too hard when ordering coffee every morning. These ideas about what kind of people we are might also be accompanied by ideas about what kind of people we are not – I’m not going to shop at Costco, I’m not that kind of person. (This way of thinking about yourself could easily slide into moralising your preferences, but let’s not open that can of worms here.)

There is, however, a deep problem with this mental set-up: people change. There are tumultuous periods when we change drastically – in times of romantic love, say, or divorce, or having children. Often we are aware of these changes. After you’ve had kids, you probably notice that you’ve suddenly become a morning person.

But most changes happen gradually and under the radar. A few mechanisms of these changes are well understood, such as the ‘mere exposure effect’: the more you are exposed to something, the more you tend to like it. Another, more troubling one, is that the more your desire for something is frustrated, the more you tend to dislike it. These changes happen gradually, often without us noticing anything.

The problem is this: if we change while our self-image remains the same, then there will be a deep abyss between who we are and who we think we are. And this leads to conflict.

To make things worse, we are exceptionally good at dismissing even the possibility that we might change. Psychologists have given this phenomenon a fancy name: ‘The End of History Illusion’. We all think that who we are now is the finished product: we will be the same in five, 10, 20 years. But, as these psychologists found, this is completely delusional – our preferences and values will be very different already in the not-so-distant future.

Why is this such a big issue? It might be okay when it comes to ordering the espresso. Maybe you now slightly prefer cappuccino, but you think of yourself as an espresso kind of person, so you keep ordering espresso. So you’re enjoying your morning drink a little bit less – not such a big deal.

But what is true of espresso is true of other preferences and values in life. Maybe you used to genuinely enjoy doing philosophy, but you no longer do. But as being a philosopher is such a stable feature of your self-image, you keep doing it. There is a huge difference between what you like and what you do. What you do is dictated not by what you like, but by what kind of person you think you are.

The real harm of this situation is not only that you spend much of your time doing something that you don’t particularly like (and often positively dislike). Instead, it is that the human mind does not like blatant contradictions of this kind. It does its best to hide this contradiction: a phenomenon known as cognitive dissonance.

Hiding a gaping contradiction between what we like and what we do takes significant mental effort and this leaves little energy to do anything else. And if you have little mental energy left, it is so much more difficult to switch off the TV or to resist spending half an hour looking at Facebook or Instagram.

‘Know thyself!’, right? If we take the importance of change in our lives seriously, this just isn’t an option. You might be able to know what you think of yourself in this moment. But what you think of yourself is very different from who you are and what you actually like. And in a couple of days or weeks, all of this might change anyway.

Knowing thyself is an obstacle to acknowledging and making peace with constantly changing values. If you know thyself to be such-and-such a kind of person, this limits your freedom considerably. You might have been the one who chose to be an espresso person or a donating-to-charity person but, once these features are built into your self-image, you have very little say in what direction your life is going. Any change would be either censored or lead to cognitive dissonance. As André Gide wrote in Autumn Leaves (1950): ‘A caterpillar who seeks to know himself would never become a butterfly.’


Bence Nanay is professor of philosophy at the University of Antwerp and Senior Research Associate at the University of Cambridge. He is the author of Aesthetics as Philosophy of Perception (2016).

This article was originally published at Aeon and has been republished under Creative Commons.

Our Illusory Sense of Agency Has a Deeply Important Social Purpose

I’m trying to concentrate on writing this piece, but my two grandchildren in the room next door have stopped making paper aeroplanes and started arguing. ‘You kicked me,’ yells Freya. Her brother Ben insists it was an accident. ‘I didn’t mean to,’ he cries.

Why should this be an excuse, I wonder? The pain is the same in either case.

But Freya is more concerned with Ben’s intention than the pain. ‘You did it deliberately,’ she says. But did Ben hit her on purpose? How do we know, and why should it matter?

Our sense of agency certainly doesn’t come from having access to the brain processes that underlie our actions. After all, I have no insight into the electrochemical particulars of how my nerves are firing or how neurotransmitters are coursing through my brain and bloodstream. Instead, our experience of agency seems to come from inferences we make about the causes of our actions, based on crude sensory data. And, as with any kind of perception based on inference, our experience can be tricked.

Consider a typical picture of a domino, its spots shaded light and dark:

We clearly see five convex knobs and three concave hollows, despite the fact that we’re looking at a flat image. Our brain creates the illusion because we expect light to come from above, and so we can infer the 3D shapes from the shading. If the shadow is at the top, we see a hollow. If it is at the bottom, we see a knob. But, for the same reason, if you turn the picture upside down you’ll see three knobs and five hollows.

It’s the same with our experience of agency. Our inferences can be wrong. I can believe that I am acting when it’s actually someone else. Or I can believe that someone else is acting when it’s actually me.

Such illusions aren’t confined to highly contrived laboratory situations. In the 1970s, facilitated communication, or supported typing, was promoted as a teaching strategy for helping people with autism communicate with the wider world. The child’s fingers rested on the keys and the facilitator helped the child to type by detecting their intended movements. The technique was eventually discredited after many demonstrations showed that any ‘communication’ came from the facilitator, and not from the child. But the striking thing was that most of the facilitators sincerely believed that they were not the agents of these actions. Free will is not something we have, so much as something we feel.

These observations point to a fundamental paradox about consciousness. We have the strong impression that we choose when we do and don’t act and, as a consequence, we hold people responsible for their actions. Yet many of the ways we encounter the world don’t require any real conscious processing, and our feeling of agency can be deeply misleading.

If our experience of action doesn’t really affect what we do in the moment, then what is it for? Why have it? Contrary to what many people believe, I think agency is only relevant to what happens after we act – when we try to justify and explain ourselves to each other.

There are a few hints that support this view. Take the subjective experience of fluency: the easier it feels to do something, the more likely you are to think that you’re in control of the action. But we have to learn to interpret such feelings, and what other people tell us can alter the way we respond. When doing hard mental work, we have a strong sense of making an effort. Does this mean that we’ll be tired and need a rest, or that we’ll be energised and ready to keep going? If someone tells us we will feel depleted, we’ll perform badly on the task. But if we’re told we should feel energised, we’ll do well. In the same way we learn to associate certain experiences of action with a sense of agency. And it is these kinds of action that we feel responsible for.

The bond between agency and mutual accountability goes back at least as far as 300 BCE. Greek philosophers, from Epicurus to the Stoics, wanted to defend the idea of free will despite believing the universe to be pre-determined by the laws of nature. Free will has two fundamental features, they said. The first is the feeling of being in control: ‘I am the cause of this event.’ The second is a grasp of the counterfactual: ‘I could have chosen otherwise.’ Pangs of regret – something we’ve all experienced – make no sense unless we believe that we could have done something differently. Furthermore, Epicurus believed that we acquire this sense of responsibility via the praise and blame we receive from others. By listening to our peers and elders, we become attuned to our capacity to effect change in the world.

Our conscious experience is what enables us to pick up these lessons. It might be surplus to requirements for most of our actions, but we certainly need consciousness when we’re reflecting on our life and discussing it with other people. For example, many children are reminded to think before they act, lest they regret it. They also learn that ‘accidents’ are more readily excused than intentional wrongs. So my grandson Ben might not really be sure whether he kicked Freya accidentally or on purpose, but he knows he’s got a better chance of getting away with it if he claims the kick was unintentional. In this way, we gradually figure out what it ‘feels’ like for our actions to be ‘deliberate’, and if all goes well, we develop into adults with a sense of responsibility about our own powers.

Given the social dimensions of agency, it’s unsurprising that the norms about responsibility vary considerably. In another time and place Ben might not get off so lightly for inadvertently kicking Freya. Certain Pacific Islander cultures, for example, believe in the ‘opacity’ of other minds – the idea that it is impossible, or at least very difficult, to know what other people think and feel. As a result, people are frequently held responsible for their wrongdoings, even when they were the result of an accident or error. Intentionality is impossible to grasp, and therefore largely irrelevant. Similarly, among the Mopan Maya of Belize and Guatemala, children and adults alike are punished according to the outcome of their actions.

What’s more, by considering our experiences and sharing them with others, we can reach a consensus about what the world and we humans are really like. A consensus need not be accurate to be attractive or useful, of course. For a long time everyone agreed that the Sun went round the Earth. Perhaps our sense of agency is a similar trick: it might not be ‘true’, but it maintains social cohesion by creating a shared basis for morality. It helps us understand why people act as they do – and, as a result, makes it easier to predict people’s behaviour.

Responsibility, then, is the real currency of conscious experience. In turn, it is also the bedrock of culture. Humans are social animals, but we’d be unable to cooperate or get along in communities if we couldn’t agree on the kinds of creatures we are and the sort of world we inhabit. It’s only by reflecting, sharing and accounting for our experiences that we can find such common ground. To date, the scientific method is the most advanced cognitive technology we’ve developed for honing the accuracy of our consensus – a method involving continuous experimentation, discussion and replication. Ben and Freya’s debate about the meaning of action is just the beginning.

Chris Frith

This article was originally published at Aeon and has been republished under Creative Commons.