Richard Feynman was Wrong about Beauty and Truth in Science


Spaceborne Imaging Radar photo of the autonomous republic of Tuva, the subject of Richard Feynman’s intense interest during the latter part of his life, as documented in Tuva or Bust! by Ralph Leighton. Photo taken from Space Shuttle Endeavour in 1994. Photo courtesy NASA/JPL

Massimo Pigliucci | Aeon Ideas

Edited by Nigel Warburton

The American physicist Richard Feynman is often quoted as saying: ‘You can recognise truth by its beauty and simplicity.’ The phrase appears in the work of the American science writer K C Cole – in her Sympathetic Vibrations: Reflections on Physics as a Way of Life (1985) – although I could not find other records of Feynman writing or saying it. We do know, however, that Feynman had great respect for the English physicist Paul Dirac, who believed that theories in physics should be both simple and beautiful.

Feynman was unquestionably one of the outstanding physicists of the 20th century. To his contributions to the Manhattan Project and the solution of the mystery surrounding the explosion of the Space Shuttle Challenger in 1986, add a Nobel Prize in 1965 shared with Julian Schwinger and Shin’ichirō Tomonaga ‘for their fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles’. And he played the bongos too!

In the area of philosophy of science, though, like many physicists of his and the subsequent generation (and unlike those belonging to the previous one, including Albert Einstein and Niels Bohr), Feynman didn’t really shine – to put it mildly. He might have said that philosophy of science is as helpful to science as ornithology is to birds (a lot of quotations attributed to him are next to impossible to source). This has prompted countless responses from philosophers of science, including that birds are too stupid to do ornithology, or that without ornithology many bird species would be extinct.

The problem is that it’s difficult to defend the notion that the truth is recognisable by its beauty and simplicity, and it’s an idea that has contributed to getting fundamental physics into its current mess; for more on the latter topic, check out The Trouble with Physics (2006) by Lee Smolin, or Farewell to Reality (2013) by Jim Baggott, or subscribe to Peter Woit’s blog. To be clear, when discussing the simplicity and beauty of theories, we are not talking about Ockham’s razor (about which my colleague Elliott Sober has written for Aeon). Ockham’s razor is a prudent heuristic, providing us with an intuitive guide to the comparisons of different hypotheses. Other things being equal, we should prefer simpler ones. More specifically, the English monk William of Ockham (1287-1347) meant that ‘[hypothetical] entities are not to be multiplied without necessity’ (a phrase by the 17th-century Irish Franciscan philosopher John Punch). Thus, Ockham’s razor is an epistemological, not a metaphysical principle. It’s about how we know things, whereas Feynman’s and Dirac’s statements seem to be about the fundamental nature of reality.

But as the German theoretical physicist Sabine Hossenfelder has pointed out (also in Aeon), there is absolutely no reason to think that simplicity and beauty are reliable guides to physical reality. She is right for a number of reasons.

To begin with, the history of physics (alas, seldom studied by physicists) clearly shows that many simple theories have had to be abandoned in favour of more complex and ‘ugly’ ones. The notion that the Universe is in a steady state is simpler than one requiring an ongoing expansion; and yet scientists do now think that the Universe has been expanding for almost 14 billion years. In the 17th century Johannes Kepler realised that Copernicus’ theory was too beautiful to be true, since, as it turns out, planets don’t go around the Sun in perfect (according to human aesthetics!) circles, but rather following somewhat uglier ellipses.

And of course, beauty is, notoriously, in the eye of the beholder. What struck Feynman as beautiful might not be beautiful to other physicists or mathematicians. Beauty is a human value, not something out there in the cosmos. Biologists here know better. The capacity for aesthetic appreciation in our species is the result of a process of biological evolution, possibly involving natural selection. And there is absolutely no reason to think that we evolved an aesthetic sense that somehow happens to be tailored for the discovery of the ultimate theory of everything.

The moral of the story is that physicists should leave philosophy of science to the pros, and stick to what they know best. Better yet: this is an area where fruitful interdisciplinary dialogue is not just a possibility, but arguably a necessity. As Einstein wrote in a letter to his fellow physicist Robert Thornton in 1944:

I fully agree with you about the significance and educational value of methodology as well as history and philosophy of science. So many people today – and even professional scientists – seem to me like someone who has seen thousands of trees but has never seen a forest. A knowledge of the historic and philosophical background gives that kind of independence from prejudices of his generation from which most scientists are suffering. This independence created by philosophical insight is – in my opinion – the mark of distinction between a mere artisan or specialist and a real seeker after truth.

Ironically, it was Plato – a philosopher – who argued that beauty is a guide to truth (and goodness), apparently never having met an untruthful member of the opposite (or same, as the case might be) sex. He wrote about that in the Symposium, the dialogue featuring, among other things, sex education from Socrates. But philosophy has made much progress since Plato, and so has science. It is therefore a good idea for scientists and philosophers alike to check with each other before uttering notions that might be hard to defend, especially when it comes to figures who are influential with the public. To quote another philosopher, Ludwig Wittgenstein, in a different context: ‘Whereof one cannot speak, thereof one must be silent.’


Massimo Pigliucci is professor of philosophy at City College and at the Graduate Center of the City University of New York. He is the author of How to Be a Stoic: Ancient Wisdom for Modern Living (2017) and his most recent book is A Handbook for New Stoics: How to Thrive in a World Out of Your Control (2019), co-authored with Gregory Lopez.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Philosophy Should Care about the Filthy, Excessive and Unclean

Thomas White | Aeon Ideas

Philosophy traditionally has been about ‘higher’ questions: what is knowledge? What is the meaning of justice? What is the nature of ultimate reality? These questions soar above the petty concerns of the everyday and reach towards a realm of pure ideas. But can the ‘unclean’ – dirt, mud, bodily wastes, the grime of existence – be relevant to the philosopher’s quest for wisdom and the truth? Philosophers don’t often discuss filth and all its disgusting variations, but investigating the unclean turns out to be as useful an exercise as examining the highest ideals of justice, morality and metaphysics.

In his dialogue Parmenides, Plato gives us an inkling of the significance of philosophising about the unclean, which he names ‘undignified objects’, such as hair, mud and dirt. The young Socrates, at this stage but an entry-level philosopher, is discussing the foundations of reality with the venerable Parmenides. While this encounter between these philosophers about ‘undignified objects’ is brief, it is profound, for it shows how insightful thinkers use digressions and marginal comments to demonstrate that not everything is as clear-cut as system-builders – including even Plato – might think.

Parmenides quizzes Socrates about whether the theory of ideal forms – the argument that particular material objects have correlated ideal patterns, which are the perfect forms of the imperfect things – can include mud and dirt. Can there be a perfect form of filth? Taken aback, Socrates confesses that he is troubled by this point because it seems to lead to nonsense: ‘perfect filth’ is contradictory. Instead, Socrates prefers to return to discussing the higher ideals of ‘goodness’ and ‘beauty’. Confronted by Parmenides with the unseemly facts of mud and dirt, he takes refuge in the beautiful – unlike Antoine Roquentin, the protagonist in Jean-Paul Sartre’s philosophical novel Nausea (1938), who, in confronting the ugly facticity of the world, obtains a glimpse of actual, albeit repugnant, reality.

Socrates’ puzzlement at how to explain the very lowest (dirt, mud) in terms of the very highest (ideal forms) suggests the limitations of the dualistic, two-world theory that has formed the basis of several millennia of Western thought. The unclean’s ‘undignified objects’ represent a kind of outer twilight zone – a metaphysical no-man’s land – that eludes overarching theories about the meaning of reality. The very resistance of filth’s inclusion into a master philosophical system serves as a cautionary note, and a lesson in Socratic humility, warning the ambitious and overeager intellectual to slow down. Do not try to assimilate every aspect of our diverse experience into grand explanatory narratives. The unclean’s raw existence is a great intractable that rudely interrupts a philosopher’s thinking when it fails to fit neatly into the theory of forms, thus forcing the philosopher to curb hasty, ambitious generalisations, and think even harder and more clearly. (The classicist Edith Hamilton, in her introductory notes to Parmenides, suggests that Plato attacked his own theory of Platonic ideas in order to know the truth, not to defend his own preconceived views.)

Parmenides’ concerns about the limits of the theory of forms presage the empiricist Francis Bacon. In Novum Organum (1620), he argued similarly for the limits of intellectual speculation, and about the dangers of creating idols out of promiscuously generated philosophical systems by exceeding speculative boundaries:

The understanding must also be cautioned against the intemperance of systems, so far as regards its giving or withholding its assent; for such intemperance appears to fix and perpetuate idols, so as to leave no means of removing them.

In our own day, Slavoj Žižek in his book Disparities (2016) echoes the Parmenidean point about how the unclean can disrupt our comfortable theories about reality: ‘[S]hit remains an excess which does not fit our daily reality.’ An experience of disgust in the presence of the filthy and unclean disturbs our sense of systems and order, causing a ‘disintegration’ of our metaphysical understanding of reality, ‘the very ontological coordinates which enable [us] to locate an object “out there”.’

Like Plato, Žižek uses allusions to the unclean to alert the reader to how repugnant, discordant facts can undercut a particular vision of reality. He also expands the use of the metaphor of filth to call our attention to something else closer to his heart: the failings of our modern political discourse. Bacon warned us of intellectual intemperance, but Žižek uses references to the unclean to warn us of modern political intemperance. In the cases of Plato, Bacon and Žižek, the philosophical issue raised is about boundaries and the implications of transgressing them.

In the unclean, Žižek finds the ultimate metaphor for the dumbing down of political thought and speech, a way of understanding the collapse of modern political discourse – itself an echo of Plato’s critique of the false, that is, ‘sophistical’ use of political language – in which ‘public vulgarity’ is used without shame.

He begins his argument with a scene from a surreal film from 1974 in which people at a dinner party defecate in public:

We probably all remember the scene from Luis Buñuel’s The Phantom of Liberty in which relations between eating and excreting are inverted: people sit at their toilets around the table, pleasantly talking, and when they want to eat, they silently ask the housekeeper: ‘Where is that place, you know?,’ and sneak away to a small room in the back.

Political figures today, Žižek argues, are committing the verbal equivalent of this public defecation. They are violating traditional, unwritten rules and boundaries that are used to guide public conduct by making outrageous statements that were once taboo. ‘They are a clear sign of the regression of our public sphere,’ he writes in Newsweek in 2016. ‘Accusations and ideas that were till now confined to the obscure underworld of racist obscenity are now gaining a foothold in official discourse.’ And citing Georg Hegel’s notion of Sittlichkeit – the ‘thick background of (unwritten) rules of social life … that tell us what we can and cannot do’ – Žižek further observes that ‘These [unwritten] rules are disintegrating today: what was a couple of decades ago simply unsayable in a public debate can now be pronounced with impunity.’

A discharge of verbal political filth has changed the public sphere into a kind of collective public toilet for language users – lurid speeches full of nasty ignorance, blatant vulgarity and raw prejudice. Plato and Žižek, with some tacit support from Bacon, use the notion of the unclean in similar ways to offer, implicitly, practical advice about how humans should conduct themselves: be wary of intemperately overstepping limits by chasing overweening ambitions, whether intellectual or political, which soil clear thinking and logic, and/or corrupt language, politics and ethics. Discussions of lowly filth, and all of its disgusting variations, are not merely the province of vulgarians, but seem to offer life lessons for everyone, not just philosophers.


Thomas White is a Wiley Journal contributing author, whose philosophical and theological writings have appeared in print and online.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

To Avoid Moral Failure, Don’t See People as Sherlock Does


Suspicious minds: William Gillette as Sherlock Holmes (right) and Bruce McRae as Dr John Watson in the play Sherlock Holmes (c1900). Courtesy Wikimedia

Rima Basu | Aeon Ideas

If we’re the kind of people who care both about not being racist, and also about basing our beliefs on the evidence that we have, then the world presents us with a challenge. The world is pretty racist. It shouldn’t be surprising then that sometimes it seems as if the evidence is stacked in favour of some racist belief. For example, it’s racist to assume that someone’s a staff member on the basis of his skin colour. But what if it’s the case that, because of historical patterns of discrimination, the members of staff with whom you interact are predominantly of one race? When the late John Hope Franklin, professor of history at Duke University in North Carolina, hosted a dinner party at his private club in Washington, DC in 1995, he was mistaken for a member of staff. Did the woman who made that mistake do something wrong? Yes. It was indeed racist of her, even though Franklin had been a member of that club since 1962 – its first black member.

To begin with, we don’t relate to people in the same way that we relate to objects. Human beings are different in an important way. In the world, there are things – tables, chairs, desks and other objects that aren’t furniture – and we try our best to understand how this world works. We ask why plants grow when watered, why dogs give birth to dogs and never to cats, and so on. But when it comes to people, ‘we have a different way of going on, though it is hard to capture just what that is’, as Rae Langton, now professor of philosophy at the University of Cambridge, put it so nicely in 1991.

Once you accept this general intuition, you might begin to wonder how we can capture that different way in which we ought to relate to others. To do this, first we must recognise that, as Langton goes on to write, ‘we don’t simply observe people as we might observe planets, we don’t simply treat them as things to be sought out when they can be of use to us, and avoid when they are a nuisance. We are, as [the British philosopher P F] Strawson says, involved.’

This way of being involved has been played out in many different ways, but here’s the basic thought: being involved is thinking that others’ attitudes and intentions towards us are important in a special way, and that our treatment of others should reflect that importance. We are, each of us, in virtue of being social beings, vulnerable. We depend upon others for our self-esteem and self-respect.

For example, we each think of ourselves as having a variety of more or less stable characteristics, from marginal ones such as being born on a Friday to central ones such as being a philosopher or a spouse. The more central self-descriptions are important to our sense of self-worth, to our self-understanding, and they constitute our sense of identity. When these central self-descriptions are ignored by others in favour of expectations on the basis of our race, gender or sexual orientation, we’re wronged. Perhaps our self-worth shouldn’t be based on something so fragile, but not only are we all-too-human, these self-descriptions also allow us to understand who we are and where we stand in the world.

This thought is echoed in the American sociologist and civil rights activist W E B Du Bois’s concept of double consciousness. In The Souls of Black Folk (1903), Du Bois notes a common feeling: ‘this sense of always looking at one’s self through the eyes of others, of measuring one’s soul by the tape of a world that looks on in amused contempt and pity’.

When you believe that John Hope Franklin must be a staff member rather than a club member, you’ve made predictions of him and observed him in the same way that one might observe the planets. Our private thoughts can wrong other people. When someone forms beliefs about you in this predictive way, they fail to see you, they fail to interact with you as a person. This is not only upsetting. It is a moral failing.

The English philosopher W K Clifford argued in 1877 that we were morally criticisable if our beliefs weren’t formed in the right way. He warned that we have a duty to humanity to never believe on the basis of insufficient evidence because to do so would be to put society at risk. As we look at the world around us and the epistemic crisis in which we find ourselves, we see what happens when Clifford’s imperative is ignored. And if we combine Clifford’s warning with Du Bois’s and Langton’s observations, it becomes clear that, for our belief-forming practices, the stakes aren’t just high because we depend on one another for knowledge – the stakes are also high because we depend on one another for respect and dignity.

Consider how upset Arthur Conan Doyle’s characters get with Sherlock Holmes for the beliefs this fictional detective forms about them. Without fail, the people whom Holmes encounters find the way he forms beliefs about others to be insulting. Sometimes it’s because it is a negative belief. Often, however, the belief is mundane: eg, what they ate on the train or which shoe they put on first in the morning. There’s something improper about the way that Holmes relates to other human beings. Holmes’s failure to relate is not just a matter of his actions or his words (though sometimes it is also that), but what really rubs us up the wrong way is that Holmes observes us all as objects to be studied, predicted and managed. He doesn’t relate to us as human beings.

Maybe in an ideal world, what goes on inside our heads wouldn’t matter. But just as the personal is the political, our private thoughts aren’t really only our own. If a man believes of every woman he meets: ‘She’s someone I can sleep with,’ it’s no excuse that he never acts on the belief or reveals the belief to others. He has objectified her and failed to relate to her as a human being, and he has done so in a world in which women are routinely objectified and made to feel less-than.

This kind of indifference to the effect one has on others is morally criticisable. It has always struck me as odd that everyone grants that our actions and words are apt for moral critique, but once we enter the realm of thought we’re off the hook. Our beliefs about others matter. We care what others think of us.

When we mistake a person of colour for a staff member, that challenges this person’s central self-descriptions, the descriptions from which he draws his sense of self-worth. This is not to say that there is anything wrong with being a staff member, but if your reason for thinking that someone is staff is tied not only to something he has no control over (his skin colour) but also to a history of oppression (being denied access to more prestigious forms of employment), then that should give you pause.

The facts might not be racist, but the facts that we often rely on can be the result of racism, including racist institutions and policies. So when forming beliefs using evidence that is a result of racist history, we are accountable for failing to show more care and for believing so easily that someone is a staff member. Precisely what is owed can vary along a number of dimensions, but nonetheless we can recognise that some extra care with our beliefs is owed along these lines. We owe each other not only better actions and better words, but also better thoughts.


Rima Basu is an assistant professor of philosophy at Claremont McKenna College in California. Her work has been published in Philosophical Studies, among other journals.

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How the Dualism of Descartes Ruined our Mental Health


Yard with Lunatics (1794, detail) by Francisco José de Goya y Lucientes. Courtesy Wikimedia/Meadows Museum, Dallas

James Barnes | Aeon Ideas

Toward the end of the Renaissance period, a radical epistemological and metaphysical shift overcame the Western psyche. The advances of Nicolaus Copernicus, Galileo Galilei and Francis Bacon posed a serious problem for Christian dogma and its dominion over the natural world. Following Bacon’s arguments, the natural world was now to be understood solely in terms of efficient causes (ie, external effects). Any inherent meaning or purpose to the natural world (ie, its ‘formal’ or ‘final’ causes) was deemed surplus to requirements. Insofar as it could be predicted and controlled in terms of efficient causes, not only was any notion of nature beyond this conception redundant, but God too could be effectively dispensed with.

In the 17th century, René Descartes’s dualism of matter and mind was an ingenious solution to the problem this created. ‘The ideas’ that had hitherto been understood as inhering in nature as ‘God’s thoughts’ were rescued from the advancing army of empirical science and withdrawn into the safety of a separate domain, ‘the mind’. On the one hand, this maintained a dimension proper to God, and on the other, served to ‘make the intellectual world safe for Copernicus and Galileo’, as the American philosopher Richard Rorty put it in Philosophy and the Mirror of Nature (1979). In one fell swoop, God’s substance-divinity was protected, while empirical science was given reign over nature-as-mechanism – something ungodly and therefore free game.

Nature was thereby drained of her inner life, rendered a deaf and blind apparatus of indifferent and value-free law, and humankind was faced with a world of inanimate, meaningless matter, upon which it projected its psyche – its aliveness, meaning and purpose – only in fantasy. It was this disenchanted vision of the world, at the dawn of the industrial revolution that followed, that the Romantics found so revolting, and feverishly revolted against.

The French philosopher Michel Foucault in The Order of Things (1966) termed it a shift in ‘episteme’ (roughly, a system of knowledge). The Western psyche, Foucault argued, had once been typified by ‘resemblance and similitude’. In this episteme, knowledge of the world was derived from participation and analogy (the ‘prose of the world’, as he called it), and the psyche was essentially extroverted and world-involved. But after the bifurcation of mind and nature, an episteme structured around ‘identity and difference’ came to possess the Western psyche. The episteme that now prevailed was, in Rorty’s terms, solely concerned with ‘truth as correspondence’ and ‘knowledge as accuracy of representations’. Psyche, as such, became essentially introverted and untangled from the world.

Foucault argued, however, that this move was not a supersession per se, but rather constituted an ‘othering’ of the prior experiential mode. As a result, its experiential and epistemological dimensions were not only denied validity as an experience, but became the ‘occasion of error’. Irrational experience (ie, experience inaccurately corresponding to the ‘objective’ world) then became a meaningless mistake – and disorder the perpetuation of that mistake. This is where Foucault located the beginning of the modern conception of ‘madness’.

Although Descartes’s dualism did not win the philosophical day, we in the West are still very much the children of the disenchanted bifurcation it ushered in. Our experience remains characterised by the separation of ‘mind’ and ‘nature’ instantiated by Descartes. Its present incarnation – what we might call the empiricist-materialist position – not only predominates in academia, but in our everyday assumptions about ourselves and the world. This is particularly clear in the case of mental disorder.

Common notions of mental disorder remain only elaborations of ‘error’, conceived of in the language of ‘internal dysfunction’ relative to a mechanistic world devoid of any meaning and influence. These dysfunctions are either to be cured by psychopharmacology, or remedied by therapy meant to lead the patient to rediscover ‘objective truth’ of the world. To conceive of it in this way is not only simplistic, but highly biased.

While it is true that there is value in ‘normalising’ irrational experiences like this, it comes at a great cost. These interventions work (to the extent that they do) by emptying our irrational experiences of their intrinsic value or meaning. In doing so, not only are these experiences cut off from any world-meaning they might harbour, but so too from any agency and responsibility we or those around us have – they are only errors to be corrected.

In the previous episteme, before the bifurcation of mind and nature, irrational experiences were not just ‘error’ – they were speaking a language as meaningful as rational experiences, perhaps even more so. Imbued with the meaning and rhyme of nature herself, they were themselves pregnant with the amelioration of the suffering they brought. Within the world experienced this way, we had a ground, guide and container for our ‘irrationality’, but these crucial psychic presences vanished along with the withdrawal of nature’s inner life and the move to ‘identity and difference’.

In the face of an indifferent and unresponsive world that neglects to render our experience meaningful outside of our own minds – for nature-as-mechanism is powerless to do this – our minds have been left fixated on empty representations of a world that was once its source and being. All we have, if we are lucky to have them, are therapists and parents who try to take on what is, in reality, and given the magnitude of the loss, an impossible task.

But I’m not going to argue that we just need to ‘go back’ somehow. On the contrary, the bifurcation of mind and nature was at the root of immeasurable secular progress –  medical and technological advance, the rise of individual rights and social justice, to name just a few. It also protected us all from being bound up in the inherent uncertainty and flux of nature. It gave us a certain omnipotence – just as it gave science empirical control over nature – and most of us readily accept, and willingly spend, the inheritance bequeathed by it, and rightly so.

It cannot be emphasised enough, however, that this history is much less a ‘linear progress’ and much more a dialectic. Just as unified psyche-nature stunted material progress, material progress has now degenerated psyche. Perhaps, then, we might argue for a new swing in this pendulum. Given the dramatic increase in substance-use issues, and recent reports of a teenage ‘mental health crisis’ and of teen suicide rates rising in the US, the UK and elsewhere, to name only the most conspicuous signs, perhaps the time is in fact overripe.

However, one might ask, by what means? There has been a resurgence of ‘pan-experiential’ and idealist-leaning theories in several disciplines, largely concerned with undoing the very knot of bifurcation and the excommunication of a living nature, and creating in its wake something afresh. This is because attempts at explaining subjective experience in empiricist-materialist terms have all but failed (principally due to what the Australian philosopher David Chalmers in 1995 termed the ‘hard problem’ of consciousness). The notion that metaphysics is ‘dead’ would in fact be met with very significant qualification in certain quarters – indeed, the Canadian philosopher Evan Thompson et al argued along the same lines in a recent essay in Aeon.

It must be remembered that mental disorder as ‘error’ rises and falls with the empiricist-materialist metaphysics and the episteme it is a product of. Therefore, we might also think it justified to begin to reconceptualise the notion of mental disorder in the same terms as these theories. There has been a decisive shift in psychotherapeutic theory and practice away from the changing of parts or structures of the individual, and towards the idea that it is the very process of the therapeutic encounter itself that is ameliorative. Here, correct or incorrect judgments about ‘objective reality’ start to lose meaning, and psyche as open and organic starts to come back into focus, but the metaphysics remains. We ultimately need to be thinking about mental disorder on a metaphysical level, and not just within the confines of the status quo.

James Barnes

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

How do we Pry Apart the True and Compelling from the False and Toxic?


Stack of CPUs. Shawn Stutzman, Pexels

David V Johnson | Aeon Ideas

When false and malicious speech roils the body politic, when racism and violence surge, the right and role of freedom of speech in society comes into crisis. People rightly begin to wonder what the limits are, and what the rules should be. It is a complicated issue, and resolving it requires care about the exact problems targeted and solutions proposed. Otherwise the risk to free speech is real.

Propaganda from Russian-funded troll farms (boosted by Facebook data breaches) might have contributed to the United Kingdom’s vote to exit the European Union and aided the United States’ election of Donald Trump as president. Conspiracy theories spread by alternative news outlets or over social media sometimes lead to outbreaks of violence. Politicians exploit the mainstream news media’s commitment to balance, to covering newsworthy public statements and their need for viewers or readers by making baseless, sensational claims.

In On Liberty (1859), John Stuart Mill offers the most compelling defence of freedom of speech, conscience and autonomy ever written. Mill argues that the only reason to restrict speech is to prevent harm to others, such as with hate speech and incitement to violence. Otherwise, all speech must be protected. Even if we know a view is false, Mill says, it is wrong to suppress it. We avoid prejudice and dogmatism, and achieve understanding, through freely discussing and defending what we believe against contrary claims.

Today, a growing number of people see these views as naive. Mill’s arguments are better suited to those who still believe in the open marketplace of ideas, where free and rational debate is the best way to settle all disputes about truth and falsity. Who could possibly believe we live in such a world anymore? Instead, what we have is a Wild West of partisanship and manipulation, where social media gurus exploit research in behavioural psychology to compel users to affirm and echo absurd claims. We have a world where people live in cognitive bubbles of the like-minded and share one another’s biases and prejudices. According to this savvy view, our brave new world is too prone to propaganda and conspiracy-mongering to rely on Mill’s optimism about free speech. To do so is to risk abetting the rise of fascist and absolutist tendencies.

In his book How Fascism Works (2018), the American philosopher Jason Stanley cites the Russian television network RT, which presents all sorts of misleading and slanted views. If Mill is right, claims Stanley, then RT and such propaganda outfits ‘should be the paradigm of knowledge production’ because they force us to scrutinise their claims. But this is a reductio ad absurdum of Mill’s argument. Similarly, Alexis Papazoglou in The New Republic questions whether Nick Clegg, the former British deputy prime minister turned Facebook’s new vice president of global affairs and communication, will be led astray by his appreciation of Mill’s On Liberty. ‘Mill seemed to believe that an open, free debate meant the truth would usually prevail, whereas under censorship, truth could end up being accidentally suppressed, along with falsehood,’ writes Papazoglou. ‘It’s a view that seems a bit archaic in the age of an online marketplace of memes and clickbait, where false stories tend to spread faster and wider than their true counterpoints.’

When important and false beliefs and theories gain traction in public conversation, Mill’s protection of speech can be frustrating. But there is nothing new about ‘fake news’, whether in Mill’s age of sensationalist newspapers or in our age of digital media. Nonetheless, to seek a solution in restricting speech is foolish and counterproductive – it lends credibility to the illiberal forces you, paradoxically, seek to silence. It also betrays an elitism about engaging with those of different opinions and a cynicism about affording your fellow citizens the freedom to muddle through the morass on their own. If we want to live in a liberal democratic society, rational engagement is the only solution on offer. Rather than restricting speech, we should look to supplement Mill’s view with effective tools for dealing with bad actors and with beliefs that, although false, seem compelling to some.

Fake news and propaganda are certainly problems, as they were in Mill’s day, but the problems they raise are more serious than the falsity of their claims. After all, they are not unique in saying false things, as the latest newspaper corrections will tell you. More importantly, they involve bad actors: people and organisations who intentionally pass off false views as the truth, and hide their nature and motives. (Think Russian troll farms.) Anyone who knows that they are dealing with bad actors – people trying to mislead – ignores them, and justifiably so. It’s not worth your time to consider the claim of someone you know is trying to deceive you.

There is nothing in Mill that demands that we engage any and all false views. After all, there are too many out there and so people have to be selective. Transparency is key, helping people know with whom, or what, they are dealing. Transparency helps filter out noise and fosters accountability, so that bad actors – those who hide their identity for the purpose of misleading others – are eliminated.

Mill’s critics fail to see the truth that is mixed in with the false views that they wish to restrict, and that makes those views compelling. RT, for instance, has covered many issues, such as the US financial crisis, economic inequality and imperialism more accurately than mainstream news channels. RT also includes informed sources who are ignored by other outlets. The channel might be biased toward demeaning the US and fomenting division, but it often pursues this agenda by speaking truths that are not covered in mainstream US media. Informed news-watchers know to view RT and all news sources with scepticism, and there is no reason not to extend the same respect to the entire viewing public, unless you presume you are a better judge of what to believe than your fellow citizens.

Mill rightly thought that the typical case wasn’t one of views that are false, but views that have a mixture of true and false. It would be far more effective to try to engage with the truth in views we despise than to try to ban them for their alleged falsity. The Canadian psychologist and YouTube sensation Jordan Peterson, for example, says things that are false, misogynistic and illiberal, but one possible reason for his following is that he recognises and speaks to a deficit of meaning and values in many young men’s lives. Here, the right approach is to pry apart the true and compelling from the false and toxic, through reasoned consideration. This way, following Mill’s path, presents a better chance of winning over those who are lost to views we despise. It also helps us improve our own understanding, as Mill wisely suggests.

David V Johnson

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Philosophical Writing Should Read like a Letter Written to Oneself


Søren Kierkegaard at his high desk (1920) by Luplau Janssen. Courtesy Wikipedia

John Lysaker | Aeon Ideas

In memory of Stanley Cavell (1926-2018)

I came to philosophy bursting with things to say. Somewhere along the way, that changed. Not that I stopped talking, or, as time went on, writing. But the mood of it, the key in which it was pitched, moved. I came to feel answerable. And not just to myself or those I knew but to some broader public, some open, indefinite ‘you’. ‘Answer for yourself’ wove into ‘know thyself’.

How though does one register a key change in prose? If philosophy is bound, in part, to the feeling of being answerable, shouldn’t it have more of an epistolary feel? ‘Dear you, here is where I stand, for the time being… Yours, me.’ One ventures thoughts, accounts for them and awaits a reply, only to begin again: ‘Dear you, thank you for your response. So much (or very little) has changed since I received your letter…’

A move toward the epistolary seems right to me, at least for philosophy. Still a gadfly perhaps, but also working through having been stung, and with the vulnerability of doing so before, even for others. But how much philosophy has the feel of a letter? And when we philosophise, are we cognisant of our addressees and the varied situations in which they find us? The view from nowhere has been more or less exiled from epistemology. We know that we know in concrete, situated locales. But has philosophical writing kept pace and developed a feel for what to consider when pondering: how should I write?

Survey philosophy’s history, and the plot thickens. Philosophical writing is a varied affair. Some texts prioritise demonstration, arguing, for example, that ‘truth’ names a working touch between belief and the world. Others favour provocation, as when a dialogue concerning the nature of friendship concludes before a working definition is reached. If we want a definition, we need to generate our own, or ponder what a lack of one implies. Still other texts offer exemplification, as when Simone de Beauvoir in The Second Sex (1949) proves herself to be the agent-intellect that patriarchy insists she’s not. By confronting her historical fate, she shows us how wrong, how unjust that historical fate has been. And she shows us what patriarchy has kept us from.

Genre considerations intensify the question of what should organise philosophical writing: dialogue, treatise, aphorism, essay, professional article and monograph, fragment, autobiography. And if one’s sensibility is more inclusive, letters, manifestos and interviews also become possibilities. No genre is fully scripted, however, hence the need to also consider logical-rhetorical operations: modus ponens, irony, transcendental arguments, allegory, images, analogies, examples, quotations, translation, even voice, a distinctive way of being there and for the reader. So much seems to count when we answer for how we write.

Questions concerning writing sometimes arise when philosophers worry about accessibility and a broader readership. But the possibilities I have enumerated bear directly on thought itself. Writing generates discovery, and genre impacts rather than simply transfers ideas; so too logical-rhetorical operations. Francis Bacon was drawn to the aphorism because it freed observation from scholastic habits, whereas the professional article defers to its lingua franca. The treatise exhausts whatever might be said about a topic – call this the view from everywhere – whereas the essay accepts its partiality and tests its reach relative to topics such as friendship, feminine sexuality, even a fierce love for film. When writing becomes the question, more than outreach calls for consideration.

Here’s a start. How will my thought unfold through this genre and these logical-rhetorical operations? Where will the aphorism, essay or professional article take me, or an exchange of letters? And so too examples, open disagreements, quotation, the labour of translation, or irony for that matter? It is a celebrated trope of surprise and displacement. But a good deal of irony, at least when one turns to the ironist, facilitates self-preservation. It is the reader who is surprised by an encounter with some covert meaning while the author’s overt and covert meanings are fairly settled. (I thus wonder: what does irony keep safe?)

Questions regarding which possibilities to enact cannot be answered through critique, which, following Immanuel Kant, interrogates the character of our judgments and operative concepts, seeking rules that might govern their use. The discoveries that writing occasions are evidence that philosophy belongs too intimately to language to play charioteer to its steeds. Writing is a gamble and, when it’s honest, one faces unexpected results.

Facing a blank page, one might also ask: what relations will this establish with addressees? The polemic seeks converts rather than interlocutors, and at the expense of discovery. And even when an outright polemic is avoided, some schematise opponents rather than read them publicly and carefully, thereby preaching to the converted, which seems a misstep.

Unwilling to proceed dogmatically, one might favour provocation at the expense of doctrine, as some take Plato to do. But any provocation has its own commitments, beginning with the end toward which it provokes its readers. Socrates is one kind of interlocutor, Gaius Laelius quite another, and that is because Plato and Cicero approach education, the soul and their respective states differently. Strict distinctions between provocation and doctrine (or form and content, for that matter) are thus untenable.

Other operations also engage one’s addressees. Examples allow readers to review what’s on offer, something also made possible when meaningful disagreements are staged. (When authors never pause to imagine a disagreement, I feel claustrophobic and throw open a window.) And if one begins to acknowledge how varied one’s addressees could be, other habits become salient. Looking back at my citations, I know that I’ve written texts that suggest ‘whites only’ or ‘women need not apply’.

Texts and readers do not meet in a vacuum, however. I thus wonder: how does one also address prevailing contextual forces, from ethno-nationalisms to white supremacy to the commodification of higher education? It is tempting to imagine a text without footnotes, as if they were ornaments. But in a period so averse to the rigours of knowledge, and so ahistorical in its feel for the truths we have, why not underscore the contested history of a thought, if only to insist: thought is work, the results fragile, and there will be disagreements.

Clarity poses another question, and a particular challenge for philosophy, which is not underwritten by experiments. Instead, its ‘results’ are won (or lost) in the presentation. Moreover, philosophical conclusions do not remain philosophical if freed from the path that led to them. ‘God exists’ says one thing in prayer and something else at the close of a proof. Experts often are asked to share their results without showing their work. But showing one’s work is very much the work of philosophy. Can one do so and reach beyond the academy?

Every reader of Plato knows that Socrates, by way of exemplification, is an image of philosophy, from his modes of interrogation to who is interrogated to his reminders that philosophy demands courage. And so too the dialogue itself – it models philosophy. But every text announces: here too is philosophy. The overall bearing of one’s writing thus merits scrutiny. Is it generous or hasty? Has it earned its ‘therefores’ or, after ripping opponents for nuanced failings, does it invoke the intuitively plausible? Does it acknowledge the full range of possible addressees or cloister itself within the narrow folds of the like-minded? Does it challenge its starting points or hide cut corners with jargon and massive generalisations?

Taking my cue from Ludwig Wittgenstein, I would say: philosophy no longer knows its way around writing. And what it does know – the professional article and monograph – is underwritten by conformity rather than philosophical reflection and commitment. Not for all. And many have led elsewhere by example. But on the whole, and thinking of the present moment, the writer’s life remains unexamined in the aspirational context of philosophy.

Looking into a garden of genres and logical-rhetorical operations, I have proposed four orienting questions. How will my thought unfold along these lines? What relationships will they establish with my varied addressees? Will my address be able to navigate the currents of our varied lives and be ‘equal to the moment’, as Walter Benjamin would ask? And finally, what, in the name of philosophy, does my text exemplify? Have I offered a compelling image? ‘Dear you, here is where I stand, for the time being… Yours, me.’

John Lysaker

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Atheism has been Part of Many Asian Traditions for Millennia


Atheism is not a modern concept.
Zoe Margolis, CC BY-NC-ND

Signe Cohen, University of Missouri-Columbia

A group of atheists and secularists recently gathered in Southern California to talk about social and political issues. This was the first of three summits planned by the Secular Coalition for America, an advocacy group based in Washington, D.C.

To many, atheism – the lack of belief in a personal god or gods – may appear an entirely modern concept. After all, it would seem that it is religious traditions that have dominated the world since the beginning of recorded history.

As a scholar of Asian religions, however, I’m often struck by the prevalence of atheism and agnosticism – the view that it is impossible to know whether a god exists – in ancient Asian texts. Atheistic traditions have played a significant part in Asian cultures for millennia.

Atheism in Buddhism, Jainism

Buddhists do not believe in a creator God.
Keith Cuddeback, CC BY-NC-ND

While Buddhism is a tradition focused on spiritual liberation, it is not a theistic religion.

The Buddha himself rejected the idea of a creator god, and Buddhist philosophers have even argued that belief in an eternal god is nothing but a distraction for humans seeking enlightenment.

While Buddhism does not argue that gods don’t exist, gods are seen as completely irrelevant to those who strive for enlightenment.

Jains do not believe in a divine creator.
Gandalf’s Gallery, CC BY-NC-SA

A similar form of functional atheism can also be found in the ancient Asian religion of Jainism, a tradition that emphasizes non-violence toward all living beings, non-attachment to worldly possessions and ascetic practice. While Jains believe in an eternal soul, or jiva, that can be reborn, they do not believe in a divine creator.

According to Jainism, the universe is eternal, and while gods may exist, they too must be reborn, just like humans are. The gods play no role in spiritual liberation and enlightenment; humans must find their own path to enlightenment with the help of wise human teachers.

Other Atheistic Philosophies

Around the same time that Buddhism and Jainism arose in the sixth century B.C., there was also an explicitly atheist school of thought in India called the Carvaka school. Although none of their original texts have survived, Buddhist and Hindu authors describe the Carvakas as firm atheists who believed that nothing existed beyond the material world.

To the Carvakas, there was no life after death, no soul apart from the body, no gods and no world other than this one.

Another school of thought, Ajivika, which flourished around the same time, similarly argued that gods didn’t exist, although its followers did believe in a soul and in rebirth.

The Ajivikas claimed that the fate of the soul was determined by fate alone, and not by a god, or even by free will. The Ajivikas taught that everything was made up of atoms, but that these atoms were moving and combining with each other in predestined ways.

Like the Carvaka school, the Ajivika school is today only known from texts composed by Hindus, Buddhists and Jains. It is therefore difficult to determine exactly what the Ajivikas themselves thought.

According to Buddhist texts, the Ajivikas argued that there was no distinction between good and evil and there was no such thing as sin. The school may have existed around the same time as early Buddhism, in the fifth century B.C.

Atheism in Hinduism

There are many gods in Hinduism, but there are also atheistic beliefs.
Religious Studies Unisa, CC BY-SA

While the Hindu tradition of India embraces the belief in many gods and goddesses – 330 million of them, according to some sources – there are also atheistic strands of thought found within Hinduism.

The Samkhya school of Hindu philosophy is one such example. It believes that humans can achieve liberation for themselves by freeing their own spirit from the realm of matter.

Another example is the Mimamsa school. This school also rejects the idea of a creator God. The Mimamsa philosopher Kumarila asked how, if a god had created the world by himself in the beginning, anyone else could possibly confirm it. Kumarila further argued that if a merciful god had created the world, it could not have been as full of suffering as it is.

According to the 2011 census, there were approximately 2.9 million atheists in India. Atheism is still a significant cultural force in India, as well as in other Asian countries influenced by Indian religions.

Signe Cohen, Associate Professor and Department Chair, University of Missouri-Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is Consciousness a Battle between your Beliefs and Perceptions?


Now you see it… Magician Harry Houdini moments before ‘disappearing’ Jennie the 10,000lb elephant at the Hippodrome, New York, in 1918. Photo courtesy Library of Congress

Hakwan Lau | Aeon Ideas

Imagine you’re at a magic show, in which the performer suddenly vanishes. Of course, you ultimately know that the person is probably just hiding somewhere. Yet it continues to look as if the person has disappeared. We can’t reason away that appearance, no matter what logic dictates. Why are our conscious experiences so stubborn?

The fact that our perception of the world appears to be so intransigent, however much we might reflect on it, tells us something unique about how our brains are wired. Compare the magician scenario with how we usually process information. Say you have five friends who tell you it’s raining outside, and one weather website indicating that it isn’t. You’d probably just consider the website to be wrong and write it off. But when it comes to conscious perception, there seems to be something strangely persistent about what we see, hear and feel. Even when a perceptual experience is clearly ‘wrong’, we can’t just mute it.

Why is that so? Recent advances in artificial intelligence (AI) shed new light on this puzzle. In computer science, we know that neural networks for pattern-recognition – so-called deep learning models – can benefit from a process known as predictive coding. Instead of just taking in information passively, from the bottom up, networks can make top-down hypotheses about the world, to be tested against observations. They generally work better this way. When a neural network identifies a cat, for example, it first develops a model that allows it to predict or imagine what a cat looks like. It can then examine any incoming data that arrives to see whether or not it fits that expectation.
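To make that loop concrete, here is a toy sketch in Python (the function name, the learning rate and the one-dimensional noisy ‘world’ are all illustrative assumptions of mine, not anything from the essay): the model holds a top-down prediction, measures how far each incoming observation departs from it, and corrects the prediction by a fraction of that error.

```python
import numpy as np

def predictive_coding(observations, learning_rate=0.1):
    """Toy predictive-coding loop: hold a top-down prediction and
    revise it by a fraction of the bottom-up prediction error."""
    prediction = 0.0                          # current top-down hypothesis
    for obs in observations:
        error = obs - prediction              # observation vs expectation
        prediction += learning_rate * error   # let the data correct the guess
    return prediction

# A noisy 'world' whose true value is 5.0; the prediction converges on it.
rng = np.random.default_rng(0)
signal = 5.0 + rng.normal(0.0, 0.5, size=200)
print(predictive_coding(signal))              # prints roughly 5.0
```

Real deep-learning systems are hierarchical and learned end to end, but the shape of the loop is the same: predict first, then let the observations push back.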

The trouble is, while these generative models can be super efficient once they’re up and running, they usually demand huge amounts of time and information to train. One solution is to use generative adversarial networks (GANs) – hailed as the ‘coolest idea in deep learning in the last 20 years’ by Facebook’s head of AI research Yann LeCun. In GANs, we might train one network (the generator) to create pictures of cats, mimicking real cats as closely as it can. And we train another network (the discriminator) to distinguish between the manufactured cat images and the real ones. We can then pit the two networks against each other, such that the discriminator is rewarded for catching fakes, while the generator is rewarded for getting away with them. When they are set up to compete, the networks grow together in prowess, not unlike an arch art-forger trying to outwit an art expert. This makes learning very efficient for each of them.
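For readers who want to see the adversarial arrangement in code, below is a minimal sketch in Python with PyTorch – a generic toy example, not anything described in the essay; the network sizes, learning rates and the ‘real’ distribution (a bell curve centred on 4.0) are all assumptions of mine. One network forges samples, the other judges them, and each is rewarded at the other’s expense.

```python
import torch
import torch.nn as nn

# Toy GAN: 'real' data are drawn from a normal distribution centred on 4.0.
# The generator learns to turn random noise into samples the discriminator
# can no longer tell apart from the real thing.
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) + 4.0      # samples from the real distribution
    fake = G(torch.randn(64, 1))         # the generator's forgeries

    # The discriminator is rewarded for catching fakes...
    d_loss = (bce(D(real), torch.ones(64, 1)) +
              bce(D(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # ...while the generator is rewarded for getting away with them.
    g_loss = bce(D(G(torch.randn(64, 1))), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(1000, 1)).mean().item())  # drifts toward roughly 4.0
```

The detach() call keeps the generator’s parameters frozen while the discriminator is scored on its forgeries; the generator’s own update then works through the discriminator’s judgment without altering it, so each network improves only by outwitting the other.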

As well as a handy engineering trick, GANs are a potentially useful analogy for understanding the human brain. In mammalian brains, the neurons responsible for encoding perceptual information serve multiple purposes. For example, the neurons that fire when you see a cat also fire when you imagine or remember a cat; they can also activate more or less at random. So whenever there’s activity in our neural circuitry, the brain needs to be able to figure out the cause of the signals, whether internal or external.

We can call this exercise perceptual reality monitoring. John Locke, the 17th-century British philosopher, believed that we had some sort of inner organ that performed the job of sensory self-monitoring. But critics of Locke wondered why Mother Nature would take the trouble to grow a whole separate organ, on top of a system that’s already set up to detect the world via the senses. You have to be able to smell something before you can go about deciding whether or not the perception is real or fake; so why not just build in a check to the detecting mechanism itself?

In light of what we now know about GANs, though, Locke’s idea makes a certain amount of sense. Because our perceptual system takes up neural resources, parts of it get recycled for different uses. So imagining a cat draws on the same neuronal patterns as actually seeing one. But this overlap muddies the water regarding the meaning of the signals. Therefore, for the recycling scheme to work well, we need a discriminator to decide when we are seeing something versus when we’re merely thinking about it. This GAN-like inner sense organ – or something like it – needs to be there to act as an adversarial rival, to stimulate the growth of a well-honed predictive coding mechanism.

If this account is right, it’s fair to say that conscious experience is probably akin to a kind of logical inference. That is, if the perceptual signal from the generator says there is a cat, and the discriminator decides that this signal truthfully reflects the state of the world right now, we naturally see a cat. The same goes for raw feelings: pain can feel sharp, even when we know full well that nothing is poking at us, and patients can report feeling pain in limbs that have already been amputated. To the extent that the discriminator gets things right most of the time, we tend to trust it. No wonder that when there’s a conflict between subjective impressions and rational beliefs, it seems to make sense to believe what we consciously experience.
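
Schematically – and only schematically, since the function and its names are mine rather than anything proposed by the theory’s authors – the inference might be rendered like this:

```python
def conscious_percept(signal: str, discriminator_says_real: bool) -> str:
    """Toy rendering of perceptual reality monitoring: a generator signal is
    experienced as perception only when the discriminator endorses it as
    reflecting the world right now; otherwise it is tagged as imagination."""
    if discriminator_says_real:
        return f'seeing a {signal}'
    return f'imagining a {signal}'

print(conscious_percept('cat', True))    # seeing a cat
print(conscious_percept('cat', False))   # imagining a cat
```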

This perceptual stubbornness is not just a feature of humans. Some primates have it too, as shown by their capacity to be amazed and amused by magic tricks. That is, they seem to understand that there’s a tension between what they’re seeing and what they know to be true. Given what we understand about their brains – specifically, that their perceptual neurons are also ‘recyclable’ for top-down functioning – the GAN theory suggests that these nonhuman animals probably have conscious experiences not dissimilar to ours.

The future of AI is more challenging. If we built a robot with a very complex GAN-style architecture, would it be conscious? On the basis of our theory, it would probably be capable of predictive coding, exercising the same machinery for perception as it deploys for top-down prediction or imagination. Perhaps like some current generative networks, it could ‘dream’. Like us, it probably couldn’t reason away its pain – and it might even be able to appreciate stage magic.

Theorising about consciousness is notoriously hard, and we don’t yet know what it really consists in. So we wouldn’t be in a position to establish if our robot was truly conscious. Then again, we can’t do this with any certainty with respect to other animals either. At least by fleshing out some conjectures about the machinery of consciousness, we can begin to test them against our intuitions – and, more importantly, in experiments. What we do know is that a model of the mind involving an inner mechanism of doubt – a nit-picking system that’s constantly on the lookout for fakes and forgeries in perception – is one of the most promising ideas we’ve come up with so far.

Hakwan Lau

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Was the Real Socrates more Worldly and Amorous than We Knew?

Detail from Socrates Dragging Alcibiades from the Embrace of Aspasia (1785) by Jean-Baptiste Regnault. Louvre, Paris. Courtesy Wikipedia

Armand D’Angour | Aeon Ideas

Socrates is widely considered to be the founding figure of Western philosophy – a thinker whose ideas, transmitted by the extensive writings of his devoted follower Plato, have shaped thinking for more than 2,000 years. ‘For better or worse,’ wrote the Classical scholar Diskin Clay in Platonic Questions (2000), ‘Plato’s Socrates is our Socrates.’ The enduring image of Socrates that comes from Plato is of a man of humble background, little education, few means and unappealing looks, who became a brilliant and disputatious philosopher married to an argumentative woman called Xanthippe. Both Plato and Xenophon, Socrates’ other principal biographer, were born c424 BCE, so they knew Socrates (born c469 BCE) only as an old man. Keen to defend his reputation from the charges of ‘introducing new kinds of gods’ and ‘corrupting young men’ on which he was eventually brought to trial and executed, they painted a picture of Socrates in late middle age as a pious teacher and unremitting ethical thinker, a man committed to shunning bodily pleasures for higher educational purposes.

Yet this clearly idealised picture of Socrates is not the whole story, and it gives us no indication of the genesis of his ideas. Plato’s pupil Aristotle and other Ancient writers provide us with correctives to the Platonic Socrates. For instance, Aristotle’s followers Aristoxenus and Clearchus of Soli preserve biographical snippets that they might have known from their teacher. From them we learn that Socrates in his teens was intimate with a distinguished older philosopher, Archelaus; that he married more than once, the first time to an aristocratic woman called Myrto, with whom he had two sons; and that he had an affair with Aspasia of Miletus, the clever and influential woman who was later to become the partner of Pericles, a leading citizen of Athens.

If these statements are to be believed, a different Socrates emerges: a highly placed young Athenian, whose personal experiences within an elevated milieu inspired him to embark on a new style of philosophy that was to change the way people thought ever afterwards. But can we trust these later authors? How could writers two or more generations removed from Socrates’ own time have felt entitled to contradict Plato? One answer is that Aristotle might have derived some information from Plato in person, rather than from his writings, and passed this on to his pupils; another is that, as a member of Plato’s Academy for 20 years, Aristotle might have known that Plato had elided certain facts to defend Socrates’ reputation; a third is that the later authors had access to further sources (oral and written) other than Plato, which they considered to be reliable.

Plato’s Socrates is an eccentric. Socrates claimed to have heard voices in his head from youth, and is described as standing still in public places for long stretches of time, deep in thought. Plato notes these phenomena without comment, accepting Socrates’ own description of the voices as his ‘divine sign’, and reporting on his awe-inspiring ability to meditate for hours on end. Aristotle, the son of a doctor, took a more medical approach: he suggested that Socrates (along with other thinkers) suffered from a medical condition he calls ‘melancholy’. Recent medical investigators have agreed, speculating that Socrates’ behaviour was consistent with a medical condition known as catalepsy. Such a condition might well have made Socrates feel estranged from his peers in early life, encouraging him to embark on a different kind of lifestyle.

If the received picture of Socrates’ life and personality merits reconsideration, what about his thought? Aristotle makes clear in his Metaphysics that Plato misrepresented Socrates regarding the so-called Theory of Forms:

Socrates concerned himself with ethics, neglecting the natural world but seeking the universal in ethical matters, and he was the first to insist on definitions. Plato took over this doctrine, but argued that what was universal applied not to objects of sense but to entities of another kind. He thought a single description could not define things that are perceived, since such things are always changing. Unchanging entities he called ‘Forms’…

Aristotle himself had little sympathy for such otherworldly views. As a biologist and scientist, he was mainly concerned with the empirical investigation of the world. In his own writings he dismissed the Forms, replacing them with a logical account of universals and their particular instantiations. For him, Socrates was also a more down-to-earth thinker than Plato sought to depict.

Sources from late antiquity, such as the 5th-century CE Christian writers Theodoret of Cyrrhus and Cyril of Alexandria, state that Socrates was, at least as a younger man, a lover of both sexes. They corroborate occasional glimpses of an earthy Socrates in Plato’s own writings, such as in the dialogue Charmides, where Socrates claims to be intensely aroused by the sight of a young man’s bare chest. However, the only partner of Socrates’ whom Plato names is Xanthippe; and since she was carrying a baby in her arms when Socrates was aged 70, it is unlikely they met more than a decade or so earlier, when Socrates was already in his 50s. Plato’s failure to mention the earlier aristocratic wife Myrto might be an attempt to minimise any perception that Socrates came from a relatively wealthy background with connections to high-ranking members of his community; it was largely because Socrates was believed to be associated with the antidemocratic aristocrats who took power in Athens that he was put on trial and executed in 399 BCE.

Aristotle’s testimony, therefore, is a valuable reminder that the picture of Socrates bequeathed by Plato should not be accepted uncritically. Above all, if Socrates at some point in his early manhood became the companion of Aspasia – a woman famous as an instructor of eloquence and relationship counsellor – it potentially changes our understanding not only of Socrates’ early life, but of the formation of his philosophical ideas. He is famous for saying: ‘All I know is that I know nothing.’ But the one thing he claims, in Plato’s Symposium, that he does know about, is love, which he learned about from a clever woman. Might that woman have been Aspasia, once his beloved companion? The real Socrates must remain elusive but, in the statements of Aristotle, Aristoxenus and Clearchus of Soli, we get intriguing glimpses of a different Socrates from the one portrayed so eloquently in Plato’s writings.

For more from Armand D’Angour and his extraordinary research bringing the music of Ancient Greece to life, see this Video and read this Idea.

Armand D’Angour

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

The Matrix 20 Years On: How a Sci-fi Film Tackled Big Philosophical Questions

The Matrix was a box office hit, but it also explored some of western philosophy’s most interesting themes.
HD Wallpapers Desktop/Warner Bros

Richard Colledge, Australian Catholic University

Incredible as it may seem, the end of March marks 20 years since the release of the first film in the Matrix franchise, directed by the Wachowski siblings. This “cyberpunk” sci-fi movie was a box office hit with its dystopian futuristic vision, distinctive fashion sense, and slick, innovative action sequences. But it was also a catalyst for popular discussion around some very big philosophical themes.

The film centres on a computer hacker, “Neo” (played by Keanu Reeves), who learns that his whole life has been lived within an elaborate, simulated reality. This computer-generated dream world was designed by an artificial intelligence of human creation, which industrially farms human bodies for energy while distracting them via a relatively pleasant parallel reality called the “matrix”.

‘Have you ever had a dream, Neo, that you were so sure was real?’

This scenario recalls one of western philosophy’s most enduring thought experiments. In a famous passage from the Republic (c380 BCE), Plato has us imagine the human condition as being like that of a group of prisoners who have lived their lives underground and shackled, so that their experience of reality is limited to shadows projected onto the cave wall.


A freed prisoner, Plato suggests, would be startled to discover the truth about reality, and blinded by the brilliance of the sun. Should he return below, his companions would have no means of understanding what he has experienced, and would surely think him mad. Leaving the captivity of ignorance is difficult.

In The Matrix, Neo is freed by rebel leader Morpheus (ironically, the name of the Greek god of dreams) by being awoken to real life for the first time. But unlike Plato’s prisoner, who discovers the “higher” reality beyond his cave, the world that awaits Neo is both desolate and horrifying.

Our Fallible Senses

The Matrix also trades on more recent philosophical questions famously posed by the 17th-century Frenchman René Descartes, concerning our inability to be certain about the evidence of our senses, and our capacity to know anything definite about the world as it really is.

Descartes even noted the difficulty of being certain that human experience is not the result of either a dream or a malevolent systematic deception.

The latter scenario was updated in philosopher Hilary Putnam’s 1981 “brain in a vat” thought experiment, which imagines a scientist electrically manipulating a brain to induce sensations of normal life.


So what, ultimately, is reality? The late 20th-century French thinker Jean Baudrillard, whose book Simulacra and Simulation (1981) appears briefly (with an ironic touch) early in the film, wrote extensively on the ways in which contemporary mass society generates sophisticated imitations of reality that become so realistic they are mistaken for reality itself (like mistaking the map for the landscape, or the portrait for the person).

Of course, there is no need for a matrix-like AI conspiracy to achieve this. We see it now, perhaps even more intensely than 20 years ago, in the dominance of “reality TV” and curated identities of social media.

In some respects, the film appears to be reaching for a view close to that of the 18th-century German philosopher Immanuel Kant, who insisted that our senses do not simply copy the world; rather, reality conforms to the terms of our perception. We only ever experience the world as it is available through the partial spectrum of our senses.

The Ethics of Freedom

Ultimately, the Matrix trilogy proclaims that free individuals can change the future. But how should that freedom be exercised?

This dilemma unfolds in the first film’s increasingly notorious red pill/blue pill scene, which raises the ethics of belief. Neo must either embrace the “really real” (exemplified by the red pill Morpheus offers him) or return to his more normal “reality” (via the blue one).

This quandary was captured in a 1974 thought experiment by the American philosopher Robert Nozick. Given an “experience machine” capable of providing whatever experiences we desire, in a way indistinguishable from “real” ones, should we stubbornly prefer the truth of reality? Or can we feel free to reside within comfortable illusion?


In The Matrix we see the rebels resolutely rejecting the comforts of the matrix, preferring grim reality. But we also see the rebel traitor Cypher (Joe Pantoliano) desperately seeking reinsertion into pleasant simulated reality. “Ignorance is bliss,” he affirms.

The film’s chief villain, Agent Smith (Hugo Weaving), darkly notes that, unlike other mammals, (western) humanity insatiably consumes natural resources. The matrix, he suggests, is a “cure” for this human “contagion”.

We have heard much about the potential perils of AI, but perhaps there is something in Agent Smith’s accusation. In raising this tension, The Matrix still strikes a nerve – especially after 20 further years of insatiable consumption.The Conversation

Richard Colledge, Senior Lecturer & Head of School of Philosophy, Australian Catholic University

This article is republished from The Conversation under a Creative Commons license. Read the original article.