Atheism has been Part of Many Asian Traditions for Millennia


Atheism is not a modern concept.
Zoe Margolis, CC BY-NC-ND

Signe Cohen, University of Missouri-Columbia

A group of atheists and secularists recently gathered in Southern California to talk about social and political issues. This was the first of three summits planned by the Secular Coalition for America, an advocacy group based in Washington D.C.

To many, atheism – the lack of belief in a personal god or gods – may appear to be an entirely modern concept. After all, it would seem that it is religious traditions that have dominated the world since the beginning of recorded history.

As a scholar of Asian religions, however, I’m often struck by the prevalence of atheism and agnosticism – the view that it is impossible to know whether a god exists – in ancient Asian texts. Atheistic traditions have played a significant part in Asian cultures for millennia.

Atheism in Buddhism, Jainism

Buddhists do not believe in a creator God.
Keith Cuddeback, CC BY-NC-ND

While Buddhism is a tradition focused on spiritual liberation, it is not a theistic religion.

The Buddha himself rejected the idea of a creator god, and Buddhist philosophers have even argued that belief in an eternal god is nothing but a distraction for humans seeking enlightenment.

While Buddhism does not argue that gods don’t exist, gods are seen as completely irrelevant to those who strive for enlightenment.

Jains do not believe in a divine creator.
Gandalf’s Gallery, CC BY-NC-SA

A similar form of functional atheism can also be found in the ancient Asian religion of Jainism, a tradition that emphasizes non-violence toward all living beings, non-attachment to worldly possessions and ascetic practice. While Jains believe in an eternal soul, or jiva, that can be reborn, they do not believe in a divine creator.

According to Jainism, the universe is eternal, and while gods may exist, they too must be reborn, just like humans are. The gods play no role in spiritual liberation and enlightenment; humans must find their own path to enlightenment with the help of wise human teachers.

Other Atheistic Philosophies

Around the same time that Buddhism and Jainism arose in the sixth century B.C., there was also an explicitly atheist school of thought in India called the Carvaka school. Although none of their original texts have survived, Buddhist and Hindu authors describe the Carvakas as firm atheists who believed that nothing existed beyond the material world.

To the Carvakas, there was no life after death, no soul apart from the body, no gods and no world other than this one.

Another school of thought, Ajivika, which flourished around the same time, similarly argued that gods didn’t exist, although its followers did believe in a soul and in rebirth.

The Ajivikas claimed that the destiny of the soul was determined by fate alone, not by a god or even by free will. They taught that everything was made up of atoms, but that these atoms were moving and combining with each other in predestined ways.

Like the Carvaka school, the Ajivika school is today only known from texts composed by Hindus, Buddhists and Jains. It is therefore difficult to determine exactly what the Ajivikas themselves thought.

According to Buddhist texts, the Ajivikas argued that there was no distinction between good and evil and there was no such thing as sin. The school may have existed around the same time as early Buddhism, in the fifth century B.C.

Atheism in Hinduism

There are many gods in Hinduism, but there are also atheistic beliefs.
Religious Studies Unisa, CC BY-SA

While the Hindu tradition of India embraces the belief in many gods and goddesses – 330 million of them, according to some sources – there are also atheistic strands of thought found within Hinduism.

The Samkhya school of Hindu philosophy is one such example. It believes that humans can achieve liberation for themselves by freeing their own spirit from the realm of matter.

Another example is the Mimamsa school. This school also rejects the idea of a creator God. The Mimamsa philosopher Kumarila asked how, if a god had created the world by himself in the beginning, anyone else could possibly confirm it. Kumarila further argued that if a merciful god had created the world, it could not have been as full of suffering as it is.

According to the 2011 census, there were approximately 2.9 million atheists in India. Atheism is still a significant cultural force in India, as well as in other Asian countries influenced by Indian religions.

Signe Cohen, Associate Professor and Department Chair, University of Missouri-Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Was the Real Socrates more Worldly and Amorous than We Knew?


Detail from Socrates Dragging Alcibiades from the Embrace of Aspasia (1785) by Jean-Baptiste Regnault. Louvre, Paris. Courtesy Wikipedia

Armand D’Angour | Aeon Ideas

Socrates is widely considered to be the founding figure of Western philosophy – a thinker whose ideas, transmitted by the extensive writings of his devoted follower Plato, have shaped thinking for more than 2,000 years. ‘For better or worse,’ wrote the Classical scholar Diskin Clay in Platonic Questions (2000), ‘Plato’s Socrates is our Socrates.’ The enduring image of Socrates that comes from Plato is of a man of humble background, little education, few means and unappealing looks, who became a brilliant and disputatious philosopher married to an argumentative woman called Xanthippe. Both Plato and Xenophon, Socrates’ other principal biographer, were born c424 BCE, so they knew Socrates (born c469 BCE) only as an old man. Keen to defend his reputation from the charges of ‘introducing new kinds of gods’ and ‘corrupting young men’ on which he was eventually brought to trial and executed, they painted a picture of Socrates in late middle age as a pious teacher and unremitting ethical thinker, a man committed to shunning bodily pleasures for higher educational purposes.

Yet this clearly idealised picture of Socrates is not the whole story, and it gives us no indication of the genesis of his ideas. Plato’s pupil Aristotle and other Ancient writers provide us with correctives to the Platonic Socrates. For instance, Aristotle’s followers Aristoxenus and Clearchus of Soli preserve biographical snippets that they might have known from their teacher. From them we learn that Socrates in his teens was intimate with a distinguished older philosopher, Archelaus; that he married more than once, the first time to an aristocratic woman called Myrto, with whom he had two sons; and that he had an affair with Aspasia of Miletus, the clever and influential woman who was later to become the partner of Pericles, a leading citizen of Athens.

If these statements are to be believed, a different Socrates emerges: that of a highly placed young Athenian, whose personal experiences within an elevated milieu inspired him to embark on a new style of philosophy that was to change the way people thought ever afterwards. But can we trust these later authors? How could writers two or more generations removed from Socrates’ own time have felt entitled to contradict Plato? One answer is that Aristotle might have derived some information from Plato in person, rather than from his writings, and passed this on to his pupils; another is that, as a member of Plato’s Academy for 20 years, Aristotle might have known that Plato had elided certain facts to defend Socrates’ reputation; a third is that the later authors had access to further sources (oral and written) other than Plato, which they considered to be reliable.

Plato’s Socrates is an eccentric. Socrates claimed to have heard voices in his head from youth, and is described as standing still in public places for long stretches of time, deep in thought. Plato notes these phenomena without comment, accepting Socrates’ own description of the voices as his ‘divine sign’, and reporting on his awe-inspiring ability to meditate for hours on end. Aristotle, the son of a doctor, took a more medical approach: he suggested that Socrates (along with other thinkers) suffered from a medical condition he calls ‘melancholy’. Recent medical investigators have agreed, speculating that Socrates’ behaviour was consistent with a medical condition known as catalepsy. Such a condition might well have made Socrates feel estranged from his peers in early life, encouraging him to embark on a different kind of lifestyle.

If the received picture of Socrates’ life and personality merits reconsid­eration, what about his thought? Aristotle makes clear in his Metaphysics that Plato misrepresented Socrates regarding the so-called Theory of Forms:

Socrates concerned himself with ethics, neglecting the natural world but seeking the universal in ethical matters, and he was the first to insist on definitions. Plato took over this doctrine, but argued that what was universal applied not to objects of sense but to entities of another kind. He thought a single description could not define things that are perceived, since such things are always changing. Unchanging entities he called ‘Forms’…

Aristotle himself had little sympathy for such otherworldly views. As a biologist and scientist, he was mainly concerned with the empirical investigation of the world. In his own writings he dismissed the Forms, replacing them with a logical account of universals and their particular instantiations. For him, Socrates was also a more down-to-earth thinker than Plato sought to depict.

Sources from late antiquity, such as the 5th-century CE Christian writers Theodoret of Cyrrhus and Cyril of Alexandria, state that Socrates was, at least as a younger man, a lover of both sexes. They corroborate occasional glimpses of an earthy Socrates in Plato’s own writings, such as in the dialogue Charmides, where Socrates claims to be intensely aroused by the sight of a young man’s bare chest. However, the only partner of Socrates whom Plato names is Xanthippe; since she was carrying a baby in her arms when Socrates was aged 70, it is unlikely they met more than a decade or so earlier, when Socrates was already in his 50s. Plato’s failure to mention the earlier aristocratic wife Myrto might be an attempt to minimise any perception that Socrates came from a relatively wealthy background with connections to high-ranking members of his community; it was largely because Socrates was believed to be associated with the antidemocratic aristocrats who took power in Athens that he was put on trial and executed in 399 BCE.

Aristotle’s testimony, therefore, is a valuable reminder that the picture of Socrates bequeathed by Plato should not be accepted uncritically. Above all, if Socrates at some point in his early manhood became the companion of Aspasia – a woman famous as an instructor of eloquence and relationship counsellor – it potentially changes our understanding not only of Socrates’ early life, but of the formation of his philosophical ideas. He is famous for saying: ‘All I know is that I know nothing.’ But the one thing he claims, in Plato’s Symposium, that he does know about, is love, which he learned about from a clever woman. Might that woman have been Aspasia, once his beloved companion? The real Socrates must remain elusive but, in the statements of Aristotle, Aristoxenus and Clearchus of Soli, we get intriguing glimpses of a different Socrates from the one portrayed so eloquently in Plato’s writings.

For more from Armand D’Angour and his extraordinary research bringing the music of Ancient Greece to life, see this Video and read this Idea.

Armand D’Angour

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

Muhammad: an Anticlerical Hero of the European Enlightenment


Thomas Jefferson’s copy of George Sale’s 1734 translation of the Quran is used in the swearing-in ceremony of US Representative Keith Ellison at the United States Capitol in Washington, DC, on 4 January 2007. Photo by Getty

John Tolan | Aeon Ideas

Publishing the Quran and making it available in translation was a dangerous enterprise in the 16th century, apt to confuse or seduce the faithful Christian. This, at least, was the opinion of the Protestant city councillors of Basel in 1542, when they briefly jailed a local printer for planning to publish a Latin translation of the Muslim holy book. The Protestant reformer Martin Luther intervened to salvage the project: there was no better way to combat the Turk, he wrote, than to expose the ‘lies of Muhammad’ for all to see.

The resulting publication in 1543 made the Quran available to European intellectuals, most of whom studied it in order to better understand and combat Islam. There were others, however, who used their reading of the Quran to question Christian doctrine. The Catalonian polymath and theologian Michael Servetus found numerous Quranic arguments to employ in his anti-Trinitarian tract, Christianismi Restitutio (1553), in which he called Muhammad a true reformer who preached a return to the pure monotheism that Christian theologians had corrupted by inventing the perverse and irrational doctrine of the Trinity. After publishing these heretical ideas, Servetus was condemned by the Catholic Inquisition in Vienne, and finally burned with his own books in Calvin’s Geneva.

During the European Enlightenment, a number of authors presented Muhammad in a similar vein, as an anticlerical hero; some saw Islam as a pure form of monotheism close to philosophic Deism and the Quran as a rational paean to the Creator. In 1734, George Sale published a new English translation. In his introduction, he traced the early history of Islam and idealised the Prophet as an iconoclastic, anticlerical reformer who had banished the ‘superstitious’ beliefs and practices of early Christians – the cult of the saints, holy relics – and quashed the power of a corrupt and avaricious clergy.

Sale’s translation of the Quran was widely read and appreciated in England: for many of his readers, Muhammad had become a symbol of anticlerical republicanism. It was influential outside England too. The US founding father Thomas Jefferson bought a copy from a bookseller in Williamsburg, Virginia, in 1765, which helped him conceive of a philosophical deism that surpassed confessional boundaries. (Jefferson’s copy, now in the Library of Congress, has been used for the swearing in of Muslim representatives to Congress, starting with Keith Ellison in 2007.) And in Germany, the Romantic Johann Wolfgang von Goethe read a translation of Sale’s version, which helped to colour his evolving notion of Muhammad as an inspired poet and archetypal prophet.

In France, Voltaire also cited Sale’s translation with admiration: in his world history Essai sur les mœurs et l’esprit des nations (1756), he portrayed Muhammad as an inspired reformer who abolished superstitious practices and eradicated the power of corrupt clergy. By the end of the century, the English Whig Edward Gibbon (an avid reader of both Sale and Voltaire) presented the Prophet in glowing terms in The History of the Decline and Fall of the Roman Empire (1776-89):

The creed of Mahomet is free from suspicion or ambiguity; and the Koran is a glorious testimony to the unity of God. The prophet of Mecca rejected the worship of idols and men, of stars and planets, on the rational principle that whatever rises must set, that whatever is born must die, that whatever is corruptible must decay and perish. In the author of the universe, his rational enthusiasm confessed and adored an infinite and eternal being, without form or place, without issue or similitude, present to our most secret thoughts, existing by the necessity of his own nature, and deriving from himself all moral and intellectual perfection … A philosophic theist might subscribe the popular creed of the Mahometans: a creed too sublime, perhaps, for our present faculties.

But it was Napoleon Bonaparte who took the Prophet most keenly to heart, styling himself a ‘new Muhammad’ after reading the French translation of the Quran that Claude-Étienne Savary produced in 1783. Savary wrote his translation in Egypt: there, surrounded by the music of the Arabic language, he sought to render into French the beauty of the Arabic text. Like Sale, Savary wrote a long introduction presenting Muhammad as a ‘great’ and ‘extraordinary’ man, a ‘genius’ on the battlefield, a man who knew how to inspire loyalty among his followers. Napoleon read this translation on the ship that took him to Egypt in 1798. Inspired by Savary’s portrait of the Prophet as a brilliant general and sage lawgiver, Napoleon sought to become a new Muhammad, and hoped that Cairo’s ulama (scholars) would accept him and his French soldiers as friends of Islam, come to liberate Egyptians from Ottoman tyranny. He even claimed that his own arrival in Egypt had been announced in the Quran.

Napoleon had an idealised, bookish, Enlightenment vision of Islam as pure monotheism: indeed, the failure of his Egyptian expedition owed partly to his idea of Islam being quite different from the religion of Cairo’s ulama. Yet Napoleon was not alone in seeing himself as a new Muhammad: Goethe enthusiastically proclaimed that the emperor was the ‘Mahomet der Welt’ (Muhammad of the world), and the French author Victor Hugo portrayed him as a ‘Mahomet d’occident’ (Muhammad of the West). Napoleon himself, at the end of his life, exiled on Saint Helena and ruminating on his defeat, wrote about Muhammad and defended his legacy as a ‘great man who changed the course of history’. Napoleon’s Muhammad, conqueror and lawgiver, persuasive and charismatic, resembles Napoleon himself – but a Napoleon who was more successful, and certainly never exiled to a cold windswept island in the South Atlantic.

The idea of Muhammad as one of the world’s great legislators persisted into the 20th century. Adolph A Weinman, a German-born American sculptor, depicted Muhammad in his 1935 frieze in the main chamber of the US Supreme Court, where the Prophet takes his place among 18 lawgivers. Various European Christians called on their churches to recognise Muhammad’s special role as prophet of the Muslims. For Catholic scholars of Islam such as Louis Massignon or Hans Küng, or for the Scottish Protestant scholar of Islam William Montgomery Watt, such recognition was the best way to promote peaceful, constructive dialogue between Christians and Muslims.

This kind of dialogue continues today, but it has been largely drowned out by the din of conflict, as extreme-Right politicians in Europe and elsewhere diabolise Muhammad to justify anti-Muslim policies. The Dutch politician Geert Wilders calls him a terrorist, paedophile and psychopath. The negative image of the Prophet is paradoxically promoted by fundamentalist Muslims who adulate him and reject all historical contextualisation of his life and teachings; meanwhile, violent extremists claim to defend Islam and its prophet from ‘insults’ through murder and terror. All the more reason, then, to step back and examine the diverse and often surprising Western portraits of the myriad faces of Muhammad.


Faces of Muhammad: Western Perceptions of the Prophet of Islam from the Middle Ages to Today by John Tolan is published by Princeton University Press.

John Tolan

This article was originally published at Aeon and has been republished under Creative Commons. Read the original article here.

African Art in Western Museums: It’s Patrimony not Heritage


Detail from a 16th-century bronze plaque from Benin, West Africa, held at the British Museum, London. Courtesy the Trustees of the British Museum

Charlotte Joy | Aeon Ideas

Museums with colonial-era collections have always known about the brutal parts of their biographies. But, through acts of purification via historical distance, they have chosen to ignore them. Museum directors now have to re-think their position as defenders of their collections in light of a different political agenda that locates people and their patrimony in a precolonial, yet radically altered, landscape.

When learning about cultural heritage, you will be directed to the etymology of the words ‘heritage’ and ‘patrimony’. Whereas ‘heritage’ invokes inheritance, ‘patrimony’ leads us to patriarchy. In French, patrie refers to the homeland, the fatherland, and during colonialism vast swathes of West Africa were brought under this French conceptual model in the 19th and early 20th centuries. Objects taken from West Africa (the periphery) and brought back to the centre/metropole were therefore conceptualised as part of the coloniser’s national identity. They were used in a series of Great Exhibitions and expos to gain support for the colonial project before entering national and private collections throughout Europe.

The immediate paradox here is that, whereas objects from the periphery were welcome in the centre, people were very much not. Since the independence of West African countries throughout the late 1950s and early ’60s, the retention of objects and the simultaneous rejection of people has become ever more fraught. Young undocumented migrants from former French colonies stand metres away from the Musée du quai Branly – Jacques Chirac, a museum in Paris full of their inaccessible patrimony. The migrants are treated with contempt while the objects from their homelands are cared for in museums and treated with great reverence. The migrants will be deported but the objects will not be repatriated. The homeland is therefore only home to objects, not people.

Sub-Saharan Africa has a unique demographic profile. By 2050, it is projected that the region will be home to the 10 countries with the youngest populations in the world. Most Western leaders would like to see strong and stable states in West Africa, states that can provide their citizens with jobs, cultural pride and a reason for staying in their countries and building new futures. The return of objects from museums could become central to this nation-building, undoing some of the harm of the colonial project and supporting emerging creative economies.

The objects taken from West Africa during the colonial period indexed many things, most of them problematic and racist. Some objects acted as a catalyst for the creative work of Western artists, and consequently entered the artistic canon as prompts and props (seen in the background of artists’ studios such as that of Pablo Picasso). The objects that Picasso encountered at the Palais du Trocadéro in Paris were the impetus for his ‘African period’ at the beginning of the 20th century, which produced one of his most famous works, Les Demoiselles d’Avignon (1907).

Beyond the influence that non-European art had on many Western artists, some objects, such as the Benin Bronzes (looted by the British in 1897 from the Kingdom of Benin, in current-day Nigeria), entered global art history on their own merit, as unrivalled technological and artistic accomplishments. This recognition came about only after a difficult period of scepticism, when art historians expressed doubt that African artists could produce work of such sophistication.

Thus, the way in which African objects are held and displayed in Western museums can tell us a lot about the legacy of colonialism and the West’s ambivalent relationship towards its former colonies. But it cannot be said to provide generations of young people in sub-Saharan Africa with a rich cultural repository from which to draw.

Regardless of the politics of return, over the next few decades people born in sub-Saharan Africa will be brought up within a vibrant cultural milieu of art, photography, music and film. However, as colonialism was a humiliating experience for many formerly colonised people, it is not hard to see why regaining control over their patrimony would be a step towards the beginning of healing. The return of cultural objects would allow meaningful access to art and cultural knowledge that could fuel the creative economies of these young nations.

The acts of return in themselves are a symbol of strong contrition, re-opening the dialogue on past wrongs to better establish relationships for the future. It seems that behind proclamations of the complicated nature of the process of return lies this more difficult truth. Human remains have been returned from museums to be reburied with dignity. Nazi-looted art has been seized from unsuspecting collectors and returned to Jewish families. Now is the time for colonial patrimony to be reckoned with because patrimony indexes the biographies of those who made and acquired the objects, drawing their descendants into moral relationships in the present. It is now not a matter of if but when objects will be returned, and whether this happens with good grace or through a fractious period of resistance.

The museums’ ‘cosmopolitan’ defence, made for example by Tiffany Jenkins in Keeping Their Marbles (2016), is that only by juxtaposition in global centres can we truly make sense of global art and the experience of being human. This might be true to some extent but the juxtapositions in themselves are problematic: for example, the British Museum houses its Africa collections in the basement. Museums are also bursting at the seams, and what isn’t displayed is housed in vast stores. To date, the logic of the museum is not one of access and display but of acquisition and retention. The defenders of the museum’s patrimony, the trustees, are appointed on the understanding that their primary role is to protect collections for future generations, narrowly defined within the model of nation states. Perhaps if trustees of museums could rethink their role to include descendants of the colonised, as well as the colonisers, they could help reshape a heritage ethic that is alive to the challenges of global demographics.

Charlotte Joy

This article was originally published at Aeon and has been republished under Creative Commons.

Between Gods and Animals: Becoming Human in the Gilgamesh Epic


A newly discovered, partially broken tablet V of the Epic of Gilgamesh. The tablet dates back to the Old Babylonian period, 2003-1595 BCE. From Mesopotamia, Iraq. The Sulaymaniyah Museum, Iraq. Photograph by Osama Shukir Muhammed Amin. Wikimedia.


Sophus Helle | Aeon Ideas

The Epic of Gilgamesh is a Babylonian poem composed in ancient Iraq, millennia before Homer. It tells the story of Gilgamesh, king of the city of Uruk. To curb his restless and destructive energy, the gods create a friend for him, Enkidu, who grows up among the animals of the steppe. When Gilgamesh hears about this wild man, he orders that a woman named Shamhat be brought out to find him. Shamhat seduces Enkidu, and the two make love for six days and seven nights, transforming Enkidu from beast to man. His strength is diminished, but his intellect is expanded, and he becomes able to think and speak like a human being. Shamhat and Enkidu travel together to a camp of shepherds, where Enkidu learns the ways of humanity. Eventually, Enkidu goes to Uruk to confront Gilgamesh’s abuse of power, and the two heroes wrestle with one another, only to form a passionate friendship.

This, at least, is one version of Gilgamesh’s beginning, but in fact the epic went through a number of different editions. It began as a cycle of stories in the Sumerian language, which were then collected and translated into a single epic in the Akkadian language. The earliest version of the epic was written in a dialect called Old Babylonian, and this version was later revised and updated to create another version, in the Standard Babylonian dialect, which is the one that most readers will encounter today.

Not only does Gilgamesh exist in a number of different versions, but each version is in turn made up of many different fragments. There is no single manuscript that carries the entire story from beginning to end. Rather, Gilgamesh has to be recreated from hundreds of clay tablets that have become fragmentary over millennia. The story comes to us as a tapestry of shards, pieced together by philologists to create a roughly coherent narrative (about four-fifths of the text has been recovered). The fragmentary state of the epic also means that it is constantly being updated, as archaeological excavations – or, all too often, illegal lootings – bring new tablets to light, making us reconsider our understanding of the text. Despite being more than 4,000 years old, the text remains in flux, changing and expanding with each new finding.

The newest discovery is a tiny fragment that had lain overlooked in the museum archive of Cornell University in New York, identified by Alexandra Kleinerman and Alhena Gadotti and published by Andrew George in 2018. At first, the fragment does not look like much: 16 broken lines, most of them already known from other manuscripts. But working on the text, George noticed something strange. The tablet seemed to preserve parts of both the Old Babylonian and the Standard Babylonian version, but in a sequence that didn’t fit the structure of the story as it had been understood until then.

The fragment is from the scene where Shamhat seduces Enkidu and has sex with him for a week. Before 2018, scholars believed that the scene existed in both an Old Babylonian and a Standard Babylonian version, which gave slightly different accounts of the same episode: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to Uruk. The two scenes are not identical, but the differences could be explained as a result of the editorial changes that led from the Old Babylonian to the Standard Babylonian version. However, the new fragment challenges this interpretation. One side of the tablet overlaps with the Standard Babylonian version, the other with the Old Babylonian version. In short, the two scenes cannot be different versions of the same episode: the story included two very similar episodes, one after the other.

According to George, both the Old Babylonian and the Standard Babylonian version ran thus: Shamhat seduces Enkidu, they have sex for a week, and Shamhat invites Enkidu to come to Uruk. The two of them then talk about Gilgamesh and his prophetic dreams. Then, it turns out, they have sex for another week, and Shamhat again invites Enkidu to Uruk.

Suddenly, Shamhat and Enkidu’s marathon of love had been doubled, a discovery that The Times publicised under the racy headline ‘Ancient Sex Saga Now Twice As Epic’. But in fact, there is a deeper significance to this discovery. The difference between the episodes can now be understood, not as editorial changes, but as psychological changes that Enkidu undergoes as he becomes human. The episodes represent two stages of the same narrative arc, giving us a surprising insight into what it meant to become human in the ancient world.

The first time that Shamhat invites Enkidu to Uruk, she describes Gilgamesh as a hero of great strength, comparing him to a wild bull. Enkidu replies that he will indeed come to Uruk, but not to befriend Gilgamesh: he will challenge him and usurp his power. Shamhat is dismayed, urging Enkidu to forget his plan, and instead describes the pleasures of city life: music, parties and beautiful women.

After they have sex for a second week, Shamhat invites Enkidu to Uruk again, but with a different emphasis. This time she dwells not on the king’s bullish strength, but on Uruk’s civic life: ‘Where men are engaged in labours of skill, you, too, like a true man, will make a place for yourself.’ Shamhat tells Enkidu that he is to integrate himself in society and find his place within a wider social fabric. Enkidu agrees: ‘the woman’s counsel struck home in his heart’.

It is clear that Enkidu has changed between the two scenes. The first week of sex might have given him the intellect to converse with Shamhat, but he still thinks in animal terms: he sees Gilgamesh as an alpha male to be challenged. After the second week, he has become ready to accept a different vision of society. Social life is not only about raw strength and assertions of power, but also about communal duties and responsibility.

Placed in this gradual development, Enkidu’s first reaction becomes all the more interesting, as a kind of intermediary step on the way to humanity. In a nutshell, what we see here is a Babylonian poet looking at society through Enkidu’s still-feral eyes. It is a not-fully-human perspective on city life, which is seen as a place of power and pride rather than skill and cooperation.

What does this tell us? We learn two main things. First, that humanity for the Babylonians was defined through society. To be human was a distinctly social affair. And not just any kind of society: it was the social life of cities that made you a ‘true man’. Babylonian culture was, at heart, an urban culture. Cities such as Uruk, Babylon or Ur were the building blocks of civilisation, and the world outside the city walls was seen as a dangerous and uncultured wasteland.

Second, we learn that humanity is a sliding scale. After a week of sex, Enkidu has not become fully human. There is an intermediary stage, where he speaks like a human but thinks like an animal. Even after the second week, he still has to learn how to eat bread, drink beer and put on clothes. In short, becoming human is a step-by-step process, not an either/or binary.

In her second invitation to Uruk, Shamhat says: ‘I look at you, Enkidu, you are like a god, why with the animals do you range through the wild?’ Gods are here depicted as the opposite of animals: they are omnipotent and immortal, whereas animals are oblivious and destined to die. To be human is to be placed somewhere in the middle: not omnipotent, but capable of skilled labour; not immortal, but aware of one’s mortality.

In short, the new fragment reveals a vision of humanity as a process of maturation that unfolds between the animal and the divine. One is not simply born human: to be human, for the ancient Babylonians, involved finding a place for oneself within a wider field defined by society, gods and the animal world.

Sophus Helle

This article was originally published at Aeon and has been republished under Creative Commons.

Should contemporary philosophers read Ockham? Or: what did history ever do for us?

If you are a historian of philosophy, you’ve probably encountered the question whether the stuff you’re working on is of any interest today. It’s the kind of question that awakens all the different souls in your breast at once. Your more enthusiastic self might think, “yes, totally”, while your methodological soul might shout, “anachronism ahead!” And your humbler part might think, “I don’t even understand it myself.” When exposed to this question, I often want to say many things at once, and out comes something garbled. But now I’d like to suggest that there is only one true reply to the main question in the title: “No, that’s the wrong kind of question to ask!” – But of course that’s not all there is to it. So please hear me out…

Read the rest at Handling Ideas, “a blog on (writing) philosophy”

Don’t let the rise of Europe steal World History



The first 10 volumes of The Harvard Classics, Wikipedia


Peter Frankopan | Aeon Ideas

The centre of a map tells you much, as does the choice of where to begin a story, or a history. Arab geographers used to place the Caspian Sea at the centre of world maps. On a medieval Turkish map, one that transfixed me long ago, we find the city of Balasaghun at the heart of the world. How to teach world history today is a question that is only going to grow more important.

Last summer in the United States, a debate flared when the influential testing agency Advanced Placement (AP) announced a change to its attendant courses: ‘world history’ would begin in 1450. In practice, beginning world history in 1450 becomes a story about how Europeans came to dominate not one but all the continents, and excludes the origins of alphabets, agriculture, cities and civilisation. Before the 1400s, it was others who did the empire-building, drove sciences, medicine and philosophy, and sought to capitalise on and extend the trading networks that facilitated the flow and exchange of goods, ideas, faiths and people.

Under pressure, the AP College Board retreated. ‘We’ve received thoughtful, principled feedback from AP teachers, students and college faculty,’ a statement said. As a result, the start date for the course has been nudged back 250 years to 1200. Consequently, said the board, ‘teachers and students can begin the course with a study of the civilisations in Africa, the Americas and Asia that are foundational to the modern era’.

Where that leaves Plato and Aristotle, or ancient Greece and Rome, is unclear – but presumably none are ‘foundational to the modern era’. That in itself is strange given that so many of the most famous buildings of Washington, DC (for example) are designed in classical style to deliberately evoke the world of 2,000 years ago; or that Mark Zuckerberg, a posterboy for new technologies and the 21st century, admits to taking the Emperor Augustus as his role model.

Gone too are China of the Han dynasty (206 BCE-220 CE) and the networks that linked the Pacific with the Indian Ocean and the Mediterranean 2,000 years ago, and that allow us to understand that Asia, Africa and Europe were connected many centuries prior in a world that was effectively ‘globalised’. No space for the Maya civilisation and culture in Central America or for the kingdom of Igodomigodo in West Africa, whose economic, cultural, military and political achievements have been discarded as irrelevant to the ‘modern era’. Who cares about the Indian emperor Ashoka, or the Chola dynasty of Southern India that spread eastwards into South East Asia in the 10th and 11th centuries? The connections between Scandinavia and Central Asia that helped to bring all of northern Europe out of what used to be called ‘the Dark Ages’ don’t get a look-in either. And too bad for climate change and the ways in which shifts in global temperatures 1,500 years ago led to the collapse of cities, the dispersal of populations and the spread of pandemics.

History is at its most exciting and stimulating for students and teachers alike when there is scope to look at connectivity, to identify and work through deep rhythms and trends, and to explore the past by challenging assumptions that the story of the world can be charted through a linear progression – as the AP College Board seems to think with its statement linking 1200 with the ‘modern era’.

If you really want to see how foolish this view is – and how unfortunate it is to narrow down the scope of the World History course – then take a look at the front pages in just about any country in the world today. In China, news is dominated by the Belt and Road Initiative, the Chinese-led plan to regalvanise the ancient networks of the past into the modern-day Silk Roads: there are many and sharply divergent views about the aims, motivations and likely outcomes of the Belt and Road Initiative. This is far and away the single most important geopolitical development in the modern world today. Understanding why Beijing is trying to return to the glory years of the Silk Roads (which date back 2,000 years) would seem to be both interesting and important – and largely to be bypassed by the new World History scope.

We can look to the other end of Asia, to Istanbul, where every year hundreds of thousands of people take to the streets to commemorate the Battle of Manzikert – which was fought in 1071. It might be useful to know why. Assessing the relationship between Russia and Ukraine might also be of some value in a period when the former has annexed part of the territory of the latter. A major spat broke out last summer between the two countries over whether Anne of Kiev was Russian or Ukrainian. She died in 1075.

It does not take an expert to see the resonance of the 7th century across the Middle East – where fundamentalists attempted to build an ‘Islamic State’ based on their model of the early Muslim world, destroying not only lives and the region in the process, but deliberately destroying history itself in places such as Palmyra. It does, though, take an expert to work out why they are trying to turn back the clock 1,400 years and what their utopian world looks like. It matters because there are plenty of others who want to do the same thing: Imran Khan, the new Prime Minister of Pakistan, for example, has said that he wants to turn his country, with its population of almost 200 million people, into ‘an ideal welfare state’ on the model that Muhammad set in Medina in the 620s and 630s – a model that set up one of the world’s ‘greatest civilisations’.

Students taking world history courses that begin in 1200 will not learn about any of these topics, even though their peers in colleges and schools around the world will. Education should expand horizons and open minds. What a shame that, in this case, they are being narrowed and shuttered. And what a shame too that this is happening at a time of such profound global change – when understanding the depth of our interconnected world is more important than ever. That, for me anyway, is the most valuable conclusion that is ‘foundational to the modern era’.

Peter Frankopan

This article was originally published at Aeon and has been republished under Creative Commons.

The Empathetic Humanities have much to teach our Adversarial Culture



Alexander Bevilacqua | Aeon Ideas

As anyone on Twitter knows, public culture can be quick to attack, castigate and condemn. In search of the moral high ground, we rarely grant each other the benefit of the doubt. In her Class Day remarks at Harvard’s 2018 graduation, the Nigerian novelist Chimamanda Ngozi Adichie addressed the problem of this rush to judgment. In the face of what she called ‘a culture of “calling out”, a culture of outrage’, she asked students to ‘always remember context, and never disregard intent’. She could have been speaking as a historian.

History, as a discipline, turns away from two of the main ways of reading that have dominated the humanities for the past half-century. These methods have been productive, but perhaps they also bear some responsibility for today’s corrosive lack of generosity. The two approaches have different genealogies, but share a significant feature: at heart, they are adversarial.

One mode of reading, first described in 1965 by the French philosopher Paul Ricœur and known as ‘the hermeneutics of suspicion’, aims to uncover the hidden meaning or agenda of a text. Whether inspired by Karl Marx, Friedrich Nietzsche or Sigmund Freud, the reader interprets what happens on the surface as a symptom of something deeper and more dubious, from economic inequality to sexual anxiety. The reader’s task is to reject the face value of a work, and to plumb for a submerged truth.

A second form of interpretation, known as ‘deconstruction’, was developed in 1967 by the French philosopher Jacques Derrida. It aims to identify and reveal a text’s hidden contradictions – ambiguities and even aporias (unthinkable contradictions) that eluded the author. For example, Derrida detected a bias that favoured speech over writing in many influential philosophical texts of the Western tradition, from Plato to Jean-Jacques Rousseau. The fact that written texts could privilege the immediacy and truth of speech was a paradox that revealed unarticulated metaphysical commitments at the heart of Western philosophy.

Both of these ways of reading pit reader against text. The reader’s goal becomes to uncover meanings or problems that the work does not explicitly express. In both cases, intelligence and moral probity are displayed at the expense of what’s been written. In the 20th century, these approaches empowered critics to detect and denounce the workings of power in all kinds of materials – not just the dreams that Freud interpreted, or the essays by Plato and Rousseau with which Derrida was most closely concerned.

They do, however, foster a prosecutorial attitude among academics and public intellectuals. As a colleague once told me: ‘I am always looking for the Freudian slip.’ He scours the writings of his peers to spot when they trip up and betray their problematic intellectual commitments. One poorly chosen phrase can sully an entire work.

Not surprisingly, these methods have fostered a rather paranoid atmosphere in modern academia. Mutual monitoring of lexical choices leads to anxiety, as an increasing number of words are placed on a ‘no fly’ list. One error is taken as the symptom of problematic thinking; it can spoil not just a whole book, but perhaps even the author’s entire oeuvre. This set of attitudes is not a world apart from the pile-ons that we witness on social media.

Does the lack of charity in public discourse – the quickness to judge, the aversion to context and intent – stem in part from what we might call the ‘adversarial’ humanities? These practices of interpretation are certainly on display in many classrooms, where students learn to exercise their moral and intellectual prowess by dismantling what they’ve read. For teachers, showing students how to take a text apart bestows authority; for students, learning to read like this can be electrifying.

Yet the study of history is different. History deals with the past – and the past is, as the British novelist L P Hartley wrote in 1953, ‘a foreign country’. By definition, historians deal with difference: with what is unlike the present, and with what rarely meets today’s moral standards.

The virtue of reading like a historian, then, is that critique or disavowal is not the primary goal. On the contrary, reading historically provides something more destabilising: it requires the historian to put her own values in parentheses.

The French medievalist Marc Bloch wrote that the task of the historian is understanding, not judging. Bloch, who fought in the French Resistance, was caught and turned over to the Gestapo. Poignantly, the manuscript of The Historian’s Craft, where he expressed this humane statement, was left unfinished: Bloch was executed by firing squad in June 1944.

As Bloch knew well, historical empathy involves reaching out across the chasm of time to understand people whose values and motivations are often utterly unlike our own. It means affording these people the gift of intellectual charity – that is, the best possible interpretation of what they said or believed. For example, a belief in magic can be rational on the basis of a period’s knowledge of nature. Yet acknowledging this demands more than just contextual, linguistic or philological skill. It requires empathy.

Aren’t a lot of psychological assumptions built into this model? The call for empathy might seem theoretically naive. Yet we judge people’s intentions all the time in our daily lives; we can’t function socially without making inferences about others’ motivations. Historians merely apply this approach to people who are dead. They invoke intentions not from a desire to attack, nor because they seek reasons to restrain a text’s range of meanings. Their questions about intentions stem, instead, from respect for the people whose actions and thoughts they’re trying to understand.

Reading like a historian, then, involves not just a theory of interpretation, but also a moral stance. It is an attempt to treat others generously, and to extend that generosity even to those who can’t be hic et nunc – here and now.

For many historians (as well as others in what we might call the ‘empathetic’ humanities, such as art history and literary history), empathy is a life practice. Living with the people of the past changes one’s relationship to the present. At our best, we begin to offer empathy not just to those who are distant, but to those who surround us, aiming in our daily life for ‘understanding, not judging’.

To be sure, it’s challenging to impart these lessons to students in their teens or early 20s, to whom the problems of the present seem especially urgent and compelling. The injunction to read more generously is pretty unfashionable. It can even be perceived as conservative: isn’t the past what’s holding us back, and shouldn’t we reject it? Isn’t it more useful to learn how to deconstruct a text, and to be on the lookout for latent, pernicious meanings?

Certainly, reading isn’t a zero-sum game. One can and should cultivate multiple modes of interpretation. Yet the nostrum that the humanities teach ‘critical thinking and reading skills’ obscures the profound differences in how adversarial and empathetic disciplines engage with written works – and how they teach us to respond to other human beings. If the empathetic humanities can make us more compassionate and more charitable – if they can encourage us to ‘always remember context, and never disregard intent’ – they afford something uniquely useful today.

Alexander Bevilacqua

This article was originally published at Aeon and has been republished under Creative Commons.

Why Amartya Sen Remains the Century’s Great Critic of Capitalism


Nobel laureate Amartya Kumar Sen in 2000, Wikipedia


Tim Rogan | Aeon Ideas

Critiques of capitalism come in two varieties. First, there is the moral or spiritual critique. This critique rejects Homo economicus as the organising heuristic of human affairs. Human beings, it says, need more than material things to prosper. Calculating power is only a small part of what makes us who we are. Moral and spiritual relationships are first-order concerns. Material fixes such as a universal basic income will make no difference to societies in which the basic relationships are felt to be unjust.

Then there is the material critique of capitalism. The economists who lead discussions of inequality now are its leading exponents. Homo economicus is the right starting point for social thought. We are poor calculators and single-minded, failing to see our advantage in the rational distribution of prosperity across societies. Hence inequality, the wages of ungoverned growth. But we are calculators all the same, and what we need above all is material plenty, thus the focus on the redress of material inequality. From good material outcomes, the rest follows.

The first kind of argument for capitalism’s reform seems recessive now. The material critique predominates. Ideas emerge in numbers and figures. Talk of non-material values in political economy is muted. The Christians and Marxists who once made the moral critique of capitalism their own are marginal. Utilitarianism grows ubiquitous and compulsory.

But then there is Amartya Sen.

Every major work on material inequality in the 21st century owes a debt to Sen. But his own writings treat material inequality as though the moral frameworks and social relationships that mediate economic exchanges matter. Famine is the nadir of material deprivation. But it seldom occurs – Sen argues – for lack of food. To understand why a people goes hungry, look not for catastrophic crop failure; look rather for malfunctions of the moral economy that moderates competing demands upon a scarce commodity. Material inequality of the most egregious kind is the problem here. But piecemeal modifications to the machinery of production and distribution will not solve it. The relationships between different members of the economy must be put right. Only then will there be enough to go around.

In Sen’s work, the two critiques of capitalism cooperate. We move from moral concerns to material outcomes and back again with no sense of a threshold separating the two. Sen disentangles moral and material issues without favouring one or the other, keeping both in focus. The separation between the two critiques of capitalism is real, but transcending the divide is possible, and not only at some esoteric remove. Sen’s is a singular mind, but his work has a widespread following, not least in provinces of modern life where the predominance of utilitarian thinking is most pronounced. In economics curricula and in the schools of public policy, in internationalist secretariats and in humanitarian NGOs, there too Sen has created a niche for thinking that crosses boundaries otherwise rigidly observed.

This was no feat of lonely genius or freakish charisma. It was an effort of ordinary human innovation, putting old ideas together in new combinations to tackle emerging problems. Formal training in economics, mathematics and moral philosophy supplied the tools Sen has used to construct his critical system. But the influence of Rabindranath Tagore sensitised Sen to the subtle interrelation between our moral lives and our material needs. And a profound historical sensibility has enabled him to see the sharp separation of the two domains as transient.

Tagore’s school at Santiniketan in West Bengal was Sen’s birthplace. Tagore’s pedagogy emphasised articulate relations between a person’s material and spiritual existences. Both were essential – biological necessity, self-creating freedom – but modern societies tended to confuse the proper relation between them. In Santiniketan, pupils played at unstructured exploration of the natural world between brief forays into the arts, learning to understand their sensory and spiritual selves as at once distinct and unified.

Sen left Santiniketan in the late 1940s as a young adult to study economics in Calcutta and Cambridge. The major contemporary controversy in economics was the theory of welfare, and debate was affected by Cold War contention between market- and state-based models of economic order. Sen’s sympathies were social democratic but anti-authoritarian. Welfare economists of the 1930s and 1940s sought to split the difference, insisting that states could legitimate programmes of redistribution by appeal to rigid utilitarian principles: a pound in a poor man’s pocket adds more to overall utility than the same pound in the rich man’s pile. Here was the material critique of capitalism in its infancy, and here is Sen’s response: maximising utility is not everyone’s abiding concern – saying so and then making policy accordingly is a form of tyranny – and in any case using government to move money around in pursuit of some notional optimum is a flawed means to that end.

Economic rationality harbours a hidden politics whose implementation damaged the moral economies that groups of people built up to govern their own lives, frustrating the achievement of its stated aims. In commercial societies, individuals pursue economic ends within agreed social and moral frameworks. The social and moral frameworks are neither superfluous nor inhibiting. They are the coefficients of durable growth.

Moral economies are not neutral, given, unvarying or universal. They are contested and evolving. Each person is more than a cold calculator of rational utility. Societies aren’t just engines of prosperity. The challenge is to make non-economic norms affecting market conduct legible, to bring the moral economies amid which market economies and administrative states function into focus. Thinking that bifurcates moral on the one hand and material on the other is inhibiting. But such thinking is not natural and inevitable, it is mutable and contingent – learned and apt to be unlearned.

Sen was not alone in seeing this. The American economist Kenneth Arrow was his most important interlocutor, connecting Sen in turn with the tradition of moral critique associated with R H Tawney and Karl Polanyi. Each was determined to re-integrate economics into frameworks of moral relationship and social choice. But Sen saw more clearly than any of them how this could be achieved. He realised that at earlier moments in modern political economy this separation of our moral lives from our material concerns had been inconceivable. Utilitarianism had blown in like a weather front around 1800, trailing extremes of moral fervour and calculating zeal in its wake. Sen sensed this climate of opinion changing, and set about cultivating once again the ameliorative ideas and approaches that its onset had eradicated.

There have been two critiques of capitalism, but there should be only one. Amartya Sen is the new century’s first great critic of capitalism because he has made that clear.

Tim Rogan

This article was originally published at Aeon and has been republished under Creative Commons.

How Al-Farabi drew on Plato to argue for censorship in Islam

The Dome of the Rock, Jerusalem.

Andrew Shiva / Wikipedia

Rashmee Roshan Lall | Aeon Ideas

You might not be familiar with the name Al-Farabi, a 10th-century thinker from Baghdad, but you know his work, or at least its results. Al-Farabi was, by all accounts, a man of steadfast Sufi persuasion and unvaryingly simple tastes. As a labourer in a Damascus vineyard before settling in Baghdad, he favoured a frugal diet of lambs’ hearts and water mixed with sweet basil juice. But in his political philosophy, Al-Farabi drew on a rich variety of Hellenic ideas, notably from Plato and Aristotle, adapting and extending them in order to respond to the flux of his times.

The situation in the mighty Abbasid empire in which Al-Farabi lived demanded a delicate balancing of conservatism with radical adaptation. Against the backdrop of growing dysfunction as the empire became a shrunken version of itself, Al-Farabi formulated a political philosophy conducive to civic virtue, justice, human happiness and social order.

But his real legacy might be the philosophical rationale that Al-Farabi provided for controlling creative expression in the Muslim world. In so doing, he completed the aniconic (or anti-representational) project begun in the late seventh century by a caliph of the Umayyads, the first Muslim dynasty. Caliph Abd al-Malik did it with nonfigurative images on coins and calligraphic inscriptions on the Dome of the Rock in Jerusalem, the first monument of the new Muslim faith. This heralded Islamic art’s break from the Greco-Roman representational tradition. A few centuries later, Al-Farabi took the notion of creative control to new heights by arguing for restrictions on representation through the word. He did it using solidly Platonic concepts, and can justifiably be said to have helped concretise the way Islam understands and responds to creative expression.

Word portrayals of Islam and its prophet can be deemed sacrilegious just as much as representational art. The consequences of Al-Farabi’s rationalisation of representational taboos are apparent in our times. In 1989, Iran’s Ayatollah Khomeini issued a fatwa sentencing Salman Rushdie to death for writing The Satanic Verses (1988). The book outraged Muslims for its fictionalised account of the Prophet Muhammad’s life. In 2001, the Taliban blew up the sixth-century Bamiyan Buddhas in Afghanistan. In 2005, controversy erupted over the publication by the Danish newspaper Jyllands-Posten of cartoons depicting the Prophet. The cartoons continued to ignite fury in one way or another for at least a decade. There were protests across the Middle East, attacks on Western embassies after several European papers reprinted the cartoons, and in 2008 Osama bin Laden issued an incendiary warning to Europe of ‘grave punishment’ for its ‘new Crusade’ against Islam. In 2015, the offices of Charlie Hebdo, a satirical magazine in Paris that habitually offended Muslim sensibilities, were attacked by gunmen, who killed 12 people. The magazine had featured Michel Houellebecq’s novel Submission (2015), a futuristic vision of France under Islamic rule.

In a sense, the destruction of the Bamiyan Buddhas, the Rushdie fatwa, the fallout over the Danish cartoons and the violence wreaked on Charlie Hebdo’s editorial staff are all of a piece: each is driven by the desire to control representation, be it through imagery or the word.

Control of the word was something that Al-Farabi appeared to judge necessary if Islam’s biggest project – the multiethnic commonwealth that was the Abbasid empire – was to be preserved. Figural representation was largely a settled issue for Muslims by the time Al-Farabi was pondering his key theories. Within 30 years of the Prophet’s death in 632, art and creative expression had taken two parallel paths, depending on the context for which they were intended. There was art for the secular space, such as the palaces and bathhouses of the Umayyads (661-750). And there was the art considered appropriate for religious spaces – mosques and shrines such as the Dome of the Rock (completed in 691). Caliph Abd al-Malik had already engaged in what has been called a ‘polemic of images’ on coinage with his Byzantine counterpart, Emperor Justinian II. Ultimately, Abd al-Malik issued coins inscribed with the phrases ‘ruler of the orthodox’ and ‘representative [caliph] of Allah’ rather than his portrait. And the Dome of the Rock had script rather than representations of living creatures as decoration. The lack of an image had become an image; in fact, the word was now the image. That is why calligraphy became the greatest of Muslim art forms. The importance of the written word – its absorption and its meaning – was also exemplified by the Abbasids’ investment in the Greek-to-Arabic translation movement from the eighth to the 10th centuries.

Consequently, in Al-Farabi’s time, what mattered most for Muslims was to control representation through the word. Christian iconophiles made their case for devotional images with the argument that words have the same representative power as paintings. Words are like icons, declared the iconophile Christian priest Theodore Abu Qurrah, who lived in dar al-Islam and wrote in Arabic in the ninth century. And images, he said, are the writing of the illiterate.

Al-Farabi was concerned about the power – for good or ill – of writings at a time when the Abbasid empire was in decline. He held creative individuals responsible for what they produced. As the Abbasid caliphs increasingly faced a crisis of authority, both moral and political, Al-Farabi – one of the Arab world’s most original thinkers – was led to extrapolate, from the topical concerns of his day, the key issues confronting Islam and its expanding and diverse dominions.

Al-Farabi fashioned a political philosophy that naturalised Plato’s imaginary ideal state for the world to which he belonged. He tackled the obvious issue of leadership, reminding Muslim readers of the need for a philosopher-king, a ‘virtuous ruler’ to preside over a ‘virtuous city’, which would be run on the principles of ‘virtuous religion’.

Like Plato, Al-Farabi suggested creative expression should support the ideal ruler, thus shoring up the virtuous city and the status quo. Just as Plato in the Republic demanded that poets in the ideal state tell stories of unvarying good, especially about the gods, Al-Farabi’s treatises mention ‘praiseworthy’ poems, melodies and songs for the virtuous city. Al-Farabi commended as ‘most venerable’ for the virtuous city the sorts of writing ‘used in the service of the supreme ruler and the virtuous king.’

It is this idea of writers following the approved narrative that most clearly joins Al-Farabi’s political philosophy to that of the man he called Plato the ‘Divine’. When Al-Farabi seized on Plato’s argument for ‘a censorship of the writers’ as a social good for Muslim society, he was making a case for managing the narrative by controlling the word. It would be important to the next phase of Islamic image-building.

Some of Al-Farabi’s ideas might have influenced other prominent Muslim thinkers, including the Persian polymath Ibn Sina, or Avicenna (c980-1037), and the Persian theologian Al-Ghazali (c1058-1111). Certainly, his rationalisation for controlling creative writing enabled a further move to deny legitimacy to new interpretation.

Rashmee Roshan Lall

This article was originally published at Aeon and has been republished under Creative Commons.