Does Microdosing Improve your Mood and Performance? Here’s what the Research Says

Microdosers take such small quantities of psychedelic substances that there are no noticeable effects.
By AppleZoomZoom

Vince Polito, Macquarie University

Microdosing means regularly taking very small doses of psychedelic substances such as LSD or psilocybin (magic mushrooms) over a period of weeks or months. The practice has made countless headlines over the past couple of years, with claims it can improve health, strengthen relationships, and increase productivity.

These claims are surprising because microdosers take doses so small there are no noticeable effects. A microdose can be as little as 1/20th of a typical recreational dose, often taken every three or four days. With such small amounts, microdosers go about their daily business, including going to work, without experiencing any typical drug effects.

Previous research suggests microdosing may lead to better mood and energy levels, improved creativity, increased wisdom, and changes to how we perceive time.


Read more: LSD ‘microdosing’ is trending in Silicon Valley – but can it actually make you more creative?

But these previous studies have mainly involved asking people to complete ratings or behavioural tasks as one-off measures.

Our study, published today in PLOS One, tracked the experience of 98 users over a longer period – six weeks – to systematically measure any psychological changes.

Overall, the participants reported both positive and negative effects from microdosing, including improved attention and mental health, but also increased neuroticism.

What we did

As you would expect, there are many legal and bureaucratic barriers to psychedelic research. It wasn’t possible for us to run a study where we actually provided participants with psychedelic substances. Instead, we tried to come up with the most rigorous design possible in the current restrictive legal climate.

Our solution was to recruit people who were already experimenting with microdosing and to track their experiences carefully over time, using well validated and reliable psychometric measures.

Microdosers go about their lives without any typical drug effects.
Parker Byrd

Each day we asked participants to complete some brief ratings, telling us whether they had microdosed that day and describing their overall experience. This let us track the immediate effects of microdosing.

At the beginning and end of the study participants completed a detailed battery of psychological measures. This let us track the longer-term effects of microdosing.

In a separate sample, we explored the beliefs and expectations of people who are interested in microdosing. This let us track whether any changes in our main sample were aligned with what people generally predict will happen when microdosing.

What we found

There are five key findings from our study.

1. A general positive boost on microdosing days, but limited residual effects of each dose

Many online accounts of microdosing suggest people microdose every three or four days. The thinking is that each microdose has a residual effect that lasts for a few days.

The daily ratings from participants in our study do not support this idea. Participants reported an immediate boost in all measures (connectedness, contemplation, creativity, focus, happiness, productiveness and wellness) on dosing days. But this was mostly not maintained on the following days.

However, there was some indication of a slight rebound in feelings of focus and productivity two days after dosing.
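
To make that daily-ratings comparison concrete, here is a minimal sketch in Python of the kind of grouping involved, run on simulated diary data; the dosing schedule, the ‘focus’ ratings and every number in it are invented for illustration and are not the study’s dataset or analysis code.

# A hedged sketch with simulated diary data; not the study's dataset or code.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_days = 42  # six weeks of daily self-ratings for one hypothetical participant

# Simulate a dosing schedule (roughly every four days) and a daily focus rating.
dosed = np.zeros(n_days, dtype=bool)
dosed[::4] = True
focus = rng.normal(5, 1, n_days) + np.where(dosed, 1.5, 0.0)  # boost on dose days

diary = pd.DataFrame({"day": np.arange(n_days), "dosed": dosed, "focus": focus})

# For each day, work out how many days have passed since the most recent dose.
last_dose_day = diary["day"].where(diary["dosed"]).ffill()
diary["days_since_dose"] = diary["day"] - last_dose_day

# Compare mean ratings on dose days (0) with the one, two and three days after.
print(diary.groupby("days_since_dose")["focus"].mean())

Grouping days by time since the last dose is what would let a pattern such as ‘boost on dose days, little carry-over, slight rebound at day two’ show up in the averages, if it is there.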

Microdosers experienced increased focus.
Rawpixel

2. Some indications of improvements in mental health

We also looked at cumulative effects of longer term microdosing. We found that after six weeks, participants reported lower levels of depression and stress.

For the study, we recruited people who were not experiencing any kind of mental illness, so levels of depression and stress were relatively low to begin with. Nevertheless, ratings on these measures did drop.

This is an intriguing finding but it’s not clear from this result whether microdosing would have any effect on more significant levels of mood disturbance.

3. Shifts in attention

The microdosers in our study reported reduced mind wandering, meaning they were less likely to be distracted by unwanted thoughts.

They also reported an increase in absorption, meaning they were more likely to experience intense focused attention on imaginative experiences. Absorption has been linked to strong engagement with art and nature.

4. Increases in neuroticism and some challenging experiences

Not everyone had a good time microdosing. Some participants reported unpleasant and difficult experiences. In some cases, participants tried microdosing just once or twice, then didn’t want to continue.

Overall, participants reported a small increase in neuroticism after six weeks of microdosing, indicating an increase in the frequency of unpleasant emotions.

5. Changes do not entirely match people’s expectations

People have strong expectations about the effects of microdosing. But when we looked at the specific variables participants most expected would change, these didn’t match up with the changes actually reported by our microdosers.

Two of the biggest changes microdosers expected were increases in creativity and life satisfaction, but we found no evidence of shifts in these areas. This suggests the changes we found were not simply due to people’s expectations.

What does it all mean?

This complex set of findings is not what’s typically reported in media stories and online discussions of microdosing. There are promising indications of possible benefits of microdosing here, but also indications of some potential negative impacts, which should be taken seriously.


Read more: Opening up the future of psychedelic science


It’s important to remember this was an observational study that relied heavily on the accuracy and honesty of participants in their reports. As such, these results need to be treated cautiously.

It’s early days for microdosing research, and this work shows that we need to look more carefully at the effects of low-dose psychedelics on mental health, attention, and neuroticism.

Vince Polito, Postdoctoral Research Fellow in Cognitive Science, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Seeing Snakes in the Grass Helped Primates to Evolve


Phrynonax poecilonotus, Wikipedia


Lynne A Isbell | Aeon Ideas

Evolution has favoured the modification and expansion of primate vision. Compared with other mammals, primates have, for example, greater depth perception from having forward-facing eyes with extensively overlapping visual fields, sharper visual acuity, more areas in the brain that are involved with vision, and, in some primates, trichromatic colour vision, which enables them to distinguish red from green hues. In fact, what separates primates from other mammals most is their much greater reliance on vision as the main sensory interface with the environment.

Vision is a window onto the world, its qualities determined by natural selection and the constraints of both animals’ bodies and the environments in which they live. Despite their long, shared evolutionary history, mammals don’t all see the world in the same way because they inhabit a variety of niches with different selective pressures. What were those selective pressures for primates, our lineage, that led to their having visual systems more expansive and more complex than those of other mammals?

In 2006, I published a new idea that could answer that question and more: the ‘snake detection theory’. I hypothesised that when large-gaped constricting snakes appeared about 100 million years ago and began eating mammals, their predatory behaviour favoured the evolution of changes in the vision of one kind of prey, the lineage that was to become primates. In other words, the ability to see immobile predatory snakes before getting too close became a highly beneficial trait for them to have and pass on to their descendants. Then, about 60 million years ago, venomous snakes appeared in Africa or Asia, adding more pressure on primates to detect and avoid them. This has also had repercussions on their visual systems.

There is a consistency between the degree of complexity in primate visual systems and the length of evolutionary time that primates have spent with venomous snakes. At one extreme, the lineage that comprises Old World monkeys, apes and humans has the best vision of all primates, including excellent visual acuity and fully trichromatic colour vision. Having evolved roughly at the same time and in the same place as venomous snakes, these primates have had continuous coexistence with them. They are also uniformly wary of snakes.

At the opposite end of the spectrum, Malagasy primates have the simplest visual systems. Among other things, they have low visual acuity because the fovea, a depression in the retina that is responsible for our visual acuity wherever we focus our eyes, is poorly developed (when it’s present at all). Although Madagascar has constricting snakes, it has no venomous snakes, so primates on that island never had to face that particular selective pressure. Behavioural evidence also reveals that they don’t all react fearfully toward snakes. Some can even walk on snakes or snake models, treating them as if they’re just another branch.

The visual systems of New World monkeys are in the middle. They have better visual acuity than Malagasy primates but more variability in their visual systems than Old World monkeys. For example, New World howler monkeys are all trichromatic, but in other New World primate species, only some individuals are able to distinguish red from green hues. New World primates were originally part of the anthropoid primate lineage in Africa that also includes Old World monkeys and apes, and so had to deal with venomous snakes for about 20-25 million years, but then, some 36 million years ago, they left Africa and arrived in South America where venomous snakes were not present until roughly 15 million years later. By then, New World monkeys had begun to diversify into different genera, and so each genus evolved separate solutions to the renewed problem caused by the arrival again of venomous snakes. As far as I know, no other explanation for the variation in their visual systems exists.

Since I proposed the snake detection theory, several studies have shown that nonhuman and human primates, including young children and snake-naive infants, have a visual bias toward snakes compared with other animate objects, such as lizards, spiders, worms, birds and flowers. Psychologists have discovered that we pick out images of snakes faster or more accurately than other objects, especially under cluttered or obscuring conditions that resemble the sorts of environments in which snakes are typically found. Snakes also distract us from finding other objects as quickly. Our ability to detect snakes faster is also more pronounced when we have less time to detect them and when they are in our periphery. Moreover, our ‘primary visual area’ in the back of the brain shows stronger electrophysiological responses to images of snakes than of lizards 150-300 milliseconds after people see the images, providing a measurable physical correlate of our greater visual bias toward them.

Since vision is mostly in the brain, we need to turn to neuroscience to understand the mechanisms for our visual bias toward snakes. All vertebrates have a visual system that allows them to distinguish potential predators from potential prey. This is a nonconscious visual system that involves only subcortical structures, including those that in mammals are called the superior colliculus and the pulvinar, and it allows for very fast visual detection and response. When an animal sees a predator, this nonconscious visual system also taps directly into motor responses such as freezing and darting.

As vertebrates, mammals have this nonconscious visual system, but they have also incorporated vision into the neocortex. No other animals have a neocortex. This somewhat slower, conscious visual system allows mammals to become cognizant of objects for what they really are. The first neocortical stop is the primary visual area, which is particularly sensitive to edges and lines of different orientations.

In a breakthrough study, a team of neuroscientists probed the responses of individual neurons in the pulvinar of Japanese macaques as they were shown images of snakes, faces of monkeys, hands of monkeys, and simple geometric shapes. Sure enough, many pulvinar neurons responded more strongly and more quickly to snakes than to the other images. The snake-sensitive neurons were found in a subsection of the pulvinar that is connected to a part of the superior colliculus involved in defensive motor behaviour such as freezing and darting, and to the amygdala, a subcortical structure involved in mediating fear responses. Among all mammals, the lineage with the greatest evolutionary exposure to venomous snakes, the anthropoid monkeys, apes and humans, also have the largest pulvinar. This makes perfect sense in the context of the snake detection theory.

What is it about snakes that makes them so attention-grabbing to us? Naturally, we use all the cues available (such as body shape and leglessness) but it’s their scales that should be the most reliable, because a little patch of snake might be all we have to go on. Indeed, wild vervet monkeys in Africa, for instance, are able with their superb visual acuity to detect just an inch of snake skin within a minute of coming near it. In people, electrophysiological responses in the primary visual area reveal greater early visual attention to snake scales compared with lizard skins and bird feathers. Again, the primary visual area is highly sensitive to edges and lines of different orientations, and snake skins, with their many scales, offer these visual cues in spades.

The snake detection theory takes our seemingly contradictory attitudes about snakes and makes sense of them as a cohesive whole. Our long evolutionary exposure to snakes explains why ophiophobia is humanity’s most-reported phobia but also why our attraction and attention to snakes is so strong that we have even included them prominently in our religions and folklore. Most importantly, by recognising that our vision and our behaviour have been shaped by millions of years of interactions with another type of animal, we admit our close relationship with nature. We have not been above or outside nature as we might like to think, but have always been fully a part of it.


Lynne A Isbell is professor of anthropology at the University of California, Davis. She is the author of The Fruit, the Tree, and the Serpent: Why We See So Well (2009). She is interested in primate behaviour and ecology.

This article was originally published at Aeon and has been republished under Creative Commons. Visit the original article here.

Modern Technology is akin to the Metaphysics of Vedanta


Akhandadhi Das | Aeon Ideas

You might think that digital technologies, often considered a product of ‘the West’, would hasten the divergence of Eastern and Western philosophies. But within the study of Vedanta, an ancient Indian school of thought, I see the opposite effect at work. Thanks to our growing familiarity with computing, virtual reality (VR) and artificial intelligence (AI), ‘modern’ societies are now better placed than ever to grasp the insights of this tradition.

Vedanta summarises the metaphysics of the Upanishads, a clutch of Sanskrit religious texts, likely written between 800 and 500 BCE. They form the basis for the many philosophical, spiritual and mystical traditions of the Indian sub-continent. The Upanishads were also a source of inspiration for some modern scientists, including Albert Einstein, Erwin Schrödinger and Werner Heisenberg, as they struggled to comprehend quantum physics of the 20th century.

The Vedantic quest for understanding begins from what it considers the logical starting point: our own consciousness. How can we trust conclusions about what we observe and analyse unless we understand what is doing the observation and analysis? The progress of AI, neural nets and deep learning has inclined some modern observers to claim that the human mind is merely an intricate organic processing machine – and consciousness, if it exists at all, might simply be a property that emerges from information complexity. However, this view fails to explain intractable issues such as the subjective self and our experience of qualia, those aspects of mental content such as ‘redness’ or ‘sweetness’ that we experience during conscious awareness. Figuring out how matter can produce phenomenal consciousness remains the so-called ‘hard problem’.

Vedanta offers a model to integrate subjective consciousness and the information-processing systems of our body and brains. Its theory separates the brain and the senses from the mind. But it also distinguishes the mind from the function of consciousness, which it defines as the ability to experience mental output. We’re familiar with this notion from our digital devices. A camera, microphone or other sensors linked to a computer gather information about the world, and convert the various forms of physical energy – light waves, air pressure-waves and so forth – into digital data, just as our bodily senses do. The central processing unit processes this data and produces relevant outputs. The same is true of our brain. In both contexts, there seems to be little scope for subjective experience to play a role within these mechanisms.

While computers can handle all sorts of processing without our help, we furnish them with a screen as an interface between the machine and ourselves. Similarly, Vedanta postulates that the conscious entity – something it terms the atma – is the observer of the output of the mind. The atma possesses, and is said to be composed of, the fundamental property of consciousness. The concept is explored in many of the meditative practices of Eastern traditions.

You might think of the atma like this. Imagine you’re watching a film in the cinema. It’s a thriller, and you’re anxious about the lead character, trapped in a room. Suddenly, the door in the movie crashes open and there stands… You jump, as if startled. But what is the real threat to you, other than maybe spilling your popcorn? By suspending an awareness of your body in the cinema, and identifying with the character on the screen, you allow your emotional state to be manipulated. Vedanta suggests that the atma, the conscious self, identifies with the physical world in a similar fashion.

This idea can also be explored in the all-consuming realm of VR. On entering a game, we might be asked to choose our character or avatar – originally a Sanskrit word, aptly enough, meaning ‘one who descends from a higher dimension’. In older texts, the term often refers to divine incarnations. However, the etymology suits the gamer, as he or she chooses to descend from ‘normal’ reality and enter the VR world. Having specified our avatar’s gender, bodily features, attributes and skills, next we learn how to control its limbs and tools. Soon, our awareness diverts from our physical self to the VR capabilities of the avatar.

In Vedanta psychology, this is akin to the atma adopting the psychological persona-self it calls the ahankara, or the ‘pseudo-ego’. Instead of a detached conscious observer, we choose to define ourselves in terms of our social connections and the physical characteristics of the body. Thus, I come to believe in myself with reference to my gender, race, size, age and so forth, along with the roles and responsibilities of family, work and community. Conditioned by such identification, I indulge in the relevant emotions – some happy, some challenging or distressing – produced by the circumstances I witness myself undergoing.

Within a VR game, our avatar represents a pale imitation of our actual self and its entanglements. In our interactions with the avatar-selves of others, we might reveal little about our true personality or feelings, and know correspondingly little about others’. Indeed, encounters among avatars – particularly when competitive or combative – are often vitriolic, seemingly unrestrained by concern for the feelings of the people behind the avatars. Connections made through online gaming aren’t a substitute for other relationships. Rather, as researchers at Johns Hopkins University have noted, gamers with strong real-world social lives are less likely to fall prey to gaming addiction and depression.

These observations mirror the Vedantic claim that our ability to form meaningful relationships is diminished by absorption in the ahankara, the pseudo-ego. The more I regard myself as a physical entity requiring various forms of sensual gratification, the more likely I am to objectify those who can satisfy my desires, and to forge relationships based on mutual selfishness. But Vedanta suggests that love should emanate from the deepest part of the self, not its assumed persona. Love, it claims, is soul-to-soul experience. Interactions with others on the basis of the ahankara offer only a parody of affection.

As the atma, we remain the same subjective self throughout the whole of our life. Our body, mentality and personality change dramatically – but throughout it all, we know ourselves to be the constant observer. However, seeing everything shift and give way around us, we suspect that we’re also subject to change, ageing and heading for annihilation. Yoga, as systematised by Patanjali – an author or authors, like ‘Homer’, who lived in the 2nd century BCE – is intended to be a practical method for freeing the atma from relentless mental tribulation, and to be properly situated in the reality of pure consciousness.

In VR, we’re often called upon to do battle with evil forces, confronting jeopardy and virtual mortality along the way. Despite our efforts, the inevitable almost always happens: our avatar is killed. Game over. Gamers, especially pathological gamers, are known to become deeply attached to their avatars, and can suffer distress when their avatars are harmed. Fortunately, we’re usually offered another chance: Do you want to play again? Sure enough, we do. Perhaps we create a new avatar, someone more adept, based on the lessons learned last time around. This mirrors the Vedantic concept of reincarnation, specifically in its form of metempsychosis: the transmigration of the conscious self into a new physical vehicle.

Some commentators interpret Vedanta as suggesting that there is no real world, and that all that exists is conscious awareness. However, a broader take on Vedantic texts is more akin to VR. The VR world is wholly data, but it becomes ‘real’ when that information manifests itself to our senses as imagery and sounds on the screen or through a headset. Similarly, for Vedanta, it is the external world’s transitory manifestation as observable objects that makes it less ‘real’ than the perpetual, unchanging nature of the consciousness that observes it.

To the sages of old, immersing ourselves in the ephemeral world means allowing the atma to succumb to an illusion: the illusion that our consciousness is somehow part of an external scene, and must suffer or enjoy along with it. It’s amusing to think what Patanjali and the Vedantic fathers would make of VR: an illusion within an illusion, perhaps, but one that might help us to grasp the potency of their message.

Akhandadhi Das

This article was originally published at Aeon and has been republished under Creative Commons.

 

What Einstein Meant by ‘God Does Not Play Dice’

Einstein with his second wife Elsa, 1921. Wikipedia.

Jim Baggott | Aeon Ideas

‘The theory produces a good deal but hardly brings us closer to the secret of the Old One,’ wrote Albert Einstein in December 1926. ‘I am at all events convinced that He does not play dice.’

Einstein was responding to a letter from the German physicist Max Born. The heart of the new theory of quantum mechanics, Born had argued, beats randomly and uncertainly, as though suffering from arrhythmia. Whereas physics before the quantum had always been about doing this and getting that, the new quantum mechanics appeared to say that when we do this, we get that only with a certain probability. And in some circumstances we might get the other.

Einstein was having none of it, and his insistence that God does not play dice with the Universe has echoed down the decades, as familiar and yet as elusive in its meaning as E = mc². What did Einstein mean by it? And how did Einstein conceive of God?

Hermann and Pauline Einstein were nonobservant Ashkenazi Jews. Despite his parents’ secularism, the nine-year-old Albert discovered and embraced Judaism with some considerable passion, and for a time he was a dutiful, observant Jew. Following Jewish custom, his parents would invite a poor scholar to share a meal with them each week, and from the impoverished medical student Max Talmud (later Talmey) the young and impressionable Einstein learned about mathematics and science. He consumed all 21 volumes of Aaron Bernstein’s joyful Popular Books on Natural Science (1880). Talmud then steered him in the direction of Immanuel Kant’s Critique of Pure Reason (1781), from which he migrated to the philosophy of David Hume. From Hume, it was a relatively short step to the Austrian physicist Ernst Mach, whose stridently empiricist, seeing-is-believing brand of philosophy demanded a complete rejection of metaphysics, including notions of absolute space and time, and the existence of atoms.

But this intellectual journey had mercilessly exposed the conflict between science and scripture. The now 12-year-old Einstein rebelled. He developed a deep aversion to the dogma of organised religion that would last for his lifetime, an aversion that extended to all forms of authoritarianism, including any kind of dogmatic atheism.

This youthful, heavy diet of empiricist philosophy would serve Einstein well some 14 years later. Mach’s rejection of absolute space and time helped to shape Einstein’s special theory of relativity (including the iconic equation E = mc²), which he formulated in 1905 while working as a ‘technical expert, third class’ at the Swiss Patent Office in Bern. Ten years later, Einstein would complete the transformation of our understanding of space and time with the formulation of his general theory of relativity, in which the force of gravity is replaced by curved spacetime. But as he grew older (and wiser), he came to reject Mach’s aggressive empiricism, and once declared that ‘Mach was as good at mechanics as he was wretched at philosophy.’

Over time, Einstein evolved a much more realist position. He preferred to accept the content of a scientific theory realistically, as a contingently ‘true’ representation of an objective physical reality. And, although he wanted no part of religion, the belief in God that he had carried with him from his brief flirtation with Judaism became the foundation on which he constructed his philosophy. When asked about the basis for his realist stance, he explained: ‘I have no better expression than the term “religious” for this trust in the rational character of reality and in its being accessible, at least to some extent, to human reason.’

But Einstein’s was a God of philosophy, not religion. When asked many years later whether he believed in God, he replied: ‘I believe in Spinoza’s God, who reveals himself in the lawful harmony of all that exists, but not in a God who concerns himself with the fate and the doings of mankind.’ Baruch Spinoza, a contemporary of Isaac Newton and Gottfried Leibniz, had conceived of God as identical with nature. For this, he was considered a dangerous heretic, and was excommunicated from the Jewish community in Amsterdam.

Einstein’s God is infinitely superior but impersonal and intangible, subtle but not malicious. He is also firmly determinist. As far as Einstein was concerned, God’s ‘lawful harmony’ is established throughout the cosmos by strict adherence to the physical principles of cause and effect. Thus, there is no room in Einstein’s philosophy for free will: ‘Everything is determined, the beginning as well as the end, by forces over which we have no control … we all dance to a mysterious tune, intoned in the distance by an invisible player.’

The special and general theories of relativity provided a radical new way of conceiving of space and time and their active interactions with matter and energy. These theories are entirely consistent with the ‘lawful harmony’ established by Einstein’s God. But the new theory of quantum mechanics, which Einstein had also helped to found in 1905, was telling a different story. Quantum mechanics is about interactions involving matter and radiation, at the scale of atoms and molecules, set against a passive background of space and time.

Earlier in 1926, the Austrian physicist Erwin Schrödinger had radically transformed the theory by formulating it in terms of rather obscure ‘wavefunctions’. Schrödinger himself preferred to interpret these realistically, as descriptive of ‘matter waves’. But a consensus was growing, strongly promoted by the Danish physicist Niels Bohr and the German physicist Werner Heisenberg, that the new quantum representation shouldn’t be taken too literally.

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: ‘There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature.’ This vaguely positivist statement was echoed by Heisenberg: ‘[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.’ Their broadly antirealist ‘Copenhagen interpretation’ – denying that the wavefunction represents the real physical state of a quantum system – quickly became the dominant way of thinking about quantum mechanics. More recent variations of such antirealist interpretations suggest that the wavefunction is simply a way of ‘coding’ our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we’ve learned in the past to predict the future.

But this was utterly inconsistent with Einstein’s philosophy. Einstein could not accept an interpretation in which the principal object of the representation – the wavefunction – is not ‘real’. He could not accept that his God would allow the ‘lawful harmony’ to unravel so completely at the atomic scale, bringing lawless indeterminism and uncertainty, with effects that can’t be entirely and unambiguously predicted from their causes.

The stage was thus set for one of the most remarkable debates in the entire history of science, as Bohr and Einstein went head-to-head on the interpretation of quantum mechanics. It was a clash of two philosophies, two conflicting sets of metaphysical preconceptions about the nature of reality and what we might expect from a scientific representation of this. The debate began in 1927, and although the protagonists are no longer with us, the debate is still very much alive.

And unresolved.

I don’t think Einstein would have been particularly surprised by this. In February 1954, just 14 months before he died, he wrote in a letter to the American physicist David Bohm: ‘If God created the world, his primary concern was certainly not to make its understanding easy for us.’


Jim Baggott

This article was originally published at Aeon and has been republished under Creative Commons.

Food for Thought with Elon Musk

[00:00:00] 20k Not a Flamethrower sold in 4 days

[00:04:00] Boring company – why it started

[00:07:00] Boring company – how it started

[00:10:00] How Elon manages his time

[00:12:30] AI

[00:21:00] AI + Regulations

[00:24:30] AI – Neuralink: Major announcement in the coming months

[00:29:30] Whisky time

[00:34:00] Chimps and bonobos

[00:38:00] Social media and effects on people

[00:42:00] VR and simulation

[00:48:00] Simulations

[00:54:00] Checking out weapons

[00:55:00] Tesla time

[01:05:00] Tunnels and Flat Earth movement

[01:08:30] Tesla Roadster performance package (cold gas thrusters, 10k psi)

[01:10:00] Flying cars / magnet roads

[01:13:00] Planes

[01:21:00] Energy consumption

[01:26:00] Back on Tesla and cars – Roadster, Autopilot

[01:33:00] On people suing Tesla when they break a foot instead of dying

[01:45:30] Bottlenecks and what is holding Elon’s companies back

[01:53:00] What keeps you up at night? (Tesla bottlenecks + history of Tesla’s bottlenecks)

[02:00:00] Tesla Solar roof

[02:04:00] Tesla may make products for the house (A/C?)

[02:05:00] Watches

[02:10:00] Puffing on a joint

[02:15:30] What is the hardest part being you?

[02:28:00] Space colonization

[02:33:00] Twitter

[02:36:00] Ending


Thanks to Rooney for the timestamps and topic organization.

Why it’s only Science that can Answer all the Big Questions


An amplituhedron is a geometric structure introduced in 2013 by Nima Arkani-Hamed and Jaroslav Trnka. It enables simplified calculation of particle interactions in some quantum field theories. – Wikipedia

Peter Atkins | Aeon Ideas

Science has proved itself to be a reliable way to approach all kinds of questions about the physical world. As a scientist, I am led to wonder whether its ability to provide understanding is unlimited. Can it in fact answer all the great questions, the ‘big questions of being’, that occur to us?

To begin with, what are these big questions? In my view, they fall into two classes.

One class consists of invented questions that are often based on unwarranted extrapolations of human experience. They typically include questions of purpose and worries about the annihilation of the self, such as Why are we here? and What are the attributes of the soul? They are not real questions, because they are not based on evidence. Thus, as there is no evidence for the Universe having a purpose, there is no point in trying to establish its purpose or to explore the consequences of that purported purpose. As there is no evidence for the existence of a soul (except in a metaphorical sense), there is no point in spending time wondering what the properties of that soul might be should the concept ever be substantiated. Most questions of this class are a waste of time; and because they are not open to rational discourse, at worst they are resolved only by resort to the sword, the bomb or the flame.

The second class of big questions concerns features of the Universe for which there is evidence other than wish-fulfilling speculation and the stimulation provided by the study of sacred texts. They include investigations into the origin of the Universe, and specifically how it is that there is something rather than nothing, the details of the structure of the Universe (particularly the relative strengths of the fundamental forces and the existence of the fundamental particles), and the nature of consciousness. These are all real big questions and, in my view, are open to scientific elucidation.

The first class of questions, the inventions, commonly but not invariably begin with Why. The second class properly begin with How but, to avoid a lot of clumsy language, are often packaged as Why questions for convenience of discourse. Thus, Why is there something rather than nothing? (which is coloured by hints of purpose) is actually a disguised form of How is it that something emerged from nothing? Such Why questions can always be deconstructed into concatenations of How questions, and are in principle worthy of consideration with an expectation of being answered.

I accept that some will criticise me along the lines that I am using a circular argument: that the real big questions are the ones that can be answered scientifically, and therefore only science can in principle elucidate such questions, leaving aside the invented questions as intellectual weeds. That might be so. Publicly accessible evidence, after all, is surely an excellent sieve for distinguishing the two classes of question, and the foundation of science is evidence.

Science is like Michelangelo. The young Michelangelo demonstrated his skill as a sculptor by carving the ravishing Pietà in the Vatican; the mature Michelangelo, having acquired and demonstrated his skill, broke free of the conventions and created his extraordinary later quasi-abstractions. Science has trodden a similar path. Through its four centuries of serious endeavour, from Galileo onwards, when evidence was mingled with mathematics, and the extraordinary reticulation of concepts and achievements emerged, science has acquired maturity, and from the elucidation of simple observations it is now capable of dealing with the complex. Indeed, the emergence of computation as a component of the unfolding implications of theories and the detection of patterns in massive data sets has extended the reach of the rational and greatly enriched the scientific method by augmenting the analytic.

The triple-pronged armoury of science – the observational, the analytic and the computational – is now ready to attack the real big questions. They are, in chronological order: How did the Universe begin? How did matter in the Universe become alive? and How did living matter become self-conscious? When inspected and picked apart, these questions include many others, such as – in the first question – the existence of the fundamental forces and particles and, by extension, the long-term future of the Universe. It includes the not-so-little problem of the union of gravitation and quantum mechanics.

The second question includes not only the transition from inorganic to organic but details of the evolution of species and the ramifications of molecular biology. The third includes not merely our ability to cogitate and create but also the nature of aesthetic and moral judgment. I see no reason why the scientific method cannot be used to answer, or at least illuminate, Socrates’ question ‘How should we live?’ by appealing to those currently semi-sciences (the social sciences) including anthropology, ethology, psychology and economics. The cyclic raises its head here too, for it is conceivable that the limitations of consciousness preclude full comprehension of the deep structure of the fabric of reality, so perhaps in the third, arising as it does from the first, the first finds itself bounded. We are already seeing a hint of that with quantum mechanics, which is so far removed from common experience (I could add, as it maps on to our brains) that no one currently really understands it (but that has not inhibited our ability to deploy it).

The lubricant of the scientific method is optimism, optimism that given patience and effort, often collaborative effort, comprehension will come. It has in the past, and there is no reason to suppose that such optimism is misplaced now. Of course, foothills have given way to mountains, and rapid progress cannot be expected in the final push. Maybe effort will take us, at least temporarily, down blind alleys (string theory perhaps) but then the blindness of that alley might suddenly be opened and there is a surge of achievement. Perhaps whole revised paradigms of thought, such as those a century or so ago when relativity and quantum mechanics emerged, will take comprehension in currently unimaginable directions. Maybe we shall find that the cosmos is just mathematics rendered substantial. Maybe our comprehension of consciousness will have to be left to the artificial device that we thought was merely a machine for simulating it. Maybe, indeed, circularity again, only the artificial consciousness we shall have built will have the capacity to understand the emergence of something from nothing.

I consider that there is nothing that the scientific method cannot elucidate. Indeed, we should delight in the journey of the collective human mind in the enterprise we call science.

Peter Atkins

This article was originally published at Aeon and has been republished under Creative Commons.

What makes People distrust Science?


A Map of the Square and Stationary Earth by Professor Orlando Ferguson, South Dakota, 1893. Photo courtesy Wikipedia

Bastiaan T Rutjens | Aeon Ideas

Today, there is a crisis of trust in science. Many people – including politicians and, yes, even presidents – publicly express doubts about the validity of scientific findings. Meanwhile, scientific institutions and journals express their concerns about the public’s increasing distrust in science. How is it possible that science, the products of which permeate our everyday lives, making them in many ways more comfortable, elicits such negative attitudes among a substantial part of the population? Understanding why people distrust science will go a long way towards understanding what needs to be done for people to take science seriously.

Political ideology is seen by many researchers as the main culprit of science skepticism. The sociologist Gordon Gauchat has shown that political conservatives in the United States have become more distrusting of science, a trend that started in the 1970s. And a swath of recent research conducted by social and political psychologists has consistently shown that climate-change skepticism in particular is typically found among those on the conservative side of the political spectrum. However, there is more to science skepticism than just political ideology.

The same research that has observed the effects of political ideology on attitudes towards climate change has also found that political ideology is not that predictive of skepticism about other controversial research topics. Work by the cognitive scientist Stephan Lewandowsky, as well as research led by the psychologist Sydney Scott, observed no relation between political ideology and attitudes toward genetic modification. Lewandowsky also found no clear relation between political conservatism and vaccine skepticism.

So there is more that underlies science skepticism than just political conservatism. But what? It is important to systematically map which factors do and do not contribute to science skepticism and science (dis)trust in order to provide more precise explanations for why a growing number of individuals reject the notion of anthropogenic climate change, or fear that eating genetically modified products is dangerous, or believe that vaccines cause autism.

My colleagues and I recently published a set of studies that investigated science trust and science skepticism. One of the take-home messages of our research is that it is crucial not to lump various forms of science skepticism together. And although we were certainly not the first to look beyond political ideology, we did note two important lacunae in the literature. First, religiosity has so far been curiously under-researched as a precursor to science skepticism, perhaps because political ideology commanded so much attention. Second, current research lacks a systematic investigation into various forms of skepticism, alongside more general measures of trust in science. We attempted to correct both oversights.

People can be skeptical or distrusting of science for different reasons, whether it is about one specific finding from one discipline (for example, ‘The climate is not warming, but I believe in evolution’), or about science in general (‘Science is just one of many opinions’). We identified four major predictors of science acceptance and science skepticism: political ideology; religiosity; morality; and knowledge about science. These variables tend to intercorrelate – in some cases quite strongly – which means that they are potentially confounded. To illustrate, an observed relation between political conservatism and trust in science might in reality be caused by another variable, for example religiosity. Unless all of these constructs are measured simultaneously, it is hard to properly assess the predictive value of each.
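
To see why measuring the constructs simultaneously matters, here is a small simulated example, not the authors’ data or statistical model: religiosity is set up to drive distrust directly, while conservatism merely correlates with religiosity; the variable names and effect sizes are invented for the demonstration.

# A toy illustration of confounding with simulated data; not the authors'
# dataset, variables or statistical model.
import numpy as np

rng = np.random.default_rng(1)
n = 5000

religiosity = rng.normal(size=n)
# Conservatism correlates with religiosity but has no direct effect on trust here.
conservatism = 0.6 * religiosity + rng.normal(scale=0.8, size=n)
trust_in_science = -0.5 * religiosity + rng.normal(size=n)

def ols_coefficients(y, *predictors):
    """Ordinary least squares with an intercept; returns the predictor coefficients."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

# Measured alone, conservatism looks predictive (it is a proxy for religiosity)...
print("conservatism only:", ols_coefficients(trust_in_science, conservatism))
# ...but once religiosity is in the model, its coefficient shrinks toward zero.
print("both predictors:  ", ols_coefficients(trust_in_science, conservatism, religiosity))

Run alone, conservatism appears to predict trust in science; entered alongside religiosity, its coefficient shrinks toward zero – exactly the kind of confounding described above.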

So, we investigated the heterogeneity of science skepticism among samples of North American participants (a large-scale cross-national study of science skepticism in Europe and beyond will follow). We provided participants with statements about climate change (eg, ‘Human CO2 emissions cause climate change’), genetic modification (eg, ‘GM of foods is a safe and reliable technology’), and vaccination (eg, ‘I believe that vaccines have negative side effects that outweigh the benefits of vaccination for children’). Participants could indicate to what extent they agreed or disagreed with these statements. We also measured participants’ general faith in science, and included a task in which they could indicate how much federal money should be spent on science, compared with various other domains. We assessed the impact of political ideology, religiosity, moral concerns and science knowledge (measured with a science literacy test, consisting of true or false items such as ‘All radioactivity is made by humans’, and ‘The centre of the Earth is very hot’) on participants’ responses to these various measures.

Political ideology did not play a meaningful role when it came to most of our measures. The only form of science skepticism that was consistently more pronounced among the politically conservative respondents in our studies was, not surprisingly, climate-change skepticism. But what about the other forms of skepticism, or skepticism of science generally?

Skepticism about genetic modification was not related to political ideology or religious beliefs, though it did correlate with science knowledge: the worse people did on the scientific literacy test, the more skeptical they were about the safety of genetically modified food. Vaccine skepticism also had no relation to political ideology, but it was strongest among religious participants, with a particular relation to moral concerns about the naturalness of vaccination.

Moving beyond domain-specific skepticism, what did we observe about a general trust in science, and the willingness to support science more broadly? The results were quite clear: trust in science was by far the lowest among the religious. In particular, religious orthodoxy was a strong negative predictor of faith in science, and the orthodox participants were also the least positive about investing federal money in science. But notice that, here again, political ideology did not contribute any meaningful variance over and beyond religiosity.

From these studies there are a couple of lessons to be learned about the current crisis of faith that plagues science. Science skepticism is quite diverse. Further, distrust of science is not really that much about political ideology, with the exception of climate-change skepticism, which is consistently found to be politically driven. Additionally, these results suggest that science skepticism cannot simply be remedied by increasing people’s knowledge about science. The impact of scientific literacy on science skepticism, trust in science, and willingness to support science was minor, save for the case of genetic modification. Some people are reluctant to accept particular scientific findings, for various reasons. When the aim is to combat skepticism and increase trust in science, a good starting point is to acknowledge that science skepticism comes in many forms.

Bastiaan T Rutjens

This article was originally published at Aeon and has been republished under Creative Commons.

 

Seven Types of Atheism

But you will have gathered what I am driving at, namely, that it is still a metaphysical faith upon which our faith in science rests—that even we seekers after knowledge today, we godless anti-metaphysicians still take our fire, too, from the flame lit by a faith that is thousands of years old, that Christian faith which was also the faith of Plato, that God is the truth, that truth is divine. —But what if this should become more and more incredible, if nothing should prove to be divine any more unless it were error, blindness, the lie—if God himself should prove to be our most enduring lie?

– Friedrich Nietzsche, The Gay Science (1882)




From the provocative author of Straw Dogs comes an incisive, surprising intervention in the political and scientific debate over religion and atheism. A meditation on the importance of atheism in the modern world – and its inadequacies and contradictions – by one of Britain’s leading philosophers.

When you explore older atheisms, you will find that some of your firmest convictions―secular or religious―are highly questionable. If this prospect disturbs you, what you are looking for may be freedom from thought.

For a generation now, public debate has been corroded by a shrill, narrow derision of religion in the name of an often vaguely understood “science.” John Gray’s stimulating and enjoyable new book, Seven Types of Atheism, describes the complex, dynamic world of older atheisms, a tradition that is, he writes, in many ways intertwined with and as rich as religion itself.

Along a spectrum that ranges from the convictions of “God-haters” like the Marquis de Sade to the mysticism of Arthur Schopenhauer, from Bertrand Russell’s search for truth in mathematics to secular political religions like Jacobinism and Nazism, Gray explores the various ways great minds have attempted to understand the questions of salvation, purpose, progress, and evil. The result is a book that sheds an extraordinary light on what it is to be human.


“In former times, one sought to prove that there is no God—today one indicates how the belief that there is a God arose and how this belief acquired its weight and importance: a counter-proof that there is no God thereby becomes superfluous.—When in former times one had refuted the ‘proofs of the existence of God’ put forward, there always remained the doubt whether better proofs might not be adduced than those just refuted: in those days atheists did not know how to make a clean sweep.”

– Friedrich Nietzsche, Daybreak (1881)


See also: VICE Interviews John Gray about Seven Types of Atheism

Why Religion is not Going Away and Science will not Destroy It


At the Church of the Holy Saviour in Chora, Istanbul. Photo by Guillen Perez/Flickr

Peter Harrison | Aeon Ideas


In 1966, just over 50 years ago, the distinguished Canadian-born anthropologist Anthony Wallace confidently predicted the global demise of religion at the hands of an advancing science: ‘belief in supernatural powers is doomed to die out, all over the world, as a result of the increasing adequacy and diffusion of scientific knowledge’. Wallace’s vision was not exceptional. On the contrary, the modern social sciences, which took shape in 19th-century western Europe, took their own recent historical experience of secularisation as a universal model. At the core of the social sciences lay the assumption, sometimes stated as a prediction, that all cultures would eventually converge on something roughly approximating secular, Western, liberal democracy. Then something closer to the opposite happened.

Not only has secularism failed to continue its steady global march but countries as varied as Iran, India, Israel, Algeria and Turkey have either had their secular governments replaced by religious ones, or have seen the rise of influential religious nationalist movements. Secularisation, as predicted by the social sciences, has failed.

To be sure, this failure is not unqualified. Many Western countries continue to witness decline in religious belief and practice. The most recent census data released in Australia, for example, shows that 30 per cent of the population identify as having ‘no religion’, and that this percentage is increasing. International surveys confirm comparatively low levels of religious commitment in western Europe and Australasia. Even the United States, a long-time source of embarrassment for the secularisation thesis, has seen a rise in unbelief. The percentage of atheists in the US now sits at an all-time high (if ‘high’ is the right word) of around 3 per cent. Yet, for all that, globally, the total number of people who consider themselves to be religious remains high, and demographic trends suggest that the overall pattern for the immediate future will be one of religious growth. But this isn’t the only failure of the secularisation thesis.

Scientists, intellectuals and social scientists expected that the spread of modern science would drive secularisation – that science would be a secularising force. But that simply hasn’t been the case. If we look at those societies where religion remains vibrant, their key common features are less to do with science, and more to do with feelings of existential security and protection from some of the basic uncertainties of life in the form of public goods. A social safety net might be correlated with scientific advances but only loosely, and again the case of the US is instructive. The US is arguably the most scientifically and technologically advanced society in the world, and yet at the same time the most religious of Western societies. As the British sociologist David Martin concluded in The Future of Christianity (2011): ‘There is no consistent relation between the degree of scientific advance and a reduced profile of religious influence, belief and practice.’

The story of science and secularisation becomes even more intriguing when we consider those societies that have witnessed significant reactions against secularist agendas. India’s first prime minister Jawaharlal Nehru championed secular and scientific ideals, and enlisted scientific education in the project of modernisation. Nehru was confident that Hindu visions of a Vedic past and Muslim dreams of an Islamic theocracy would both succumb to the inexorable historical march of secularisation. ‘There is only one-way traffic in Time,’ he declared. But as the subsequent rise of Hindu and Islamic fundamentalism adequately attests, Nehru was wrong. Moreover, the association of science with a secularising agenda has backfired, with science becoming a collateral casualty of resistance to secularism.

Turkey provides an even more revealing case. Like most pioneering nationalists, Mustafa Kemal Atatürk, the founder of the Turkish republic, was a committed secularist. Atatürk believed that science was destined to displace religion. In order to make sure that Turkey was on the right side of history, he gave science, in particular evolutionary biology, a central place in the state education system of the fledgling Turkish republic. As a result, evolution came to be associated with Atatürk’s entire political programme, including secularism. Islamist parties in Turkey, seeking to counter the secularist ideals of the nation’s founders, have also attacked the teaching of evolution. For them, evolution is associated with secular materialism. This sentiment culminated in the decision in June 2017 to remove the teaching of evolution from the high-school classroom. Again, science has become a victim of guilt by association.

The US represents a different cultural context, where it might seem that the key issue is a conflict between literal readings of Genesis and key features of evolutionary history. But in fact, much of the creationist discourse centres on moral values. In the US case too, we see anti-evolutionism motivated at least in part by the assumption that evolutionary theory is a stalking horse for secular materialism and its attendant moral commitments. As in India and Turkey, secularism is actually hurting science.

In brief, global secularisation is not inevitable and, when it does happen, it is not caused by science. Further, when the attempt is made to use science to advance secularism, the results can damage science. The thesis that ‘science causes secularisation’ simply fails the empirical test, and enlisting science as an instrument of secularisation turns out to be poor strategy. The science and secularism pairing is so awkward that it raises the question: why did anyone think otherwise?

Historically, two related sources advanced the idea that science would displace religion. First, 19th-century progressivist conceptions of history, particularly associated with the French philosopher Auguste Comte, held to a theory of history in which societies pass through three stages – religious, metaphysical and scientific (or ‘positive’). Comte coined the term ‘sociology’ and he wanted to diminish the social influence of religion and replace it with a new science of society. Comte’s influence extended to the ‘young Turks’ and Atatürk.

The 19th century also witnessed the inception of the ‘conflict model’ of science and religion. This was the view that history can be understood in terms of a ‘conflict between two epochs in the evolution of human thought – the theological and the scientific’. This description comes from Andrew Dickson White’s influential A History of the Warfare of Science with Theology in Christendom (1896), the title of which nicely encapsulates its author’s general theory. White’s work, as well as John William Draper’s earlier History of the Conflict Between Religion and Science (1874), firmly established the conflict thesis as the default way of thinking about the historical relations between science and religion. Both works were translated into multiple languages. Draper’s History went through more than 50 printings in the US alone, was translated into 20 languages and, notably, became a bestseller in the late Ottoman empire, where it informed Atatürk’s understanding that progress meant science superseding religion.

Today, people are less confident that history moves through a series of set stages toward a single destination. Nor, despite its popular persistence, do most historians of science support the idea of an enduring conflict between science and religion. Renowned collisions, such as the Galileo affair, turned on politics and personalities, not just science and religion. Darwin had significant religious supporters and scientific detractors, as well as vice versa. Many other alleged instances of science-religion conflict have now been exposed as pure inventions. In fact, contrary to conflict, the historical norm has more often been one of mutual support between science and religion. In its formative years in the 17th century, modern science relied on religious legitimation. During the 18th and 19th centuries, natural theology helped to popularise science.

The conflict model of science and religion offered a mistaken view of the past and, when combined with expectations of secularisation, led to a flawed vision of the future. Secularisation theory failed at both description and prediction. The real question is why we continue to encounter proponents of science-religion conflict. Many are prominent scientists. It would be superfluous to rehearse Richard Dawkins’s musings on this topic, but he is by no means a solitary voice. Stephen Hawking thinks that ‘science will win because it works’; Sam Harris has declared that ‘science must destroy religion’; Steven Weinberg thinks that science has weakened religious certitude; Colin Blakemore predicts that science will eventually make religion unnecessary. Historical evidence simply does not support such contentions. Indeed, it suggests that they are misguided.

So why do they persist? The answers are political. Leaving aside any lingering fondness for quaint 19th-century understandings of history, we must look to the fear of Islamic fundamentalism, exasperation with creationism, an aversion to alliances between the religious Right and climate-change denial, and worries about the erosion of scientific authority. While we might be sympathetic to these concerns, there is no disguising the fact that they arise out of an unhelpful intrusion of normative commitments into the discussion. Wishful thinking – hoping that science will vanquish religion – is no substitute for a sober assessment of present realities. Continuing with this advocacy is likely to have an effect opposite to that intended.

Religion is not going away any time soon, and science will not destroy it. If anything, it is science that is subject to increasing threats to its authority and social legitimacy. Given this, science needs all the friends it can get. Its advocates would be well advised to stop fabricating an enemy out of religion, or insisting that the only path to a secure future lies in a marriage of science and secularism.

Peter Harrison

This article was originally published at Aeon and has been republished under Creative Commons.