Why atheists are also dualists (and therefore crypto-religious)

This is an extract that I edited out of my forthcoming book “In Our Own Image” (Rider Books, 2015); it is therefore free of copyright restrictions, and I am happy to share it with you.

Social anthropologist Pascal Boyer[1] explains that a belief in non-physical beings, or spirits, is the most common feature in religions. Spirits violate intuitive biological and physical knowledge; they can live forever, pass through walls or exist in many different places and times simultaneously. There are three other recurrent features in religious ideologies. Firstly, that a non-physical component of a person can survive after death and remain as a being with beliefs and desires. Secondly, that certain individuals receive direct inspiration or messages from supernatural agencies. And thirdly, that performing certain rituals in an exact way can bring about change in the natural world. Let us examine these three characteristics of religion separately, and how they result from the cognitive makeup of our modern minds.

We are born dualists, regardless of our beliefs. (Image courtesy of radiologist Andrew Newberg.)

Even self-declared atheists honour the dead. And although they might claim that they honour the “memory” of the deceased rather than the corpse itself, the fact is that our minds – atheistic or not – find it hard to come to terms with the notion that a person actually “dies”. To our minds there is always something about a person that survives his or her death. One may choose to call it “memory” or “soul”, depending on one’s convictions. However we approach the end of life, our innate inability to deal with death compels us to invent something that does not perish even after the body dies.

This should not surprise us. Death wreaks havoc with our cognitive systems. Death severs social connections and, in consequence, destroys the cognitive social map that makes us who we are. It replaces people we love and care for with empty holes. It hacks at emotional bonds forged over long periods of time. The loss of a loved one is the virtual destruction of our mental universe. The intensity of the feeling of loss is usually proportional to the degree of connection with the deceased: the more distant the deceased from our inner social network, the lesser our loss, our sadness, our devastation. Although we may feel shocked and saddened when thousands are killed in a distant country by natural disaster or war, we never feel sad in the way we do when a person close to us dies[2].

Being social primates whose survival depends on a closely knit group of blood relatives, our modern minds had to invent a way to overcome the death of loved ones. So we invented the notion that life continues beyond death; that the dead live on in the hereafter. Indeed, life beyond death is an invention of the modern mind. The earliest known human burials date back around 100,000 years. Human skeletal remains discovered in the Skhul and Qafzeh caves in Israel were stained with red ochre. Around them were several goods designed to escort the dead into the life beyond; the mandible of a wild boar was found in the arms of one of the skeletons. No human burials have been discovered prior to that date[3]. Once the concepts of an afterlife and the survival of the soul had been conceived, logic kicked in to produce numerous new ideas relating to these notions. Access to this invisible realm was the privilege of the very few or, under very special circumstances, of the many. The few would be called shamans – and later priests, prophets, messiahs or gurus. The special circumstances would occur during communal rituals, where rhythmic dances or the use of hallucinogenic drugs induced ecstatic states of mind[4]. Shamans and hallucinations further reinforced beliefs in non-physical beings. The seeds of religion were sown.

The Sorcerer: a Palaeolithic painting depicting a mythical chimera, or possibly a shaman.

A cave painting of the Upper Palaeolithic from the cavern of Trois-Frères in Ariège, France probably shows one of those “special individuals”. The painting was made around 13,000 years ago and depicts a being with an upright posture and hands that look human, the back and ears of a herbivore, the antlers of a reindeer, the tail of a horse and a phallus positioned like that of a feline. The “Sorcerer” – as the painted figure is known – is interpreted by scientists as some kind of great spirit, or master of animals. The French archaeologist Henri Breuil suggested[5] that the painting depicts a shaman performing a ritual.

Most archaeologists are convinced that painted prehistoric caves were sites for the practice of magical ceremonies. This may also explain one of the greatest puzzles about prehistoric cave paintings: why were they made in the darkest of places, in the deepest of caves, in spaces where people did not live? By 35,000 years ago humans had mostly abandoned cave dwelling. We had ceased to be “cavemen”. Most lived in small, makeshift camps befitting a species of nomadic hunter-gatherers. Nevertheless, the beautiful, naturalistic cave art of the Upper Palaeolithic is a profound testament that caves remained a focal point of human life for thousands of years to come. For millennia humans continued to use these caves for purposes other than dwelling. But what were they doing in there? Before we begin to speculate, let us look at some interesting facts.

From Gibraltar to the Ural Mountains archaeologists have discovered around 150 caves with wall paintings. Given the enormous time span from the beginnings of cave art to the end of the Upper Palaeolithic around 12,000 years ago, these caves must have been used only sporadically. On average, for the whole of Europe, there was one painted cave for every five generations of people. Even the most utilized caves seem to have been in use for only a restricted time, by a limited number of people. Astonishingly, there is a thematic similarity across all of these disparate caves: big animals, few humans and many geometric designs recur in almost every one. It is as if a common set of beliefs spanned peoples who lived thousands of miles, and tens of thousands of years, apart.

There are other common features as well. Most paintings are placed in the deepest and darkest of places. At Niaux most images are located at the end of a deep gallery. At Chauvet one must descend a narrow shaft. At Lascaux one has to follow several passages, with wall paintings continuing to the very end of the caves, several metres deep. These passages and caverns yield few finds of human debris, suggesting that no one lived there on a permanent basis. Engravings tucked away in narrow or low niches suggest individual devotions. Footprints of adults, adolescents and children suggest that dances were performed. All the evidence points to the conclusion that in these deep, dark, underground places people gathered for special occasions only.

We cannot possibly know what went on in there, but many archaeologists believe that our forefathers performed some kind of religious rituals. Reverence for caves echoes down the ages to our own day, and may indeed have its roots in prehistory. The ancient Greeks believed that caves connected to the underworld; the Eleusinian mysteries were performed in caves. Persians and Romans worshipped Mithras, the god of light, in caves; a tradition emulated by the first Christians, who worshipped in catacombs. Later Christian iconography depicts Jesus born in a cave. The relationship between caves and religion is not an exclusively western phenomenon: for the Incas caves were places of emergence and origin; for the Maya a conduit to the other world. One of the holiest shrines in Hinduism is the Amarnath cave in Jammu and Kashmir, dedicated to Shiva.

The combination of dances and paintings reveals something equally important to the birth of religion: the manifestation and communion of narratives. I can imagine people in those underground caves telling stories about hunts and hero-hunters who transformed into animals, of supernatural beings, of the creation of the cosmos. Perhaps those rituals were somewhat like a Palaeolithic movie-theatre-cum-church: a shaman, torch in hand, leading a procession of the faithful into the cave’s mystical innards, stopping under a mural of lions and horses to re-enact a magical story.

These stories must have passed from mouth to mouth, travelling across space and time, finally arriving at the dawn of the agricultural revolution around 10,000 years ago. The prehistoric stories, retold numerous times, were now transformed, but ever so slightly. The magical beasts remained; and so did the heroes, although some of them were elevated to gods. Neolithic man – the farmer, the soldier, the priest, the king – edited the stories of his Palaeolithic past using the same cognitive apparatus: the modern mind, with its wired-in dualism. Complex agricultural and proto-industrial civilizations reconfigured the rituals of communal dancing and ecstasy into the religious narratives of Ancient Egypt, of Mesopotamia, of Greece. It would take another book to trace these stories into the narratives of the Abrahamic religions. But this is a book about the human mind, and how it can be reproduced in a machine; about why we thought of intelligent machines in the first place, and why we have woven so many stories about them. Before we draw some crucial conclusions about the questions that concern us from the archaeological evidence regarding the big bang of the modern mind, let us return to the human brain for a moment and examine why it so profoundly loves making up stories.

[1] Boyer, P. (1994), The Naturalness of Religious Ideas: A Cognitive Theory of Religion, Berkeley: University of California Press.

[2] A frequent exception to this rule occurs at the death of a beloved celebrity. Even without any direct relationship to the deceased, many people (fans) feel genuine grief. This phenomenon is probably an indication of the power of empathy to transcend blood ties: a celebrity becomes one of “us”. Witness the genuine sorrow felt by millions of people at the death of Princess Diana.

[3] There is disputed evidence that the Neanderthals buried their dead in shallow graves; however, this does not necessarily prove a belief in an afterlife: they might have buried them for sanitary reasons only.

[4] Interestingly, modern neuroimaging research into altered states of mind shows that during ecstasy the limbic system of the brain takes over. It is as if our modern mind is disconnected and we re-experience the minds of our distant ape ancestors, in which the “self” dissolves and we feel “part of the whole cosmos”.

[5] Breuil, H. (1954), Quatre cents siècles d’art pariétal, p. 166, Montignac.

Gerald Edelman and AI

Gerald Edelman (1929-2014)

Gerald Edelman passed away on May 17, 2014 in La Jolla, California. In 1972 he won the Nobel Prize (together with Rodney Porter) for solving the structure of antibodies and explaining how the immune system functions. His research into antibodies led him to realize the enormous explanatory potential of selective-recognition systems. I had read most of his books before meeting him in person in Tucson, Arizona, during the World Conference on Consciousness in 2004. In his smart suit, this tall, radiantly intelligent and witty man explained to his audience how his work on the immune system could provide an explanation for consciousness.

Basically, Edelman discovered that we have a great number of structurally different antibodies in our bodies. When a bacterium or a virus enters the body, these antibodies (also called “immunoglobulins”) rush towards it and test how well their structures “match” those of the intruder. This structural variability lies at the heart of antibody-based recognition. Edelman noticed that the adaptive immune response has all the hallmarks of an evolutionary process: the antibody repertoire “evolves” very quickly in order to adapt to the bacterial or viral attack, much as a species adapts to environmental pressure.
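To make this selective dynamic concrete, here is a minimal sketch of clonal selection, the evolutionary process Edelman recognized in the adaptive immune response. Everything in it – pattern lengths, population sizes, mutation rates – is an illustrative assumption of mine, not a detail from Edelman’s work: antibodies are modelled as bit strings, the best matches against an “antigen” are selected and cloned, and the weaker the match, the more the clones mutate.

```python
# A minimal, hypothetical sketch of clonal selection.
# All parameters are invented for illustration.
import random

PATTERN_LEN = 32        # length of the binary "shape" of antigen and antibodies
POP_SIZE = 50           # size of the antibody repertoire
GENERATIONS = 30
CLONES_PER_WINNER = 5

def affinity(antibody, antigen):
    """How well an antibody 'matches' the intruder: the count of matching bits."""
    return sum(a == b for a, b in zip(antibody, antigen))

def mutate(antibody, rate):
    """Hypermutation: flip each bit with the given probability."""
    return [1 - b if random.random() < rate else b for b in antibody]

random.seed(42)
antigen = [random.randint(0, 1) for _ in range(PATTERN_LEN)]
repertoire = [[random.randint(0, 1) for _ in range(PATTERN_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: the best-matching antibodies proliferate...
    repertoire.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
    winners = repertoire[:POP_SIZE // CLONES_PER_WINNER]
    # ...and their clones mutate more the weaker their match.
    repertoire = [mutate(w, rate=1.0 - affinity(w, antigen) / PATTERN_LEN)
                  for w in winners for _ in range(CLONES_PER_WINNER)]

best = max(affinity(ab, antigen) for ab in repertoire)
print(f"Best affinity after {GENERATIONS} generations: {best}/{PATTERN_LEN}")
```

Within a few dozen generations the repertoire converges on the antigen’s shape: adaptation to the attacker achieved by variation and selection alone, inside a single organism.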

Edelman posited that this evolutionary biological mechanism could also explain consciousness. Two significant discoveries strengthened his hypothesis: firstly, that cortical neurons are organized in discrete groups of cells; secondly, that synapses strengthen through use. Edelman theorized that our brain manages to recognize and process information thanks to selection acting on neuron groups that differ in their connectivity patterns. Several groups of cells would respond to incoming sensory information; repeated recognition would modify their response, strengthening, abstracting and associating their connectivity. Edelman was in fact describing a cybernetic system with multiple positive feedback loops (he called them “re-entry” loops). Recent research by Stanislas Dehaene on the neural correlates (or “signatures”) of consciousness has shown that this re-entry mechanism is fundamental to how groups of cells respond to sensory information, and to how a local recognition event becomes global (i.e. brain-wide).
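A toy simulation may help to fix the idea. The sketch below is my own loose, hypothetical rendering of neuronal group selection, not Edelman’s actual model: competing groups of units respond to a stimulus, re-entrant feedback amplifies the emerging winners, and use-dependent strengthening biases future competitions towards the selected group.

```python
# A toy, hypothetical sketch of neuronal group selection with re-entry.
# Group counts, gains and learning rates are invented for illustration.
import random

N_GROUPS = 8
random.seed(1)

# Each group has a random initial "tuning" to the stimulus and a synaptic
# weight that changes through use (synapses strengthen when they are used).
tuning = [random.random() for _ in range(N_GROUPS)]
weight = [1.0] * N_GROUPS

def present(stimulus, n_reentry=3):
    """One episode: groups respond, re-entry amplifies, use strengthens."""
    response = [t * stimulus * w + random.gauss(0, 0.1)
                for t, w in zip(tuning, weight)]
    for _ in range(n_reentry):
        total = sum(response)
        # Re-entry: activity is fed back, boosting groups that agree with
        # the emerging global pattern (a positive feedback loop).
        response = [r + 0.1 * r * total for r in response]
    winner = max(range(N_GROUPS), key=lambda i: response[i])
    weight[winner] *= 1.2   # the selected group's synapses strengthen
    return winner

# Early episodes wander; repeated use entrenches the selection of one group.
for episode in range(10):
    print(f"episode {episode}: selected group {present(stimulus=1.0)}")
```

Crude as it is, the sketch shows the logic: recognition is not programmed in, it is selected for; and re-entry is what turns scattered local responses into a single global outcome.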

A Darwin robot

To demonstrate his theory, Edelman and his colleagues built a number of “noetic machines” he called “Darwins”, or “brain-based devices” (BBDs). Built around a model of the neural connectivity of a simple brain, a Darwin would “discover” the world around it, much as an animal would.

Edelman’s robotic research has received very little notice, or appreciation, from the mainstream AI and robotics research community. The reason is that the mainstream abandoned long ago the original goal of AI, which was to build a conscious machine. Since the 1980s AI research (as well as research into autonomous robots) has focused on practical applications where pattern recognition and matching are paramount: driverless cars, image and speech recognition, medical diagnosis, and so on. This is where the money is nowadays, as Google’s recent acquisition of DeepMind has amply demonstrated.

Edelman’s idea of simulating the brain in robots is close kin to neuromorphic computer technologies, still in their infancy. And yet – as I will aim to demonstrate in my forthcoming book “In Our Own Image” (Rider Books, 2015) – conscious machines can only evolve with a computer architecture like the one implemented in Edelman’s Darwins. Current AI and robotics will never be able to produce a self-aware machine. The breakthrough will come from genetics and a deeper understanding of developmental biology. How does a cell divide and develop into a nervous system? How does a nervous system differentiate across species? How does it develop into a brain? How does this brain communicate with the whole body, processing internal as well as external sensory information? How do brain cells adapt and modulate? Answers to these questions will come from biology and neuroscience. When we have the answers, we will have cracked the mechanism that Edelman hypothesized. And then his curious Darwins may be remembered as the “amoebas”, the protozoa, of a new line of evolution on Earth: that of the intelligent machines.

Hybrid thinking: an idea from Ray Kurzweil

In his March 20, 2014 TED talk, Ray Kurzweil suggested that a couple of decades from now we will be able to increase our neocortex’s power many-fold, in an instant, by accessing the processing power of the cloud. He suggested that this will be possible thanks to nanorobots injected into our brains, which will act as interfaces between neocortical brain cells and digital “brain cells” able to scale as a cloud application would. It is a very intriguing idea. So let us examine its premises and its consequences.

Kurzweil’s principal assumption is that the medium is not important when it comes to thinking: biological brain cells are equivalent to digital brain cells, or to brain cells made of water pipes, as long as the function is the same. This functional perspective on intelligence is one that I agree with. Like Kurzweil, I see no reason why one cannot have an artificial brain cell. Indeed, McCulloch and Pitts showed as early as 1943 that networks of simple formal neurons can compute anything a Turing machine can.
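For the curious, a McCulloch-Pitts neuron is simple enough to sketch in a few lines of Python. The weights and thresholds below are chosen by hand for illustration: a single unit that fires when the weighted sum of its binary inputs reaches a threshold already implements elementary logic gates, and such gates compose into arbitrary Boolean circuits, which is the ground of the Turing-equivalence claim (given unbounded memory).

```python
# A McCulloch-Pitts formal neuron (1943): fires (1) if and only if the
# weighted sum of its binary inputs reaches a threshold.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    return int(sum(i * w for i, w in zip(inputs, weights)) >= threshold)

# Single neurons already implement logic gates...
AND = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], weights=[1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    weights=[-1],   threshold=0)

# ...and gates compose into any Boolean circuit, for example XOR:
XOR = lambda a, b: AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {XOR(a, b)}")
```

This is, of course, a far cry from a biological neuron; it is merely the functional equivalence that Kurzweil’s argument leans on.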

Nevertheless, having a digital equivalent of the brain’s basic unit of information processing does not in any way solve the problem of global information processing in the brain as a whole, let alone the problem of a brain becoming aware of itself. Global information processing in the brain is the result of highly complex structures with multi-level positive feedback loops. Our brains are not mere collections of brain cells, but complex cybernetic systems whose fundamental circuitry, although evolved over millennia, changes with every second of our lives. This plasticity of the brain, coupled with its self-corrective mechanisms, poses a fundamental technological problem when it comes to devising smart algorithms for intelligence, like the text-recognition algorithm that Kurzweil alluded to in his TED talk.

Accessing a “cloud of neocortex” and endowing oneself with “hybrid thinking” requires that we solve two major problems. First, we must crack how human developmental biology encodes the complexity of the brain, and then decode this in order to produce artificial intelligence. I believe this is possible, but that it will take much longer than Kurzweil predicts. It will also be a matter for biology to understand, rather than computer science. Perhaps in a few years we will see a new discipline called “biocomputer science” that decodes brain structures into alternative, non-biological media.

The second problem we need to crack for hybrid thinking is the ability to scale artificial intelligence at will. I believe this problem is unsolvable, because it involves a contradiction. The key word here is “will”: whose will, exactly? Assuming we discover how to interface our brain with an artificial brain, wouldn’t this mean the interfacing of two different personalities? Hybrid thinking therefore seems deeply problematic, because sharing someone else’s consciousness (in this case a machine’s) would lead to psychosis.

Hybrid thinking looks like a synonym for digital schizophrenia.