English Language and Science: a lecture and discussion

I was honoured to be a participant in a public discussion about Science and the English Language, held at the English Speaking Union in London on 13th November 2014. My co-panelists were astrophysicist Roberto Trotta and geneticist Aarathi Prasad. The discussion followed a lecture by Trotta on his book “The Edge of the Sky”. The event was co-organised by the British Council.

Gerald Edelman and AI


Gerald Edelman (1929-2014)

Gerald Edelman passed away on May 17, 2014 in La Jolla, California. In 1972 he won the Nobel Prize (together with Rodney Porter) for solving the structure of antibodies and explaining how the immune system functions. His research into antibodies led him to realize the enormous explanatory potential of selective-recognition systems. I had read most of his books before meeting him in person in Tucson, Arizona, during the World Conference on Consciousness in 2004. In his smart suit, this tall, radiantly intelligent and witty man explained to his audience how his work on the immune system could provide an explanation for consciousness.

Basically, Edelman discovered that we have a great number of structurally different antibodies (also called “immunoglobulins”) in our body. When a bacterium or a virus enters our body, these antibodies rush towards the intruders and test how well their structures “match” those of the intruders. This structural variability lies at the heart of antibody-based recognition. Edelman noticed that the adaptive immune response had all the hallmarks of an evolutionary process: the antibody recognition system “evolved” very quickly in order to adapt to the bacterial or viral attack, much as a species adapts to environmental pressure.
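To make the selection idea concrete, here is a minimal, purely illustrative sketch in Python of that kind of selective recognition: candidate antibodies that happen to “match” an intruder best are cloned with small variations, and the repertoire rapidly adapts. The bit-string shapes, the matching score and all the numbers are invented for the example; this is a caricature of clonal selection, not a model of real immunology.

```python
import random

PATTERN_LEN = 16  # a toy "molecular shape" encoded as a bit string

def random_shape():
    return [random.randint(0, 1) for _ in range(PATTERN_LEN)]

def affinity(antibody, antigen):
    # how well this antibody "matches" the intruder: count of agreeing bits
    return sum(a == b for a, b in zip(antibody, antigen))

def mutate(antibody, rate=0.1):
    # copy with small random variations (very loosely, somatic hypermutation)
    return [1 - bit if random.random() < rate else bit for bit in antibody]

def clonal_selection(antigen, pop_size=50, generations=30):
    repertoire = [random_shape() for _ in range(pop_size)]  # diverse starting repertoire
    for _ in range(generations):
        # selection: the best matchers are kept and cloned with variation,
        # the worst matchers are discarded
        repertoire.sort(key=lambda ab: affinity(ab, antigen), reverse=True)
        best = repertoire[: pop_size // 5]
        clones = [mutate(ab) for ab in best for _ in range(4)]
        repertoire = best + clones[: pop_size - len(best)]
    return max(affinity(ab, antigen) for ab in repertoire)

antigen = random_shape()
print("best match after selection:", clonal_selection(antigen), "out of", PATTERN_LEN)
```

Run a few generations and the best match climbs quickly towards a perfect score, which is the whole point of a selective-recognition system: no antibody “knows” the intruder in advance, yet the population as a whole finds it.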

Edelman posited that this evolutionary biological mechanism could also explain consciousness. Two significant discoveries strengthened his hypothesis: firstly, that cortical neurons are organized in discrete groups of cells; secondly, that synapses strengthen through use. Edelman theorized that our brain manages to recognize and process information thanks to selection on neuronal groups that differ in their connectivity patterns. Several groups would respond to incoming sensory information; their response would be modified by repetitive recognition, which would strengthen, abstract and associate their connectivity. Edelman was in fact describing a cybernetic system with multiple positive feedback loops (he called them “re-entry” loops). Recent research by Stanislas Dehaene on the neural correlates (or “signatures”) of consciousness has shown that this re-entry mechanism is fundamental to how groups of cells respond to sensory information, and to how a local recognition event becomes global (i.e. whole-brain).
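The gist of that mechanism can be caricatured in a few lines of code. The sketch below is a toy under invented assumptions (the number of groups, the learning rate, the strength of the feedback): several neuronal groups with different random connectivity respond to a stimulus, part of each group’s previous activity is fed back into its next response (a crude stand-in for re-entry), and the strongest responder has its connections strengthened by use, so that with repeated presentations one group comes to “own” the stimulus.

```python
import random

N_GROUPS, INPUT_DIM = 8, 12  # invented numbers of neuronal groups / input size

def new_group():
    # each group starts with its own random connectivity pattern
    return [random.uniform(-1, 1) for _ in range(INPUT_DIM)]

def response(group, stimulus):
    return sum(w * s for w, s in zip(group, stimulus))

def present(groups, states, stimulus, feedback=0.6, lr=0.2):
    # re-entry (crudely): part of each group's previous activity feeds back
    # into its current response, forming a positive feedback loop
    acts = [response(g, stimulus) + feedback * s for g, s in zip(groups, states)]
    # selection: the strongest responder has its connections strengthened by use
    winner = max(range(len(groups)), key=lambda i: acts[i])
    groups[winner] = [w + lr * s for w, s in zip(groups[winner], stimulus)]
    return winner, acts

groups = [new_group() for _ in range(N_GROUPS)]
states = [0.0] * N_GROUPS
stimulus = [random.uniform(0, 1) for _ in range(INPUT_DIM)]
for t in range(15):
    winner, states = present(groups, states, stimulus)
    print(f"presentation {t}: group {winner} responds most strongly")
```

Nothing in this toy is conscious, of course; it only illustrates how selection plus re-entrant feedback can turn an initially arbitrary response into a stable, strengthened one.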

A Darwin robot


To demonstrate his theory, Edelman and his colleagues built a number of “noetic machines” he called “Darwins”, or “brain-based devices” (BBDs). Built around a model of the neural connectivity of a simple brain, a Darwin would “discover” the world around it, as an animal would.

Edelman’s robotic research has received very little notice, or appreciation, from the mainstream AI and robotics research community. The reason for this is that the mainstream abandoned long ago the original goal of AI, which was to build a conscious machine. Since the 1980s AI research (as well as autonomous robotics research) has focused on practical applications where pattern recognition and matching is paramount: for instance driverless cars, image and speech recognition, medical diagnosis, etc. This is where the money is nowadays, as the recent acquisition of DeepMind by Google has amply demonstrated.

Edelman’s idea of simulating the brain in robots is close kin to neuromorphic computer technologies, still in their infancy. And yet – as I will aim to demonstrate in my forthcoming book “In Our Own Image” (Rider Books, due in 2015) – conscious machines can only evolve with a computer architecture like the one implemented in Edelman’s Darwins. Current AI and robotics will never be able to produce a self-aware machine. The breakthrough will come from genetics and a deeper understanding of developmental biology. How does a cell divide and develop into a nervous system? How does a nervous system differentiate across species? How does it develop into a brain? How does this brain communicate with the whole body, processing internal as well as external sensory information? How do brain cells adapt and modulate? Answers to these questions will come from biology and neuroscience. When we have the answers, we will have cracked the mechanism that Edelman hypothesized. And then his curious Darwins may be remembered as the “amoebas”, the protozoa, of a new line of evolution on Earth: that of the intelligent machines.

Hybrid thinking: an idea from Ray Kurzweil

In his March 20, 2014 TED talk, Ray Kurzweil suggested that a couple of decades from now we will be able to increase our neocortex’s power many-fold, in an instant, by accessing the processing power of the cloud. He suggested that this will be possible thanks to nanorobots injected into our brains, which will act as interfaces between neocortical brain cells and digital “brain cells” able to scale as a cloud application would. It is a very intriguing idea. So let us examine its premises and its consequences.

Kurzweil’s principal assumption is that the medium is not important when it comes to thinking: biological brain cells are equivalent to digital brain cells, or to brain cells made of water pipes, as long as the function is the same. This functional perspective on intelligence is one that I agree with. Like Kurzweil, I too see no reason why one cannot have an artificial brain cell. Indeed, McCulloch and Pitts showed that networks of idealized neurons can, in principle, compute anything a Turing machine can.
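For what it is worth, the McCulloch-Pitts result is really about networks of idealized threshold units rather than single cells. A minimal illustration, with weights and thresholds chosen by hand purely for the example, might look like this: three such units composed into a small network that computes XOR, something no single unit can do on its own.

```python
def mp_neuron(inputs, weights, threshold):
    # a McCulloch-Pitts unit: fire (1) if the weighted sum reaches the threshold
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor(a, b):
    # compose three threshold units into a small network computing XOR
    or_gate = mp_neuron([a, b], [1, 1], 1)
    nand_gate = mp_neuron([a, b], [-1, -1], -1)
    return mp_neuron([or_gate, nand_gate], [1, 1], 2)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

The unit itself is trivially artificial; it is the wiring between units that does the computing, which is exactly where the next difficulty lies.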

Nevertheless, having a digital equivalent of the basic unit of information processing in the brain does not in any way solve the problem of global information processing in the brain as a whole, let alone the problem of a brain becoming aware of itself. Global information processing in the brain is the result of highly complex structures with multi-level positive feedback loops. Our brains are not a mere collection of brain cells, but complex cybernetic systems whose fundamental circuitry, although evolved over millennia, changes with every second of our lives. This plasticity of the brain, coupled with its self-corrective mechanisms, poses a fundamental technological problem when it comes to devising smart algorithms for intelligence, like the text-recognition algorithm that Kurzweil alluded to in his TED talk.

Accessing a “cloud of neocortex” and endowing oneself with “hybrid thinking” requires that we solve two major problems. The first is to crack how human developmental biology encodes the complexity of the brain, so that we can then decode it in order to produce artificial intelligence. I believe that this is possible, but that it will take much longer than Kurzweil predicts. It will also be a matter for biology to solve, rather than computer science. Perhaps in a few years we will see a new discipline called “biocomputer science”, which will decode brain structures into alternative, non-biological media.

The second problem that we need to crack for hybrid thinking is the ability to scale artificial intelligence at will. I believe that this problem is unsolvable because it involves a contradiction. The key word here is “will”: whose will, exactly? Assuming that we discover how to interface our brain with an artificial brain, wouldn’t this mean interfacing two different personalities? Hybrid thinking therefore seems deeply problematic, because sharing someone else’s consciousness (in this case a machine’s) would lead to psychosis.

Hybrid thinking starts to look like a synonym for digital schizophrenia.