Writing a cybernetic novel

The Island Survival Guide and narrative reflexivity

I would like to define a cybernetic novel as one that writes itself, or one where the reader is also the narrator: a novel that possesses self-reflexivity. I made the sketch (see above) some time ago while thinking about my novel “The Island Survival Guide”. When I say “I thought about my novel” I mean as a reader, not as a writer. In fact, cybernetic writing blurs the distinction between writer and reader, and finally breaks it down completely: the writer is the reader who is the writer, and so on. As it does so it also undermines and destroys a more significant dichotomy, the difference between the narrative (object) and the narrator/reader (subject). The two become one, each reflected in the other. This is of course a logical paradox. Cybernetic writing is a logical paradox based on reflexivity.

The paradox of narrative reflexivity is what makes a cybernetic novel what it is: it creates an escape hatch, or a quantum wormhole, connecting two universes that exist in different dimensions. The mind is free to travel between these two narrative universes, and as it travels it transfers experiences and knowledge between them. Thus the paradox of narrative reflexivity becomes the act of creation. The novel is created as a dialogue between the 3-dimensional (+time) universe of the narrator/reader/writer (the terms cease to have distinct meaning in a reflexive narrative context) and the multi-dimensional universe of the novel.

M. C. Escher’s rendering of reflexivity in narrative: cybernetic writing

As old meaning breaks down because of continuous feedback between the narrator and the narrative, new meaning is created.

The Island Survival Guide was my second experiment in writing a cybernetic novel (my first being The Secrets of the Lands Without).

Gerald Edelman and AI

Gerald Edelman (1929-2014)

Gerald Edelman passed away on May 17, 2014 in La Jolla, California. In 1972 he won the Nobel Prize (together with Rodney Porter) for solving the structure of antibodies and explaining how the immune system functions. His research into antibodies led him to realize the enormous explanatory potential of selective-recognition systems. I had read most of his books before meeting him in person in Tucson, Arizona, during the World Conference on Consciousness in 2004. In his smart suit, this tall, radiantly intelligent and witty man explained to his audience how his work on the immune system could provide an explanation for consciousness.

Basically, Edelman discovered that we carry a great number of structurally different antibodies (also called “immunoglobulins”) in our bodies. When a bacterium or a virus enters the body, the cells bearing these antibodies rush towards it and test how well their structures “match” those of the intruder. This structural variability lies at the heart of antibody-based recognition. Edelman noticed that the adaptive immune response had all the hallmarks of an evolutionary process: the antibody recognition system “evolved” very quickly in order to adapt to the bacterial or viral attack, much as a species adapts to environmental pressure.

Edelman posited that this evolutionary biological mechanism could also explain consciousness. Two significant discoveries strengthened his hypothesis: firstly, that cortical neurons are organized in discrete groups of cells; secondly, that synapses strengthen through use. Edelman theorized that our brain manages to recognize and process information thanks to selection on neuronal groups that differ in their connectivity patterns. Several cell groups would respond to incoming sensory information; their response would be modified by repeated recognition, which would strengthen, abstract and associate their connectivity. Edelman was in fact describing a cybernetic system with multiple positive feedback loops (he called them “re-entry” loops). Recent research by Stanislas Dehaene on the neural correlates (or “signatures”) of consciousness has shown that this re-entry mechanism is fundamental to how groups of cells respond to sensory information, and how a local recognition event becomes global (i.e. whole-brain).
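The selectionist logic above can be caricatured in a few lines of code. This is only my own toy sketch, not Edelman’s actual model: all the names and numbers are invented for illustration. A repertoire of pre-wired “groups” with random connectivity responds to a stimulus; the best-matching group is strengthened through use, so recognition is selected for rather than programmed in.

```python
import random

random.seed(0)

N_GROUPS, N_INPUTS = 50, 8

# Each group starts with random connectivity (the pre-existing variability
# that selection acts upon).
groups = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
          for _ in range(N_GROUPS)]
strength = [1.0] * N_GROUPS          # overall synaptic strength per group

def respond(group, s, stimulus):
    """A group's response: how well its connectivity matches the stimulus,
    scaled by its current synaptic strength."""
    return s * sum(w * x for w, x in zip(group, stimulus))

stimulus = [random.uniform(-1, 1) for _ in range(N_INPUTS)]

# Repeated presentations of the same stimulus (re-entrant feedback crudely
# collapsed into a loop): the winning group's synapses strengthen with use.
for _ in range(20):
    responses = [respond(g, s, stimulus) for g, s in zip(groups, strength)]
    winner = max(range(N_GROUPS), key=lambda i: responses[i])
    strength[winner] *= 1.2

best = max(range(N_GROUPS), key=lambda i: strength[i])
print(f"group {best} now dominates with strength {strength[best]:.2f}")
```

No group was designed to recognize this stimulus; one simply happened to match it best and was amplified by use, which is the essence of a selective-recognition system.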

A Darwin robot

To demonstrate his theory, Edelman and his colleagues built a number of “noetic machines” he called “Darwins”, or “brain-based devices” (BBDs). Built around a model of the neural connectivity of a simple brain, a Darwin would “discover” the world around it, as an animal would.

Edelman’s robotic research has received very little notice, or appreciation, from the mainstream AI and robotics research community. The reason is that the mainstream abandoned long ago the original goal of AI, which was to build a conscious machine. Since the 1980s AI research (as well as autonomous-robot research) has focused on practical applications where pattern recognition and matching are paramount: for instance driverless cars, image and speech recognition, medical diagnosis, etc. This is where the money is nowadays, as the recent acquisition of the company DeepMind by Google has amply demonstrated.

Edelman’s idea of simulating the brain in robots is close kin to neuromorphic computer technologies, still in their infancy. And yet – as I will aim to demonstrate in my forthcoming book “In Our Own Image” (Rider Books, due April 2014) – conscious machines can only evolve with a computer architecture like the one implemented in Edelman’s Darwins. Current AI and robotics will never be able to produce a self-aware machine. The breakthrough will come from genetics and a deeper understanding of developmental biology. How does a cell divide and develop into a nervous system? How does a nervous system differentiate across species? How does it develop into a brain? How does this brain communicate with the whole body, processing internal as well as external sensory information? How do brain cells adapt and modulate? Answers to these questions will come from biology and neuroscience. When we have the answers, we will have cracked the mechanism that Edelman hypothesized. And then his curious Darwins may be remembered as the “amoebas”, the protozoa of a new line of evolution on Earth, that of the intelligent machines.

Predicting the future

“Prediction is very difficult, especially about the future,” said the father of quantum physics, Niels Bohr – words often quoted, wrongly, as a tautology. Bohr did not mean to state the obvious; he was referring to the non-deterministic nature of quantum phenomena. Although the microscopic world of quanta is bound by a set of natural laws, unlike in Newtonian physics it is truly impossible to predict how a quantum event will evolve in the future. The only thing you can predict is the probability of its possible outcomes, encoded in its “wave function”, as it is called.
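Bohr’s point can be demonstrated in a few lines. This is a toy sketch with made-up amplitudes for a two-outcome system (think of a spin measurement): the wave function tells you the probability of each outcome via the Born rule, but never the result of any single measurement.

```python
import random

random.seed(1)

# Invented complex amplitudes for a two-outcome "wave function".
amplitudes = {"up": complex(0.6, 0.0), "down": complex(0.0, 0.8)}

# The Born rule: probability of an outcome = |amplitude|^2.
probs = {k: abs(a) ** 2 for k, a in amplitudes.items()}   # up: 0.36, down: 0.64

def measure():
    """One measurement: irreducibly random, weighted by the Born rule."""
    return random.choices(list(probs), weights=list(probs.values()))[0]

# No single call of measure() can be predicted in advance...
one_run = measure()

# ...but the statistics over many runs converge to the predicted probabilities.
counts = {"up": 0, "down": 0}
for _ in range(100_000):
    counts[measure()] += 1
frequencies = {k: v / 100_000 for k, v in counts.items()}
print(frequencies)
```

The individual outcome is unpredictable; only the long-run frequencies (roughly 0.36 and 0.64 here) are fixed by the theory – which is exactly the asymmetry Bohr had in mind.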

Are there any lessons to be learned from quantum physics by social futurologists? Economics is a prime example of a social science aspiring, but failing, to make reliable predictions about the future. Economics is often called the “dismal science” – but I think this is somewhat unfair. Economics could not possibly predict the future, not because of quantum phenomena, but because of the instability and indeterminism of macroscopic complex phenomena, such as markets and real-life events. But who knows? Perhaps “black swans” (unpredictable events that change the course of history) are related to quantum physics after all, although this needs to be further explored. Nevertheless, the media insist on demanding predictions from experts of all sorts, and salivate when these predictions fail. We seem to need our prophets and their prophecies badly; and to condemn them when they fail. Ultimately, we need to believe that the future can be foretold somehow, by magic or science, by a hallucinating Pythia or a number-crunching supercomputer. Why?

Futurology we owe mostly to the Victorians. Literary scientific romances (e.g. Erewhon by Samuel Butler) were in fact satirical extrapolations of contemporary norms and trends. The future was like the present, only with a lot more wildly-dreamed technology. Jules Verne, Edwin Abbott and H. G. Wells take us on journeys where people are constant while everything else around them changes. This mistaken perception of the future persisted in later science fiction as well. It always amuses me to see the humans of the 23rd century in Star Trek be so much like us, faced with our contemporary dilemmas and culture wars. Such predictions assume that cultural perceptions and societal values remain constant and only technology changes. They ignore (a) happenstance, and (b) the fact that technology changes cultural perceptions and societal values.

Despite the shortcomings of human prescience, the academic field of Future Studies appeared in the 1960s and has since flourished in several universities around the world. Methodologies, such as foresight, have been developed to inform worried politicians, businessmen and investors about what is to come. These methods have become rather sophisticated over the years: history is analysed by computers, and patterns emerge. For instance, it has been noted that whenever there are many unemployed young men, society explodes and politics change.

And yet, I sometimes imagine myself drinking tea and reading The Times in London, on some sunny day in June 1914. A friend comes along, and we strike up a conversation about all the wonderful things that have been happening in the world during the past forty years. The economy is booming, international trade connects the continents, science and technology are progressing in leaps and bounds, and the well-being of people increases with each passing year. Importantly, the Great Powers have found, at last, a way to keep the peace after the Napoleonic Wars ravaged the continent. The Kaiser, the English King and the Tsar are cousins. My imaginary friend then asks me: “How do you think the world will be in 100 years?” “In the year 2014,” I reply rather confidently, “after one hundred years of peace and prosperity, people will have built homes on the moon and the stars, and our great-grandsons will go to work on flying machines...”

A few days later, history folds in on itself into a terrible nightmare that starts with the invasion of Belgium by the German Imperial Army and ends in 1989 with the fall of the Berlin Wall.

Complexity evidently renders all prediction of the future a fool’s errand. So why do we do it? Why are we so obsessed with predicting the future?

A utilitarian would say that predicting the future reduces the risk of investment in the present – which is right, of course, and it is what risk managers do in investment banks. Similar utilitarian explanations apply to just about every decision we make: to buy a house (we hope prices will rise and not fall), to move jobs, to marry and have a family, to play diplomatic games in the international arena. There is something about our cognitive system that compels us to imagine the future; it is what made our ancestors great strategists in hunting big game. We cannot escape our minds: predicting the future is what shapes us in the present, what makes us who we are and determines what we decide to do – and that is why we do it. Alas, it is often the case that we fall in love with our prophecies, only to be surprised and shocked when they do not come true.