Adrian Bejan’s Constructal Law – introduction

 

I met Adrian Bejan at the Cannes Lions Festival in June, where I was also introduced to his Constructal law. Here’s a very nicely done video of Adrian explaining his theory.

Here’s the law in Bejan’s terms:

“For a finite-size system to persist in time (to live), it must evolve in such a way that it provides easier access to the imposed (global) currents that flow through it.”

My understanding of the constructal law – and Bejan may somewhat object to my oversimplification, or intuition – is that it makes a powerful observation about the behaviour of complex systems.

Evolution as system behaviour

The observation – and key insight – is that complex systems evolve in a predetermined direction: from “restricted access” to “open access”. Bejan explains this evolutionary mechanism using the concept of “flow”. The flow can be energy, or information, or their various abstractions. For example, in a city the “flow” of people can be reduced to the flow of energy (e.g. cars competing with each other to reach their destinations faster while using less fuel) and information (e.g. people optimizing their schedules to make more things happen in a day using technologies like synced calendars and digital assistants). The more “complex” the system, the more “energy” or “flows” it must process, and therefore the more degrees of freedom it creates in order to do so. By moving from fewer to more degrees of freedom, “flows” are also maximized: more flow through more access, and so on.
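
As a concrete (and purely hypothetical) illustration of the idea, not Bejan’s own formulation: consider two sources feeding one sink. In a rigid layout each source connects straight to the sink; in a freer layout a junction point is allowed to move, and the configuration can “evolve” toward the position that minimizes total channel length, a crude proxy for flow resistance. The Python sketch below does this by brute-force search; all names and numbers are illustrative assumptions.

```python
import math

# Toy geometry: two sources feeding one sink at the origin.
sources = [(-1.0, 1.0), (1.0, 1.0)]
sink = (0.0, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Rigid configuration (no degrees of freedom): each source connects
# straight to the sink.
rigid_length = sum(dist(s, sink) for s in sources)

# Freer configuration (one degree of freedom): a shared junction at (0, y)
# collects both sources into a single trunk feeding the sink.
def tree_length(y):
    junction = (0.0, y)
    return sum(dist(s, junction) for s in sources) + dist(junction, sink)

# Let the configuration "evolve": search for the junction height that
# gives the easiest access (smallest total channel length).
best_y = min((k / 1000.0 for k in range(1001)), key=tree_length)

print(f"rigid layout:   total length = {rigid_length:.3f}")
print(f"evolved layout: total length = {tree_length(best_y):.3f} (junction at y = {best_y:.3f})")
```

The freed parameter settles near y ≈ 0.42 and the total length drops from about 2.83 to about 2.73: a tiny example of the general claim that extra degrees of freedom let a flow configuration evolve toward easier access.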

Constructal Law and Noetics

Looking back at my work on Noetics, I see Bejan’s law as one that “explains” my intuition about the four different abstraction levels of flows in the human brain and society, as a framework for studying consciousness. Given the profound insight of constructal theory, it seems “teleological” that human consciousness will continue to “expand” at the individual as well as the collective level, seeking new degrees of freedom as our society becomes more “complex”, in the sense of becoming more diverse, more democratic and participatory, and ever more empowered by technology.

Artificial Intelligence may therefore be not just a cognitive multiplier of human intelligence but a significant lever for a phase transition in collective human consciousness. Combined with the abundant computational power that will become available once quantum computing is commercially viable, intelligent machines could accelerate space exploration and colonization, in accordance with Bejan’s law.

Constructal Law and Cryptonetworks

The law seems to confirm my insight that open source and open digital platforms are the future of business – possibly enabled by cryptonetworks and token economics – an insight with various implications for the future of capitalism. I have explored this idea in a recent essay in Aeon magazine.

 


Writing a cybernetic novel

The Island Survival Guide and narrative reflexivity

I would like to define a cybernetic novel as one that writes itself, or one where the reader is also the narrator: a novel that possesses self-reflexivity. I made the sketch (see above) some time ago while thinking about my novel “The Island Survival Guide”. When I say “I thought about my novel” I mean as a reader, not as a writer. In fact, cybernetic writing blurs the distinction between writer and reader, and finally breaks it down completely: the writer is the reader who is the writer, and so on. As it does so, it also undermines and destroys a more significant dichotomy, the difference between the narrative (object) and the narrator/reader (subject). The two become one, each reflecting into the other. This is of course a logical paradox. Cybernetic writing is a logical paradox based on reflexivity.

The paradox of narrative reflexivity is what makes a cybernetic novel what it is; it creates an escape hatch, or a quantum wormhole, connecting two different universes that exist in different dimensions. The mind is free to travel between these two narrative universes. As it travels, it transfers experiences and knowledge between them. Thus, the paradox of narrative reflexivity becomes the act of creation. The novel is created as a dialogue between the 3-dimensional (+time) universe of the narrator/reader/writer (the terms cease to have distinct meaning in a reflexive narrative context) and the multi-dimensional universe of the novel.

M. C. Escher’s “Drawing Hands”: a rendering of reflexivity in narrative, i.e. cybernetic writing

As old meaning breaks down because of continuous feedback between the narrator and the narrative, new meaning is created.

The Island Survival Guide was my second experiment in writing a cybernetic novel (my first being The Secrets of the Lands Without).

Gerald Edelman and AI

Gerald Edelman (1929-2014)

Gerald Edelman passed away on May 17, 2014 in La Jolla, California. In 1972 he won the Nobel Prize (together with Rodney Porter) for solving the structure of antibodies and explaining how the immune system functions. His research into antibodies led him to realize the enormous explanatory potential of selective-recognition systems. I had read most of his books before meeting him in person in Tucson, Arizona, during the World Conference on Consciousness in 2004. In his smart suit, this tall, radiantly intelligent and witty man explained to his audience how his work on the immune system could provide an explanation for consciousness.

Basically, Edelman discovered that we have a great number of structurally different antibodies (also called “immunoglobulins”) in our body. When a bacterium or a virus enters the body, these antibodies rush towards it and test how well their structures “match” those of the intruder. This structural variability lies at the heart of antibody-based recognition. Edelman noticed that the adaptive immune response had all the hallmarks of an evolutionary process: the antibody recognition system “evolved” very quickly in order to adapt to the bacterial or viral attack, much as a species adapts to environmental pressure.
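
A toy way to see the evolutionary character of this recognition process is a clonal-selection loop: the best-matching antibody shapes are selected, cloned with small mutations, and the repertoire rapidly “evolves” towards the intruder’s structure. The Python sketch below is a hypothetical illustration, not Edelman’s actual model; shapes are reduced to bit strings and affinity to the number of matching bits.

```python
import random

random.seed(0)
L = 32                                              # size of the "shape" being matched
antigen = [random.randint(0, 1) for _ in range(L)]  # the intruder's structure

def affinity(antibody):
    """Count how many positions of the antibody's shape match the antigen."""
    return sum(a == b for a, b in zip(antibody, antigen))

# Start with a diverse repertoire of randomly shaped antibodies.
repertoire = [[random.randint(0, 1) for _ in range(L)] for _ in range(50)]

for generation in range(15):
    repertoire.sort(key=affinity, reverse=True)
    selected = repertoire[:10]                      # selection: best matchers survive
    clones = []
    for antibody in selected:
        for _ in range(5):                          # proliferation with small mutations
            clone = antibody[:]
            i = random.randrange(L)
            clone[i] = 1 - clone[i]
            clones.append(clone)
    repertoire = selected + clones
    print(generation, max(affinity(a) for a in repertoire))
```

Within a handful of generations the best affinity climbs towards a perfect match, which is the point of the analogy: recognition emerges by selection on pre-existing variability, not by instruction.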

Edelman posited that this evolutionary biological mechanism could also explain consciousness. Two significant discoveries strengthened his hypothesis: first, that a fundamental property of cortical neurons is that they are organized in discrete groups of cells; second, that synapses strengthen through use. Edelman theorized that our brain manages to recognize and process information thanks to selection among neuronal groups that differ in their connectivity patterns. Several cell groups would respond to incoming sensory information; their responses would be modified by repetitive recognition, which would strengthen, abstract and associate their connectivity. Edelman was in fact describing a cybernetic system with multiple positive feedback loops (he called them “re-entry” loops). Recent research by Stanislas Dehaene on the neural correlates (or “signatures”) of consciousness has shown that this re-entry mechanism is fundamental to how groups of cells respond to sensory information, and to how a local recognition event becomes global (i.e. whole-brain).
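
As a rough, hypothetical sketch of the selectionist idea (not of Edelman’s actual models): simulate a handful of “neuronal groups” with random connectivity; the group that responds best to a recurring stimulus is selected, its activity is fed back into the signal (a crude stand-in for re-entry), and its connections strengthen through use until it dominates the response to that stimulus.

```python
import random

random.seed(1)
DIM, GROUPS = 8, 5

# Each "neuronal group" starts with its own random connectivity pattern.
groups = [[random.uniform(0.0, 1.0) for _ in range(DIM)] for _ in range(GROUPS)]
stimulus = [1, 0, 1, 0, 1, 0, 1, 0]                 # a recurring sensory pattern

def response(weights, signal):
    return sum(w * s for w, s in zip(weights, signal))

for presentation in range(30):
    signal = list(stimulus)
    for _ in range(3):                              # crude "re-entry": the winning group's
        scores = [response(g, signal) for g in groups]
        winner = scores.index(max(scores))          # activity feeds back into the signal
        signal = [s + 0.1 * w for s, w in zip(signal, groups[winner])]
    # Use-dependent strengthening: the selected group's connections grow.
    groups[winner] = [w + 0.2 * s for w, s in zip(groups[winner], stimulus)]

print([round(response(g, stimulus), 2) for g in groups])  # one group now dominates
```

After repeated presentations one group responds far more strongly than the rest, a cartoon of how selection plus use-dependent strengthening can turn a local recognition event into a stable, specialized response.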

A Darwin robot

To demonstrate his theory, Edelman and his colleagues built a number of “noetic machines” that he called “Darwins”, or “brain-based devices” (BBDs). Built around a model of the neural connectivity of a simple brain, a Darwin would “discover” the world around it, much as an animal would.

Edelman’s robotic research has received very little notice, or appreciation, from the mainstream AI and robotics research community. The reason is that the mainstream abandoned long ago the original goal of AI, which was to build a conscious machine. Since the 1980s AI research (as well as autonomous robotics research) has focused on practical applications where pattern recognition and matching are paramount: for instance driverless cars, image and speech recognition, medical diagnosis, etc. This is where the money is nowadays, as the recent acquisition of the company DeepMind by Google has amply demonstrated.

Edelman’s idea of simulating the brain in his robots is close kin to neuromorphic computing technologies, which are still in their infancy. And yet – as I will aim to demonstrate in my forthcoming book “In Our Own Image” (Rider Books, due April 2014) – conscious machines can only evolve with a computer architecture like the one implemented in Edelman’s Darwins. Current AI and robotics will never be able to produce a self-aware machine. The breakthrough will come from genetics and a deeper understanding of developmental biology. How does a cell divide and develop into a nervous system? How does a nervous system differentiate across species? How does it develop into a brain? How does this brain communicate with the whole body, processing internal as well as external sensory information? How do brain cells adapt and modulate? Answers to these questions will come from biology and neuroscience. When we have the answers, we will have cracked the mechanism that Edelman hypothesized. And then his curious Darwins may be remembered as the “amoebas”, the protozoa, of a new line of evolution on Earth: that of intelligent machines.