Renowned neurobiologists Christof Koch and Giulio Tononi recently suggested an alternative to the Turing Test for examining whether an intelligent machine is conscious: instead of a human-machine conversation, they propose a psychological test in which the machine decides, through a dialogue, whether a series of photographs is “right” or “wrong”. Their strategy assumes that demonstrating such knowledge on the part of the machine implies that it has a subjective understanding of the world.
(Left: What is “wrong” with this picture?)
Looking at the above photograph, it should be obvious to any human observer from the age of 6 upwards that it is not “real”: people do not fly into the air. Could an intelligent machine make the same inference? If yes, then according to Koch and Tononi, this machine must be regarded as conscious.
Koch and Tononi’s proposal is based on Tononi’s integrated information theory of consciousness. According to this theory, for consciousness to occur, information must be highly integrated. There is a measure of the integration of information in systems called “Φ”. This quantity signifies how much information a system contains above and beyond the information contained in its parts. In other words, it expresses the degree to which the individual parts are interconnected. The higher that degree (i.e. the higher the value of Φ), the more “surplus” information the system contains.
For example, in the cerebral cortex individual neurons carry their own specific information (say, their membrane potential levels), but they also have many specific interconnections. Φ in the cerebral cortex is high because the amount of information contained in the whole system far exceeds the information in its parts. The value of Φ may be correlated with the degree of consciousness. Because Φ can, in principle, be measured in any system – including brains and intelligent machines – Tononi hopes that measuring Φ may pave a quantitative path towards ascertaining consciousness in machines.
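The intuition of “information above and beyond the parts” can be made concrete with a much simpler quantity than Tononi’s actual Φ (which involves searching over system partitions). The sketch below, a simplification of my own and not Koch and Tononi’s formula, computes the *multi-information* (total correlation) of a small binary system: the sum of the entropies of the parts minus the entropy of the whole. Independent parts yield zero; strongly interconnected parts yield a surplus.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def multi_information(joint):
    """Total correlation: sum of marginal entropies minus the joint entropy.
    `joint` maps tuples of unit states, e.g. (0, 1), to probabilities.
    This is a toy proxy for 'surplus' information, not IIT's Phi."""
    n = len(next(iter(joint)))          # number of units in the system
    marginals = []
    for i in range(n):                  # marginalise out everything but unit i
        m = {}
        for state, p in joint.items():
            m[state[i]] = m.get(state[i], 0.0) + p
        marginals.append(m)
    return sum(entropy(m) for m in marginals) - entropy(joint)

# Two independent fair coins: the whole carries nothing beyond its parts.
independent = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
# Two perfectly coupled units: the whole carries 1 bit beyond its parts.
coupled = {(0, 0): 0.5, (1, 1): 0.5}

print(multi_information(independent))  # 0.0
print(multi_information(coupled))      # 1.0
```

The real Φ goes further: it asks how much information is lost under the *weakest* partition of the system, which is what makes it so hard to compute for anything the size of a cortex. But the toy measure captures the core idea that interconnection, not raw component count, is what generates the surplus.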
This is a very promising approach to the problem of machine consciousness. Complex systems, such as brains, can be regarded as information systems that exhibit types of behavior which cannot be deduced from the behavior of their interconnected parts. Brains behave as if they have a “subjective” understanding of the world, and it is this understanding that we generally refer to as “consciousness”. The correlation between the complexity of an information system (e.g. a brain), as measured by Φ, and the conscious behavior of that system is very strong; that is why I have been such a strong proponent of applying systems theory to the problem of consciousness (read my “Noetics” paper). It follows that a sufficiently complex machine exhibiting similar types of behavior ought to be judged by the same criteria – hence the psychological test (“what’s wrong with that picture?”) suggested by Koch and Tononi.
The two neurobiologists concede that current technology cannot possibly arrive at the levels of integration required for consciousness. This has been a major problem for “general knowledge AI”: algorithms do not suffice to solve the problem of general inference, not only because of the theoretical limitations posed by Gödel’s incompleteness theorem, but also because of limitations in machine architectures (see also my post on “brain-like” computers). Regardless of how powerful modern computers are, or may become, the information in their electronic components remains unintegrated. For Φ to reach levels of human-like information integration, we need machines where knowledge is embedded in highly integrated systems.
If such machines ever arrive, I would like to propose an extension to the test suggested by Koch and Tononi.
(Left: What is “right” about this picture?)
There are many states of consciousness. Although under “normal” circumstances we find it “wrong” to see a picture of a man flying, we may take no issue with such an event if we see it in a dream. Dream states, naturally or artificially induced, are a unique characteristic of consciousness. Arguably, they are the foundation of human creativity in the arts as well as the sciences. An angel is a flying human, and for many cultures around the world there is nothing “wrong” with a human-like creature having wings. Culture can only be produced by conscious agents; by the same token, culture can only be “understood” (or “appreciated”) by conscious agents too. Let me then suggest that a conscious machine will be able to convince us that it is conscious when it manages to amaze us, and move us, by composing a piece of art: a poem, a novel, a drawing, or a piece of music that will speak to our hearts.
Reference: C. Koch and G. Tononi, “A Test for Consciousness”, Scientific American, June 2011, pp. 44–47.