Super Turing machines and oracles: the making of an artificial mind

A major theoretical – as well as philosophical – problem in Artificial Intelligence is incomputability. Although there are many formal definitions of the concept, it really boils down to this: the human mind does many things that cannot be expressed algorithmically. The most prominent is what we commonly call “intuition”. The simplest form of intuition is finding solutions to novel problems solely on the basis of past experience and with incomplete knowledge.

Pythia at work

Human intuition is ubiquitous and all-pervasive; virtually every discovery made in science and engineering, and the whole of the arts, is a product of intuition. This “leap” of the human mind when we “intuitively” see the whole picture by connecting seemingly unconnected dots – when we get inspired to write a novel or invent a new machine – cannot be mapped into any formal mathematical notation (or computing language, its “computer age equivalent”).

Furthermore, problems that need “intuition” to be solved (such as proving mathematical theorems) cannot be known in advance, and thus fall under the spectre of the “halting problem” in computation as defined by Alan Turing: a Turing machine may compute forever and never arrive at the solution (i.e. it will never “halt”). This is another way of saying that the computer may never find the answer. Computers, which are Turing machines operating with formal algorithms, cannot be intuitive. Therefore, Artificial Intelligence based on such computers will never be really “intelligent” in any general sense, but always confined to addressing specific problems within a narrow space of possible solutions.
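A concrete way to feel the halting problem is the Collatz conjecture: a few lines of code whose termination nobody has been able to prove for every input. The sketch below is illustrative only; the function and its name are ours, not anything from the literature.

```python
# The flavour of the halting problem: nobody has proved that this loop
# terminates for every starting value n (the Collatz conjecture).
def collatz_steps(n: int) -> int:
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- it halts for 27, but no general
                          # algorithm can decide halting for arbitrary programs
```

For any particular input we can simply run the loop and watch; what Turing proved is that no single algorithm can answer the halting question for all programs and inputs in advance.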

Alan Turing was well aware of this limitation in computing machines. In 1939 he wrote in his ordinal logics paper a short statement about “oracle machines” (or “o-machines”).

“Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. … This oracle … cannot be a machine. With the help of the oracle we could form a new kind of machine (call them o-machines), having as one of its fundamental processes that of solving a given number-theoretic problem.”

This is virtually all Turing said of oracle machines. His description ran to barely a page and a half, most of which was devoted to the insolvability of related problems, such as whether an o-machine will ever output an infinite number of 0s. Turing then left the topic, never to return.

Not quite what Alan had in mind

Oracle machines are “super Turing machines”: a classical Turing machine connected to an “oracle”, a black box that answers “yes” or “no” to a decision problem the Turing machine cannot solve. Obviously, every o-machine has its own limitations. The oracle may not be able to answer either “yes” or “no” to a given problem, in which case another, “higher-order” oracle is necessary. Oracle machines thus tend to cluster one within another, in infinite nests, like Russian dolls: as their number tends to infinity, incomputability tends to zero.
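The structure is easy to sketch in code. Below is a toy illustration of Turing's idea, with hand-made names (`o_machine`, `stub_oracle`) and a hand-coded answer table standing in for the oracle – precisely because, as Turing says, a real oracle cannot be a machine.

```python
# Illustrative sketch of an "o-machine": an ordinary step-by-step
# computation that may consult a black-box oracle for a yes/no answer.
from typing import Callable

def o_machine(program: str, oracle: Callable[[str], bool]) -> str:
    """Toy o-machine: one oracle query as a 'fundamental process'."""
    if oracle(program):        # delegate the undecidable question
        return f"{program}: halts"
    return f"{program}: runs forever"

# A stub oracle with hand-coded answers. A genuine halting oracle
# cannot itself be a machine -- which is exactly Turing's point.
known_answers = {"add_two_numbers": True, "search_forever": False}
stub_oracle = lambda p: known_answers[p]

print(o_machine("add_two_numbers", stub_oracle))  # add_two_numbers: halts
print(o_machine("search_forever", stub_oracle))   # search_forever: runs forever
```

The interesting part is what the sketch cannot contain: the body of a true oracle. Any attempt to write one as code turns it back into a Turing machine, which is why the oracle must remain a black box.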

Oracle machines solve the problem of incomputability by means of an infinite series. In a replay of Zeno’s paradox, super Turing machines forever get closer to intuition without ever reaching it. And although mathematicians like Turing would have been content with such a mathematical description, philosophers are a tough bunch to convince that this is anything more than a Pyrrhic victory.

Evidently, computational scientists are more the children of mathematics than of philosophy. In the current issue of Neural Computation, Hava Siegelmann of the University of Massachusetts Amherst and her post-doctoral colleague Jeremie Cabessa describe a “super-Turing machine” that, they claim, will increase the power of artificial intelligence by many orders of magnitude.

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is written ℵ0 (aleph-zero); this is also the number of infinite calculations possible by classical Turing machines in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that super-Turing computation has 2^ℵ0 possible behaviours. “If the Turing machine had 300 behaviours, the Super-Turing would have 2^300, more than the number of atoms in the observable universe,” she explains in ScienceDaily.
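The quoted comparison is easy to verify with exact integer arithmetic (the ~10^80 figure for atoms in the observable universe is the usual order-of-magnitude estimate):

```python
# Check the comparison quoted above: 2^300 versus the roughly 10^80
# atoms estimated to exist in the observable universe.
behaviours = 2 ** 300
atoms_in_universe = 10 ** 80

print(len(str(behaviours)))            # 91 -- 2^300 has 91 decimal digits
print(behaviours > atoms_in_universe)  # True
```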

According to Siegelmann, the Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way closer to that of the constantly adapting and evolving brain.

This approach closely resembles an oracle machine. The machine will not try to forcefully calculate all possible outcomes before deciding, but will adapt to the problem’s parameters by “connecting the dots”, i.e. asking an “oracle” for a “yes” or “no” at each step before proceeding any further. The scientists at Amherst intend to implement their theoretical model on analogue recurrent neural networks. It will be interesting to see the results.
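One way to picture “a stimulus changes the computer at each step” is a recurrent unit whose weight is rewritten by every input it processes. The sketch below is our own minimal illustration of that idea – the class, names and update rule are assumptions for exposition, not Siegelmann and Cabessa’s actual construction.

```python
import math

# Minimal sketch of a "plastic" recurrent unit: each incoming stimulus
# both produces an output AND permanently changes the machine (its weight).
class PlasticUnit:
    def __init__(self, weight: float = 0.5, rate: float = 0.1):
        self.weight = weight
        self.rate = rate      # how strongly each stimulus reshapes the unit
        self.state = 0.0

    def step(self, stimulus: float) -> float:
        # ordinary computation: recurrent state update
        self.state = math.tanh(self.weight * self.state + stimulus)
        # plasticity: the same stimulus also rewrites the machine itself
        self.weight += self.rate * stimulus * self.state
        return self.state

unit = PlasticUnit()
for x in [1.0, -0.5, 0.3]:
    unit.step(x)
print(unit.weight != 0.5)  # True: the machine is no longer what it was
```

A fixed Turing machine would compute the same function on every run; here the very act of computing changes what will be computed next, which is the behaviour the super-Turing framework tries to capture.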

Reference: A. M. Turing (1939), Systems of logic based on ordinals, Proc. London Math. Soc. 45 Part 3, 161–228.


The woman who remembered everything

Human memory is different from computer memory in many important ways. Computers store information at specific locations. While there are ways of storing metadata with each piece of information, computer memory is very limited when it comes to context. For example, the stored image of your boyfriend may carry a title and a short description, but when the computer retrieves it, it cannot infer from the image multi-dimensional data such as that person’s character, shared events, emotions, and so on.

Unlike computer memory, which is designed, human memory is the product of millions of years of evolution. Mammalian brains such as ours do not use fixed-address systems, but store memories in a very haphazard fashion: memories tend to overlap, combine or simply disappear. Neuroscience has not yet cracked the code of human memory, but it does give us some first clues: our memories live in the hippocampus and the prefrontal cortex. Whenever we “remember”, a rich set of data is retrieved, contextually intertwined with emotions. Human memories are never like a video, a photograph or a text file; they are never “objective”. They are always “subjective”, i.e. value-laden. The plasticity of our brains may be why our memories change over time, or under a variety of emotional conditions (such as stress, excitement or sadness).
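The fixed-address versus associative contrast has a classic toy model: a Hopfield-style network, in which memories live spread across overlapping connection weights and a corrupted cue can still retrieve the whole pattern. The pure-Python sketch below is a standard textbook illustration, not a model of the hippocampus.

```python
# Toy content-addressable memory (Hopfield-style, pure Python):
# memories are stored as overlapping weights rather than at fixed
# addresses, and a corrupted cue can still retrieve the full pattern.

def train(patterns):
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:                      # Hebbian rule: neurons that
        for i in range(n):                  # fire together wire together
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    s = list(cue)
    for _ in range(steps):                  # settle into the nearest memory
        for i in range(len(s)):
            total = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if total >= 0 else -1
    return s

memory = [1, -1, 1, -1, 1, -1]
w = train([memory])
noisy = [1, -1, -1, -1, 1, -1]              # one bit flipped
print(recall(w, noisy) == memory)           # True: recovered from a partial cue
```

The same mechanism also exhibits the failure modes the paragraph describes: store too many overlapping patterns in the same weights and recall starts to blend or lose them, much as human memories interfere and fade.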

Jill Price

An interesting case that made news several years ago is that of an American woman named Jill Price, who could remember virtually everything. Ms Price had perfect recollection of every single event in her life since she was 12 years old. Her case has been studied by neuroscientist James McGaugh of UC Irvine. McGaugh and his collaborators named Ms Price’s condition “hyperthymestic”, from the Greek for “excessive remembering”. Although understanding what exactly happens in Ms Price’s brain is beyond the capabilities of current brain-scan technologies, current observations indicate that her brain shares many characteristics with those of people with obsessive-compulsive disorder. Ms Price is obsessive about “collecting” items (e.g. stuffed toys) that remind her of things that happened to her; she also goes over and over the events of her life in her mind (she keeps a detailed diary), something that tends to reinforce neural pathways. Nevertheless, these observations explain almost nothing. Her capacity to remember everything is truly “super-human”.

Rachel remembered a mother

Those imagining the intelligent androids of the future have failed to deal satisfactorily with the issue of memory. In Blade Runner, for example, the android Rachael has been programmed with false memories: a childhood she never had.

The Tyrell Corporation has given her photographs of her “parents”, which Rachael treasures, since they convince her that she had a human past. Such an emotional reaction to memories requires a human-like brain. Androids that can hold memories in a human-like fashion will be prone to all the problems that we face with our own memories: ultimately we lose them, or they mutate into a subjective narrative that reflects our inner wishes rather than the facts that actually took place. Like humans, androids must be able to lie about their past, without necessarily intending to. But that seems like a waste. Unless human programmers decide to install faulty memories in their creations, intelligent androids will be more like the hyperthymestic Ms Price.