A major theoretical – as well as philosophical – problem in Artificial Intelligence is **incomputability**. Although there are many formal definitions of the concept of incomputability, it really boils down to this: there are many things that the human mind does which cannot be expressed in an algorithmic fashion. The most prominent example is what we commonly call “**intuition**”. The simplest form of intuition is finding solutions to novel problems solely on the basis of past experience and with incomplete knowledge.

The scope of human intuition is ubiquitous and all-pervasive; virtually every discovery made in science and engineering, and the whole of the arts, are products of intuition. This “leap” of the human mind when we “intuitively” see the whole picture by connecting seemingly unconnected dots – when we get inspired to write a novel or invent a new machine – cannot be mapped in any formal mathematical notation (or computing language, which is the “computer age equivalent”).

Furthermore, problems that need “intuition” to be solved (such as proving mathematical theorems) cannot be known in advance and thus fall under the spectre of the “halting problem” in computation as defined by Alan Turing: a Turing machine may compute forever and thus never arrive at the solution (i.e. it will never “halt”). This notion is another way of saying that the computer may never find the answer. Computers, which are Turing machines operating with formal algorithms, cannot be intuitive. Therefore, Artificial Intelligence based on such computers will never be really “intelligent” in any general sense, but always confined to addressing specific problems within a narrow space of possible solutions.
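The halting problem can be made concrete with a toy simulator. In practice we can only ever run a machine for a bounded number of steps and report “halted” or “still running” – never “will never halt”. The sketch below is illustrative (the machine encodings are made up, not from the article):

```python
# Minimal Turing-machine simulator (illustrative sketch).
# A machine is a dict: (state, symbol) -> (new_state, write, move).
# "Still running" after the step budget is NOT proof of non-halting --
# that, in essence, is the halting problem.

def run(machine, tape, state="A", steps=1000):
    tape = dict(enumerate(tape))  # sparse tape, default symbol 0
    pos = 0
    for _ in range(steps):
        if state == "HALT":
            return "halted"
        sym = tape.get(pos, 0)
        state, write, move = machine[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "still running"

# A machine that halts immediately after writing a single 1:
halter = {("A", 0): ("HALT", 1, "R")}
# A machine that shuttles between two states forever:
looper = {("A", 0): ("B", 0, "R"), ("B", 0): ("A", 0, "L")}

print(run(halter, [0]))  # "halted"
print(run(looper, [0]))  # "still running" -- but for how long?
```

No matter how large the step budget, the second verdict never improves to a definite answer.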

Alan Turing was well aware of this limitation of computing machines. In his 1939 paper on ordinal logics he included a short statement about “**oracle machines**” (or “**o-machines**”).

“*Let us suppose we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. … this oracle … cannot be a machine. With the help of the oracle we could form a new kind of machine (call them o-machines), having as one of its fundamental processes that of solving a given number-theoretic problem.*”

This is virtually all Turing said of oracle machines. His description was only a page and a half long, much of it devoted to the unsolvability of related problems, such as whether an o-machine will ever output an infinite number of 0’s. Turing then left the topic, never to return.

Oracle machines are “super Turing machines”: they are machines encompassing a classic Turing machine connected to an “oracle”, a black box that answers “yes” or “no” to a decision problem that the Turing machine cannot solve. Obviously, every o-machine has its own limitations. The oracle may not be able to answer either “yes” or “no” to a given problem; in which case another, “higher-order”, oracle is necessary. Oracle machines thus tend to cluster one-within-another, in infinite nests, like Russian dolls: as their numbers tend to infinity, incomputability tends to zero.
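One way to picture an o-machine is as an ordinary program that pauses at each undecidable question and delegates it to a black box. The sketch below is a minimal illustration with a toy “oracle” (here just a lookup table standing in for an uncomputable decision procedure – hypothetical names throughout, not Turing’s formal construction):

```python
# Toy o-machine: classical computation plus calls to a black-box oracle.
# The oracle is deliberately opaque -- a lookup table stands in for an
# uncomputable yes/no decision procedure (illustrative only).

def make_oracle(answers):
    """Wrap a table of yes/no answers; raise when a question falls outside
    it -- the case where a 'higher-order' oracle would be needed."""
    def oracle(question):
        if question not in answers:
            raise LookupError("needs a higher-order oracle: " + question)
        return answers[question]
    return oracle

def o_machine(problems, oracle):
    """Ordinary control flow, but each undecidable step defers to the oracle."""
    results = {}
    for p in problems:
        results[p] = oracle(p)  # the single non-mechanical step
    return results

oracle = make_oracle({"program_42_halts": True, "program_7_halts": False})
print(o_machine(["program_42_halts", "program_7_halts"], oracle))
```

The `LookupError` branch mirrors the nesting described above: questions an oracle cannot answer push the computation up to the next doll in the stack.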

Oracle machines solve the problem of incomputability by means of an infinite series. In a replay of Zeno’s paradox, super Turing machines forever get closer to intuition without ever reaching it. And although mathematicians like Turing would have been content with such a mathematical description, philosophers are a tough bunch to convince that this is anything more than a Pyrrhic victory.

Evidently, computational scientists are children more of mathematics than of philosophy. In the current issue of *Neural Computation*, **Hava Siegelmann** of the University of Massachusetts Amherst and her post-doctoral research colleague **Jeremie Cabessa** describe a “super-Turing machine” that, they claim, will increase the power of artificial intelligence by many orders of magnitude.

Each step in Siegelmann’s model starts with a new Turing machine that computes once and then adapts. The size of the set of natural numbers is denoted aleph-zero, **א**_{0}, which is also the number of behaviours possible for classical Turing machines operating in a real-world environment on continuously arriving inputs. By contrast, Siegelmann’s most recent analysis demonstrates that Super-Turing computation has 2^{**א**_{0}} possible behaviours. “If the Turing machine had 300 behaviours, the Super-Turing would have 2^{300}, more than the number of atoms in the observable universe,” she explains in “*Daily Science*”.
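Siegelmann’s comparison is easy to check with ordinary arithmetic: 2^{300} is roughly 2 × 10^{90}, while the observable universe is commonly estimated to contain on the order of 10^{80} atoms (a standard order-of-magnitude figure, not from the article).

```python
# Order-of-magnitude check of the 2**300 comparison.
# Python integers are arbitrary-precision, so this is exact.
behaviours = 2 ** 300
atoms_in_universe = 10 ** 80   # common rough estimate

print(len(str(behaviours)))            # 91 digits, i.e. about 2e90
print(behaviours > atoms_in_universe)  # True
```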

According to Siegelmann, the Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way closer to that of the constantly adapting and evolving brain.

This approach closely resembles an oracle machine. The machine will not try to forcefully calculate all possible outcomes before deciding, but will adapt to the problem’s parameters by “connecting the dots”, i.e. asking an “oracle” about “yes” or “no” at each step before proceeding any further. The scientists at Amherst intend to implement their theoretical model on analogue recurrent neural networks. It will be interesting to see the results.
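Siegelmann’s theoretical results concern recurrent networks whose activations are real-valued (analogue); the update rule itself is the ordinary recurrent one, with a saturated-linear activation in her formal model. A minimal sketch of a few recurrent steps, with made-up weights (not the paper’s construction):

```python
# One step of a simple recurrent network (illustrative weights only).
# Siegelmann-style networks use a saturated-linear activation; the
# super-Turing power comes from allowing real-valued weights.

def sat(x):
    """Saturated-linear activation: clamp to [0, 1]."""
    return min(max(x, 0.0), 1.0)

def step(state, inp, W, U, b):
    """x_{t+1}[i] = sat(sum_j W[i][j] * x_t[j] + U[i] * u_t + b[i])."""
    n = len(state)
    return [
        sat(sum(W[i][j] * state[j] for j in range(n)) + U[i] * inp + b[i])
        for i in range(n)
    ]

W = [[0.5, 0.25], [0.1, 0.9]]   # recurrent weights (made up)
U = [1.0, 0.0]                  # input weights
b = [0.0, -0.2]                 # biases
x = [0.0, 0.0]
for u in [1, 0, 1]:             # feed a short binary input stream
    x = step(x, u, W, U, b)
print(x)
```

Because the state feeds back into the next update, the input stream effectively reshapes the computation at every step – the “constantly adapting” behaviour described above.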

**Reference:** A. M. Turing (1939), *Systems of logic based on ordinals*, Proc. London Math. Soc. 45, Part 3, 161–228.