The goal of Artificial Intelligence (AI) is usually described as the making of intelligent machines. This may seem like a well-defined goal; however, AI has been plagued with misunderstandings and misgivings since its modern reinvention in the 1950s. The word “intelligence” is laden with cultural, philosophical, and political baggage, and the only way to see through the smoke is to deconstruct intelligence in the context of AI research and ideology.
To begin, there is the hands-on engineering objective of AI: answering what an intelligent machine actually is and how it differs from any other computer program. The difference is this: an intelligent machine perceives its environment, makes inferences based on accumulated knowledge and current environmental stimuli, and then takes appropriate actions to maximize its chances of achieving its goals. Put another way, the engineering goal of AI is to develop techniques and technologies for autonomous decision-making in uncertain circumstances. This is a well-defined, even prosaic, technical objective. It is therefore not surprising that, as a discipline of engineering and computer science, AI has enjoyed much success over the decades. There are numerous successful AI applications today, from analyzing financial markets to enhancing the experience of video games.
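To make the perceive-infer-act description above concrete, here is a minimal sketch of such an agent loop in Python. It is an illustration under toy assumptions, not any particular AI system: the noisy environment, the evidence counter, and all the names are invented for the example.

```python
import random

# A minimal sketch of the perceive-infer-act loop described above.
# The environment and the agent's "knowledge" are illustrative stand-ins.

class Environment:
    """A toy world: the agent must decide whether a noisy signal is high or low."""
    def __init__(self):
        self.state = random.choice(["high", "low"])

    def observe(self):
        # Perception is uncertain: 20% of readings are wrong.
        if random.random() < 0.8:
            return self.state
        return "low" if self.state == "high" else "high"

class Agent:
    def __init__(self):
        self.counts = {"high": 0, "low": 0}  # accumulated knowledge

    def perceive_and_infer(self, observation):
        self.counts[observation] += 1        # update beliefs from stimuli

    def act(self):
        # Choose the answer most likely to be correct given the evidence.
        return max(self.counts, key=self.counts.get)

env, agent = Environment(), Agent()
for _ in range(25):                          # accumulate noisy evidence
    agent.perceive_and_infer(env.observe())
print(f"world is {env.state}, agent decides {agent.act()}")
```

The point of the sketch is only that decision-making under uncertainty reduces to accumulating evidence and acting on it; everything else in real systems is elaboration of this loop.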
The cultural fascination with AI starts where engineering stops. Building “intelligent machines” unavoidably raises the more general question “what is intelligence?” The nature of this question is not mechanistic but psychological. It deals with how we can tell that a machine is intelligent, as well as how we relate to that machine once we have decided that it is.
A technical distinction must be borne in mind when exploring the psychological meaning of intelligent machines. In AI there are systems that aim for “specific intelligence” (such as expert systems), where the application space is both narrow and deep; for example, medical diagnosis or financial analysis. These systems can be quite successful because the programmer can use heuristics (i.e., rules of thumb) to readily encapsulate human expertise.
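As a rough illustration of the narrow-and-deep approach, here is a toy rule-based diagnoser in the spirit of classic expert systems. The rules and symptom names are invented for this sketch and have no medical standing; a real expert system would encode hundreds of vetted rules.

```python
# A toy rule-based "expert system": each rule pairs a set of required
# conditions with a conclusion. The rules below are invented examples.

RULES = [
    ({"fever", "cough", "fatigue"}, "possible influenza"),
    ({"fever", "stiff_neck"},       "possible meningitis"),
    ({"sneezing", "runny_nose"},    "possible common cold"),
]

def diagnose(symptoms: set) -> list:
    """Fire every rule whose conditions are all present (forward chaining)."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]

print(diagnose({"fever", "cough", "fatigue", "headache"}))
# -> ['possible influenza']
```

Because the domain is narrow, a human expert’s rules of thumb translate directly into such condition-conclusion pairs; this is precisely why specific intelligence has been the easier engineering target.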
Another class of AI systems aspires to “general intelligence”, where the application space is wide and shallow; examples would be speech recognition or automatic translation systems. Here the situation is different because of the complex interconnectedness of general knowledge. Knowing a priori which facts are relevant to which is impossible. Writing code that infers from general observations is not only technically challenging but theoretically precarious as well. A “universal machine” that infers everything from everything runs up against Russell’s paradox: can this theoretical machine infer itself? Is the set of all sets that are not members of themselves a member of itself?
We may of course fall back to a behaviorist position and agree that machines can be said to exhibit intelligent behavior provided their performance is adaptive and therefore unpredictable. Things become interesting when people react to such machines. Affective computing coupled with AI can provoke emotive responses from human beings, including sympathy and empathy. Humans can relate psychologically to intelligent machines.
And this is how we find ourselves up against the deepest philosophical question of all: how “true” is this machine intelligence? Is it just an engineering illusion, not unlike the “Mechanical Turk”, only more sophisticated? Shouldn’t true intelligence involve some degree of consciousness?
Alan Turing suggested that, for all intents and purposes, a machine may be regarded as intelligent if it can fool you into believing that it is. The infamous “Turing Test” posits that if you cannot tell the difference between the responses of a human and the responses of a machine, then the machine is “truly intelligent”.
The Test was famously challenged by the philosopher John Searle, who argued, in his “Chinese Room” thought experiment, that a system may appear to give “intelligent” answers to human questions without necessarily having any intrinsic knowledge, or consciousness, of the answers it gives. It may simply follow a set of rules that manipulate symbols that are meaningful to human receivers but not to the machine itself. According to Searle, intelligence without consciousness is not “true” intelligence. If we accept his notion, then we can ask whether it is possible to create artificial consciousness. After all, this was the dream of AI’s modern godfathers in the 1940s and 50s. It is still the central dogma of “strong AI”. However, artificial consciousness is not (yet) an engineering problem but a philosophical question of the most fundamental importance and gravitas.
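Searle’s point can be caricatured in a few lines of code. The sketch below produces answers a human reader finds meaningful by pure pattern lookup; no understanding exists anywhere in the loop. The rulebook entries are, of course, invented for illustration.

```python
# A caricature of Searle's Chinese Room: answers that look meaningful to a
# human reader are produced by blind symbol lookup. Nothing in the system
# understands the symbols it shuffles.

RULEBOOK = {
    "how are you?":       "I am well, thank you.",
    "what is your name?": "My name is Room.",
    "do you understand?": "Yes, perfectly.",   # it does not
}

def respond(question: str) -> str:
    # Match the incoming symbol string and emit the prescribed output
    # symbols; no semantics are involved at any step.
    return RULEBOOK.get(question.strip().lower(), "Please rephrase.")

print(respond("Do you understand?"))  # -> "Yes, perfectly."
```

The behavior is indistinguishable, within the rulebook’s coverage, from that of a speaker who understands; this is exactly the gap Searle exploits between passing a behavioral test and possessing understanding.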
There are many ethical corollaries to the questions posed by AI in its broader techno-cultural dimensions. For example, should we aim to develop intelligent machines further, and if so, where should we stop? We can imagine a future where we relinquish control of many human affairs to intelligent computer systems and networks. Energy, trade, defense, international relations, and the economy could arguably become better run without the evolutionary defects of biological agents (humans). Should we aim for a techno-Utopia of a world “all watched over by machines of loving grace”? Other moral issues relate to political rights that may or may not be given to machines that exhibit “true” intelligence. Such issues become increasingly complex, and therefore more interesting, when the demarcation line between human biology and machines blurs, as in the case of cyborgs.
AI is a controlling technology. It is the “brains” (conscious or not) of our global technological infrastructure. As computer networks link every facet of our civilization, intelligent control means that we are relinquishing the keys of planet Earth to our digital brethren. Exploring the questions posed above may help us understand what we are dealing with and, hopefully, prepare us for what is to come.