Are we zombies?

What is the difference between thinking and appearing to be thinking? How can one tell them apart? An interesting answer comes from the philosophy of mind, in the form of zombies.

A philosophical zombie (or “p-zombie”) is a hypothetical being indistinguishable from a human but without conscious experience, or “qualia”. When pinched, a p-zombie will feel nothing but will nevertheless cry “ouch!” convincingly enough that we will be unable to tell the difference.

P-zombies have been used by dualist philosophers in their attacks on physicalism. Dualists believe there are two essences in the natural world: matter, and something else beyond the scope of science. Physicalists hold that everything is matter and nothing exists but matter. In the case of consciousness, physicalists believe that our thoughts and feelings can be reduced to neurobiological interactions. Au contraire, dualists claim that consciousness is much more than the sum of biological pathways and brain states.

So let us imagine a hypothetical world of synthetic beings with artificial intelligence, looking and behaving identically to us; a mirror world of artificial p-zombies on another planet or in another dimension. Now say that, while you were asleep, something happened and you were transposed into that mirror world, whilst your p-zombie double was zapped over here, to our “real” world. When you wake up, how will you tell which world you now inhabit? And how will your friends and family tell that the “you” who walks down the stairs for breakfast is in fact a p-zombie from a mirror universe?

The answer to both these questions is the same: neither you nor your family will know the difference.

In fact, both physicalists and dualists are at a loss to suggest a way of distinguishing the two experiences. Physicalists, because for them a p-zombie is impossible: as said, a physicalist believes that consciousness is the result of physical processes. If a zombie is the physical equivalent of a non-zombie, if every cell and function of the non-zombie has been precisely copied in the zombie, then there can be no distinction between the two.

A dualist will also be unable to resolve your conundrum, but for a different reason. She will have no test to offer that could tell which world is the real one and which is the zombie-world. Such a “test” would require third-person verification, i.e. some objective measurement of “something”; in other words, it would have to be a scientific test. But dualists believe that the extra essence separating real beings from zombies is non-physical and therefore impossible to measure by scientific methods.

Whichever way you look at it, you may never know whether you now inhabit a zombie world or a world of “truly” conscious beings.

This rather unnerving realization leaves you with the only question that you can seemingly answer with confidence: are you a zombie? Of course not, you may hasten to reply.

But let’s look at your answer a little more deeply. In answering “of course not” you are in fact asserting your inner experience of “being somebody”, your so-called “self-awareness”. Of course, as far as we, your listeners, are concerned, we must remain unimpressed. We can trust neither your answer nor the way you look or behave because, for all the reasons I explained, you could be a zombie pretending to be a real human being.

Maybe, for exactly the same reasons, you should be skeptical of your answer too!

For how do you know that your so-called “self-awareness” is not an artificially programmed agent which, when triggered by the question “are you a zombie?”, returns the answer “no”? What if this agent, while answering, places a memory in your artificial memory banks of having just answered the question, thus creating a feedback loop which you, rather arbitrarily, call “self-awareness”? What if “you”, your “inner experience”, your “memories”, are programs? What if “you” are the multi-agent, artificial being from the mirror world of p-zombies which slipped into our “real” world?

Unfriendly AI: tales from the battlefield

Isaac Asimov, confronted with the problem of imagining future intelligent machines with potentially destructive capabilities, suggested his famous three laws of robotics. The first law forbids a robot from harming a human; the second compels it to obey human commands unless they conflict with the first law; the third demands that a robot protect its own existence unless doing so conflicts with laws one and two. Asimov later expanded the set with an extra law (the zeroth) in which the “human” of the first law is replaced by “humanity”.
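
The appeal of Asimov’s scheme to AI designers lies in its strict priority ordering: a higher law always overrides a lower one. Below is a minimal sketch of how such an ordering might be encoded; the action model and the yes/no predicates are invented placeholders, since no real system can reliably compute “harms a human”, let alone “harms humanity”.

```python
# Hypothetical sketch: Asimov's laws as a strict priority ordering.
# The boolean flags are placeholders for judgements no current AI can make.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False      # zeroth-law concern
    harms_human: bool = False         # first-law concern
    ordered_by_human: bool = False    # second-law concern
    endangers_robot: bool = False     # third-law concern

def permitted(action: Action) -> bool:
    """Check the laws in priority order; a higher law always wins."""
    if action.harms_humanity:          # zeroth law
        return False
    if action.harms_human:             # first law
        return False
    if action.ordered_by_human:        # second law: obey, absent a higher conflict
        return True
    return not action.endangers_robot  # third law: self-preservation comes last

# An order that would harm a human is refused despite the second law.
print(permitted(Action(harms_human=True, ordered_by_human=True)))  # False
```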

It has been repeatedly debated to what extent Asimov’s laws could form a basis for building “friendly” AI systems, i.e. machine intelligence that will not bring harm to its human creators. The issue of how an AI “feels” about human beings becomes of vital significance when AI surpasses human intelligence, or when AI is no longer under our control. Let us take these two cases in turn, starting with the latter.

It is very likely that AI is already beyond our control. We live in a wired world where highly sophisticated computer networks, some incorporating layers of AI systems, communicate without the intervention of human operators. Forex trading is mostly done by computers. In 2010 a computer virus allegedly created by an elite Israeli unit attacked Iran’s uranium-enrichment facilities and damaged many of their centrifuges. The Stuxnet virus demonstrated how effective a cyberattack can be, as well as how vulnerable we are should intelligent programs “decide”, without the consent or instigation of human programmers, to, say, trigger a nuclear war.

Perhaps the only way to save ourselves from an impending global disaster would be to program AI “defense” systems: for example, roaming intelligent agents on the Internet checking for emerging patterns that signal that “dumber” systems have entered a potentially perilous state. This “higher”, protective AI would have to be programmed with humanity’s preservation as its goal. Perhaps Asimov’s laws could form a basis for this programming. Unless, that is, the people who program this “cyberpolice” are the military. They, as we will see, may have a more sinister agenda.
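
What such a “higher”, protective AI would actually look like is anyone’s guess. The toy sketch below merely illustrates the watchdog idea described above: an agent that polls the reported states of other automated systems and raises an alarm when a combination of states matches a danger pattern. The system names, states and the danger rule are all invented for illustration.

```python
# Toy sketch of a "watchdog" agent. It scans snapshots of other systems'
# reported states and flags combinations matching a (hypothetical) danger rule.

def looks_perilous(states: dict) -> bool:
    """Invented danger pattern: markets halted while a defence network is on alert."""
    return states.get("forex_bots") == "halted" and states.get("defence_net") == "alert"

def watchdog(snapshots):
    for snapshot in snapshots:
        if looks_perilous(snapshot):
            yield f"ALERT: perilous pattern detected: {snapshot}"

observed = [
    {"forex_bots": "trading", "defence_net": "idle"},
    {"forex_bots": "halted",  "defence_net": "alert"},
]
for alarm in watchdog(observed):
    print(alarm)
```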

The second case, i.e. AI becoming smarter than humans, is much harder to analyse. How would a superintelligent machine “think”? Would it have a code of ethics similar to ours? Would it care if people died? The most honest answer to such questions is that we cannot possibly know.

Depending on how superintelligence evolves, it may or may not care about humans, or about biological life in general. Even if it did care, it might evolve goals different from ours. For instance, while we strive to preserve our natural environment for future generations, a superintelligent AI might decide that its reason for living is to dismantle Earth and the solar system and use the energy to increase its computing power. Who is to say which is the higher “ideal”?

We are used to being the arbiters of ethical reasoning on our planet because we have reshaped the planetary environment in order to become the top predators. Our position as top predators will be compromised in a new bio-digital environment shaped by the will, and whim, of superintelligent AIs. If this happens, to what “higher authority” could we turn for justice?

Faced with such daunting technological dilemmas, hoi polloi may opt for the precautionary principle and cry out: “stop AI research now!” Unfortunately, this may not be such an easy option.

“War is the father of all things”, said Heraclitus. The US military has been using drones for quite a while; they have proven their operational worth in taking out terrorist operatives in northern Pakistan and elsewhere. As far as we civilians may know, the current stage of drone technology is strike-by-remote, i.e. a human officer decides when to release a deadly weapon. Nevertheless, it makes military sense to develop and deploy semi-autonomous, or fully autonomous, systems. One could persuasively argue that such systems would be less error-prone and more effective. Robot warfare has dawned.

Robot warfare is riddled with ethical issues. Because the most hideous cost of war is the loss of human lives, a military power that can deploy a robot army will be less hesitant to do so. What interests me in robot warfare, however, is how military AI technology is developed away from the political and philosophical spotlights that scrutinize civilian research.

War seems to be an intrinsic part of our primate nature. Our cousin species, the chimpanzees, are known to stage wars in the Congo. History and anthropology show us that warring human societies have different needs than peaceful ones. Willingness to sacrifice one’s life on the battlefield may appear heroic to many but, as many poets since Euripides have shown us, it is ultimately pointless; an act of abject nihilism. Militarism is the ideology that sanctions forcing one’s will upon another not by argument or persuasion but by the application of superior firepower. This nihilistic view of the world has no qualms about developing unfriendly AI in order to win.

Our only defense against a militaristic, anti-humanist worldview is to strengthen the international institutions that keep us from going to war. Indeed, we must design systems for resolving crises in which war is not an option. A peaceful world where war is pointless will give us the time to determine, and direct, our technological advancement towards a humanistic future. Our future survival is a matter of political choice, and of applied game theory.

Measuring the IQ of intelligent machines

How can we know whether intelligent machines are getting smarter? The simple answer is: by measuring their IQ. Nevertheless, there are some obvious, and perhaps some less obvious, problems with such an approach. The most obvious hindrance is the plethora of AI approaches and methodologies that technologists follow in building their intelligent machines.

At one end of the spectrum are the “symbolists”, who develop algorithms that manipulate symbols in universal Turing machines (such as your PC); their most successful products so far are called “expert systems”. At the other end are the “connectionists”, who mimic the human brain by building artificial neural networks; many encouraging developments have come from connectionist architectures, mostly in pattern recognition and machine learning. Other technologists follow hybrid approaches that fall between these two extremes.

The problem with comparing the IQ of such varied machines is this: if one takes I to be the information input to a machine and O its output, one lacks a common T, where T is the transformation of I into O. Any proposed universal method for testing the IQ of machines must therefore include the caveat that it applies to all machines irrespective of their “internal” T. This means that we agree to test for intelligence regardless of what happens “inside” the machine, which is rather like testing the IQ of intelligent biological beings that evolved on different planets.
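
In code, the point is simply that a universal test may touch only the I and the O, never the T. A minimal sketch, with invented tasks and scoring, treats every machine as an opaque function from inputs to outputs and scores it on its outputs alone:

```python
# "Black-box" testing sketch: a machine is anything mapping an input I to an
# output O; the internal transformation T is never inspected.
from typing import Callable

Machine = Callable[[str], str]  # I -> O; T stays opaque

def iq_score(machine: Machine, tasks: list) -> float:
    """Fraction of tasks where the machine's output matches the rational answer."""
    correct = sum(machine(question) == answer for question, answer in tasks)
    return correct / len(tasks)

# Two very different internal "T"s, judged only by their outputs.
symbolic = lambda q: "4" if q == "2+2" else "?"        # rule-based stand-in
table    = {"2+2": "4", "capital of France": "Paris"}  # lookup-table stand-in

tasks = [("2+2", "4"), ("capital of France", "Paris")]
print(iq_score(symbolic, tasks))                     # 0.5
print(iq_score(lambda q: table.get(q, "?"), tasks))  # 1.0
```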

The second, equally profound stumbling block for a universal IQ test has to do with definitions. What do we mean by the word “intelligence” anyway? Different people mean different things, so we must be specific. To get past the semantics of intelligence it is helpful to remember what the original aims of AI are. Generally speaking, AI pursues four broad objectives for intelligent machines:

1. Thinking humanly, i.e. being conscious of thinking.

2. Acting humanly, i.e. making decisions and taking actions by applying evolved moral reasoning, as well as appearing “human-like” in the process.

3. Thinking rationally, i.e. processing information in a rational manner.

4. Acting rationally, i.e. producing outcomes that comply with rational reasoning.

Most serious philosophical arguments bedevil the first two objectives, while a few mild ones take issue with the third. The fourth, however, the purely behavioral one (wisely chosen by Alan Turing when he proposed his famous test), is where AI delivers its best. A machine may be said to act rationally if it appears to do so to human observers. It follows that if we endeavor to apply a universal method for testing machine IQ we must ignore “how” the machine works; if we do not, we will fall prey to the philosophical wrangling over objectives 1 to 3.

So, in order to arrive at a universal IQ test, we must (a) ignore the internal mechanism by which the machine transforms inputs into outputs, and (b) measure only the degree of rationality of the outcomes. The next question is: how bad is that? It turns out that it is not bad at all. To see why, let us look at what happens when human beings test themselves for IQ.

The measurement of human intelligence was conceived in 1905 by the French psychologist Alfred Binet and his assistant Theodore Simon. The French government of the time wished to ensure that adequate education was given to mentally handicapped children, so the two psychologists were commissioned to find a way to measure the “beautiful pure intelligence” of the children. Binet observed that these children solved problems in the same way that younger, “normal” children did, so he tested the possibility that intelligence was related to age. The tests that he and Simon developed were thus graded by age: if a child was able to answer the questions answerable by the majority of children aged 8, but unable to answer the corresponding questions for children aged 9, she was said to have a “mental age” of 8.

IQ (Intelligence Quotient) was therefore defined as: IQ = 100 × (mental age / chronological age).

Plotting the distribution of these measurements (the number of individuals at each IQ score, for a given chronological age) one gets a “bell curve”, with most individuals falling in the middle (the middle area of the curve being defined as “normal”).
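
As a quick worked example of the quotient (with purely illustrative numbers): a child with a mental age of 8 and a chronological age of 10 scores 100 × 8/10 = 80, while a mental age equal to the chronological age gives the “normal” 100 at the centre of the bell curve.

```python
# Binet and Simon's ratio IQ: 100 * mental_age / chronological_age.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return 100 * mental_age / chronological_age

print(ratio_iq(8, 10))   # 80.0  -> left of the bell curve's "normal" middle
print(ratio_iq(10, 10))  # 100.0 -> the centre of the distribution
print(ratio_iq(12, 10))  # 120.0 -> right of it
```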

Modern tests of human IQ follow the same principles laid down by Binet and Simon: they ignore internal brain mechanisms (the “T” of intelligent machines) and are interested only in outcomes (the answers to the questions). In developing a universal machine IQ test that only tests and compares rational outcomes, we simply do to machines what humans already do to themselves.

Nevertheless, human IQ testing is riddled with controversy. Since its inception it has been noted that defining “normal” depends heavily on the statistical sample chosen for the measurement. For example, white, middle-class European children are better fed and better educated than poor black children in rural Africa. This difference in circumstances skews IQ measurements, because IQ testing does not factor in social conditions, which modern neuroscience has shown to have an enormous impact on brain development.

Notably, Binet and Simon’s approach was first criticized by the Russian psychologist L. S. Vygotsky, who drew a distinction between “really developed mental functions” and “potentially developed human functions”; IQ tests measure mostly the former. Since Vygotsky many have taken issue with IQ testing, most notably Howard Gardner, who suggested not one but seven different types of human intelligence, including linguistic, musical and mathematical intelligence.

Measuring machine IQ may also stumble upon disputable definitions of “normalcy”. As machines develop further, issues of cultural influence may creep in: will Japanese robots score higher marks because Japanese culture is more robot-friendly?

An interesting approach to a universal test of machine intelligence has been proposed by Shane Legg and Marcus Hutter. Trying to measure machine intelligence in a pure and abstract form, the two researchers suggest measuring the outcomes of intelligent agents’ performance in a probability game, based on which strategies should yield the best results and the biggest rewards over time.

Their suggestion appears viable in the context already defined, namely that we must be satisfied with measuring rational outcomes only and not ask the difficult “how” question. Sticking to AI objective 4, we can agree to define “universal intelligence” for machines in terms of acting rationally only.

Their proposition has an evolutionary dimension too: living creatures tend to seek rewards (food, mates, authority) while seeking out the best strategies over time. By applying Legg and Hutter’s probability game at various stages of development in machine intelligence, one can compare different machines now, as well as monitor the development of machines over time. If you worry about machines becoming more “intelligent” than humans in the future, Legg and Hutter’s measurements should provide ample warning of the forthcoming “Singularity”.
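
In spirit, Legg and Hutter score an agent by its expected reward summed over many environments, with simpler environments weighted more heavily. The sketch below is only a loose caricature of that idea: a Monte-Carlo average of rewards over two toy environments with made-up simplicity weights. The real definition ranges over all computable environments and weights them by Kolmogorov complexity, which is not itself computable.

```python
import random

# Loose caricature of the Legg-Hutter idea: weighted average reward of an
# agent over a set of environments, simpler environments weighted more.
# Environments, weights and agents are invented for illustration.

def coin_env(action: int) -> float:       # a trivially simple environment
    return 1.0 if action == 1 else 0.0

def noisy_env(action: int) -> float:      # a slightly "harder" environment
    return 1.0 if action == random.randint(0, 3) else 0.0

ENVIRONMENTS = [(coin_env, 0.75), (noisy_env, 0.25)]  # weight ~ simplicity

def universal_score(agent, episodes: int = 10_000) -> float:
    """Weighted average reward of `agent` over the toy environment set."""
    total = 0.0
    for env, weight in ENVIRONMENTS:
        rewards = [env(agent()) for _ in range(episodes)]
        total += weight * sum(rewards) / episodes
    return total

always_one = lambda: 1                     # a fixed strategy
guesser    = lambda: random.randint(0, 3)  # a random strategy

print(universal_score(always_one))  # close to 0.75 + 0.25 * 0.25 = 0.8125
print(universal_score(guesser))     # noticeably lower, mainly on the simple environment
```
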
Reference: Shane Legg and Marcus Hutter, “Universal Intelligence: A Definition of Machine Intelligence”, Minds and Machines, 17(4), 2007. Work supported by SNF grant 200020-107616.