Isaac Asimov, confronted with the problem of imagining future intelligent machines with potentially destructive capabilities, proposed his famous three laws of robotics. The first law forbids a robot from harming a human; the second compels it to obey human commands unless they conflict with the first law; the third demands that a robot protect its own existence unless doing so conflicts with the first two laws. Asimov later expanded the set with an extra law (the zeroth), in which the “human” of the first law is replaced with “humanity”.
It has been repeatedly debated to what extent Asimov’s laws could serve as a basis for building “friendly” AI systems, i.e. machine intelligence that will not harm its human creators. The question of how an AI “feels” about human beings becomes vitally important when AI surpasses human intelligence, or when AI is no longer under our control. Let us take these two cases in turn, starting with the latter.
It is very likely that AI is already beyond our control. We live in a wired world where highly sophisticated computer networks, some incorporating layers of AI systems, communicate without the intervention of human operators. Most foreign-exchange trading is done by computers. In 2010 a computer virus, allegedly created by an elite Israeli unit, attacked Iran’s uranium-enrichment facilities and rendered many of their centrifuges inoperable. The Stuxnet virus demonstrated how effective a cyberattack can be, as well as how vulnerable we are should intelligent programs “decide”, without the consent or instigation of human programmers, to, say, trigger a nuclear war.
Perhaps the only way to save ourselves from an impending global disaster would be to program an AI “defense” system: for example, roaming intelligent agents on the Internet checking for emerging patterns that signal “dumber” systems have entered a potentially perilous state. This “higher”, protective AI would have to be programmed with humanity’s preservation as its goal. Perhaps Asimov’s laws could form a basis for this programming. Unless, that is, the people who program this “cyberpolice” are the military; they, as we will see, may have a more sinister agenda.
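To make the idea slightly more concrete, here is a minimal, purely illustrative sketch of what such a pattern-watching agent might look like. Every name, the event source, and the threshold are hypothetical assumptions for the example, not a description of any existing system; the point is only that the agent watches a stream of activity and escalates to a human when something deviates sharply from its baseline.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyMonitor:
    """Illustrative monitor: flags activity that deviates sharply from a rolling baseline."""

    def __init__(self, window: int = 100, threshold: float = 4.0):
        self.window = deque(maxlen=window)   # recent activity measurements
        self.threshold = threshold           # deviations (in std devs) that count as "perilous"

    def observe(self, value: float) -> bool:
        """Record one measurement; return True if it deviates sharply from the baseline."""
        anomalous = False
        if len(self.window) >= 30:           # wait until a minimal baseline exists
            mu, sigma = mean(self.window), stdev(self.window)
            anomalous = sigma > 0 and abs(value - mu) > self.threshold * sigma
        self.window.append(value)
        return anomalous

# Hypothetical usage: feed in some traffic metric and alert a human on outliers.
monitor = AnomalyMonitor()
for reading in [10, 11, 9, 10, 12, 10, 11, 9, 10, 11] * 4 + [500]:
    if monitor.observe(reading):
        print("Pattern looks perilous; escalate to a human operator.")
```

Note that even this toy version keeps a human in the loop: the agent only raises an alert, it does not act. Whether a real “cyberpolice” would be built that way is precisely the political question the paragraph above raises.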
The second case, i.e. AI becoming smarter than humans, is much harder to analyse. How would a superintelligent machine “think”? Would it have a code of ethics similar to ours? Would it care if people died? The most honest answer to such questions is that we cannot possibly know.
Depending on how superintelligence evolves, it may or may not care about humans, or about biological life in general. Even if it did care, it might evolve goals different from ours. For instance, while we strive to preserve our natural environment for future generations, a superintelligent AI might decide that its reason for living is to dismantle Earth and the solar system and use the energy to increase its computing power. Who is to say which is the higher “ideal”?
We are used to being the arbiters of ethical reasoning on our planet because we have reshaped the planetary environment in order to become the top predators. That position would be compromised in a new bio-digital environment shaped by the will, and whim, of superintelligent AIs. If this happened, to what “higher authority” could we turn for justice?
Faced with such daunting technological dilemmas, hoi polloi may opt for the precautionary principle and cry out: “Stop AI research now!” Unfortunately, this may not be such an easy option.
“War is the father of all things,” said Heraclitus. The US military has been using drones for quite a while, and they have proven their operational worth in taking out terrorist operatives in northern Pakistan and elsewhere. As far as we civilians know, the current state of drone technology is strike by remote: a human officer decides when to release a deadly weapon. Nevertheless, it makes military sense to develop and deploy semi-autonomous, or fully autonomous, systems. One could persuasively argue that such systems would be less error-prone and more effective. Robot warfare has dawned.
Robotic warfare is riddled with ethical issues. Because the most hideous cost of war is the loss of human lives, a military power that can field a robot army, and so spare its own soldiers, will be less hesitant to go to war. What interests me most about robot warfare, however, is how military AI technology is developed away from the political and philosophical spotlights that scrutinize civilian research.
War seems to be an intrinsic part of our primate nature. Our cousin species, the chimpanzees, are known to stage wars in the Congo. History and anthropology show us that warring human societies have different needs than peaceful ones. Willingness to sacrifice one’s life on the battlefield may appear heroic to many but, as many poets since Euripides have shown us, it is ultimately pointless; an act of abject nihilism. Militarism is the ideology that sanctions imposing one’s will on another not by argument or persuasion, but by the application of superior firepower. This nihilistic worldview has no qualms about developing unfriendly AI in order to win.
Our only defense against a militaristic, anti-humanist worldview is to strengthen the international institutions that prevent us from going to war. Indeed, we must design systems for resolving crises in which war is not an option. A peaceful world where war is pointless will give us the time to determine, and direct, our technological advancement towards a humanistic future. Our future survival is a matter of political choice, and of applied game theory.
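As a toy illustration of that last point (the payoff numbers below are invented for the example, not drawn from any real analysis), a simple two-player game shows how stronger institutions, by making war costlier than peace, can shift the stable outcome from mutual conflict to mutual cooperation:

```python
from itertools import product

def nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a two-player game.
    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff)."""
    rows = {r for r, _ in payoffs}
    cols = {c for _, c in payoffs}
    equilibria = []
    for r, c in product(rows, cols):
        best_row = all(payoffs[(r, c)][0] >= payoffs[(r2, c)][0] for r2 in rows)
        best_col = all(payoffs[(r, c)][1] >= payoffs[(r, c2)][1] for c2 in cols)
        if best_row and best_col:
            equilibria.append((r, c))
    return equilibria

# Without strong institutions, striking first looks tempting (a prisoner's-dilemma-like trap).
weak_institutions = {
    ("peace", "peace"): (3, 3), ("peace", "war"): (0, 4),
    ("war", "peace"):   (4, 0), ("war", "war"):   (1, 1),
}

# With strong institutions, sanctions and arbitration make war strictly worse than peace.
strong_institutions = {
    ("peace", "peace"): (3, 3), ("peace", "war"): (2, 1),
    ("war", "peace"):   (1, 2), ("war", "war"):   (0, 0),
}

print(nash_equilibria(weak_institutions))    # [('war', 'war')]
print(nash_equilibria(strong_institutions))  # [('peace', 'peace')]
```

The mechanism, not the numbers, is the point: institutions change the payoffs, and changed payoffs change what rational actors do.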