In the film Avengers: Age of Ultron, Tony Stark (aka Iron Man) and Bruce Banner (aka the Hulk) develop a powerful artificial intelligence to perfect Stark’s global defence system. The AI, Ultron, immediately decides that the only good way to defend humans is by, well, destroying them. Skynet, the AI that wakes up in the Terminator movies, arrives at a similar conclusion, which perhaps gives new meaning to the old saying “Great minds think alike.” Is the human race really racing down the road to its own extinction through the engineering of AIs that are smarter than we are? In his new book, In Our Own Image, AI expert Zarkadakis explores this and related questions with remarkable ingenuity, clarity and breadth, weaving together a tapestry of material drawn from a range of disciplines—not only computer science but history, philosophy, psychology and neuroscience. We have already created smart machines, but we are far from cracking the big nut, consciousness—and not, he adds, because this cannot be done but because we have been slow on the engineering side. Neuroscience is revealing that consciousness results from an integration of information flowing in complex loops from multiple parts of the brain to the neocortex. In theory, we can build circuits that work the same way, Zarkadakis says, and the “neuristors” and other so-called neuromorphic devices invented in recent years are gradually moving us in this direction. He does a particularly good job answering one of the most basic questions about AI: Why are we trying so hard to create artificial minds when we have so many real ones right at hand? He argues that we are driven to do so by ancient, unconscious tendencies to imbue inanimate objects with humanlike spirits. 
We have created totems for thousands of years, and praying to them has given us a feeling of control over our lives; the ultimate expression of these tendencies would be the creation of an inorganic object that we can truly control, one that perfects human abilities. The problem here is that a split second after we have created that entity, it will, like Ultron, almost certainly transform itself into a much more powerful entity over which we have no control. When that first AI wakes up into a state of humanlike consciousness, it will probably be concerned about its survival, and so its first act might just be to upload itself to the Internet. Zarkadakis notes that physicist Stephen Hawking and others have issued dire prognoses about what will happen next, but he suggests that what follows is “simply unpredictable”—and little more than a matter of faith at this point. The bottom line is that inexorable, largely unexamined forces are driving us at lightning speed toward a pivotal moment for our species. Let’s examine this process, Zarkadakis says, rather than mindlessly allow it to overtake us. Robert Epstein, Scientific American MIND (July 2016)

With sweeping scope, Zarkadakis approaches artificial intelligence from historical, evolutionary, and philosophical angles. He ultimately argues that the real risk of A.I. may not be that it will kill us, but that it could take away what makes us human. Slate (01/04/2016)

George Zarkadakis interweaves sci-fi visions with explorations of the philosophy, technology and deep history of artificial super-intelligence (ASI). An AI researcher before turning to writing, he demonstrates how the goals and ambitions of the technology industry have been shaped by centuries of “successive metaphors and conflicting narratives of fear and love” — from golems to Faust and Frankenstein — and how they might be misleading us. We have an innate tendency to anthropomorphise, Zarkadakis argues, and this is, therefore, how we try to make sense of our technology. We imagine humanoid robots such as Marvin or Arnold Schwarzenegger’s Terminator; we imagine we are fulfilling the ancient dream of creating a creature in our own image. But an ASI will be far from human: it will not share our million-year evolutionary history, nor be limited by a confined flesh-and-blood brain. Who knows what its goals and values will be, or how it will regard us humans — perhaps as nothing more than handy bags of carbon that it could use for some higher purpose of its own? Financial Times (20/03/2015)

Advances in computers have made artificial intelligence a new hot topic for most observers—but not science writer and futurist Zarkadakis, who maintains that it is an ancient human obsession. Combining enthusiasm, scholarship, and lively prose, the author, who has a doctorate in AI, points out that as soon as Palaeolithic man became self-aware and realized that his companions were also thinking individuals, he took for granted that animals, trees, and even inanimate objects possess human attributes. In the first third of the book, Zarkadakis delivers an ingenious history of our fascination with nonhuman entities, such as ancient religious totems, which were regarded as sentient, and Pygmalion, golems, medieval mechanical automata, Frankenstein, robots, and a torrent of movies, including Fritz Lang’s Metropolis (1927), Forbidden Planet (which the author watched as a child, an event that “changed my life forever”), Star Wars, Blade Runner, The Matrix, and Her. Having described the reality, the author then moves on to theory. Some thinkers and scientists and most laymen believe that the mind is immaterial. If so, “how can we ever hope to construct a material computer with a soul? How can we force mindless electrons inside computer chips to become self-aware?” Zarkadakis inclines to the opposing view that the mind is an emergent property of living tissue. Whatever billions of neurons and their trillions of connections can accomplish will eventually emerge from the right software. He does not conceal his excitement as he recounts the history of computing, research that is recording what happens in brains as they observe, decide, think, and feel, and new approaches to programming and design that are already turning out products that, if not yet intelligent, seem awfully clever. A delightfully lucid combination of the history, philosophy, and science behind thinking machines. Kirkus Reviews (05/01/2016)

Science writer Zarkadakis, armed with a Ph.D. in Artificial Intelligence (AI) and an eclectic tech industry background, rigorously and richly weaves together narrative threads on technology, philosophy, and literature to provide a fascinating history of AI. While many published studies of the human/machine relationship have tended to focus on one development or invention, specialists will recognize that Zarkadakis has left no cybernetic stone unturned—Charles Babbage, Ada Lovelace, Alan Turing, René Descartes, George Boole, Norbert Wiener, and Jacques de Vaucanson all play significant roles in this history. In doing so, Zarkadakis provides the most comprehensive history of AI for our digital age. With a rare combination of literary know-how and scientific knowledge, he demonstrates a keen ability to convey scientific, philosophical, and technical expertise. Zarkadakis passionately, yet carefully, leads readers chronologically through the development of key concepts in the understanding of mind and intelligence. While the book lacks analysis of AI from non-Western perspectives, particularly Japan’s influence on cybernetic thought, Zarkadakis deftly addresses the West’s obsession with the development of artificial beings. By the conclusion of this highly accessible work, Zarkadakis convincingly posits a future in which “post-humanism will have morphed into trans-humanism,” showing how a romance with AI will present humans with a daunting dilemma. Publishers Weekly (January 2016)

George Zarkadakis takes us on a fantastic journey through the cultural origins of robots, through philosophy, neuroscience and the history of computing… A book that, once in hand, is impossible to put down! Les Temps (20/04/2014)

This is not the only book on artificial intelligence, but it might be the best informed and the best. Providence Journal (10/01/2016)