21st Century Business Review Interview

“21st Century Business Review” is a monthly business journal published by the Nanfang Media Group as part of the 21st Century family of newspapers and magazines. Founded on 1 September 2004, it has by now attracted over 1 million readers with its “one journal + one website + app + WeChat account” all-round media coverage and its ability to produce videos, cartoons, and H5 content. 21st Century Business Review focuses on business strategies, logic, ideas, channels, and models. Through professional and exclusive commentary, it showcases the most advanced business models in the world and their implications for business in China. Here is the English text of my interview in 21st Century Business Review.

Would you please brief the readers on the origins of the idea of AI, and in which direction it will guide the future of mankind?

We have dreamt of non-human minds ever since we humans became conscious beings. Those minds were often disembodied – think of invisible spirits – or embodied in animals, trees, rivers, gods. All cultures have stories of robot-like creatures, and as technology advanced, those stories guided much of what engineers and scientists strove for: the creation of machines with intellectual capabilities, namely computers. Artificial Intelligence is the logical continuation of this age-old quest to create an artificial human-like being and, in the process, to understand more deeply what makes us human. We have now reached a historic milestone in this quest. We have created “specialized”, narrow Artificial Intelligence, which can act as a cognitive amplifier – a means to scale human intelligence exponentially. Equipped with such a technology, we have the potential to solve some of the most complex problems facing humanity, such as poverty, health and well-being, environmental degradation, energy, space exploration, and many more. If we use and develop AI wisely, the 21st century can be a century of science, prosperity and progress for all humanity.

In your opinion, philosophy should become a core course for science and engineering majors, for it can help locate the key to AI. Why?

For three main reasons. First, because philosophy is essentially thought-engineering; in other words, you need philosophy in order to examine your thoughts and ensure that they reflect the values you aspire to.

Secondly, because philosophy tests the limits of certainty by creating doubt, and doubt is essential for progress. If you are certain about something, your natural reaction is to defend your position. Scientists and engineers should always keep an open mind and invite criticism. We cannot build systems that could affect millions of lives while remaining blind and deaf to other ideas and criticism.

Thirdly, and most importantly, because scientists and engineers need philosophy in order to develop ethical systems. We need to have a balanced approach to problem-solving that includes the requirements for human well-being, safety, and happiness, and not just for better performance.

Warnings about AI mainly focus on employment, but in your view, if a machine were granted “true life”, what would be the most urgent threat to mankind in the future?

I do not think that AI is a danger to humanity; quite the opposite. I believe that this technology has the potential to accelerate human progress. However, like all powerful technologies, there is a danger of misuse. For instance, creating autonomous systems that can be used against enemies in war, by completely removing humans from the decision loop, could lead to a new arms race with devastating consequences. We are still far from developing human-level “general artificial intelligence”, but if, for the sake of argument, we imagine that one day we do indeed create such a system, then granting it “agency” would be the logical thing to do. In fact, we do not need to go that far in technological advancement in order to grant “agency” and “legal responsibility” to an intelligent machine. I can easily imagine a driverless car being a “legal entity”. We already have non-human entities with legal rights and responsibilities – they are called “companies”. Intelligent systems could be governed by the same laws of incorporation, including liability and tax laws.

Would you please speculate on the conditions under which the “AI singularity” might occur?

Advocates of the “AI Singularity” claim that human-like, general artificial intelligence is a certainty, and that it will come about within a couple of decades because of the exponential growth in computer power, the so-called “Moore’s Law”. In my book “In Our Own Image” I refute their arguments. One reason I do so is that I cannot see how we can possibly develop general artificial intelligence when we cannot yet define what intelligence is. In fact, it is not just a question of definition, but of understanding the nature of intelligence. We do not know why we are conscious, or how consciousness comes about in the brain. We have models and abstractions of neural processes, and some of them are already used in artificial deep neural networks to emulate machine learning. But we confuse “models” with “reality”. Our brains do not run software, which is the other big reason that makes me doubt the AI Singularity. The only way I can see us developing general artificial intelligence is by completely rethinking hardware, as I describe in my book, and making it resemble more closely our brain’s biological architecture. In other words, by developing “neuromorphic” systems without the current software-hardware dichotomy; for instance, using memristors and neuristors. But if we do manage to build such systems of some complexity, and those systems begin to show behaviours that resemble human consciousness (for instance, emotions, introspection, etc.), then indeed we would be reaching a singularity point of great significance in human history. But not in the way that the AI Singularity advocates describe, i.e. not in the sense that those machines will replace humans and take over the world, possibly destroying us in the process. Such intelligent machines will provide the key to understanding our own minds, and possibly expand them beyond our current imagination. They will be our allies, not our foes, on a new, epic journey of unravelling the deepest mysteries of the universe.
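To give a concrete sense of why memristors interest neuromorphic researchers, here is a minimal simulation sketch of the linear-drift memristor model proposed by Strukov et al. at HP Labs in 2008 (the parameter values below are illustrative assumptions, not from the interview). The point it makes is that the device’s resistance is a physical state shaped by the entire history of charge that has flowed through it – memory and processing live in the same substrate.

```python
# A minimal sketch of the linear-drift memristor model (Strukov et al., 2008).
# All parameter values are illustrative assumptions.
import math

R_ON, R_OFF = 100.0, 16000.0   # low / high resistance states (ohms)
D = 10e-9                      # device thickness (metres)
MU_V = 1e-14                   # dopant mobility (m^2 s^-1 V^-1)

w = 0.1 * D                    # initial width of the doped (conductive) region
dt = 1e-4                      # integration time step (seconds)

for step in range(15000):      # 1.5 cycles of a 1 Hz, 1 V sinusoidal drive
    t = step * dt
    v = math.sin(2 * math.pi * t)              # applied voltage (volts)
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # state-dependent resistance
    i = v / m                                  # current through the device
    w += MU_V * (R_ON / D) * i * dt            # linear dopant drift
    w = min(max(w, 0.0), D)                    # clamp to physical bounds

# The final resistance differs from the initial one: the device "remembers"
# the charge that has passed through it.
print(f"final resistance: {m:.1f} ohms")
```

Unlike a conventional program, there is no stored algorithm here to extract: the “computation” is inseparable from the device physics, which is exactly what makes such hardware a departure from the software-hardware dichotomy.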

With the development of AI technology, society will undergo tremendous changes. From the perspective of education, what suggestions do you have for parenting?

My first instinct is always to trust my child. Children grow up in a world that is different from the one in which their parents – in this case, my wife and I – grew up. When I was a child we did not have a television or a telephone. Children nowadays have a natural affinity with digital technologies, in itself proof of the agility of the human mind. Beyond trusting our children, we should encourage them to develop skills in three main areas. First, working with others as a team: human collaboration has always been the key to success, but in the AI era those who have developed good social and communication skills will be the natural leaders of tomorrow. Second, thinking synthetically, by which I mean being able to combine many perspectives in solving a problem. Lastly, the ability to continuously learn and relearn, for they, and the generations that follow, must be able to reinvent themselves several times in their lives.

In the future, how would you participate in and accelerate the development of AI technology?

Together with a team of colleagues, I share a vision to democratize AI development by opening it up to small teams, startups, and individuals. There are too many silos and obstacles in AI development today, particularly around the availability of the large data sets needed to train machine learning algorithms. This reality creates a very uneven playing field between large corporations and smaller firms. The field must be levelled if we aspire to make the most of human ingenuity, and I am currently working with those colleagues on formulating an innovative AI development platform that will do exactly that.


The fallacy of thinking of intelligence as software

During the Enlightenment the human body was thought of as a kind of clock, because the dominant technology of the day was mechanical engineering. Christiaan Huygens invented the pendulum clock in the 1600s, and in the following decades and centuries, all across Europe, the miraculous ticking of interconnected gears and springs felt akin to the periodic and cyclical nature of human biology, and indeed of the whole universe. God was thought of as an architect, or an engineer. Everything in the cosmos was placed in perfect relation to everything else; an idea referred to in philosophy as “determinism”. The human brain was mechanical too: it excreted thoughts – as other machines exhaled gases or fumes or fluids – and was powered by a mystical “soul”. This metaphor mutated by the late 20th century, as western societies rejected religion and adopted a new form of technology: computers.

Computers seemed to do “smart” things, like manipulating numbers, which until then was something that only humans were able to do. Computers did so by codifying a calculating process into a “program” that could then be “executed” on a machine. The program was called “software” and the machine “hardware”. The “smart” part of computing lay in the software, because that is where the knowledge of solving a problem resided. The hardware was important, of course, and necessary, but one could imagine all kinds of hardware, not necessarily built with silicon chips and electronics, but with billiard balls, light bulbs, paper clips, whatever. This curious juxtaposition of hardware and software led to the following conclusion: that we can engineer intelligent behaviour as long as we write the right programs (or “algorithms”); executing those algorithms is of secondary importance and independent of the physical substrate. As long as you had a smart algorithm you had intelligence, not unlike having a smart genie that you could then place inside any bottle, or lamp, you liked.
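The substrate-independence claim can be made concrete in a few lines of code. The sketch below is my illustration, not from the original essay, and all names in it are hypothetical: it separates the “knowledge” of sorting from the mechanism that physically carries out each step – any back-end that can perform a swap will do, which is precisely the genie-in-any-bottle intuition.

```python
# A toy illustration of the software/hardware dichotomy: the sorting
# "knowledge" (software) is written once, while the physical realisation
# of each swap (hardware) is an interchangeable back-end.
from typing import Callable, List

def bubble_sort_program(swap: Callable[[List[int], int, int], None],
                        data: List[int]) -> List[int]:
    """The 'software': sorting knowledge, indifferent to how swaps happen."""
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                swap(data, j, j + 1)
    return data

def silicon_swap(d: List[int], a: int, b: int) -> None:
    """One 'hardware': an ordinary in-memory exchange."""
    d[a], d[b] = d[b], d[a]

def relay_swap(d: List[int], a: int, b: int) -> None:
    """Another 'hardware': imagine clacking relays or billiard balls."""
    print(f"clack: exchanging positions {a} and {b}")
    d[a], d[b] = d[b], d[a]

print(bubble_sort_program(silicon_swap, [3, 1, 2]))  # [1, 2, 3]
print(bubble_sort_program(relay_swap, [3, 1, 2]))    # same result, noisier bottle
```

The algorithm behaves identically on both back-ends; the essay’s argument is that it is a mistake to assume minds decompose in the same way.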

Thinking of intelligence as something independent of the physical substrate (the “hardware”) is an idea that originated in computing and nowadays dominates our everyday thinking. We use the computing metaphor in everyday speech as if it were a given. Our brains are the “hardware”, and our minds the “software”. We think of Artificial Intelligence as computers becoming more and more “intelligent” because of algorithms.

The computer metaphor has led people like Stephen Hawking and Max Tegmark to suggest that the future of humanity is to transfer our intelligence and consciousness to computers; to “upload” our consciousness and free ourselves from the frailty and perishable nature of biological bodies, thus bequeathing the keys of biological, and cosmic, evolution to our computer descendants. This is the main thesis of Life 3.0, the new book by Max Tegmark, although the idea is not new and was also explored in “The Anthropic Cosmological Principle” by John Barrow and Frank Tipler, published in 1986.

But of course, such thinking is fallacious. That is because these otherwise very smart people mistake the computer metaphor of software versus hardware for the real thing. Like people in the Enlightenment who thought of the human body as a clock powered by an immaterial soul, Tegmark et al. regard the self as an immaterial algorithm trapped inside a biological prison. Such thinking is also irrational because it has not been substantiated by any scientific evidence. In fact, the contrary is true: neuroscience and neurobiology show that intelligence is inextricable from the physical aspects of the brain. “We” are not an algorithm. We are unitary biological creatures.

Confusing metaphor with reality would be unremarkable were it not for how it frames the current debate on Artificial Intelligence. When powerful, successful and highly intelligent people adopt the metaphor when speaking publicly about the future of AI, they lend validation to a fallacy that could have serious consequences for the economy, society and politics. Artificial Intelligence is not intelligence but an imitation of intelligence. It is imitation because it fools us into believing it is the real thing. This idea of “imitation” has been fundamental to AI from the very beginning, put forward by none other than Alan Turing. In his 1950 paper “Computing Machinery and Intelligence” he described the “imitation game”: how a computer could fool us into believing it was human.

Once we adopt the computer metaphor without thinking, we render ourselves incapable of distinguishing between reality and the imitation of reality. As a result, we talk about AI “ethics”, or AI “bias”, as if they were real. They are not. Machines cannot have ethics, or uphold values, or have opinions or preferences. These words have meaning only for creatures like us, with the capacity for self-reflection. It is because we can examine the content and meaning of our thinking that we can decide between right and wrong. Self-reflection is a property of biology. Machines cannot have self-reflection, and that is what will forever differentiate them from us.