AI is a transformative technology with enormous economic, scientific and social benefits, one that is driving the fourth industrial revolution. Advanced machine learning algorithms are already dramatically changing how private and public institutions function, by automating numerous cognitive processes, generating new efficiencies, and enabling deeper, data-derived insights and predictions. Despite these benefits, there is a growing concern that AI systems are alienating the wider public by transferring power from humans to machines. The geopolitical implications of this social alienation may impede the advance of AI, particularly in democratic states where public opinion matters. It is therefore vital to understand the reasons behind this growing distrust and to examine ways to ameliorate the risk of a social backlash.
The root of the problem is that the philosophical foundation of AI rests on the idea of “machine autonomy”. This idea was explicitly stated in the original AI manifesto for the historic Dartmouth Workshop of 1956 that founded Artificial Intelligence: “[the study of AI is] to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves […]”. Since then AI has advanced in juxtaposition to – and arguably in competition with – humans. Take for instance just two of AI’s historical milestones: the 1997 victory of IBM’s Deep Blue over Garry Kasparov in chess, and the 2016 victory of DeepMind’s AlphaGo over Lee Sedol in Go. In both cases AI’s “triumph” was beating the best of humans at their own game.
There are very powerful arguments in favour of machine autonomy. From a utility perspective, autonomous AI systems are key to exploring environments where human survival is either very challenging or nigh impossible – for example deep space, the deep ocean, or environments exposed to high radiation. Moreover, autonomous AI systems are necessary for complex pattern-recognition problems in big data – for example astronomical, biological or financial data. The recent success of DeepMind’s AlphaFold in predicting the structures of more than 200 million proteins from some 1 million species, covering almost every known protein on our planet, is nothing short of spectacular.
Autonomy’s dilemma
While autonomous AI systems – such as AlphaFold – make impressive contributions to science and society, a multitude of ethical problems arise from the very nature of machine autonomy. This is because AI systems are mostly designed to solve the so-called “canonical problem in AI”: a solitary machine confronting a non-social environment. In effect, autonomous AI systems are like aliens from another world landing on Earth, where humans are the obstacles. This situation often results in misalignment between machine and human objectives, especially when those objectives differ significantly between nations, and even within them. Thus, whenever an AI algorithm takes an autonomous decision that may affect the wellbeing of a human being, an ethical problem arises. Think, for example, of an autonomous car deciding in a life-or-death situation, an algorithm that decides on someone’s parole based on their probability of reoffending, or an intelligent system that determines the issuance of a loan or an insurance policy. As more and more autonomous AI systems are embedded into IT processes, such ethical problems will multiply and public trust will be eroded. We are faced with a classical principal-agent problem, where we, the human principals, may have different incentives and priorities than our machine agents. To solve this problem we need to rethink AI systems so that their goals align with ours. This can only happen if those machine intelligence systems are embedded into human systems – social, economic or political. In such use cases “Autonomous AI” needs to become “Cooperative AI”.
Human-machine cooperation
Researchers have identified four elements of cooperative machine intelligence that are necessary for embedding AI systems into human society (1):
- Understanding: whereby the consequences of machine actions are taken into account;
- Communication: transparency and the sharing of information in order to understand behaviour, intentions and preferences;
- Commitment: the ability to make credible promises when needed for cooperation;
- Norms and institutions: the social infrastructure – such as shared beliefs or rules – that reinforces understanding, communication and commitment.
Examples of existing cooperative AI are collaborative industrial robots and care robots working alongside humans, or personal assistants that help us schedule our work more efficiently. Key to developing cooperative AI is implementing iterative interactions with humans while executing a task. Training such AI systems therefore requires a social environment, for instance a multi-player game, or a human-machine dialogue while training a language-understanding model. Such “cooperative” AI systems tend to augment, rather than replace, human actors. Designing them requires a multi-disciplinary approach to avoid a purely engineering “tunnel vision”. For example, a social network where autonomous AI algorithms optimize the serving of content by maximizing “likes” will result in echo chambers and polarization. Redesigning the optimization process to include other factors, for example exposure to opposing views, would drive different, and hopefully better, social and political outcomes, as sketched below. Policy makers should therefore encourage cross-disciplinary research that brings psychology, sociology, game theory, biology, anthropology and political science together with AI research. This is vital for designing cooperative AI systems that optimize for socially accepted objectives.
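To make the idea concrete, here is a minimal, hypothetical sketch of such a redesigned objective: a feed-ranking score that blends predicted engagement with a bonus for items expressing viewpoints far from the user’s own. The names, weights and stance scores are illustrative assumptions, not any real platform’s API.

```python
# Hypothetical sketch: instead of ranking feed items purely by predicted
# engagement ("likes"), blend in a term that rewards exposure to viewpoints
# the user rarely sees. All names and weights are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    predicted_engagement: float  # e.g. probability of a "like", in [0, 1]
    stance: float                # -1.0 .. +1.0 position on some contested topic

def rank_feed(items, user_stance, diversity_weight=0.3):
    """Score items by engagement plus a bonus for opposing viewpoints.

    diversity_weight = 0 reproduces pure engagement optimization;
    larger values trade some engagement for exposure to differing views.
    """
    def score(item):
        # Distance between the item's stance and the user's own stance,
        # normalized to [0, 1]; far-away stances earn a diversity bonus.
        opposing_exposure = abs(item.stance - user_stance) / 2.0
        return ((1 - diversity_weight) * item.predicted_engagement
                + diversity_weight * opposing_exposure)

    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    feed = [
        Item("a", predicted_engagement=0.9, stance=0.8),   # agreeable, engaging
        Item("b", predicted_engagement=0.6, stance=-0.7),  # opposing view
        Item("c", predicted_engagement=0.7, stance=0.1),   # neutral
    ]
    for item in rank_feed(feed, user_stance=0.8):
        print(item.item_id)
```

With diversity_weight set to zero this reduces to the familiar engagement-only ranking; the single parameter is where a design (or governance) choice about socially accepted objectives enters the optimization.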
Moreover, human-centric design decisions for AI systems need appropriate governance. Embedding human-centred governance can help solve the current principal-agent problem in AI and realign objectives and incentives between humans and machines.
ChatGPT and the dilemma of AI governance
ChatGPT’s phenomenal success is an example of how difficult it is for society to adopt advanced AI technologies. ChatGPT reached one million users in just five days after its launch. And it is seemingly everywhere. People are using it to produce essays and articles which they post on social media. Rumours from colleges suggest that hundreds of student essays have been written by ChatGPT, and that professors wonder whether they should grade students’ essays at all. The examples are legion across many domains.
It seems that the availability of a powerful AI tool such as ChatGPT has put social trust in peril. AI can help us produce anything connected to text effortlessly, but not necessarily truthfully. Text-generating AI systems are particularly important because we spend many hours browsing web pages that consist mostly of text. Never in human history has text been so important. The temptation to use AI-generated text to mislead and misinform others appears to be great. Perhaps human morality has not evolved quickly enough to catch up with AI development. There are voices calling for ChatGPT and its ilk to be removed immediately from the public domain.
Arguments against the open availability of powerful AI tools to everyone imply that a central authority, such as a government regulator, must take control of these tools and act as a filter that decides who may use them and under what circumstances. But can we trust these central governors to defend our common interest? In theory, proper regulation can act as a facilitator of technological advancement, but very often it merely increases the cost of entry for challengers (think innovative startups that are not well capitalized) and protects incumbents (think Big Tech). Moreover, regulation is influenced by powerful lobbies and often ignores the interests of citizens at large. AI is the defining technology of the 21st century. Keeping it out of the hands of the many, while supporting the interests of a small ultra-wealthy minority, will create more social and economic inequality, with potentially catastrophic consequences for our societies and democratic political systems.
How, then, can we balance the enormous value of AI, applying effective ethical controls on its use while also distributing that use as widely as possible?
How to implement decentralised, democratic AI governance
One way to square the circle of the AI governance dilemma is to implement feedback loops in which human communities act as governors of AI systems. Such an idea is currently being researched by Voxiberate, a startup that I have founded, which is developing tools for participatory democracy on the web using a citizens’ assembly model and semantic-clustering AI algorithms. In a typical use case of community-based AI governance, a human community (e.g. the citizens of a smart city) may assess an AI system’s performance and outcomes, and decide on improvements or changes. By applying the four principles of Cooperative AI, those human governors are augmented by AI systems so that they better understand the implications of machine actions; for example, by an AI system that monitors human deliberations and clusters related perspectives to enable dialogue and consensus (a toy example of such clustering is sketched below), or an AI system that personalizes the learning needed to bridge information and knowledge asymmetries. These internal feedback loops between AI and humans are then embedded into a wider feedback loop, whereby the AI-augmented human community takes decisions on the further development and evolution of the AI systems. Participatory democratic methods and web-based tools, such as the ones that Voxiberate is developing, are of critical importance so that everyone in a community is fairly represented in the decision-making process of AI governance. In practical terms, and given that AI systems are trained on massive data sets, human communities may – for instance – decide how the data are collected and processed, how privacy and liberty are protected, and which features to prioritize based on human social and cultural values.
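As an illustration only, the following sketch shows the kind of perspective clustering described above: grouping participants’ comments by topical similarity so that facilitators can see which viewpoints recur. It uses simple TF-IDF vectors and k-means as stand-ins; it is not Voxiberate’s actual pipeline, and a production system would likely use richer sentence embeddings and a tuned choice of cluster count.

```python
# Toy sketch of clustering deliberation comments into related "perspectives".
# Illustrative only: TF-IDF + k-means stand in for a real semantic-clustering
# pipeline, and the comments and cluster count are made-up examples.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def cluster_perspectives(comments, n_clusters=2):
    """Return a mapping of cluster label -> list of comments."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(comments)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vectors)
    grouped = {}
    for comment, label in zip(comments, labels):
        grouped.setdefault(label, []).append(comment)
    return grouped

if __name__ == "__main__":
    comments = [
        "The new bus lanes reduce traffic and should be expanded.",
        "Public transport investment is the best way to cut congestion.",
        "Parking fees are already too high for local shop owners.",
        "Higher parking charges will hurt small businesses downtown.",
    ]
    for label, group in cluster_perspectives(comments).items():
        print(f"Cluster {label}: {group}")
```

Grouping comments this way does not decide anything by itself; it simply surfaces the recurring viewpoints so that the human participants can deliberate over them and reach consensus.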
Although autonomous AI systems are important and must be advanced further, the nature of intelligence is evidently social; this is true across all living species on Earth, not only humans. AI research must therefore expand its scope beyond machine autonomy and into human-machine collaboration as well. As we make further progress in developing intelligent machines, we must keep this idea in mind and embed AI systems into human society in a democratic, collaborative and productive way.
Reference
(1) A. Dafoe, Y. Bachrach, G. Hadfield, E. Horvitz, K. Larson & T. Graepel, “Cooperative AI: machines must learn to find common ground”, Nature, Vol. 593, 6 May 2021, pp. 33–36.