A recent machine-learning study reported a high degree of accuracy in machines judging the character and intentions of humans. Mario Rojas and colleagues at the University of Barcelona, together with researchers at Princeton University’s Department of Psychology, developed software that can learn to “read” human traits from human faces. The researchers trained their algorithm on nine trait categories (attractive, competent, trustworthy, dominant, mean, frightening, extroverted, threatening and likable). Recognition rates were highest (over 90%) for “dominant”, “mean” and “threatening”. Notably, these are the traits that seem to play the most crucial role in structuring hierarchies in human societies. As quoted in Science Daily, Rojas said: “The perception of dominance has been shown to be an important part of social roles at different stages of life, and to play a role in mate selection.”
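The learning setup described above, mapping face-derived features to a trait judgment, amounts to ordinary supervised classification. The sketch below is purely illustrative: the synthetic “face” feature vectors and the plain logistic-regression classifier are stand-ins, not the appearance or structural models the researchers actually used.

```python
import numpy as np

# Synthetic stand-in data: pretend each face is a 20-dimensional feature
# vector (e.g. landmark distances or appearance descriptors), and the label
# is a binary human judgment for one trait, say "dominant" vs. not.
rng = np.random.default_rng(0)
n, d = 400, 20
w_true = rng.normal(size=d)          # hidden rule generating the judgments
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)   # 1 = judged "dominant", 0 = not

# Train/test split.
X_tr, y_tr = X[:300], y[:300]
X_te, y_te = X[300:], y[300:]

# Logistic regression fitted by gradient descent on the log-loss.
w = np.zeros(d)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X_tr @ w)))      # predicted probabilities
    w -= lr * X_tr.T @ (p - y_tr) / len(y_tr)  # average gradient step

# Held-out accuracy: fraction of unseen "faces" whose trait judgment
# the learned model predicts correctly.
p_te = 1.0 / (1.0 + np.exp(-(X_te @ w)))
accuracy = float(np.mean((p_te > 0.5) == y_te))
print(f"held-out accuracy for 'dominant': {accuracy:.2f}")
```

On this noiseless synthetic data the classifier recovers the hidden rule almost perfectly; the paper’s reported 90%+ rates for some traits are, of course, over real human judgments with real facial features.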
Making machines understand human traits will help us develop better interactive systems. Taken to their apogee, embedded systems such as these will equip the humanoid robots of the future to interact naturally with humans, facilitating the harmonious incorporation of artificial beings into human societies. Or won’t they?
This latest research news is part of a large worldwide effort that aims to move Artificial Intelligence beyond logical reasoning and problem-solving and into the realm of emotional understanding. This is crucial: human beings are ruled much more by emotion than by pure logic. Notwithstanding Plato’s aphorism that poets should be banned from a perfect society (he made an exception only for Homer), our lives and thoughts are better expressed through poetic vision and works of art, or crimes of passion if you prefer a darker side, than through cool “Mr. Spock-like” reasoning. In a future world where artificial and non-artificial beings intermingle to form complex relationships, understanding each other is a sine qua non.
But there is also a caveat. Furnishing artificial beings with emotional reasoning assumes a sort of “mirror” that reflects our traits onto them. However, there are several issues with mirrors which must not be overlooked (pun not intended). Looking at a robot while pulling a face of, say, meanness, we would expect it to understand the trait and react accordingly, hopefully by trying to appease us, cowering and wagging its tail if possible. This expectation assumes that we programmed the robot with a “dog instinct”, so that it knows a priori who is the master and who the slave. But what happens if we forget to do so? What happens if the robot, a mirror of our humanity, pulls the same face back?
Journal Reference: Mario Rojas Q., David Masip, Alexander Todorov, Jordi Vitrià. Automatic Prediction of Facial Trait Judgments: Appearance vs. Structural Models. PLoS ONE 6(8): e23323, 2011. DOI: 10.1371/journal.pone.0023323