Is AI Ready to Compete with the Human Workforce?

Part 2: The emotional hurdle

AI systems lack anything even remotely like our emotional systems. How are they going to successfully co-exist and interact with the humans around them?

We are emotional beings before we are rational ones. Most of what we feel, think and do starts on an emotional level before it reaches our conscious, rational mind.

When we communicate, we don’t just communicate information. We also communicate feelings, complex emotions and social signals, all needed to help us build and maintain relationships in our social environment. And we use our emotions to make sense of the world around us. To collaborate with us humans, AI systems must be able to understand why and how we communicate.

But understanding the literal content of what people are saying is difficult enough by itself.

Current AI applications achieve this by operating in a limited universe of discourse: they recognise only the information they can actually respond to. This means that emotions, context, innuendo and other socio-emotional signals are filtered out. AI is generally blind to irony, humour and sarcasm. Applications like Alexa, Siri and Google Assistant may have become quite good at understanding direct commands and questions, but they still don’t make very good conversation partners. They may understand what we say, but remain ignorant of how we say it and why we say it.
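To make the idea of a limited universe of discourse concrete, here is a minimal sketch of a toy intent matcher in Python. It is not how Alexa, Siri or Google Assistant actually work; the intent names and trigger phrases are invented for illustration. The point is simply that anything outside the fixed list of intents, including tone and sarcasm, never registers at all.

```python
# Minimal sketch of a fixed "universe of discourse": a toy intent matcher
# that only recognises commands it has answers for and silently drops
# everything else (tone, sarcasm, emotional subtext included).
# Intent names and phrases are illustrative, not taken from any real assistant.

from typing import Optional

INTENTS = {
    "set_timer": ["set a timer", "start a timer"],
    "play_music": ["play some music", "put on music"],
    "weather": ["what's the weather", "will it rain"],
}

def match_intent(utterance: str) -> Optional[str]:
    """Return the first intent whose trigger phrase appears in the utterance, else None."""
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return None  # anything outside the fixed domain is simply not understood

print(match_intent("Could you set a timer for ten minutes?"))    # -> set_timer
print(match_intent("Oh great, rain again. Just what I needed.")) # -> None: the sarcasm is invisible
```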

Even relatively simple commands and questions can expose the AI’s lack of human understanding. And as the domains AI applications operate in broaden and deepen, the trick of restricting recognition to a narrow domain will no longer work. The broader the universe of discourse, the more room there is for ambiguity and vagueness to confuse the communication.

Will AI really understand human beings?

For humans to trust AI systems, those systems will have to behave in ways that engender and maintain trust. Trust is a human emotion. It is not a rational decision based on an analysis of prior interactions and results. Two things engender trust between people:

  • recognising familiar motivations, intentions and character traits in the other person;
  • confidence in our ability to predict the other person’s future actions.

When we recognise the familiar in someone else, we see them more as ‘one of us’ rather than ‘one of them’. This makes us feel closer to them. The more confident we are in predicting someone’s behaviour, the more relaxed and open we become with that person.

Trust is never unconditional. Our trust can diminish quickly when a trusted person behaves in a completely unexpected manner, or when their behaviour contradicts the mental image we have of their motivations and character. It takes time to build up our trust in someone; it takes only one big surprise to break it completely.

Designing AI systems that engender lasting trust in people is going to be a real challenge.

We find simple systems that do only a limited number of things easy to deal with. We simply assume they have no other drive than to perform the tasks they were designed for. That doesn’t make simple systems ‘one of us’, but it also doesn’t make them ‘one of them’: they are just ‘things’. We don’t have to read too much intention into their actions. And as long as such systems are limited in what they can do, they are fairly predictable, too. The more consistently they do what we expect them to do, the easier it is to trust them.

When AI systems become more complex and more versatile, this changes. We tend to see complex behaviour as a sign of ‘person-hood’. We will want to see motivations, intentions and character traits in them. We will look for something to help us feel we can predict the future actions of such systems. But the actions of AI systems are not driven by human emotions and motivations, which can make us feel they behave in strange and unpredictable ways.

To solve this, we may have to design AI systems with the ability to mimic human character traits. They will also need to be able to ‘explain’ their actions in a way that makes sense to people. They may even need some kind of emotional system to make their behaviour more relatable to us.

These are not trivial tasks. AI systems do not ‘reason’ like we do. Some of their behaviour is preprogrammed by their designers. Other behaviours ‘emerge’ through complex learning algorithms. Explaining such behaviour is in itself a complex task. We may need to develop specialised forms of AI to do this. And adding emotions to AI systems may have unintended consequences. How would we deal with the mood swings of an emotional vehicle?
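To give a sense of what an AI ‘explaining’ its actions could look like in the most favourable case, here is a deliberately trivial Python sketch. The feature names and weights are invented, and the decision rule is hand-written, which is exactly why an explanation can be read straight off the weights. Behaviour that emerges from complex learning algorithms has no such readable structure, which is what makes real explainability hard.

```python
# A deliberately trivial sketch of an AI "explaining" an action.
# Feature names and weights are invented for illustration; because the
# scoring rule is hand-written, the explanation falls out of the weights.

WEIGHTS = {"obstacle_ahead": -5.0, "passenger_in_a_hurry": 1.0, "speed_over_limit": -3.0}

def decide_and_explain(features: dict) -> tuple:
    # Score the situation and keep each feature's contribution for the explanation.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    action = "slow down" if score < 0 else "keep going"
    top_reason = max(contributions, key=lambda name: abs(contributions[name]))
    return action, f"I chose to {action} mainly because of '{top_reason}'."

action, explanation = decide_and_explain(
    {"obstacle_ahead": 1, "passenger_in_a_hurry": 1, "speed_over_limit": 0}
)
print(explanation)  # I chose to slow down mainly because of 'obstacle_ahead'.
```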

Work is being done to give AI the capability to ‘recognise’ human emotions based on facial expressions, tone of voice and body language. I have no doubt this will become quite sophisticated over time. What I do doubt, however, is whether ‘rationally’ recognising emotions is enough to truly ‘understand’ them.

Being able to detect and classify emotional markers in a stream of communication may be the easy part. Humans are good at reading and interpreting each other’s emotions and intentions. Not perfect, but good enough to build and maintain social relationships with each other. We can do this because we share the same biology, including the emotional systems we use to make sense of the world. This allows us to empathise: we can imagine what it feels like to be in the other person’s shoes. That helps us read much of the unspoken subtext of what others are communicating. It helps us understand all the things that are never said outright but all the more strongly implied.
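As a rough illustration of why marker detection is the easy part, here is a minimal Python sketch that spots surface emotional markers in text. The word lists are invented for illustration, and real systems use trained classifiers rather than keyword lists, but the limitation is the same: the markers found on the surface say nothing about the subtext underneath.

```python
# Minimal sketch of the "easy part": spotting surface emotional markers in text.
# Word lists are invented for illustration; real systems use trained classifiers,
# but surface markers still say nothing about the unspoken subtext behind them.

EMOTION_MARKERS = {
    "joy": ["great", "love", "wonderful", "thanks"],
    "anger": ["furious", "hate", "annoyed"],
    "sadness": ["sad", "sorry", "miss"],
}

def detect_markers(utterance: str) -> list:
    """Return (emotion, marker word) pairs found in the utterance."""
    text = utterance.lower()
    return [
        (emotion, word)
        for emotion, words in EMOTION_MARKERS.items()
        for word in words
        if word in text
    ]

# The markers say "joy"; a human hears the frustration instead.
print(detect_markers("Great. The train is cancelled again. I just love Mondays."))
# -> [('joy', 'great'), ('joy', 'love')]
```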

Without an emotional system to ‘feel’ the significance of human emotions, it will be hard for AI systems to completely ‘get’ the subtleties and hidden meaning of human communication. And without an emotional system to motivate their actions it will be equally hard for us humans to ‘get’ – and therefore to trust – the AI systems we have to interact with.

Note: this post is part of a series. The other parts are the Introduction, The Complexity Hurdle and The Social Hurdle.

Disclosure

The views and opinions in this analysis are my own and do not represent positions or opinions of The Analyst Syndicate. Read more on the Disclosure Policy.
