Facebook develops empathetic AI for better private chats


Conversational AI, the ability to converse with a machine, is driving a revolution in human/machine collaboration. Language is unique to humans. We use it to convey information that is rich, nuanced and amplified by empathy: our ability to imagine someone else's experience. Humans find it easy to infer someone's emotional state from a statement and to answer in an appropriate way.

All the major platforms, Facebook included, have conversational AI research programs. Now the company has released new research and a new dataset for training emotionally aware chat systems. The goal is to predict the most empathetic and emotionally appropriate response. According to Facebook, models retrained on this dataset give more empathetic responses (judge for yourself from the examples below). The new dataset and benchmark are relevant for companies developing chat systems, or any AI, that need to understand human emotional context.

When appropriately designed and deployed, AI can be useful in applications that require recognizing emotions, whether through voice or facial expressions (although this is currently under scrutiny). This research extends emotional AI to text-based chat. Most recent, powerful language models are trained on vast amounts of barely curated text scraped from the web or from social media conversations. Facebook's researchers found that training datasets derived from social media chat are not necessarily useful for messaging, because public social media content plays out in front of large "peripheral audiences," whereas messaging involves people sharing more intense and negative emotions through private channels. Models trained on this data have given callous or aggressive responses when used in more spontaneous internet conversations. Building a chat system that exhibits empathy reliably is therefore a desirable but, so far, elusive goal.

This research does two things: it delivers a new dataset (25,000 labeled statements and responses) and provides empirical evidence that models trained on it respond in a more empathetic way.
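For readers who want to explore the data themselves, here is a minimal sketch of loading it and inspecting a single record. It assumes the dataset is available on the Hugging Face Hub under the name "empathetic_dialogues" (the most common public release) and uses that release's field names, which may differ from Facebook's raw download.

```python
# Minimal sketch: load EmpatheticDialogues and look at one record.
# Assumes the Hugging Face Hub release named "empathetic_dialogues";
# the field names ("context", "prompt", "utterance") come from that release.
from datasets import load_dataset

rows = load_dataset("empathetic_dialogues", split="train")

example = rows[0]
print(example["context"])    # the emotion label the speaker was given, e.g. "sentimental"
print(example["prompt"])     # the speaker's own description of the situation
print(example["utterance"])  # a single turn of the speaker/listener conversation
```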

Here are a couple of examples from the dataset, showing how the first human worker (the speaker) is given an emotion label and writes their own description of a situation in which they have felt that way. The speaker then tells their story in a conversation with a second human worker (the listener).

Two examples from EMPATHETICDIALOGUES training set.
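In the public release, each row is a single turn, so reconstructing a full speaker/listener exchange like the ones above means grouping rows by conversation ID. A rough sketch, again assuming the Hugging Face field names ("conv_id", "utterance_idx", and so on):

```python
# Sketch: rebuild one full conversation from per-turn rows.
# Assumes the Hugging Face release, where each row is one utterance keyed by "conv_id".
from collections import defaultdict
from datasets import load_dataset

rows = load_dataset("empathetic_dialogues", split="train")

conversations = defaultdict(list)
for row in rows:
    conversations[row["conv_id"]].append(row)

conv = sorted(next(iter(conversations.values())), key=lambda r: r["utterance_idx"])
print("emotion label:", conv[0]["context"])
print("situation:", conv[0]["prompt"])
for i, turn in enumerate(conv):
    # The speaker opens the conversation and the listener responds, alternating turns.
    role = "Speaker" if i % 2 == 0 else "Listener"
    print(f"{role}: {turn['utterance']}")
```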

The distribution of emotion/conversation labels, along with the words most associated with each, is shown in the table below. It makes you realize how often we say "really" when expressing something emotional, and how often we say "that's" as part of the response.
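A quick way to get a feel for that table is to tally the emotion labels and the most frequent words yourself. A rough sketch (no stop-word filtering, so generic words will dominate unless you strip them), again assuming the Hugging Face release:

```python
# Sketch: count emotion labels and frequent words across the training utterances.
# Assumes the Hugging Face release and its "context"/"utterance" field names.
from collections import Counter
from datasets import load_dataset

rows = load_dataset("empathetic_dialogues", split="train")

label_counts = Counter(row["context"] for row in rows)
word_counts = Counter(
    word for row in rows for word in row["utterance"].lower().split()
)

print(label_counts.most_common(10))  # which emotions dominate the dataset
print(word_counts.most_common(25))   # expect fillers like "really" and "that's" near the top
```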

Once the models (each given a code name) are trained on this dataset, they do quite well, according to Facebook. Still, there are definitely a few responses that would seem odd and would probably tip you off that you were talking to a bot:

Sometimes the right response is to be empathetic; at other times, it's to be unemotional and more matter-of-fact. There is a balance between responding empathetically while letting the conversation wander, and being less emotional but staying on topic. Humans are highly sensitive to the empathetic appropriateness of a response, and this research shows that bots have mixed results when trying to mimic humans.

This research and dataset are part of bots' ever-increasing ability to mimic humans in very human ways. The work shows progress in AI's ability to generate appropriate responses, but some of the examples above expose weaknesses. For instance, saying "I'm glad to hear that" in response to "Someone came to my door with a gun the other day" would only be appropriate in extraordinarily rare circumstances. And saying "Did your son start school already?" in response to "My son failed his exams" would make you wonder whether the bot is even listening.

All that said, the research shows progress, and it's fair to assume that continued progress will make these bots even better. A major ongoing concern with emotional AI is that it could help creators of deepfakes and other mimicking technologies produce even more convincing fakes, ones attuned to the emotional needs of humans. Another concern is that those who wish to manipulate people could do so more effectively with advanced emotional AI. We'd love to think that Facebook shares these concerns and would limit the use of technology that could be used to prey on the emotions of billions of people. Unfortunately, Facebook hasn't shown itself to be a trusted guardian.

Photo by Prateek Katyal on Unsplash

About Sonder Scheme: We are on a mission to help humans win in the age of AI by making AI design easier, more inclusive, more ethical and more human. Through our workshops, learning journeys and the Sonder Scheme Studio, we enable people to use design thinking-based tools to create innovative, effective and ethical human-machine systems. Both offline and online, Sonder Scheme empowers companies around the world to design human-centered AI.
