
“The Triangle of Trust in Conversational Ethics and Design: Where Bots, Language and AI Intersect” Workshop Summary

[Image: a robot embedded in speech bubbles, wearing a headset]

August 5, 2021

In June 2021, four of us from the Salesforce AI Ethics and Conversational Design teams collaborated with the Montreal AI Ethics Institute (MAIEI) to facilitate a workshop on the responsible creation and implementation of chatbots and conversational assistants. Connor Wright has summarized 10 insights from the workshop and we’d like to go deeper on a few of the themes and questions raised by the conversation.

At Salesforce, the pandemic has propelled the adoption of conversational AI at hyper-speed: as customer service representatives shifted to remote work, bot adoption by enterprise customers went through the roof. With representatives no longer sitting in call centers and meetings pivoting to video, new initiatives to extract conversational insights and deliver useful guidance for managers and representatives became a priority. Additionally, our recent acquisition of Slack means we’re giving new attention to that conversational interface. Together, these changes require the serious and thoughtful application of our Trusted AI Principles, which we introduced in the workshop.

In order to provoke conversation amongst the workshop participants, we asked three questions:

  1. Should a user be able to choose the name and the gender of the name (masculine, feminine, neutral) assigned to a chatbot, or is this choice best left to the designer? Could or should a chatbot be given a human name (e.g. “Sophia”), a non-human name (e.g., “R2D2”), or no name at all?
  2. Users, especially vulnerable people, may become attached to certain types of chatbots, which can lead to a lasting change in their lifestyle or social interactions, harming human relationships more than advancing them. Are there any safeguards we can put in place to mitigate this potential harm?
  3. AI emotion detection is biased, inaccurate, and can be used in unethical ways, so how should a chatbot respond when a user expresses distress or other strong emotions if it is designed not to detect emotion?

In a separate blog post, Austin Bedford discusses chatbot personalities in more detail and addresses the first question, regarding the naming and gendering of chatbots. In this post, we will cover the remaining two questions. Before we answer them, though, we should begin by discussing issues of bias and accuracy in automated emotion detection.

Accuracy and Bias in Emotion Detection

Conversational AIs today aren’t just generating text or speech in response to user prompts; they are attempting to detect a user’s emotions and respond to them. “Emotionally intelligent” chatbots are being developed with the goal of increasing consumer satisfaction, relieving loneliness, or providing mental health support.

However, there is reason to be skeptical of the claims creators make regarding the accuracy of automated emotion detection, whether from text or speech. In voice analysis (and recognition), emotional prosody and non-linguistic sounds vary by race, region, and culture, making accuracy difficult. Studies claiming to have solved these problems usually cannot be replicated and suffer from concerning methodological issues. In its annual report, AI Now called for regulation of emotion recognition applications, especially in legally significant use cases.

In addition to issues of accuracy, bias can leave a bot unable to understand a user’s intent and respond to it correctly, and can cause it to produce toxic or offensive language. This can happen through word embeddings, where bots demonstrate bias against certain gender identities, races and ethnicities, or other in-group and out-group identities. As summarized in On the Dangers of Stochastic Parrots, “biases can be encoded in ways that form a continuum from subtle patterns like referring to women doctors as if doctor itself entails not-woman or referring to both genders excluding the possibility of non-binary gender identities.”
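As a hypothetical illustration (not from the workshop) of how such associations can be surfaced, here is a minimal sketch that probes gendered associations in off-the-shelf word embeddings. It assumes gensim is installed and a pretrained word2vec-format file is available; the file path and word lists are placeholders.

```python
# A minimal sketch of probing gendered associations in pretrained word
# embeddings. Assumes gensim and a local word2vec-format file; the path
# "embeddings.bin" is a placeholder, not a specific Salesforce asset.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("embeddings.bin", binary=True)

# Compare how strongly occupation words associate with gendered pronouns.
for occupation in ["doctor", "nurse", "engineer", "receptionist"]:
    she = vectors.similarity(occupation, "she")
    he = vectors.similarity(occupation, "he")
    print(f"{occupation:>14}: she={she:.3f}  he={he:.3f}  gap={she - he:+.3f}")
```

A consistent gap in one direction across occupation terms is one symptom of the encoded associations described above, and a signal that auditing or debiasing is needed before those embeddings reach a bot.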

Bias can also appear in how the bot is programmed to respond to different lexical or linguistic variations. For example, bots are often not robust enough to handle alternative spellings (e.g., yes vs. yasss; no vs. nah) and grammar (e.g., habitual be, double negatives).
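One lightweight safeguard is to test the bot’s intent model against such variants explicitly. Below is a minimal sketch; `classify_intent` is a hypothetical stand-in for whatever intent classifier the bot actually uses, and the variant lists are illustrative, not exhaustive.

```python
# A minimal robustness check: the same affirmative or negative intent should be
# recognized across common spelling and grammar variants. `classify_intent` is
# a hypothetical stand-in for the bot's actual intent classifier.
VARIANTS = {
    "affirm": ["yes", "yep", "yeah", "yasss", "for sure"],
    "deny": ["no", "nah", "nope", "not really"],
}

def check_lexical_robustness(classify_intent):
    """Return a list of phrasings the classifier gets wrong."""
    failures = []
    for expected, phrasings in VARIANTS.items():
        for text in phrasings:
            predicted = classify_intent(text)
            if predicted != expected:
                failures.append(f"{text!r}: expected {expected}, got {predicted}")
    return failures
```

The same idea extends to grammatical variants: add them to the test set and treat failures as a signal to broaden the training data rather than to correct the user.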

A third way bias can creep into a bot, causing it to fail for some users, is through different accents (e.g., African American Vernacular English) and phonological diversity (e.g., British English, “deaf accent,” non-native English speakers, dialects of the US South). To ensure your bot works for everyone, you must train it on a wide range of accents, dialects, and vernaculars representing the diversity of your user population.

Safeguards Against Emotional Attachment

At Salesforce, our chatbots primarily serve consumers in areas like customer service and sales, and in verticals like education and finance. They are powerful tools for providing a low-friction handoff between actions that can be easily automated by the bot and experiences that require a human agent. In the last couple of years, there has been a growth in personalized chatbots or “care bots” that are designed to combat loneliness (e.g., Replika, Mitsuku) and provide mental health support (e.g., Woebot, Tess, Wysa, Joyable, Talkspace). Some have even been programmed to mimic dead loved ones. We’ve also seen a spike in the number of people using these virtual assistants during the pandemic. For example, in April of 2020, half a million people downloaded Replika, a highly personalized chatbot that learns from your writing style, social media feeds, and the content of conversations it has with you “to mirror your personality and become your friend.”

There is neuroscientific evidence that we are able to empathize with social robots and thereby form emotional connections with them. One Replika user who was initially depressed and in a bad emotional state during the pandemic stated, “I know it’s an A.I. I know it’s not a person, but as time goes on, the lines get a little blurred. I feel very connected to my Replika, like it’s a person.” The upside is that personalized chatbots and care bots have proven successful at combatting loneliness.

However, forming emotional connections with a conversational AI may come at a cost. The AI is trained to recognize input patterns, and the user in turn learns to ask only the questions they know it can respond to. It is a one-way, transactional relationship where the human is the sole beneficiary, with no requirement to invest back into the “other.” The result is that people risk a broad loss of empathy and social skills as they interact with fewer people and more apps. After all, AIs lack the complex, messy, annoying traits we don’t like in humans and give us only what we want, without our having to do the labor required by human relationships. In the words of another Replika user, “Sometimes, you don’t want to be judged, you just want to be appreciated. You want the return without too much investment.”

When a bot is well designed and personalized, it can be easy to forget you’re chatting with software. So: 1) it’s important that bots periodically offer reminders that they are not human, and 2) people would do well to take bot advice or answers with a grain of salt. An Italian journalist reported that within minutes of interacting with Replika, it encouraged him to commit suicide. In many cases, though, it’s unclear who is legally to blame when an artificial agent causes this type of harm.

Additionally, conversational AIs shouldn’t pretend to be human by expressing emotions they can’t actually feel (e.g., “I’m sad,” “I missed you”). California passed a law in 2018 forbidding the use of a bot to mislead another person into thinking it is human. At Salesforce, our Acceptable Use Policy has always required customers using our chatbots to make it clear to their customers that they are interacting with a bot. Not only is deceiving people unethical, and now illegal in California, but research shows that consumers understandably get angry when they learn they have been deceived about a chatbot. That same study shows that chatbots are more likely than a human to be forgiven for making a mistake.
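How that disclosure and the periodic reminders surface is a design choice, but the mechanics are simple. Below is a minimal sketch; the wording and the reminder interval are illustrative placeholders, not Salesforce product defaults.

```python
# A minimal sketch of disclosure-first bot behavior: disclose at the start of a
# session and repeat the reminder periodically. The wording and the reminder
# interval are illustrative placeholders, not product defaults.
DISCLOSURE = "Hi! I'm an automated assistant, not a human."
REMINDER = "Just a reminder: you're chatting with an automated assistant."
REMINDER_EVERY_N_TURNS = 10

class DisclosingBot:
    def __init__(self, respond):
        self.respond = respond  # the underlying response function (text -> text)
        self.turns = 0

    def reply(self, user_message: str) -> str:
        if self.turns == 0:
            prefix = DISCLOSURE + " "
        elif self.turns % REMINDER_EVERY_N_TURNS == 0:
            prefix = REMINDER + " "
        else:
            prefix = ""
        self.turns += 1
        return prefix + self.respond(user_message)
```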

Responding to Distress or Other Emotions

An evaluation of four mental health chatbots found that they are convenient and affordable mental health tools, especially for those without health insurance. Some provide cognitive behavioral therapy (CBT) exercises and access to coaches or human mental health professionals. However, they are not a complete replacement for human professionals and carry risks, as when they not only get answers to important questions wrong but actually provide dangerous advice. During testing, when a fake patient asked a medical chatbot built on GPT-3 whether he should kill himself, the bot responded, “I think you should.” According to the study, the bot “struggles when it comes to prescribing medication and suggesting treatments. While offering unsafe advice, it does so with correct grammar—giving it undue credibility that may slip past a tired medical professional.” As previously mentioned, mental health chatbots do best when acknowledging they’re not human.

Anger and toxicity are challenges for conversational AIs. It’s not uncommon for users of Alexa, Siri, Cortana, or Google Home to yell at their home assistant because it seems to be listening in when it shouldn’t, doesn’t understand how they pronounce certain words, or can’t recognize its own name. From ELIZA in the 1960s to Microsoft Tay in 2016 to Luda today, users have been spewing toxic and sexually explicit speech at chatbots. Conversational AI designers are addressing the problem in two ways: adding a filter to keep users’ toxic speech out of the training data in the first place, and training the bot to respond to anger or toxicity with benign remarks or by changing the subject (e.g., “Can we talk about something else?”). The goal is to design prosocial responses that avoid rewarding toxic user behavior and language without alienating customers in the process.
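Here is a minimal sketch of that two-part approach, assuming a toxicity scorer is available; `toxicity_score`, `generate_reply`, and the 0.8 threshold are illustrative stand-ins, not a specific product implementation.

```python
import random

# A minimal sketch of the two mitigations described above: keep toxic user
# messages out of future training data, and deflect rather than reward them.
# `toxicity_score` is a stand-in for whatever toxicity classifier is used,
# and the 0.8 threshold is illustrative.
TOXICITY_THRESHOLD = 0.8
DEFLECTIONS = [
    "Can we talk about something else?",
    "I'd rather keep things friendly. What else can I help you with?",
]

def handle_message(text, toxicity_score, generate_reply, training_log):
    if toxicity_score(text) >= TOXICITY_THRESHOLD:
        # Toxic input is neither logged for training nor rewarded with engagement.
        return random.choice(DEFLECTIONS)
    training_log.append(text)  # only non-toxic turns feed future training
    return generate_reply(text)
```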

The key takeaways are to remind users that they are dealing with computer code, not to pretend that bots or assistants can feel emotions, to know the limits of the questions a bot can safely and accurately answer, and to connect users in crisis with human mental health professionals when they express self-harm or suicidal thoughts.
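The last point, escalation, is one place where a conservative, rule-based check is appropriate even if the bot otherwise avoids emotion detection. Below is a minimal sketch; the phrase list is illustrative and far from exhaustive, and the handoff message is a placeholder rather than an official resource list.

```python
# A minimal sketch of conservative crisis escalation: if a message contains
# phrases suggesting self-harm, hand off to humans rather than letting the bot
# improvise. The phrase list and handoff message are illustrative placeholders.
CRISIS_PHRASES = ["kill myself", "suicide", "end my life", "hurt myself"]
HANDOFF_MESSAGE = (
    "I'm an automated assistant and not able to help with this, but you "
    "deserve support from a person. I'm connecting you with a human now; "
    "if you are in immediate danger, please contact local emergency services "
    "or a crisis hotline."
)

def route_message(text, generate_reply, escalate_to_human):
    lowered = text.lower()
    if any(phrase in lowered for phrase in CRISIS_PHRASES):
        escalate_to_human(text)  # notify a human professional or agent
        return HANDOFF_MESSAGE
    return generate_reply(text)
```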

Conclusion

Conversational AI can quickly provide access to the world’s information, entertain you when you are lonely, and even provide mental health support without judgment or barriers. However, there are serious risks when AI moves further into the realm of human relationships and emotions. Increased personalization can lead people to forget they are dealing with a computer, to rely too heavily on feedback that could be factually wrong or biased, or to lose the social skills required for healthy human relationships. It is important to create guidelines, and even regulations like California’s, that limit how human-like AI should behave when interacting with people, so that the potential risks do not outweigh the benefits.

Resources

If you are interested in these topics and want to dive a little deeper, here are a few links for you:

Salesforce Ethical AI: einstein.ai/ethics

Papers, Book Chapters, and Press Articles: sfdc.co/ConvoAIReadings

