Generative artificial intelligence (AI) is the shiny object that’s making many of us wonder how fundamentally it will change our work. It even raises the question of whether “old school” chatbots have a future.
The short answer is yes: chatbots do have a purpose in this new world of large language models (LLMs) and generative AI. And if you want to deliver a great customer experience, you can still do it with a traditional chatbot – though you’ll need to reckon with new expectations from customers.
What do I mean? People tend to respond to chatbots as if they were human agents. How many times have you talked to your device’s voice assistant as if it were a friend or cursed it when it made an error? So if your company wants to build a chatbot, you might ask: Should the chatbot talk like a human? The answer isn’t what you might expect.
Make generative AI chatbot design less robotic
As a conversation designer in Salesforce’s user experience group, I’d argue that it’s not about making AI more human. In fact, when bots look and behave too much like a human, it can result in an unintended eerie or creepy quality – the so-called “uncanny valley.” Instead, we need to make bots less robotic. Let me explain the nuance and share how we approach designing bot conversations.
Chatbots should be helpful
Bots are designed to address a specific scope of use cases. For customer service chatbots, the goal is to help the user get answers to questions or resolve an issue. But, if the bot talks like a human, it might cause the customer to expect responses the bot isn’t designed to offer. Giving bots a “personality” and the ability to have more human-like conversations might come at the expense of good customer service.
Expanding the bot conversation beyond the intended scope places an extra burden on the design – it’s difficult to account for every possible request or question a customer might have.
So, instead of focusing on creating a human personality for a bot, consider reframing your generative AI chatbot design and centering on its conversational look and feel to guide users toward what the bot can do.
Language design for chatbots
Our team works mainly with chat, so we focus primarily on language design, from syntax to diction. Here are some elements of conversation we must consider (a configuration sketch follows the list):
- Level of diction: The vocabulary level and formality of the bot’s language. Jargon may leave some users out, but for a specific, skilled audience it can speed up time to resolution.
- Length of turns: The number of messages your bot sends before a user responds. Keeping this low keeps users engaged, but that can be tough with complex issues.
- Emoji use: Whether your bot uses emojis, and which emojis are acceptable. Emojis can be an accessibility issue for screen readers, and some, such as hand signs, can be interpreted in a variety of ways.
- Punctuation: Which symbols your bot uses and when. Exclamation points might be used for emphasis or celebration.
- Bot name: The name of your bot can set the stage for how it’s perceived as a brand. We generally advise designers not to use gendered or human names. The same goes for the bot avatar, or the bot’s profile image.
- Apologies and celebrations: When a user is successful, how does your bot handle it? What about unhappy paths, where a user’s need wasn’t met? You might end with a simple OK, or take the time to tailor the conversation and empathize with the user.
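To make these choices concrete, here’s a minimal sketch, in Python with hypothetical names rather than any real framework, of how a team might record its language-design decisions in one style configuration that every bot message is checked against.

```python
from dataclasses import dataclass

# Hypothetical style configuration (illustrative names, not a real
# framework): one place to record the language-design decisions above.
@dataclass(frozen=True)
class BotStyle:
    bot_name: str = "Assistant"              # neutral, non-gendered name
    diction: str = "plain"                   # "plain" vs. "technical" jargon
    max_messages_per_turn: int = 2           # keep turns short
    allowed_emojis: frozenset = frozenset()  # empty set = no emojis
    use_exclamations: bool = False           # reserve "!" for celebrations
    apology: str = "Sorry about that. Let's try another way."
    celebration: str = "All set. Anything else I can help with?"

def enforce_style(messages: list[str], style: BotStyle) -> list[str]:
    """Trim a bot turn to the configured length and punctuation policy."""
    kept = messages[: style.max_messages_per_turn]
    if not style.use_exclamations:
        kept = [m.replace("!", ".") for m in kept]
    return kept

# A three-message draft becomes two calmer messages under the default style.
draft = ["Got it!", "I've updated your address!", "Anything else?"]
print(enforce_style(draft, BotStyle()))
```

Centralizing the policy this way means a single change – say, allowing one celebratory exclamation point – adjusts the bot’s voice everywhere at once.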
How a chatbot sounds
Voice adds another layer to how users perceive personality. Some elements include (a markup sketch follows the list):
- Pitch and tone: The general sound of your bot’s voice. Voice assistants often get higher-pitched voices.
- Speech rate: How rapidly your bot speaks. For instructions, you might include extra pauses. Calming meditation apps could speak more slowly.
- Discourse markers: Words or phrases that signal shifts in conversation, also used in chat to acknowledge users. For example, “Got it!” and “OK.” convey different levels of excitement, while “So…” signals that another task is coming and keeps the focus on the user’s goal.
- Dialect: Like pitch, dialect is subject to different cultural perceptions. Within a language, certain dialects may be heard as the standard variant and therefore considered more professional.
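For voice bots, these decisions are typically encoded in Speech Synthesis Markup Language (SSML). Here’s a minimal sketch that builds an SSML snippet in Python; the specific prosody values are illustrative assumptions, not recommendations, and how faithfully each attribute is honored depends on your text-to-speech engine.

```python
# Minimal SSML sketch: pitch, speech rate, and pauses between steps.
# The prosody values are illustrative assumptions; engine support varies.
ssml = """\
<speak>
  <prosody pitch="+1st" rate="95%">
    Got it. <break time="400ms"/>
    First, open the Settings app. <break time="600ms"/>
    So, next we'll connect your account.
  </prosody>
</speak>"""
print(ssml)
```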
Another aspect to consider is that users often gender language on their own. We don’t recommend designing bots with a specific gender identity, because it reinforces stereotypes around communication. It also doesn’t meaningfully influence syntax and flow. You might consider giving your bot a more neutral pitch and tone – though it depends on what messaging you want to express through your product and brand.
Beyond these factors, think about the overall conversational flow. This might include: the timing of response delays between messages; how to vary dialog so it appears more intelligent and engaging; and disambiguation for error handling. While this isn’t an exhaustive list, it gives you an idea of what designing the conversation looks like, versus imagining what a human version of the bot might be like.
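Here’s a sketch of those flow mechanics, assuming a simple text channel and hypothetical helper names: rotate phrasing across turns, pace messages with a short typing delay, and ask a clarifying question instead of guessing when a request is ambiguous.

```python
import random
import time

ACK_VARIANTS = ["Got it.", "OK.", "Sure thing."]  # rotate to avoid repetition

def send(message: str, delay_s: float = 0.8) -> None:
    time.sleep(delay_s)                # simulated typing delay between messages
    print(f"Bot: {message}")

def acknowledge() -> None:
    send(random.choice(ACK_VARIANTS))  # vary wording so replies feel less canned

def disambiguate(intents: list[str]) -> None:
    # When an utterance matches more than one intent, ask rather than guess.
    send(f"Just to check: did you mean {' or '.join(intents)}?")

acknowledge()
disambiguate(["upgrade your plan", "cancel your plan"])
```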
To be or not to be human
Many people are enamored with the idea that bots can be human. But getting there would require many more advances in machine learning and natural language processing (NLP). The same goes for generative AI bots built with advanced large language models. To use these models, we need to define guardrails for everything from conversational techniques to human emotions to service use cases.
Even in the new world of LLMs, there’s still a human who needs to design prompts. We need to safeguard against unintended behaviors caused by the wide array of inputs that might come from customers – and false responses generated by the LLM.
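As one illustration of what such a safeguard can look like, here’s a minimal sketch with hypothetical names, not any particular vendor’s API: classify whether a request falls inside the bot’s supported scope before handing it to an LLM, and return a bounded fallback otherwise.

```python
SUPPORTED_TOPICS = {"billing", "shipping", "returns"}  # assumed bot scope

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model call.
    return f"(model response to: {prompt!r})"

def classify_topic(utterance: str) -> str | None:
    # Placeholder keyword match; a real system would use an intent model.
    for topic in SUPPORTED_TOPICS:
        if topic in utterance.lower():
            return topic
    return None

def answer(utterance: str) -> str:
    topic = classify_topic(utterance)
    if topic is None:
        # Out of scope: don't let the model improvise an answer.
        return "I can help with billing, shipping, or returns. Which do you need?"
    # In scope: constrain the prompt so the model stays on one topic.
    prompt = (f"You are a {topic} support assistant. "
              f"Answer only {topic} questions.\nCustomer: {utterance}")
    return call_llm(prompt)

print(answer("Where's my package?"))         # no match -> safe fallback
print(answer("I have a shipping question"))  # in scope -> constrained prompt
```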
Just as teams have guidelines and templates for how live service agents respond to each situation, they’ll need the same for virtual assistants. We have much to learn about how best to address concerns about ethics and bias in training AI. Just because we can make bots talk more like humans doesn’t mean we should.