If generative AI is so intelligent, why are off-the-shelf LLMs so bad at writing a personalized email or answering a customer service question? The simple answer: There’s a lot that AI models simply don’t know. Although LLMs are trained on billions of data points, the information they don’t have tends to be precisely what you would need to generate a meaningful email or accurate service reply to one of your customers. Worse, a lack of contextual data can cause LLMs to hallucinate, returning entirely inaccurate information in their responses.
To avoid this, organizations use a process called grounding that infuses LLM prompts with your internal data — including structured data (like Excel spreadsheets and CRM data) and unstructured data (like PDFs, chat logs, email messages, and blog posts) — “grounding” the prompt in relevant context. It’s what turns a generic generative output into something you might have written yourself.
How grounding works
The simplest way to ground an LLM prompt is by copying and pasting relevant data directly into the prompt’s context window. Unfortunately, this isn’t an option for most enterprises because it risks exposing sensitive company information.
To maintain privacy and protect your data, you want to work within a secure environment. This includes organizing your information into a vector database and using a technique such as retrieval augmented generation (RAG) to easily retrieve it. In the background, data masking tools obscure sensitive information in the prompt before it is sent to the LLM, ensuring privacy and security. If you’re using Salesforce, you can also pull in data through Flow or Apex.
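The retrieval step can be sketched in a few lines. This is a minimal, illustrative RAG loop, not a production pipeline: it uses a toy bag-of-words "embedding" and cosine similarity in place of a real embedding model and vector database, and the document strings are invented examples.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words count vector (real systems use learned embeddings)."""
    return Counter(re.findall(r"[a-z0-9#']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[word] * b[word] for word in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=2):
    """Return the k documents most similar to the query -- the 'R' in RAG."""
    query_vec = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(query_vec, embed(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(question, documents):
    """Prepend the retrieved context to the question before sending it to the LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(question, documents))
    return f"Use only the context below to answer.\nContext:\n{context}\nQuestion: {question}"

# Invented internal documents standing in for a real vector store.
docs = [
    "Order #1182: Acme Dept. Store bought 500 pairs of TrailRunner sneakers in March.",
    "Support policy: returns are accepted within 30 days of delivery.",
    "Conference FAQ: the annual customer summit takes place in October.",
]

print(grounded_prompt("How many TrailRunner sneakers did Acme buy?", docs))
```

The key idea is that only the few most relevant snippets reach the prompt, so the LLM answers from your data rather than guessing; a real deployment would swap in an embedding model, a vector database, and a masking step before the prompt leaves your environment.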
Grounding helps generate more accurate and personalized AI outputs
Imagine you work in sales for a sneaker brand and want a department store to carry a new product. You already have a relationship with the buyer at the company, who has carried your products in the past. You use generative AI to craft a sales pitch to reintroduce yourself and your new product. Without grounding, the email might sound basic, dry, and impersonal.
But if your CRM data is up to date, a grounded prompt lets the LLM reference your last point of contact, mention the sneakers the buyer last purchased (including the quantity), and note the bulk discount you offered on that larger order. You might even have a note on the buyer’s account that they loved the shoes so much they bought a pair for themselves. By grounding the LLM prompt with this context, the AI can add specific information and a personal touch, for instance enthusiastically telling the buyer you think they’ll love these sneakers, too.
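In practice, this kind of grounding often amounts to rendering CRM fields into the prompt as context. The sketch below shows the idea with a hypothetical account record; the field names and values are illustrative and don't reflect any real CRM schema.

```python
# Hypothetical CRM record; field names are illustrative, not a real schema.
account = {
    "buyer_name": "Jordan",
    "last_contact": "2024-03-12",
    "last_product": "TrailRunner sneakers",
    "last_quantity": 500,
    "bulk_discount_given": True,
    "notes": "Loved the shoes; bought a pair for themselves.",
}

def grounding_block(record):
    """Render CRM fields as context lines to prepend to the generation prompt."""
    lines = [f"{key}: {value}" for key, value in record.items()]
    return "Known customer context:\n" + "\n".join(lines)

prompt = (
    grounding_block(account)
    + "\n\nTask: write a warm email reintroducing yourself and pitching our new sneaker line."
)
print(prompt)
```

The LLM now has the last contact date, the previous order, and the personal note available verbatim, so the generated email can cite them instead of inventing details.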
Grounding helps you communicate directly with customers, too. Let’s say you’re the head of marketing for a bank and want to send an email invitation to a customer to attend your upcoming conference. Without any context, the invite would read like any other customer service note that might immediately get deleted. But your CRM has historical data on this person’s relationship with the bank, their past interactions, and their interests. By grounding the prompt with that context, your email can include specific details about the conference, explain why it’s relevant to this person, and offer them a discount on a conference pass.
By grounding your LLM prompts with your own data and metadata, you get up-to-date, accurate results when working with generative AI.