Large companies are investing in enterprise LLMs left and right. Why? These LLMs lay the foundation for AI tools that can chat with shoppers, detect fraud, diagnose medical issues, and much more.
Want to translate a product explainer into 10 languages in minutes, while staying true to your brand voice and tone? An enterprise large language model (LLM) can do that. Need to gauge the sentiment of your customer’s service interactions in real time? Tap an LLM. Want to analyze and summarize 500 pages of financial data in minutes? An LLM’s got you.
Clearly, LLMs hold enormous promise. In fact, the venture capital firm Andreessen Horowitz wrote that “pre-trained AI models represent the most important architectural change in software since the internet.”
What is a large language model?
Large language models (LLMs) are a type of AI that can generate human-like responses by processing natural-language inputs, or prompts. LLMs are trained on massive data sets, which gives them a deep understanding of a broad range of information. This allows LLMs to reason, make logical inferences, and draw conclusions.
Enterprise LLMs may seem like a magic wand, enabling your teams to process enormous amounts of proprietary and public data in seconds to inform intelligent business outputs. That’s the good news.
The not-so-good news is that you can’t just grab an off-the-rack LLM and expect it to give you information tailored perfectly to your needs and in your brand voice. The results are simply too basic to be useful.
This is just one of several universal challenges teams encounter as they implement an enterprise LLM. What are these challenges and how can they be minimized?
What are the challenges of an enterprise LLM?
Accuracy and reliability
One of the biggest concerns around generative AI and enterprise LLMs is ensuring that the data, and thus the AI’s outputs, are accurate and reliable. One way to do that is through prompt grounding. Grounding means providing specificity and context in the prompt, which results in much better outputs because the LLM bases its responses on your real-world context.
Consider this example. A salesperson tells her AI assistant to schedule a meeting with Candace Customerman of Acme Corp. The system doesn’t know who Candace is, the topic of the meeting, what time zone she’s in, or what products she may have purchased or service issues she’s had in the past.
“In this case it’s just trying to guess, which can produce hallucinations and low-quality outputs that the salesperson would need to tailor by hand,” said David Egts, field CTO of public sector at MuleSoft. “But if you can ground it in real-world customer data, that’s where it’s helpful.”
The grounding Egts described is done with application programming interfaces (APIs), which connect different software applications so they can communicate and share information with each other. These API connectors can help AI systems ground prompts in real-world, up-to-date information, even information that exists outside your CRM — like invoicing, inventory, and billing. Many companies, including Salesforce and MuleSoft, provide APIs.
“Without grounding, a sales email has no context and may encourage customers to buy products they already own,” said Egts. “A tone-deaf email would discourage your customers not only from acting on your email but from opening the next one.”
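To make that concrete, here is a minimal Python sketch of API-based grounding. The CRM endpoint, authentication, field names, and helper functions are hypothetical stand-ins, not a specific Salesforce or MuleSoft API.

```python
import requests

# Hypothetical CRM endpoint; the URL, parameters, and fields are illustrative only.
CRM_API = "https://api.example.com/crm/contacts"

def fetch_customer_context(name: str, account: str, api_token: str) -> dict:
    """Look up a contact so the prompt can be grounded in real customer data."""
    response = requests.get(
        CRM_API,
        params={"name": name, "account": account},
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

def build_grounded_prompt(task: str, customer: dict) -> str:
    """Attach the retrieved context to the user's request before it reaches the LLM."""
    return (
        f"{task}\n\n"
        "Use only the customer context below:\n"
        f"- Name: {customer['name']} ({customer['account']})\n"
        f"- Time zone: {customer['time_zone']}\n"
        f"- Recent purchases: {', '.join(customer['recent_purchases'])}\n"
        f"- Open service cases: {customer['open_cases']}\n"
    )
```

The grounded prompt, rather than the bare request, is what gets sent to the model, so the response reflects who the customer is, where she is, and what she has bought, instead of a guess.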
Integration
An enterprise LLM may be trained on thousands of data sets. But a one-size-fits-all model doesn’t automatically reflect your brand voice and definitely does not include your company’s proprietary data. That’s why any AI project must start with a solid data foundation.
“Off-the-shelf LLMs weren’t trained on your company’s data, so you need to either ground your prompts and/or tailor your model,” said Egts. “Otherwise, you’ll get a very vanilla, unhelpful response.”
Consider this example: A customer sends a note to a merchant that part of their order is missing. When data is disconnected, a system might produce a generic response like, “John, we apologize you did not receive all your items. We’ll make sure to resolve this for you. We can either issue a refund for the missing item or arrange for a replacement to be delivered to you as soon as possible.”
When there is context, and all systems are connected, automated responses are much more personalized and satisfying for the customer.
Hi John,
I apologize for the missing order. I can offer you two options: I can either process a refund of $12.37 to your Visa ending in 0123, or I can arrange for a replacement of the missing red socks. As a Loyalty Program Member, the replacement will be a priority delivery within 2-4 days.
What’s the secret to making this happen? Integrations that connect your data and applications from any system, whether cloud or on-premises. In this scenario, CRM, payment, and logistics systems work in harmony to create a better customer experience.
“You’ve got to figure out the data sources you need to unlock, how they will feed your LLM, and how you can create a 360-degree view of your customer,” said Egts. Without addressing systems integration challenges, your AI relies on generic data that doesn’t benefit your business or your customers.
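As a rough sketch of that integration step, the function below gathers the context for the order reply above from three systems. The connector objects, method names, and fields are hypothetical; in practice these would be calls to your CRM, payment, and logistics APIs, for example exposed through MuleSoft-managed endpoints.

```python
from dataclasses import dataclass

@dataclass
class OrderContext:
    """Everything the LLM needs to draft a personalized reply about a missing item."""
    customer_name: str
    loyalty_tier: str
    missing_item: str
    refund_amount: float
    card_last4: str
    delivery_estimate: str

def assemble_order_context(order_id: str, crm, payments, logistics) -> OrderContext:
    """Pull one record from each system so the reply can cite real details."""
    profile = crm.get_customer_for_order(order_id)     # CRM: name, loyalty tier
    charge = payments.get_charge(order_id)              # payments: item price, card on file
    shipment = logistics.get_missing_items(order_id)    # logistics: missing item, replacement ETA
    return OrderContext(
        customer_name=profile["name"],
        loyalty_tier=profile["loyalty_tier"],
        missing_item=shipment["item"],
        refund_amount=charge["item_price"],
        card_last4=charge["card_last4"],
        delivery_estimate=shipment["replacement_eta"],
    )
```

The assembled context is then folded into the prompt, which is how the response above knows the refund amount, the card on file, and the loyalty-tier delivery window.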
Security and data privacy
Data security has long been at the top of companies’ priorities. Salesforce research shows generative AI brings the added risk of proprietary company data leaking into public large language models. When you provide your company’s information to an LLM, you may be inadvertently giving it sensitive customer and company data that may be used to train its next model.
The solution? When you’re shopping for an enterprise LLM, make sure it includes secure data retrieval, data masking, and zero retention. We’ll explain what these terms mean below.
With secure data retrieval, governance policies and permissions are enforced in every interaction to ensure only those with clearance have access to the data. This lets you bring in the data you need to build contextual prompts without worrying that the LLM will save the information.
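As a simple illustration, a retrieval layer might enforce those permissions field by field before anything reaches a prompt. The policy object and its methods below are hypothetical.

```python
def retrieve_for_prompt(user, record_id, data_store, policy) -> dict:
    """Return only the fields this user is cleared to see, for use in a grounded prompt."""
    record = data_store.get(record_id)
    allowed_fields = policy.fields_visible_to(user, record)  # governance rules live here
    if not allowed_fields:
        raise PermissionError(f"User {user} has no access to record {record_id}")
    return {field: record[field] for field in allowed_fields}
```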
Next is data masking, which automatically anonymizes sensitive data to protect private information and comply with security requirements. This is particularly useful in ensuring you’ve eliminated all personally identifiable information, like names, phone numbers, and addresses, when writing AI prompts.
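Here is a minimal sketch of what pattern-based masking can look like. Production systems typically rely on dedicated PII-detection services; the regular expressions below are illustrative only.

```python
import re

# Placeholder tokens stand in for sensitive values before the prompt leaves your systems.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_pii(prompt: str) -> str:
    """Replace detected personally identifiable information with labeled placeholders."""
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label}]", masked)
    return masked

print(mask_pii("Follow up with jane@acme.com at 415-555-0123 about her renewal."))
# -> Follow up with [EMAIL] at [PHONE] about her renewal.
```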
You also need to ensure that no customer data is stored outside your systems. That’s where zero retention comes in: generative AI prompts and outputs are never stored in the enterprise LLM and are never used to train it. They simply disappear.
Start your enterprise LLM journey today
An enterprise LLM grounded in your organization’s proprietary data may eventually be the most powerful tool you have to serve customers better, uncover buried intelligence, operate with unprecedented efficiency, and much more.
Thankfully, these tools and techniques can help you meet the common challenges that may come up at the beginning of your enterprise LLM journey.