Generative AI: 5 Guidelines for Responsible Development

Paula Goldman & Kathy Baxter

We have established a set of five guidelines that build upon our Trusted AI Principles to provide more detailed guidance for the responsible development and implementation of GenAI.

Originally published on February 7, 2023, on the Salesforce Newsroom

Generative artificial intelligence (AI) has the power to profoundly transform the way we live and work, and it will challenge even the most innovative companies for years to come.

But generative AI is not without risks. It gets a lot of things right, but it also gets many things wrong. As businesses race to bring this technology to market, it’s critical that we do so inclusively and intentionally. It’s not enough to deliver the technological capabilities of generative AI; we must also prioritize responsible innovation to help guide how this transformative technology can and should be used, and ensure that our employees, partners, and customers have the tools they need to develop and use these technologies safely, accurately, and ethically.

Generative AI at Salesforce

The potential for generative AI at Salesforce — and enterprise technology more broadly — is vast.

AI is already an integral part of the Customer 360 platform, and our Einstein AI technologies deliver nearly 200 billion predictions every day across Salesforce’s business applications, including:

  • Sales, which uses AI insights to identify the best next steps and close deals faster.
  • Service, which uses AI to hold human-like conversations, answering repetitive questions and handling routine tasks so agents are free to take on more complicated requests.
  • Marketing, which leverages AI to understand customer behavior and personalize the timing, targeting, and content of marketing activities.
  • Commerce, which uses AI to power highly personalized shopping experiences and smarter ecommerce.

Now, generative AI has the potential to help our customers connect with their audiences in new, more personalized ways across many sales, customer service, marketing, commerce, and IT interactions. We’re even exploring AI-generated code to help our customers, including those without certified Salesforce developers on staff, write high-quality code faster, using fewer lines of code and therefore requiring less CPU.

Guidelines for Trusted Generative AI

As with all of our innovations, we are embedding ethical guardrails and guidance across our products to help customers innovate responsibly and catch potential problems before they happen.

Given the tremendous opportunities and challenges emerging in this space, we’re building on our Trusted AI Principles with a new set of guidelines focused on the responsible development and implementation of generative AI.

We are still in the early days of this transformative technology, and these guidelines are very much a work in progress — but we’re committed to learning and iterating in partnership with others to find solutions.

Below are the five guidelines we’re using to guide the development of trusted generative AI, here at Salesforce and beyond.

  1. Accuracy: We need to deliver verifiable results that balance accuracy, precision, and recall in the models by enabling customers to train models on their own data. We should communicate when there is uncertainty about the veracity of the AI’s response and enable users to validate these responses. This can be done by citing sources, explaining why the AI gave the responses it did (e.g., chain-of-thought prompts), highlighting areas to double-check (e.g., statistics, recommendations, dates), and creating guardrails that prevent some tasks from being fully automated (e.g., launching code into a production environment without a human review); the first sketch after this list illustrates such a guardrail.
  2. Safety: As with all of our AI models, we should make every effort to mitigate bias, toxicity, and harmful output by conducting bias, explainability, and robustness assessments, as well as red teaming. We must also protect the privacy of any personally identifiable information (PII) present in the data used for training, and create guardrails to prevent additional harm (e.g., forcing code to be published to a sandbox rather than automatically pushed to production); the second sketch after this list shows one way to scrub PII.
  3. Honesty: When collecting data to train and evaluate our models, we need to respect data provenance and ensure that we have consent to use that data (e.g., open-source or user-provided data). We must also be transparent that content has been created by an AI when it is delivered autonomously (e.g., a chatbot’s response to a consumer, the use of watermarks).
  4. Empowerment: In some cases it is best to fully automate processes, but in others AI should play a supporting role to the human, or human judgment may be required. We need to identify the appropriate balance to “supercharge” human capabilities and make these solutions accessible to all (e.g., generating ALT text to accompany images).
  5. Sustainability: As we strive to create more accurate models, we should develop right-sized models where possible to reduce our carbon footprint. When it comes to AI models, larger doesn’t always mean better: In some instances, smaller, better-trained models outperform larger, more sparsely trained models.
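
To make the guardrail idea in the first guideline concrete, here is a minimal sketch in Python. Every name in it is a hypothetical placeholder rather than a Salesforce API: it simply shows low-confidence or unsourced answers being flagged for the user, and generated code being queued for human review instead of being deployed automatically.

```python
# Hypothetical sketch of a human-in-the-loop guardrail. None of these names
# are Salesforce APIs; the point is only that low-confidence answers get
# flagged for the user, and generated code is never deployed automatically.

from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # illustrative cutoff, tuned per use case


@dataclass
class AIResponse:
    content: str
    confidence: float                                 # model-reported certainty
    sources: list[str] = field(default_factory=list)  # citations, if any


review_queue: list[AIResponse] = []  # items awaiting human sign-off


def deliver(response: AIResponse) -> str:
    """Return the response text, appending caveats the user should check."""
    notes = []
    if response.confidence < CONFIDENCE_THRESHOLD:
        notes.append("Low confidence: please verify this answer.")
    if not response.sources:
        notes.append("No sources cited: double-check statistics and dates.")
    return response.content + (" [" + " ".join(notes) + "]" if notes else "")


def request_deploy(code_change: AIResponse) -> None:
    """Queue generated code for human review instead of auto-deploying it."""
    review_queue.append(code_change)
    print("Queued for human review; nothing was deployed automatically.")
```

A real system would route the queue into an approvals workflow; the sketch only shows where the human sits in the loop.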
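
The PII point in the second guideline can be sketched the same way. The following is an illustrative, regex-based scrubber, not a production tool; real pipelines typically rely on named-entity recognition and much broader pattern coverage.

```python
# Hypothetical sketch: redact obvious PII before text enters a training set.
# Production pipelines use far more robust detection (e.g., named-entity
# recognition); these regexes only show where the guardrail sits.

import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def scrub(text: str) -> str:
    """Replace email addresses and phone-like sequences with placeholders."""
    return PHONE.sub("[PHONE]", EMAIL.sub("[EMAIL]", text))


raw_rows = ["Contact Dana at dana@example.com or +1 (555) 123-4567."]
print([scrub(row) for row in raw_rows])
# ['Contact Dana at [EMAIL] or [PHONE].']
```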

Learn more about Trusted AI at Salesforce, including the tools we deliver to our employees, customers, communities, and partners for developing and using AI responsibly, accurately, and ethically.
