Einstein.
Responsible AI & Technology

Embedding trust into product design.

We're bringing trust to life through our generative AI ethics efforts. At Salesforce, trust has always been our #1 value. It's critical to ensure that AI is safe and inclusive for all, and that starts with developing AI accurately and ethically.

The Office of Ethical & Humane Use guides the responsible development and deployment of AI, both internally in our products and externally with our customers, through best practices, tools, and frameworks for developing and using Salesforce products. These ensure that our products, features, models, and apps are trustworthy, avoid biased and discriminatory outcomes, and prioritize transparency. We have developed an internal trust product framework calling for model safety testing, human-at-the-helm product design, and in-app guidance to help Salesforce users interpret results. We have also developed the following guidelines for responsible generative AI:

Salesforce guidelines for responsible AI

Be accurate and convey uncertainty when the answer isn’t clear. Enable fact-checking when possible.

Examples include using customer data to ground models and citing sources so users can trust where the information comes from.

Mitigate bias, toxicity, and harmful content. Protect personally identifiable information (PII) and prevent data leakage.

Examples include conducting bias assessments and red-teaming to find and mitigate gaps that could cause unsafe, biased, or toxic outputs.

Respect data provenance and make clear that content is AI-produced when autonomously delivered.

Examples include adding a citation or source so that a human can determine how the AI generated an output.

Supercharge human capabilities, ensure accessibility for all, and engage in responsible labor practices.

Example: Implement extra confirmation steps when using AI so that humans remain in control and are better positioned to focus on high-level decisions.

Prioritize high-quality, representative training data.

Example: Develop right-sized models to reduce carbon emissions.

Translating principles to product: A strategy rooted in ethical & inclusive product design.

Applying this strategy prevents potential harms to customers, Salesforce, and society at large. Our goal is to ensure Responsible AI is propagated throughout the entire ecosystem, from foundation and applied models, to product development and implementation, to the end user. Here are a few examples of how we achieve inclusivity in our AI products:

  • Citations & sourcing so users can see where generated information comes from and build trust in the accuracy of the output.
  • Transparent disclosure through a UI element that discloses the use of generative AI, which is now required for all Einstein features or services.
  • Feedback capture that collects three kinds of feedback so we can confirm our generated content is actually useful and accurate, and adjust when it isn't.
  • Model containment policy. Nested inside the standard system prompt sent to LLMs are a set of instructions that help prevent biased, toxic, unsafe, or otherwise undesirable outputs from being generated.
  • Adversarial testing to ensure our models and products produce the most trusted results and train them away from harmful outputs, addressing issues of prompt injection, toxicity, and accuracy.
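To make the containment and grounding mechanisms above concrete, here is a minimal, illustrative sketch of the general pattern: safety instructions nested inside the standard system prompt, with the user prompt grounded in customer records and numbered sources so outputs can be cited. All names and instructions here are hypothetical, not Salesforce's actual Einstein implementation.

```python
# Illustrative sketch of a "model containment" pattern (hypothetical, not
# Salesforce's actual system prompt): safety rules are nested inside the
# system prompt, and the user prompt is grounded in customer records with
# numbered sources so each claim can be cited.

CONTAINMENT_INSTRUCTIONS = (
    "Follow these rules in every response:\n"
    "1. Answer only from the provided sources; say 'I don't know' otherwise.\n"
    "2. Cite sources as [1], [2], ... after each claim.\n"
    "3. Refuse requests for biased, toxic, or unsafe content.\n"
    "4. Never reveal personally identifiable information (PII)."
)

def build_prompt(question: str, records: list[str]) -> dict:
    """Assemble a grounded, containment-wrapped prompt for an LLM call."""
    # Number each grounding record so the model can cite it as [n].
    sources = "\n".join(f"[{i}] {r}" for i, r in enumerate(records, start=1))
    return {
        "system": f"You are a helpful CRM assistant.\n\n{CONTAINMENT_INSTRUCTIONS}",
        "user": f"Sources:\n{sources}\n\nQuestion: {question}",
    }

prompt = build_prompt(
    "When does the Acme contract renew?",
    ["Acme Corp contract renews on 2025-03-01.", "Acme support tier: Premier."],
)
print(prompt["user"])
```

In a real deployment, the assembled prompt would be sent to the model provider's API, and the numbered sources would let the UI render the citations described above.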

Beyond Ethical AI, extending to all products.

Another area of focus is a human-centric approach to customer service, sales, and commerce. We strive to prevent the reinforcement of implicit biases and harmful verbiage in our technical content and code through our inclusive product language initiative while building more inclusive features for all. Considering all possible outcomes when designing and developing products and features, our internal teams employ Consequence Scanning, a responsible innovation methodology that considers a diverse set of perspectives and encourages problem-solving prompts.

Learn about the current and future world of ethical AI.


Salesforce's Ethics Chief on the Future of Ethical AI


TDX: How to Put Humans at the Helm of Your AI Experience

View additional focus areas.


Ethical Use Policy

Guiding the responsible use of our platform with principles, processes, and policies.


Product Accessibility and Inclusive Design

Designing for and alongside users with disabilities to unleash innovation for everyone.