
How Salesforce Builds Trust in Our AI Products

Salesforce is taking steps to ensure AI is trusted and reliable by empowering humans at the helm through product design

AI holds incredible promise for humanity and for enterprises. However, one of the greatest obstacles to AI adoption may be getting people to trust the technology.

According to a new survey, 63% of global workers say human oversight would build their trust in AI. That’s why, at Salesforce, we’re designing powerful, system-wide controls that put humans at the helm of our AI products. 

As we move into more autonomous phases of AI, human control is more important than ever to ensure that AI acts with our consent and under our supervision.

To that end, our Responsible AI & Technology team has been working with Salesforce design, product, and engineering teams to build standardized human-at-the-helm patterns. These patterns are standard guardrails, implemented across Salesforce AI products, designed to improve safety, accuracy, and trust while empowering the human user. Salesforce human-at-the-helm patterns fall into five primary categories:

  1. Mindful Friction: System-wide controls and product design that create pauses in the user experience to ensure intentional human engagement at critical junctures. This allows us to thoughtfully steer, review, and act upon AI-generated content to support trustworthy AI. 
  2. Awareness of AI: Functionality that creates transparency and awareness of the presence of AI-generated content.
  3. Bias & Toxicity Safeguards: Guardrails to help make sure AI systems don’t produce harmful or malicious content.
  4. Explainability & Accuracy: Designed experiences that make AI more reliable by clearly explaining the AI’s actions and delivering clear, correct information.
  5. Hallucination Reduction: A set of policies and prompt instructions to limit the scope of what an AI can generate. 

Combined with our Einstein Trust Layer, these patterns are an important way that Salesforce is building trust in AI. Here are just a few of the patterns, features, and guidelines that can be found across the Salesforce AI product suite:

  • Citations: Citations let users see where information comes from by pointing to the sources and documentation behind it. Our new citation design makes Einstein-generated content readable and clear about its sources, and gives users the option to validate the sources it flags.
  • Transparency of AI Content: Our AI Acceptable Use Policy requires customers to disclose when end users are interacting directly with an automated system; the same disclosure is required for all Einstein features and services. Additionally, our UI clearly discloses when content is AI-generated: the Einstein “sparkles” icon alerts users, in the moment, that they are about to use or are using generative technology within the Salesforce platform. Real-time check marks are also displayed for processes that AI has completed.
  • Model Containment: Prompt instructions are set to reduce the potential for toxic and biased outputs, including toxic mirroring. For example, a rule is built into the prompt so the LLM won’t use words or phrases that are toxic, hateful, biased, inflammatory, or offensive. The LLM can also be guided to avoid referencing the gender identity, age, race, sexual orientation, socioeconomic status, education level, religion, or physical/mental ability of a sender or recipient, or their perceived habits, desires, or social practices (a rough sketch of this idea follows this list).
  • Feedback: Our Einstein products solicit user feedback to improve quality, relevance, and accuracy over time through four channels: edits, hover-overs, explicit feedback, and thumbs up/thumbs down. Together, these help ensure that generated content is useful and accurate, and users can adjust content that doesn’t match their expectations. For example, our email generation tools require a human review and confirmation step before a generated email is saved or sent, reinforcing the human role in email composition.
  • Valence of Buttons: Buttons across our cloud products use the same color and font, so no single action is visually emphasized; this reduces the risk of users hitting send on an Einstein-generated email before reading it first. For example, “send” and “submit” are the same color as “edit” and “regenerate.”
  • Unchecked Demographics: By default, demographic attributes are unchecked when generated in marketing segments to help mitigate unintended stereotype bias.
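
The prompt-level rules described under Model Containment can be thought of as instructions prepended to every generation request. The minimal Python sketch below illustrates that idea; the instruction wording, message format, and function name are assumptions made for illustration, not Salesforce’s actual Einstein Trust Layer implementation.

```python
# Hypothetical sketch only: the instruction text and helper below are
# illustrative assumptions, not Salesforce's actual implementation.

CONTAINMENT_INSTRUCTIONS = (
    "Do not use words or phrases that are toxic, hateful, biased, "
    "inflammatory, or offensive. Do not reference the gender identity, age, "
    "race, sexual orientation, socioeconomic status, education level, "
    "religion, or physical/mental ability of the sender or recipient, or "
    "their perceived habits, desires, or social practices."
)

def build_guarded_messages(user_request: str) -> list[dict]:
    """Prepend the containment rule to a generation request as a system message."""
    return [
        {"role": "system", "content": CONTAINMENT_INSTRUCTIONS},
        {"role": "user", "content": user_request},
    ]

# Example: this guarded message list would then be sent to the model.
messages = build_guarded_messages("Draft a follow-up email to the customer.")
```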

What’s next for human and artificial intelligence

This work will continue to evolve as technology does, and these trust patterns will scale and expand with it. For example, as our Einstein Copilot gets even more advanced, new trust patterns will be added to future releases, including:

  • Trust Safety Detectors, which provide toxicity warnings within Prompt Builder. Enabled by the admin for a defined use case, these warnings alert users when a prompt raises safety concerns, right in the context where the incident occurred, for clarity and explainability. For example, a warning alert includes popovers that explain the detection, present dynamic actions based on the detection type, and give the user a chance to provide specific feedback on the detection.
  • Confirmation Steps, which incorporate mindful friction across steps within Copilot. These are intended to identify critical junctures for human involvement where AI mistakes could happen. As we continue to innovate toward more autonomous experiences, we’re offering users a chance to reconfirm or change direction, such as before changes are made to data, along with feedback patterns and text fields to describe that feedback (see the sketch below).
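
To make the confirmation-step idea concrete, the sketch below gates a data-changing action behind an explicit user confirmation. It is a minimal illustration with assumed names and flow, not Einstein Copilot’s actual design.

```python
# Hypothetical sketch only: the types and flow are illustrative assumptions,
# not Einstein Copilot's actual design.

from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedAction:
    description: str    # human-readable summary shown to the user
    changes_data: bool  # whether the action would modify records

def run_with_confirmation(action: ProposedAction,
                          apply: Callable[[ProposedAction], None]) -> bool:
    """Pause for explicit user confirmation before applying a data-changing action."""
    if action.changes_data:
        answer = input(f"Copilot is about to: {action.description}. Proceed? [y/N] ")
        if answer.strip().lower() != "y":
            print("Cancelled; no data was changed.")
            return False
    apply(action)
    return True
```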

As we transition to a world in which fully autonomous AI agents exist, human-at-the-helm patterns are the controls that will allow us to govern and oversee AI agents. This sets AI agents up for success and allows humans to focus on what we’re best at: creativity, connection, and decision-making.

By combining the best of human and artificial intelligence, we’re unlocking incredible new solutions and ensuring trustworthy AI experiences for all.
