We’re pleased to announce that Salesforce has been selected to join the National Institute of Standards and Technology (NIST)’s U.S. AI Safety Institute Consortium (AISIC) as a member of its inaugural class.
Following the White House’s AI Executive Order, NIST created the Consortium and called upon experts to lend their time and knowledge. Building on our prior work with NIST, our history of responsible innovation initiatives, and our Office of Ethical and Humane Use of Technology, Salesforce stepped up and answered that call.
We look forward to participating in working groups, hosting convenings, and lending the Consortium our expertise in building and deploying trusted AI.
Public and private entities must collaborate as AI advances
AI innovation is happening at a speed once thought to be impossible. But high-speed innovation without guardrails can lead to varied and unexpected risks.
While governments moved last year to address the rapid development of AI — including the UK’s AI Safety Summit, the EU’s AI Act, the White House’s AI Executive Order, and the G7’s landmark code of conduct — overall consumer and business trust in AI remains elusive. In the years ahead, public-private-sector collaboration will be key to bridging that gap.
The Consortium can serve as a model for this work. Already, Salesforce team members have been partnering with NIST through the AI Risk Management Framework (AI RMF) and the National AI Advisory Committee (NAIAC). The Consortium’s role as a national convener between the public and private sectors to prioritize responsible innovation will help us all build AI systems that are safe, secure, and worthy of societal trust.
Salesforce and trusted AI
At Salesforce, trust has always been our #1 value. Ensuring that AI is safe and inclusive for all is critical, and it starts with developing AI safely, accurately, and ethically.
For over a decade, Salesforce has led the way by investing in ethical AI. Our Office of Ethical and Humane Use continues to guide the responsible development and deployment of AI, both internally and with our customers. In the past year alone, we’ve released our Guidelines for Generative AI, published an AI Acceptable Use Policy, and worked closely with our product teams to ensure trust keeps pace with our technology.
Salesforce is also proactively engaging with governments, industry, academia, and civil society to advance responsible, risk-based, and globally applicable AI norms. This includes signing the White House’s Voluntary Commitments to help advance the development of safe, secure, and trustworthy AI, and pledging to invest $4 billion in Salesforce’s U.K. business over the next five years to support AI innovation.
Working together for our AI future
We’re excited to see governments and industry continue to work together toward a future that safely leverages the power and possibilities of trusted AI, and honored to be part of a Consortium that can help bring that future about.
Go deeper:
- Read Salesforce’s five guidelines for responsible generative AI development
- Learn about opportunities to deepen trust in AI
- Read more about Salesforce’s AI Acceptable Use Policy
- Discover how Salesforce develops ethical generative AI