
Salesforce Supports AI Regulation Advancing Digital Trust and Innovation

Editor’s Note: AI Cloud, Einstein GPT, and other cloud GPT products are now Einstein. For the latest on Salesforce Einstein, go here.

AI, especially generative AI, is the next seismic technology shift – on the level of the internet and mobile. But new technology has its limits and can introduce a wide range of risks, including issues of accuracy, bias and inequality, privacy and security, and the sourcing of content.

As companies rapidly move forward and gain productivity with AI, trust must be the top priority. That’s why, earlier this year, Salesforce released five guidelines for responsible generative AI development to help address these trust concerns.

In addition, Salesforce supports tailored, risk-based AI regulation that differentiates between the contexts and uses of the technology, protects individuals, builds trust, and encourages innovation.

Making AI accessible, trusted, and ethical

A tailored approach is key: A one-size-fits-all approach to regulation may hinder innovation, disrupt healthy competition, and delay the adoption of technology that consumers and businesses around the world are already using to boost productivity.

For example, while Salesforce encourages responsible AI development for all use cases, a small team of engineers developing a generative AI chatbot to teach college students to cook should face fewer guardrails and less oversight than healthcare providers using AI tools to help diagnose patients and develop care plans.

Salesforce commends policymakers who apply a nuanced approach when developing regulation, and encourages them to consider the following concepts to help society navigate this important moment:

  1. Risk-based framework: The context in which technology is used matters, and some industries, like healthcare, are more likely to create higher risks for their users and society than others. Risk-based AI regulation would focus most on high-risk applications, especially those with legal, financial, and ethical implications that could cause significant harm or impact someone’s rights and freedoms. 
  2. Differentiation based on context, control, and use: Regulation should differentiate the context, control, and uses of the technology and assign guardrails accordingly. Generative AI developers, for instance, should be accountable for how their models are trained and the data they are trained on, while those deploying the technology control how the tool is used and should establish rules governing that interaction. 
  3. Data privacy laws: Data protection laws that protect the fundamental human right to privacy are a foundation of responsible AI regulation. AI is powered by data. Additional rules specific to generative AI should address the use and privacy of personal data for training future models, safeguarding personal data within the AI ecosystem. 
  4. Transparency: AI systems might operate as “black boxes,” making it difficult to understand their decision-making processes. Individuals should be informed of and empowered to understand the “why” behind AI-driven recommendations, and they should be aware if they are interacting with a human or a simulated persona. 
  5. Accountability and government oversight: AI impact assessments are one way to promote accountability and trust with high-risk AI systems. Licenses or notifications can serve a useful role in accountability and compliance, but they should be implemented through a risk-based approach that balances compliance needs against market entry, competition, and innovation. That’s why risk management frameworks like those shared by NIST will bring a foundational understanding to the field. 
  6. Harmonization and consistency with existing rules: Many existing laws and policies already provide some guardrails around AI, such as global data protection laws. As regulators and other stakeholders develop new guidance, they should assess and clarify whether there is an existing law addressing these concerns. 
  7. Future-proof and universal applications: Given the pace of innovation, AI regulations should be globally interoperable, and be both durable and flexible. The regulations should provide a policy framework for the ethical development and deployment of AI systems, rather than focusing on a specific technology at a specific time. 

Governments, industry, academia, and civil society need to work together 

AI is a critical and rapidly evolving issue in society, and Salesforce is proactively engaging with governments and all stakeholder groups to advance responsible, risk-based, and globally applicable AI norms.

  • Salesforce supports governments and industry partnering together, like the work occurring in the G7 and the longstanding AI work of the Organization for Economic Co-operation and Development (OECD). 
  • In the United States, Salesforce submitted comments to the U.S. National Telecommunications and Information Administration (NTIA) AI Accountability Policy docket and to the U.S. Office of Science and Technology Policy (OSTP). 
  • In the European Union, Salesforce welcomes the continued progress on the EU AI Act and strongly recommends that the risk-based approach is reflected in the Act’s final text. 
  • Salesforce has also pledged to invest $4 billion in its UK business over the next five years to support AI innovation.
  • Salesforce representatives are actively participating in multi-stakeholder discussions through the U.S. National AI Advisory Committee, the Singapore Advisory Council for the Ethical Use of AI, Singapore’s AI Verify Foundation, and the U.S. Chamber of Commerce Artificial Intelligence Commission. 

Salesforce is committed to building trusted, transparent, and accountable AI systems that prioritize fairness, accuracy, privacy, and positive societal impact, and will continue this commitment as the technology continues to advance.
