
Salesforce Commends Progress of EU AI Act: Promoting Safe, Trusted Artificial Intelligence

Editor’s Note: This post was updated on July 12, 2024 to reflect the EU AI Act’s publication in the EU Official Journal and on March 13, 2024 following the European Parliament’s endorsement of the EU AI Act.


Today, the European Union published the EU Artificial Intelligence Act (EU AI Act) in the Official Journal. The law will enter into force on August 1, 2024, but it will take up to three years for all of its provisions to come into full effect.

Salesforce takes compliance requirements very seriously and, as a provider of AI systems in the EU, will continue to work closely with EU policy makers during the implementation phase of the law, anchoring trust as the cornerstone of AI development.


March 13, 2024

Today marks a historic milestone for artificial intelligence (AI): the European Parliament has endorsed, by an overwhelming majority, the European Union’s Artificial Intelligence Act (EU AI Act).

The EU AI Act is the first comprehensive legal framework of its kind for AI governance, and sets the stage for responsible AI adoption, fostering trust and transparency in the technology. 

Fostering public trust in AI with governance frameworks 

In an era where AI-powered services are revolutionizing productivity, and governments are grappling with the idea of regulating this important technology, the EU AI Act provides an important tool to guide public and private entities as they continue to develop and innovate with AI. 

Research confirms that the European public is keen on such guidance. Citizens are curious about AI but are concerned about data privacy, misinformation, and other risks, and they are counting on lawmakers to address these concerns through guardrails and standards.

We view it as an appropriate and constructive role for governments, in consultation with other stakeholders, to take definitive and coordinated action toward building trust in AI and advancing responsible, safe, risk-based, and globally interoperable AI policy frameworks.

How the EU AI Act aligns with Salesforce’s commitment to trusted AI

At Salesforce, trust has always been our #1 value. We’ve spent over a decade investing in ethical AI, both in our business and with our customers. Our Office of Ethical & Humane Use has played a critical role in developing and deploying trusted AI. Building on our Trusted AI Principles, in the past year alone, we have published Guidelines for Generative AI, an AI Acceptable Use Policy, and guidelines for having a Human at the Helm.

That is why Salesforce has actively advocated for guardrails in line with those the EU AI Act provides, including:

  1. Risk-based regulation: EU regulators have realized that AI cannot be regulated through a one-size-fits-all approach. Salesforce supports this nuanced approach of having tailored, risk-based AI regulation that differentiates contexts and uses of the technology and ensures the protection of individuals, builds trust, and encourages innovation. 
  2. Transparency as a key principle: The EU AI Act includes transparency obligations as well as an allocation of responsibilities along the AI value chain. These measures enable individuals to make informed choices and understand how AI systems affect them, ensuring accountability. 
  3. Alignment with data privacy laws: We have long advocated for comprehensive data privacy legislation that protects the fundamental human right to privacy. The AI Act introduces additional data governance rules, including requirements for data accuracy and bias detection.
  4. Stimulating innovation: The EU AI Act allows smaller businesses to develop AI solutions in so-called ‘regulatory sandboxes’: controlled environments where businesses can explore and experiment with new technologies.
  5. Harmonized implementation, supported by public-private collaboration: We believe the public and private sectors, along with civil society and academia, must commit to working together to develop and deploy trusted AI. Having an “EU AI Office” to oversee the enforcement of rules across Europe, and to contribute to new standards and testing practices, allows for this collaborative approach to emerging technologies. 
  6. Global cooperation: The EU AI Act incorporates the OECD’s definition of artificial intelligence, which was slightly adapted to align with the law. Such alignment supports a globally coordinated approach around AI, and around the types of systems that should be regulated.

What’s next for the AI Act

With the AI Act’s endorsement by the European Parliament, a few steps remain before it’s passed into law. 

  • The EU AI Act is expected to be officially endorsed by Member States and published in the Official Journal of the EU shortly after today’s vote, entering into force 20 days after publication. 
  • Companies will then have two years to comply with its requirements, with the EU AI Act expected to be fully applicable by mid-2026. Rules on general-purpose AI will start applying after one year, by mid-2025. 

We believe that by creating risk-based frameworks such as the EU AI Act, pushing for commitments to ethical and trustworthy AI, and convening multi-stakeholder groups, regulators can make a substantial positive impact. Salesforce applauds EU institutions for taking leadership in this domain. 

We look forward to further contributing to this collective challenge and collaborating with EU policy makers during the implementation phase, anchoring trust as the cornerstone of AI development. 


February 13, 2024

Today, the European Union’s Artificial Intelligence Act (EU AI Act) was endorsed by the European Parliament at the committee level — the penultimate step before becoming EU law. The EU AI Act is the first comprehensive legal framework of its kind globally aimed at regulating AI. 

Why it’s important: While companies have quickly adopted AI-powered services to supercharge productivity, governments around the world continue to grapple with the best way to ensure AI is built and deployed in a safe, trusted, and fair manner. Proposed by regulators in 2021, the EU AI Act provides a framework to guide public and private entities as they evolve and adopt the technology.

Salesforce perspective: Eric Loeb, EVP of Global Government Affairs, explained how regulation can align with Salesforce’s commitment to trusted AI.

  • “Salesforce believes that harnessing the power of AI in a trusted way will require governments, businesses, and civil society to work together to advance responsible, safe, risk-based, and globally interoperable AI policy frameworks,” said Loeb. 
  • “The progress of the EU AI Act has meaningfully advanced AI policy discussions across the globe. Salesforce commends the policymakers behind the EU AI Act for working with care toward nuanced approaches, including a risk-based approach and ethical guardrails,” he continued.
  • Furthermore, Loeb said, “The development of risk-based frameworks should address the entire value chain of AI activity. AI is not a one-size-fits-all approach, and effective frameworks should protect citizens while encouraging inclusive innovation and competition.”

Salesforce believes that harnessing the power of AI in a trusted way will require governments, businesses, and civil society to work together to advance responsible, risk-based, and globally applicable AI norms.

Eric Loeb, EVP of Global Government Affairs, Salesforce
