
EU AI Act: Key Details and How to Prepare

Learn about the EU AI Act, how it impacts organisations developing and using AI, and steps to comply. Stay ahead with insights on AI, GDPR, and data protection.

*This article is prepared for general information purposes only. It should not be regarded as legal advice.

Summary: Key Points of the EU AI Act

  • The EU AI Act is a new European regulation introducing strict guidelines for AI development, deployment, and use, sorting AI systems by risk and mandating human oversight, supported by appropriate human-machine interface tools, for all high-risk AI systems.
  • The EU AI Act applies to organisations that place an AI system or general-purpose AI model on the market in the EU, or put an AI system into service in the EU, wherever those organisations are established. It also applies to organisations outside the EU where the output produced by an AI system is used in the EU.
  • The EU AI Act entered into force on August 1, 2024, but it is not yet fully operational. Different transition periods apply depending on the type of system, with provisions on prohibited AI practices and general-purpose AI applying from 2025 and rules for high-risk AI systems from 2026 or 2027.
  • To prepare, companies should assess their AI systems against the EU AI Act’s requirements, update relevant documentation, inform relevant stakeholders, and stay up to date on the evolution of the legislation and subsequent guidance. Salesforce's Data Cloud is a great tool to effectively manage data and AI strategies.
Navigating New AI Rules: What the EU AI Act means for your business

What is the EU AI Act?

August 1, 2024 marked a significant regulatory shift for AI across Europe with the introduction of the EU AI Act.

The EU AI Act applies to providers established in the EU that place AI systems on the market or put them into service, but also to providers and deployers outside the EU (including UK companies) where the output produced by the AI system is used in the EU.

The EU AI Act allocates roles and responsibilities to the different parties involved in the development, commercialisation and use of AI systems (from AI providers to AI deployers), sorts AI systems by risk, prohibits certain AI practices that pose unacceptable risks, and sets firm rules for high-risk applications.

Main Points of the EU AI Act

Here is a breakdown of key points from the EU AI Act that you should know.

Risk-Based Classification:

  • Unacceptable Risk: AI systems that pose unacceptable risks to people's safety, rights, or freedoms are forbidden, e.g. AI that manipulates human behaviour to circumvent users' free will.
  • High-Risk AI: AI systems covered by certain EU product safety legislation, or used in the “critical” areas listed in the EU AI Act such as education and employment, face strict obligations, and the parties involved must comply with detailed requirements.
  • General Purpose AI: General-purpose AI models that can be integrated into downstream systems are subject to specific provisions focused on the particular risks these models pose.
  • Limited Risk AI: AI systems that pose risks due to a lack of transparency for the user must comply with basic transparency requirements, like letting users know when they’re talking with a chatbot.
  • Minimal Risk AI: the EU AI Act allows the free use of minimal-risk AI (e.g. spam filters).
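
One way to operationalise this breakdown is to keep an internal inventory mapping each AI system to its risk tier. The sketch below is purely illustrative (the system names and tiers are assumptions, and real classification requires legal analysis of the Act, not a lookup table):

```python
from enum import Enum

# Illustrative risk tiers mirroring the EU AI Act's categories.
# This is a toy inventory, not a substitute for legal assessment.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "free use"

# Hypothetical entries an organisation might record for its systems.
AI_INVENTORY = {
    "cv-screening-tool": RiskTier.HIGH,     # employment: a "critical" area
    "support-chatbot": RiskTier.LIMITED,    # must disclose it is a chatbot
    "email-spam-filter": RiskTier.MINIMAL,  # minimal risk, freely usable
}

def obligations_for(system_name: str) -> str:
    """Summarise the tier and headline obligation for a registered system."""
    tier = AI_INVENTORY[system_name]
    return f"{system_name}: {tier.name} risk -> {tier.value}"

for name in AI_INVENTORY:
    print(obligations_for(name))
```

An inventory like this also gives compliance teams a single place to track which documentation each system needs.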

Responsibilities for Providers and Deployers:

There are different obligations that attach to the various stakeholders in the AI supply chain, depending on the risk categorisation of their AI system. For example:

  • Providers: Providers have a number of obligations under this legislation, from establishing a risk management system to developing instructions for use.
  • Deployers: The rules for deployers are a bit lighter, but if you’re using high-risk AI systems, there are still obligations you’ll need to comply with, including assigning human oversight of the high-risk AI system.

The EU AI Act also places obligations on importers and distributors of AI systems as well as authorised representatives appointed by certain providers of AI systems or general-purpose AI models established outside the EU.

General Purpose AI (GPAI): GPAI models are “AI models that display significant generality, are capable of competently performing a wide range of distinct tasks and that can be integrated into a variety of downstream systems or applications”. The EU AI Act introduces special requirements applying to the providers of GPAI such as:

  • Technical Documentation: Providers of GPAI need to make sure they’re keeping detailed records, following copyright laws, and giving a clear summary of what data they’ve used for training.
  • Systemic Risk: If your GPAI models pose any major risks, you’ll have to go through extra evaluations and adversarial testing to keep everything secure.

A code of practice for GPAI model providers is under preparation and will be published in May 2025. It will help GPAI model providers demonstrate compliance with the EU AI Act requirements.

These points show how the EU AI Act aims to make AI systems safe, transparent, and accountable, giving people visibility into how the systems that affect them actually work.


Comparing The GDPR and the EU AI Act

The GDPR is a technology-neutral regulation which applies to the processing of personal data by controllers and processors, regardless of whether that data is processed by an AI system or by another technology. The GDPR therefore applies to an AI system only where that system processes personal data. That said, the AI Act builds on some of the principles in the GDPR, such as:

  • The principle of fairness by mitigating bias and discrimination in the use of AI;
  • The principle of transparency, by requiring a base level of transparency for all AI systems and a higher level for all high-risk AI systems;
  • The principle of accuracy, by requiring AI systems to use high-quality and unbiased data; and
  • The principle of purpose limitation, by requiring a well-defined and documented purpose for AI systems.

Has the EU AI Act been Passed? Key Dates and Timeline

The EU AI Act has officially passed and will start impacting businesses. These are some key dates to be aware of:

  • August 1, 2024: The EU AI Act enters into force.
  • February 2, 2025: Prohibitions on unacceptable-risk AI practices begin to apply.
  • August 2, 2025: Obligations for providers of general-purpose AI models apply.
  • August 2, 2026: Most remaining provisions, including rules for many high-risk AI systems, apply.
  • August 2, 2027: Extended transition period ends for high-risk AI systems covered by EU product safety legislation.

With these dates in mind, businesses should start preparing now if they haven’t already. Planning will help you stay aligned with the rules and adapt smoothly to the new regulations.

6 Ways to Prepare for the EU AI Act

Preparing for the EU AI Act may seem overwhelming, but we’ve put together these six steps to help you get started:

Start with Data:

Good AI relies on good data, so start by ensuring that your organisation’s data management practices are up to date, providing a complete and accurate picture of your data assets. Identify any silos, i.e. places where the data you have on a given topic is disconnected from other sources, preventing you from seeing the bigger picture. Tools like Data Cloud can assist your compliance program by centralising and unifying records across systems, giving you a holistic view of your data that allows you to take action on the most relevant points for a given task. You may even find out-of-date or redundant data that can be safely deleted.
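
In code terms, breaking down silos often comes down to merging records about the same entity from different systems onto a shared identifier. The sketch below is a minimal plain-Python illustration of that idea, not the Data Cloud API; the systems, field names, and sample records are all assumptions:

```python
from collections import defaultdict

# Records about the same customers scattered across two systems
# (hypothetical sources and fields for illustration).
crm_records = [
    {"email": "ada@example.com", "name": "Ada Lovelace", "source": "crm"},
]
support_records = [
    {"email": "ada@example.com", "plan": "enterprise", "source": "support"},
    {"email": "old@example.com", "plan": "trial", "source": "support"},
]

def unify(*systems):
    """Merge records from multiple systems, keyed on a shared identifier."""
    merged = defaultdict(dict)
    for system in systems:
        for record in system:
            merged[record["email"]].update(record)
    return dict(merged)

profiles = unify(crm_records, support_records)
# ada@example.com now carries both CRM and support fields in one profile,
# while old@example.com surfaces as a stray record worth reviewing for
# out-of-date or redundant data.
```

The point of the exercise is the unified view: once records are keyed to one identity, stale or duplicate entries become visible and actionable.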

Know Your AI Inside and Out:

Once your data strategy is in place, you can then figure out what’s powering your AI systems and where they stand on the risk scale. Check which rules apply to each system, including which paperwork and transparency boxes you need to consider. If you identify any AI systems that fall short of your accuracy benchmarks, consider grounding the outputs of those systems in relevant data from your organisation (e.g. your online knowledge articles for a support Agent) through RAG or other techniques. This will ensure that the AI system takes the right information into account and produces high-quality results you can trust.
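
The grounding idea above can be sketched in a few lines: retrieve the most relevant document for a question, then inject it into the prompt as context. This is a toy retrieval step using bag-of-words cosine similarity over a stand-in knowledge base (the articles, function names, and prompt format are illustrative assumptions, not any particular product's API):

```python
import math
from collections import Counter

# A stand-in for your organisation's knowledge articles (assumed content).
KNOWLEDGE_ARTICLES = [
    "To reset your password, open Settings and choose Security.",
    "Invoices are emailed on the first business day of each month.",
]

def _vector(text: str) -> Counter:
    """Turn text into a simple bag-of-words term-count vector."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(question: str) -> str:
    """Pick the article most similar to the question."""
    q = _vector(question)
    return max(KNOWLEDGE_ARTICLES, key=lambda art: _cosine(q, _vector(art)))

def build_grounded_prompt(question: str) -> str:
    # The retrieved article is injected as context so the model answers
    # from your data rather than from its training memory alone.
    return f"Context: {retrieve(question)}\nQuestion: {question}"

print(build_grounded_prompt("How do I reset my password?"))
```

Production RAG systems replace the bag-of-words step with semantic embeddings and a vector store, but the shape is the same: retrieve, then ground the prompt.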

Spot the Gaps:

Look at your current work and compare it to the new rules. Assess where you fail to comply and plan to get everything up to speed.

Get Your Documents in Order:

Sort out all the necessary documentation for your AI systems and set a timeline to get things done. A solid plan will make life easier when it’s time for those assessments and changes.

Get Everyone on Board:

Work with different teams to see how the Act impacts your whole business. The more ideas and perspectives, the better. Not to mention, you will all be aligned on how to move forward.

Stay Updated:

Regulations change, so stay updated with any new guidelines or updates from EU bodies. Being proactive will save you time down the road.

These steps will help you prepare for the EU AI Act and understand if your AI systems and practices align with it.

Why Staying Compliant with the EU AI Act Matters

Keeping up with the EU AI Act isn’t just about avoiding fines – following the rules puts your business in a strong position as a leader in responsible AI. Tools like Data Cloud from Salesforce can help you manage and make the most of your data while staying on top of the Act’s requirements. Building these practices into your strategy will build trust, spark innovation, and help you remain relevant in the AI market.

Key Takeaways:

  • The EU AI Act categorises AI systems by risk level and sets strict rules, enhancing transparency requirements across all levels of AI systems and requiring appropriate human oversight for high-risk AI systems, with specific obligations for general-purpose AI model providers.
  • Businesses should start preparing by assessing their AI systems to address compliance needs.
  • Staying compliant positions businesses as leaders in responsible AI innovation, helping build trust and maintain market relevance.