EU AI Act: A Comprehensive Overview


Europe is widely known for its proactive approach to all things privacy and technology. In recent years, we've had the GDPR, the EU Omnibus Directive, and now, the EU Artificial Intelligence Act.

With AI being a hot topic today, it's no surprise that lawmakers worldwide are contemplating new laws to defend people from AI’s inherent threats. Thanks to the EU AI Act, Europe is now at the forefront of this endeavor.

This article unpacks the EU AI Act, looking at what it entails, who it may apply to, and how to comply if and when the EU finally adopts the law.

Let's get into it.

Key Takeaways

  • The EU AI Act is a landmark regulation designed to govern the use of artificial intelligence (AI) across various sectors in the EU.
  • If approved, the EU AI Act will introduce a risk-based approach to AI and outright ban AI systems that pose the biggest threat to people’s rights and freedoms.
  • Failing to comply with the EU AI Act attracts fines of up to €35 million or 7% of a company’s global turnover, whichever is higher, depending on the seriousness of the violation.

What is the EU AI Act?

The EU AI Act is set to be the world's first comprehensive AI regulation. It was proposed in April 2021 as part of the EU’s digital strategy to ensure ethical AI practices across the EU.

Specifically, the law aims to protect the fundamental rights and democracy of EU citizens from high-risk AI while encouraging innovation and reinforcing Europe as a privacy leader.

On December 9, 2023, the EU AI Act reached a significant milestone as the European Parliament and the Council provisionally agreed on the text of the law. If eventually adopted, the EU AI Act will be enforced by national authorities, with coordination and oversight at the EU level from the European Commission.

Under the provisional agreement, the EU AI Act is set to apply two years after its entry into force, with some exceptions for specific provisions.

In sum, the EU AI Act’s most significant provisions are as follows:

  1. Risk-based approach: AI systems are classified into four risk categories: "unacceptable," "high," "limited," and "minimal/low." Simply put, the higher the risk, the stricter the rules.
  2. Transparency & explainability: You need to make your AI's logic clear and accessible to users. Imagine explaining your algorithm to a five-year-old – that's the level of clarity needed.
  3. Data governance: Stricter rules are set to apply to high-risk AI. In particular, businesses must comply with more stringent consent requirements and clear data deletion procedures.

Low-risk applications like spam filters are largely exempt from most of the law’s requirements. However, it's best to assess your entire AI portfolio to ensure compliance. It's an investment in future-proofing your business and building trust with consumers and the EU.
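As a rough illustration, the tiered approach can be sketched in code. The four category names come from the Act itself, but the example use-case mapping and the one-line obligation summaries below are simplifications for illustration only; real classification depends on the Act's final definitions and annexes.

```python
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # strict obligations apply
    LIMITED = "limited"            # transparency duties
    MINIMAL = "minimal/low"        # largely exempt


# Hypothetical mapping for illustration only.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "recruitment screening": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Return a one-line, simplified summary of what each tier implies."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited -- do not deploy",
        RiskTier.HIGH: "FRIA, conformity assessment, human oversight, record-keeping",
        RiskTier.LIMITED: "transparency and disclosure requirements",
        RiskTier.MINIMAL: "no specific obligations, but monitor for changes",
    }[tier]
```

The point of the sketch: "the higher the risk, the stricter the rules" is not a slogan but a lookup, and every AI use case in your portfolio should resolve to exactly one tier.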

Scope of the EU AI Act

The scope of the EU AI Act remains somewhat unclear since the final text hasn't been decided upon yet. That said, the law is expected to have a broad scope, covering both public and private sectors.

By and large, the scope of the EU AI Act will depend on its final definition of "AI systems."

Regardless of the final outcome, the law is set to regulate virtually all business-oriented AI systems, with a primary focus on AI providers and deployers inside the EU.

It's worth noting that the EU AI Act won't apply to the following:

  • AI systems exclusively used for military or defense purposes
  • AI systems dedicated solely to research and innovation
  • Real-time remote biometric identification (RBI) systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions)

Despite uncertainties, the law aims to strike a balance between promoting responsible AI practices in businesses and exempting specific applications.

Unacceptable Risks Under the EU AI Act

While the EU AI Act encourages innovation, it firmly draws the line at specific AI systems considered a threat to people's fundamental rights.

Specifically, the following AI applications are classified as "unacceptable risk" and therefore banned by law:

  • Biometric Categorization: AI that categorizes people based on sensitive characteristics like race, religion, or sexual orientation is a no-go.
  • Untargeted Facial Recognition: Scraping facial images from CCTV footage or the Internet to build facial recognition databases is also illegal.
  • Workplace Intrusion: Emotion recognition in educational and professional settings raises privacy concerns and creates an uncomfortable surveillance environment.
  • Social Scoring: Creating AI systems that judge individuals based on social behavior or personal characteristics is strictly prohibited.
  • Behavioral Manipulation: Manipulating human behavior through AI to circumvent people’s free will is unsurprisingly banned.
  • Exploitation: Using AI to exploit vulnerable individuals based on age, disability, or social status crosses ethical boundaries and is prohibited.
  • Predictive Policing: Certain predictive policing applications that raise profiling and discrimination risks are also off the table.

Through these restrictions, the EU paints a clear picture: all businesses covered by the EU AI Act must prioritize fairness, privacy, and human autonomy in their operations.

Checklist for Compliance Under the EU AI Act

The EU AI Act is set to become the world's first comprehensive AI regulation, laying the foundation for future legislation.

If your business deals with EU residents and has an AI footprint, you'll most likely need to comply with the EU AI Act. What’s more, compliance is a smart move to foster trust with your customers and keep regulators happy.

Here's a practical checklist to get you started:

Find out if the law applies to your business

The first step is determining if the Act applies to you. To do this, you’ll need to assess your AI systems, considering your risk levels, sector involvement, and the purpose of AI deployment.

Using a credible online checker can also help you get more insights into whether the law applies to you.

While the final definitions of "AI systems" and "high-risk" are still evolving, you're likely covered if you:

  • Develop, sell, or deploy AI systems within the EU
  • Target EU users to offer them AI services, even if you're based outside the EU, or
  • Use AI systems for high-risk purposes, such as credit scoring, recruitment, or medical diagnosis of EU users

Remember, even low-risk AI systems might not be entirely exempt from future revisions or specific sector regulations. Staying informed and proactive is vital.
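The three applicability criteria above can be expressed as a simple first-pass screen. This is a sketch of the checklist in prose, not legal advice; the function name and parameters are my own labels, and a "no" result still warrants monitoring since the definitions of "AI system" and "high-risk" are evolving.

```python
def scope_check(develops_sells_or_deploys_in_eu: bool,
                targets_eu_users: bool,
                uses_ai_for_high_risk_purposes: bool) -> list:
    """Return the criteria (if any) suggesting the EU AI Act likely applies.

    An empty list means none of the three screening criteria matched;
    it does NOT guarantee the law is inapplicable.
    """
    reasons = []
    if develops_sells_or_deploys_in_eu:
        reasons.append("develops, sells, or deploys AI systems within the EU")
    if targets_eu_users:
        reasons.append("targets EU users with AI services, even from outside the EU")
    if uses_ai_for_high_risk_purposes:
        reasons.append("uses AI for high-risk purposes (e.g., credit scoring, recruitment)")
    return reasons
```

Any single matching criterion is enough to put you in likely scope, which is why the function collects reasons rather than requiring all three.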

Observe consumer rights under the law

As mentioned, the EU AI Act prioritizes the protection of people’s fundamental rights. This means you must ensure that all EU residents interacting with your AI systems can exercise their rights.

Specifically, the law requires you to uphold the following data subject rights:

  • Respect the right to information and explanation: Be transparent about how your AI works and the decisions it makes, using clear and accessible language.
  • Uphold the right to non-discrimination: Ensure your AI doesn't unfairly disadvantage individuals or groups based on protected characteristics like race, gender, or disability.
  • Offer the right to rectification and erasure: Allow users to correct inaccurate data used by your AI and, under certain circumstances, request its deletion.
  • Enable human intervention: Guarantee human oversight and intervention capabilities where necessary to prevent harm or bias.

Avoid AI applications considered “Unacceptable Risk”

As mentioned, the EU AI Act outright bans the specific AI applications classified as “unacceptable risk” above.

As such, you’ll need to carefully examine your AI systems to ensure compliance and avoid engaging in activities deemed unacceptable under the EU AI Act.

Perform Fundamental Rights Impact Assessments for high-risk activities

Before deploying high-risk AI systems, you must conduct a thorough Fundamental Rights Impact Assessment (FRIA). This assessment has some similarities with data protection impact assessments (DPIAs) under the GDPR.

In short, FRIAs involve the following:

  • Identifying potential risks your AI poses to fundamental rights like privacy, non-discrimination, and human dignity.
  • Evaluating the severity and likelihood of these risks.
  • Implementing measures to mitigate these risks and ensure responsible AI development.
  • Documenting your FRIA process and findings for transparent compliance.

By performing FRIAs, you not only fulfill legal obligations but also contribute to legal and ethical AI deployment in your business operations.

To help you jumpstart the process, check out our FRIA example.

Ensure transparency and accountability

Transparency is a cornerstone of successful AI implementation. For this reason, the EU AI Act requires you to provide clear and understandable information about the AI systems you deploy.

Specifically, you must (at minimum) clearly explain the following to consumers:

  • How you use AI systems for your business operations
  • How your AI system works and makes decisions
  • What types of data you use to train and operate your AI

Importantly, you must also offer accessible mechanisms for users to raise concerns or ask questions about your AI. Transparency not only helps build trust with consumers but also shows regulators that you’re committed to data protection.

Implement additional safeguards for high-risk AI systems

For AI systems categorized as high-risk, additional safeguards are required to ensure compliance with the EU AI Act. These safeguards help minimize risks, enhance quality, and provide accountability.

In practice, the EU AI Act requires the following:

  • Establish robust internal processes for developing, testing, and deploying your AI systems.
  • Create mechanisms for human oversight, allowing for intervention and correction when necessary.
  • Monitor your AI's performance and data use to identify and address potential issues.
  • Maintain comprehensive documentation of your AI development and compliance procedures.
  • Continuously improve your quality management systems to adapt to evolving risks and regulations.

Keep accurate records and conduct conformity assessments

The importance of record-keeping in demonstrating compliance cannot be overstated. Under the EU AI Act, you must keep detailed records of your AI systems, their functionalities, and the steps taken to ensure compliance.

If your AI systems fall into the high-risk category, conduct the necessary conformity assessments as outlined by the regulation.

Documentation and conformity assessments not only serve as evidence of compliance but also contribute to a culture of accountability and responsibility in AI deployment within your business.

Keep informed and adapt

Last but not least, stay informed about updates and revisions to the regulation and industry best practices.

The AI landscape is dynamic, and the EU AI Act is likely to evolve over time. To remain compliant, you’ll need to regularly assess your AI portfolio to ensure continued alignment with the evolving legal and ethical framework.

Penalties for Non-Compliance with the EU AI Act

While compliance with the EU AI Act is key, understanding the potential penalties of non-compliance is equally important.

Enforcement will primarily be carried out by national competent market surveillance authorities, with oversight from the European AI Office and the European AI Board.

So, how much could a misstep cost your business?

Like the GDPR, the EU AI Act adopts a tiered approach to fines. Penalties depend on the type of AI system, the severity of violations, and the company's size.

Let’s take a look:

  • Incorrect Information: Supplying incorrect, incomplete, or misleading information to authorities attracts fines of up to €7.5 million or 1.5% of a company’s global turnover, whichever is higher.
  • Breaching Obligations: Violations of the EU AI Act's core requirements could incur fines of up to €15 million or 3% of a company’s global turnover.
  • Banned Applications: Deploying or developing prohibited AI applications, like social scoring systems, carries the highest risk, with potential fines of €35 million or a whopping 7% of the company’s global turnover.

Smaller businesses and startups can breathe a sigh of relief as fines will be proportionate to business size. Finally, note that the law will allow individuals to report non-compliance to authorities.
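The fine structure above is simple arithmetic: each tier pairs a fixed cap with a turnover percentage, and under the provisional agreement the applicable maximum is whichever is higher (with proportionate treatment for SMEs, which this sketch ignores). The tier keys below are my own labels:

```python
def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Upper bound of the fine for a violation tier, in euros.

    Figures from the provisional agreement: fixed cap or percentage of
    global turnover, whichever is HIGHER. Simplified -- ignores the
    proportionality adjustments for smaller businesses.
    """
    tiers = {
        "incorrect_information": (7_500_000, 0.015),
        "core_obligations": (15_000_000, 0.03),
        "banned_applications": (35_000_000, 0.07),
    }
    cap, pct = tiers[tier]
    return max(cap, pct * global_turnover_eur)


# A company with €1 billion global turnover deploying a banned system:
# max(35_000_000, 0.07 * 1_000_000_000) -> €70,000,000
```

Note how the percentage only bites for large companies: at €100 million turnover, 7% is €7 million, so the €35 million cap governs instead.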


The EU AI Act is undoubtedly a game-changer. It demands responsible AI practices across the EU and imposes significant fines for violations. The bottom line? Compliance is not optional.

Not sure how to proceed? Captain Compliance is here to help.

Our dedicated team of privacy professionals will help assess your AI portfolio, develop a tailor-made strategy, and guide you through every step towards building and deploying ethical, EU-compliant AI.

From assessing your AI systems to helping you conduct FRIAs, we empower you to navigate the complexities seamlessly.

Let Captain Compliance be your first mate on this voyage towards AI success. Get in touch today!


FAQs

Does the EU AI Act apply to my business outside the EU?

Yes. If you offer AI services targeting EU users or deploy high-risk AI systems within the EU, you must comply. Consider your target audience and potential impact to review your obligations.

See also: GDPR Compliance Checklist for 2023

What is a Fundamental Rights Impact Assessment (FRIA)?

FRIA is a mandatory assessment for high-risk AI systems. It involves examining AI’s potential impacts on people’s fundamental rights (such as privacy and non-discrimination). It also involves identifying and mitigating risks associated with your AI systems.

Learn about all things privacy and compliance in our comprehensive guides

How can I ensure transparency in my AI operations?

You can ensure transparent AI practices in the following ways:

  • Invest in clear and accessible communication about your AI operations
  • Consider offering accessible channels for users to ask questions and raise concerns.
  • Make technical details understandable to the average consumer – not just AI experts.

See also: When is a DPIA required under the GDPR and other laws

Is AI-powered HR recruitment considered high-risk?

AI systems used in employment decisions are likely to be categorized as high-risk. To ensure compliance, conduct a thorough Fundamental Rights Impact Assessment (FRIA) to assess fairness and potential bias in your recruitment process.

Get Started with our comprehensive FRIA example