
When Will the AI Act Come Into Force? (What You Must Know)


Over the past year, artificial intelligence has made a name for itself. The AI industry has become more advanced, and it will keep becoming a bigger part of our everyday lives, whether we like it or not.

This means lawmakers need to start putting laws and protections in place for data privacy. Europe has been proactive about data privacy in the past, and thanks to that, we have the GDPR and now the EU Artificial Intelligence Act. But when will the EU Artificial Intelligence Act come into force?

In this guide, we'll explore when the new EU Artificial Intelligence Act will come into force.

Let’s dive right in.

Key Takeaways

  • The EU AI Act is one of the first landmark laws regulating how artificial intelligence may be used across Europe and enforcing ethical AI practices in businesses.
  • The law will likely be formally adopted before June 2024, followed by a transition period of roughly two years. This means the AI Act will likely only be fully enforced in 2026.
  • AI practices that the Act prohibits as an unacceptable risk will face a shorter deadline of 6 to 12 months after formal adoption.

What is the EU AI Act?

The new EU AI Act is one of the first landmark laws regulating how artificial intelligence may be used across Europe and enforcing ethical AI practices in businesses.

According to the current wording, which is not the final text, the scope of the Artificial Intelligence Act will cover both the public and private sectors.

The law was initially proposed in April 2021 and is not yet in force. After the initial proposal in 2021, the European Parliament and the Council of the EU spent years debating and negotiating a comprehensive framework for how best to regulate artificial intelligence.

According to Reuters, what made this process take much longer was that lawmakers could not agree on how to regulate generative AI and how far law enforcement should be allowed to use AI.

However, after years of negotiation, the European Parliament and the Council of the EU reached a provisional agreement on December 9, 2023, accepting the overall regulatory framework of the EU AI Act.

This provisional agreement included the following:

  • A broad and extraterritorial scope of application will be established
  • Certain uses of AI will be prohibited entirely
  • A broad range of "high-risk" uses will be defined, with strict requirements imposed on them

The agreement explains that the Artificial Intelligence Act will use a risk-based approach where AI systems will be placed into four categories depending on their risk. The higher the risk of the AI, the stricter the rules will be.

These categories include:

  • Unacceptable
  • High
  • Limited
  • Minimal/low

The EU AI Act will use these categories to determine whether an artificial intelligence system is a threat to individuals' fundamental rights and will govern what happens in that case. Unacceptable risks include biometric categorisation based on sensitive characteristics, exploitation of vulnerabilities, social scoring, surveillance-based facial recognition and others.

When Will the AI Act Come into Force?

So, when will the EU AI Act come into force? Under the current provisional agreement, the law will likely be formally adopted before June 2024, and a transition period of roughly two years will follow before most of its obligations apply. This means the AI Act will likely only be fully enforced in 2026.

The Act will first need to be formally adopted by both the Council and the European Parliament. Once formally adopted, the Act will be published in the Official Journal and will enter into force 20 days after publication.

The transition period will be a learning curve for businesses and lawmakers, as it will give the European Artificial Intelligence Office a chance to see what is working and what is not. During this period, the Act will not be fully enforceable.

However, there are exceptions to this transition period for AI systems that pose an unacceptable risk and are therefore prohibited. These prohibitions are expected to become enforceable within 6 to 12 months of formal adoption.

Why Isn't the EU AI Act Effective Now?

Artificial intelligence is constantly evolving and learning, so any law that seeks to regulate how it is used needs to be extensive. This means that while the provisional agreement has been reached, some steps still need to be taken before the final text is complete.

Some of these delays come down to details that still need to be finalized, such as the scope of application, because the requirements in the Act will have far-reaching effects even for businesses outside the EU. A framework for classifying the risks of AI systems also still needs to be finalized.

A big part of why the EU AI Act is not effective now is that lawmakers still need to finalize the definition of what AI is and what counts as trustworthy AI in Europe. Definitions in law are incredibly important, so they need to be exactly right before the Act takes effect.

The European Parliament and the Council of the EU also still need to establish the authorities that will oversee and enforce the EU AI Act. This will likely include a centralized European Artificial Intelligence Office through which the European Commission carries out enforcement.

One of the most heavily debated features of the EU AI Act was the regulation of General Purpose Artificial Intelligence (GPAI). These foundation models are AI systems trained on broad data at scale that can be adapted to produce a wide range of outputs.

To address these foundation models, the Commission has introduced transparency requirements and codes of practice that businesses will need to follow, along with an obligation to notify the Commission of their status.

Are You Required to Follow the EU AI Act Now?

The EU AI Act is not in effect now, so your business will most likely only be required to comply with it in 2026. However, if your business uses AI practices that the Act prohibits as an unacceptable risk, those prohibitions are expected to apply 6 to 12 months after formal adoption, well before the rest of the Act.

Just because the Act is not yet in effect doesn't mean you should ignore it. Start making changes to your business now to ensure that you are compliant when it eventually becomes enforceable.

But how do you know if the Act applies to your business in the first place? Using a credible online checker is one good method, but you also need to consider whether your business uses any AI systems in the EU market or uses data collected from the EU. If so, your business needs to follow this Act.

In fact, if your business operates in the digital space, you should already have procedures in place that comply with the Digital Services Act (DSA) and the Digital Markets Act (DMA). Both of these acts were created to protect the fundamental rights of digital service users and foster innovative growth.

The EU AI Act shares these goals, and the general expectation is that it will complement and work alongside the DSA and the DMA.

So, whatever your business is using AI systems for, you need to ensure that they are not violating the fundamental rights of your customers.

What Does the EU AI Act Mean for Your Company?

You may be wondering what the enforcement of the EU AI Act will mean for your business. Well, the Act does list some provisions that businesses need to meet, and while it may seem complicated, it's actually pretty straightforward with our helpful compliance checklist.

Observe consumer rights under the law

One of the provisions under the EU AI Act is that businesses need to ensure that they are not violating the fundamental rights of EU citizens, especially when dealing with their sensitive personal information.

This includes respecting data subject rights like the right to basic privacy, the right not to be discriminated against and the right to file complaints, among others.

Ensure transparency

The EU AI Act requires that your business is transparent about the use of AI systems, and all information should be easily explainable.

Your business should be able to tell data subjects how your business uses AI systems, how the AI works, and what type of data is collected to operate or train your AI system.

This will make you compliant with the law and build trust, because data subjects will see that your business takes data privacy seriously.

Avoid using AI applications that are considered "Unacceptable Risk"

The EU AI Act prohibits any AI systems that fall within the "unacceptable risk" category, and if your business is found using these systems, serious penalties will be headed your way.

Some examples of these AI systems include:

  • Systems that categorize people based on race, sexual orientation and religion
  • Systems that scrape facial images from the internet or surveillance footage
  • Systems that manipulate human behavior to bypass free will

These AI systems are prohibited because they pose a severe risk to people's fundamental rights and data protection.

Implement additional safeguards when using high-risk AI systems

If your business uses AI systems that are considered high-risk but are not prohibited, the EU AI Act requires you to implement additional safeguards to stay compliant and ensure adequate data protection.

What does this look like? Your business should run a comprehensive testing process before deploying the AI system, and mechanisms for human oversight need to be put in place so that your AI system is monitored continuously.

Perform Fundamental Rights Impact Assessments for high-risk AI

If your business is planning on using any high-risk AI systems, it will need to perform a Fundamental Rights Impact Assessment (FRIA) first. This is an important part of making sure your business has adequate data protection.

An FRIA will involve the following:

  • Identify potential AI risks to customers' fundamental rights
  • Assess and evaluate the likelihood and severity of those risks
  • Implement measures to mitigate those risks
  • Keep documentation of the entire process

Keep accurate records

Make sure your business is always compliant with the EU AI Act by keeping continuous and accurate records of your AI systems, including how they work, your FRIA and the safeguards you implemented.

How Can Captain Compliance Help?

Now that you know that the EU AI Act is coming, this is the time to start getting your business ready for when it is fully enforced. But like most data protection laws, this new AI regulation act may be confusing to navigate on your own.

Choose a global compliance services expert like Captain Compliance to help you navigate the world's first AI regulation. Our team has centuries of cumulative experience helping businesses navigate both established laws and new ones like the EU AI Act.

Get in touch with Captain Compliance today for a 100% free consultation on how you can become compliant with relevant regulations.

FAQs

What is the status of the EU AI regulation?

The EU AI Act has not been formally adopted yet, as the final text is still being finalized by the European Parliament and the Council of the EU. Full enforcement is expected in 2026.

You can learn about all the Act's regulations here.

What practices are prohibited by the EU AI Act?

Using AI in ways that violate the fundamental rights of citizens is prohibited. This includes practices such as using AI to scrape facial images from the internet for facial recognition databases and to perform emotion recognition in workplaces and schools.

Learn how performing an FRIA can help you avoid using prohibited AI systems.

Does the EU AI Act follow the GDPR?

There is some interplay between the two. However, the new EU AI Act is intended to fill the gaps the GDPR leaves when it comes to regulating AI.

Is your business GDPR compliant? Learn more about GDPR compliance here.

What are the EU's four ethical principles regarding AI systems?

The four principles are respect for human autonomy, prevention of harm, fairness, and explicability, and the EU AI Act was created to ensure that AI systems uphold them. This is how the protection of data subjects is achieved.

Make sure your business is protecting your customers' data by appointing a data protection officer.