
Demystifying the EU AI Act — key requirements for businesses

Author: Dana Samatar

9 min read

Technology, Media & Telecoms


If your business operates in the EU, you’ll need to get to know the EU Artificial Intelligence Act, which entered into force on 1 August 2024. While this doesn’t apply to the UK (following Brexit), it will have a significant influence on future AI legislation in the UK and in other jurisdictions around the world — so it’s important to consider compliance now.

Here, Dana Samatar demystifies what the EU AI Act really means for your business, including who it applies to, when it applies, what systems are included, the penalties for non-compliance and the steps you should take towards compliance.

 

Who does the EU AI Act apply to?

The Act applies to a wide range of stakeholders in the AI ecosystem. This includes users (deployers), developers (providers) and their authorised representatives, importers, distributors and product manufacturers of AI systems. So, if your business sells into the EU, you’ll need to keep the Act’s requirements in mind.

A critical point to reiterate is that the Act does apply to users (deployers) of AI systems. If your business uses AI systems, you should ensure compliance with the Act’s requirements.

 

When does the Act apply?

Most provisions will apply from 2 August 2026, following a two-year implementation period.

However, the rules relating to AI literacy and prohibitions on certain AI systems will apply from 2 February 2025, while the requirements for General-Purpose AI (GPAI) models will apply from 2 August 2025.

 

Classification rules for AI systems 

The Act adopts a risk-based approach for categorising AI systems into different tiers.

Unacceptable risk: These systems are prohibited and include those that deploy manipulative techniques to distort human behaviour, causing significant harm.

High risk: This includes AI systems used in critical infrastructure, education and vocational training, employment, law enforcement and migration, asylum and border control management.

Limited risk: This includes deepfakes and chatbots. Developers and deployers of these AI systems are required to make end-users aware that they're interacting with AI.

Minimal risk: These AI systems are unregulated and include spam filters and AI-enabled video games (however, this is changing with generative AI).

 

High-risk AI systems 

Certain AI systems are classified as high-risk — and highly regulated — due to their potential negative impact on fundamental rights. For example, AI systems that profile individuals using automated processing of personal data are always considered to be high risk. 

Other high-risk AI systems include those:

- used as a safety component of a product (or that are themselves products) covered by the EU harmonisation legislation listed in Annex I of the Act
- falling within the use cases listed in Annex III, such as remote biometric identification, evaluating creditworthiness or access to essential private and public services, and the administration of justice and democratic processes.

Providers of AI systems falling within the high-risk category are required to:

- establish and maintain a risk management system
- apply appropriate data governance to training, validation and testing data
- draw up and keep up to date technical documentation
- ensure that the system automatically logs events over its lifetime
- provide instructions for use and design the system to allow effective human oversight
- achieve appropriate levels of accuracy, robustness and cybersecurity
- carry out a conformity assessment and register the system in the EU database before placing it on the market or putting it into service.

The full list of what the technical documentation must include is contained in Annex IV of the Act. It includes a general description of the AI system, a description of its elements and of the process for its development, and information about the monitoring, functioning and control of the AI system.

Deployers of high-risk AI systems will face specific requirements (though these are less burdensome than for providers), including that they should take appropriate technical and organisational measures to ensure that the systems are used in accordance with the instructions for use.

 

General-Purpose AI (GPAI) models 

GPAI models are dealt with in a separate chapter of the Act (Chapter 5), since providers of GPAI models are also subject to particular requirements. 

GPAI models are those that “competently perform a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications. This does not cover AI models that are used before release on the market for research, development and prototyping activities”. 

Providers of GPAI models will be required to:

- draw up and keep up to date technical documentation on the model, including its training and testing process and the results of its evaluation
- make information and documentation available to downstream providers that intend to integrate the model into their own AI systems
- put in place a policy to comply with EU copyright law
- publish a sufficiently detailed summary of the content used to train the model.

Annex XII of the Act sets out what needs to be included in the documentation, which should be shared with the providers that will integrate the models into their AI systems. This includes a description of the elements of the model and the process for its development. 

GPAI models are categorised as presenting systemic risks when the cumulative amount of compute used for their training is greater than 10²⁵ floating point operations (FLOPs), a measure of the computational resources used to train the model. Providers of GPAI models with systemic risks have additional obligations under the Act, including to ensure an adequate level of cybersecurity protection.
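To put that threshold in context, a common community heuristic estimates training compute as roughly 6 × parameters × training tokens. The sketch below uses that heuristic (which isn't prescribed by the Act) to check a hypothetical model against the 10²⁵ FLOP threshold; the model sizes are illustrative only.

```python
# A minimal sketch of the 10^25 FLOP systemic-risk threshold check.
# The 6 * parameters * tokens rule of thumb is a common community
# heuristic for estimating training compute, not a method set out
# in the Act itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # threshold under the EU AI Act

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate using the common 6ND heuristic."""
    return 6 * n_parameters * n_training_tokens

def presents_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimate_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a 70bn-parameter model trained on 15 trillion tokens
flops = estimate_training_flops(70e9, 15e12)  # ~6.3e24 FLOPs
print(f"{flops:.2e} FLOPs -> systemic risk: {presents_systemic_risk(70e9, 15e12)}")
```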

 

AI literacy and transparency obligations 

From 2 February 2025, providers and deployers of AI systems will be required to take measures to ensure a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf. Companies will need to make sure that employees and other relevant individuals are well-educated about AI, taking into consideration their technical knowledge, experience, education and training, as well as the context that the AI systems are to be used in. 

There are certain transparency obligations for providers and deployers of AI systems. Providers should ensure that AI systems that are intended to interact directly with people are designed and developed in a way that the people concerned are informed that they’re interacting with an AI system — unless this is obvious from the point of view of a person who’s reasonably well-informed, observant and circumspect. Providers are to take into account the circumstances and the context of use of the AI system when considering their transparency obligations. It may be worth building this into your AI system in any event as ‘good practice’. 
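As a rough illustration of that 'good practice' point, the sketch below shows one way a chat product might surface an AI disclosure at the start of every session. The wording and session structure are our own illustrative assumptions, not text prescribed by the Act.

```python
# A minimal sketch: surface an AI-interaction disclosure as the first
# message of every chat session. Wording and structure are illustrative
# assumptions, not anything mandated by the Act.

AI_DISCLOSURE = (
    "You're chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def start_chat_session() -> list[dict]:
    # The disclosure is the first entry the interface renders, so the
    # user is informed before any AI-generated reply appears.
    return [{"role": "notice", "content": AI_DISCLOSURE}]

session = start_chat_session()
print(session[0]["content"])
```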

Providers of AI systems (including GPAI models) that generate synthetic audio, image, video or text content are required to ensure that the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated.
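By way of illustration only, one simple form of machine-readable marking is embedding provenance metadata in the generated file itself. The sketch below writes text chunks into a PNG using the Pillow library; the key names are our own assumptions, and production systems are more likely to rely on robust provenance or watermarking standards (such as C2PA content credentials).

```python
# A minimal sketch of machine-readable marking for an AI-generated image:
# embed provenance metadata as PNG text chunks using Pillow.
# The key names ("ai_generated", "generator") are illustrative assumptions.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str) -> None:
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")
    metadata.add_text("generator", "example-model-v1")
    image.save(path, pnginfo=metadata)

# Write a labelled image, then read the marking back.
save_with_ai_label(Image.new("RGB", (64, 64)), "output.png")
with Image.open("output.png") as loaded:
    print(loaded.text.get("ai_generated"))  # -> "true"
```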

If the AI system processes personal data, it's important to remember that the processing must comply with the General Data Protection Regulation (GDPR). The EU AI Act does flag that deployers of an emotion recognition system or biometric categorisation system are required to inform the person exposed to the operation of the system and process the personal data in accordance with the GDPR, Regulation (EU) 2018/1725 and the Data Protection Law Enforcement Directive. 

We consider that this obligation has been flagged because there's an exception in the Act. The information obligation doesn't apply "to AI systems used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with Union law".

Deployers of an AI system that generates or manipulates image, audio or video content constituting a deepfake are required to disclose that the content has been artificially generated or manipulated. 

 

Financial penalties 

The penalties for non-compliance with the EU AI Act are significant. The most serious infringements (such as deploying a prohibited AI practice) attract fines of up to €35m or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher, while most other breaches attract fines of up to €15m or 3%.
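For a sense of scale, the sketch below works through those fine ceilings as we understand them: the cap is the higher of a fixed amount and a percentage of worldwide annual turnover (for SMEs, the Act applies the lower of the two instead). The turnover figure is a made-up example.

```python
# A minimal sketch of the EU AI Act fine ceilings: the cap is the higher
# of a fixed amount and a percentage of worldwide annual turnover.
# (For SMEs, the lower of the two applies instead.)

def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, pct * turnover_eur)

turnover = 2_000_000_000  # illustrative: EUR 2bn worldwide annual turnover

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices: 140,000,000.0
print(max_fine(turnover, 15_000_000, 0.03))  # most other breaches:   60,000,000.0
```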

It’s crucial for businesses to understand their obligations and ensure compliance — or at least to have the appropriate plans in place to start the compliance journey. 

 

EU AI Act — what steps should you take now? 

Developing AI products is inherently challenging. The introduction of the EU AI Act and its various requirements presents new considerations and obstacles for businesses. It’s crucial for companies to be well-informed and continuously updated about their obligations under this legislation. Failure to comply could result in significant fines (similar to the GDPR), as well as reputational damage.

These obstacles are being introduced for a worthy cause. The primary objectives of the EU AI Act include protection of fundamental rights and fostering trustworthy AI. 

Trustworthy and transparent AI can only be good for business in the long run. The Act therefore requires businesses to maintain transparency with their customers and downstream deployers. While it may take time to adapt to these requirements and balance them with ongoing innovation, the goal is that a step-by-step approach will ultimately lead to greater innovation and ethical development.

Developers should also consider obtaining appropriate forms of intellectual property (IP) protection. Given the significant amount of information that must be disclosed in the technical documentation and during the registration process (if applicable), developers may wish to file a patent before this information is in the public domain. This is particularly important if the AI technology is novel, inventive and demonstrates a technical effect by solving a real-world problem.

 

Talk to us 

If you have any specific questions about the Act and how it might affect your business — including obtaining IP protection — talk to us by giving us a call, sending us an email or completing our contact form.

Dana Samatar

Dana is a Solicitor in our commercial and intellectual property law team.
