
EU Artificial Intelligence Act — the first comprehensive regulatory framework for AI

Author: Dana Samatar


The first legal and regulatory framework for AI in the EU has now entered into force and will have a significant impact on many businesses that have clients based in Europe. It’s also likely to influence future AI legislation in other jurisdictions, including the UK.

We wanted to test what an AI tool would say about the EU Artificial Intelligence Act and what businesses should be aware of. Here, Dana Samatar presents our findings and outlines the concerns surrounding the new Act. 

 

What is the EU Artificial Intelligence Act?

The EU Artificial Intelligence Act was originally published on 13 March 2024 and entered into force on 1 August 2024. The European Parliament states that the Act “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact”.

In relation to governance, the EU AI Office has been established within the EU Commission “as the centre of AI expertise and forms the foundation for a single European AI governance system”. The Office monitors compliance with, and enforces, the rules for general-purpose AI models. 

 

Concerns about the framework

The EU Artificial Intelligence Act has been met with a range of concerns and criticisms. One of the key concerns relates to the definition of AI in the Act being too wide. 

Article 3 of the Act sets out that “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”. 

If simpler systems are captured by the Act, this could create a barrier to innovation. Additionally, the Act classifies systems based on risk, with the majority of rules falling on providers of high-risk AI systems, such as those used in critical infrastructure, education and essential private and public services (like healthcare). However, a less rigorous approach to low-risk uses of AI could undermine one of the aims of the Act: to protect fundamental rights. 

There are also concerns around legal certainty, particularly as AI evolves, as to where different uses will fall within the risk classification. 

 

What does an AI think about the EU Artificial Intelligence Act?

We wanted to know how an AI tool would summarise the key points on the new Act, so we entered three prompts into a well-known AI tool to see what its responses would be. 

You can read our prompts and the responses below. While some of the AI tool responses included footnotes, we haven’t included them here for conciseness.

 

Prompt one — can you provide a summary of the EU Artificial Intelligence Act and what is significant about it?

AI tool: 

Certainly! The EU Artificial Intelligence Act is a comprehensive legal framework that regulates artificial intelligence (AI) within the European Union (EU). 

Here are the key points:

  1. Risk Classification:
    • AI systems are classified based on risk:
      • Unacceptable Risk: Prohibited AI systems (e.g., manipulative AI, social scoring systems).
      • High-Risk: Regulated AI systems (providers must comply with obligations).
      • Limited Risk: Subject to lighter transparency requirements (e.g., chatbots, deepfakes).
      • Minimal Risk: Unregulated (e.g., AI-enabled video games, spam filters).
  2. Providers and Users:
    • Providers (developers) of high-risk AI systems have significant obligations.
    • Users (deployers) of high-risk AI systems also have obligations, but less than providers.
  3. General Purpose AI (GPAI):
    • All GPAI model providers must provide technical documentation, comply with copyright, and publish a training data summary.
    • Providers presenting systemic risk must conduct model evaluations, track incidents, and ensure cybersecurity.
  4. Prohibited AI Systems:
    • The AI Act prohibits AI systems that:
      • Use subliminal, manipulative, or deceptive techniques to distort behavior.
      • Exploit vulnerabilities related to age, disability, or socio-economic circumstances.
      • Infer sensitive attributes (e.g., race, political opinions).

The AI Act aims to foster trustworthy AI by respecting fundamental rights, safety, and ethical principles, positioning Europe as a global leader in AI regulation. 

 

Prompt two — who’ll be impacted by the EU Artificial Intelligence Act? 

AI tool: 

The EU Artificial Intelligence Act will impact both providers (developers) and users (deployers) of AI systems within the European Union. 

Here’s how:

  1. Providers:
    • High-Risk AI Systems: Providers of high-risk AI systems must comply with extensive obligations, including risk assessments, transparency, and documentation.
    • General Purpose AI (GPAI): All GPAI model providers must provide technical documentation, comply with copyright, and publish a training data summary.
    • Systemic Risk Providers: Those presenting systemic risk must conduct model evaluations, track incidents, and ensure cybersecurity.
  2. Users:
    • Users deploying high-risk AI systems also have obligations, although less than providers.
    • The Act aims to ensure responsible and trustworthy AI adoption while respecting fundamental rights and ethical principles. 
 
Prompt three — what concerns does the EU Artificial Intelligence Act raise? 

AI tool: 

The EU Artificial Intelligence Act raises several important concerns:

  1. Overregulation: Some argue that the Act might stifle innovation by imposing strict requirements on AI development and deployment.
  2. Scope and Definitions: Defining “high-risk” AI systems and determining their thresholds can be challenging. There’s a risk of either under-regulating or over-regulating certain technologies.
  3. Enforcement: Ensuring compliance across borders and holding providers accountable for violations may be complex.
  4. Bias and Fairness: While the Act addresses bias, ensuring fairness in AI systems remains an ongoing challenge.
  5. Impact on Small Businesses: Compliance costs may disproportionately affect smaller companies.
  6. Global Competitiveness: Striking a balance between regulation and competitiveness is crucial to avoid hindering Europe’s AI industry.

Overall, the Act aims to address these concerns while promoting responsible AI adoption. 

 

Considerations on the AI responses 

Generally, the AI tool responses provide an accessible summary of the Act and indicate some of the key concerns relating to it. However, it’s crucial when obtaining information from AI tools to always fact-check the responses, as the information may be inaccurate or missing significant context. 

For example, the first response sets out that “the AI Act prohibits AI systems that: Use subliminal, manipulative, or deceptive techniques to distort behavior” and “exploit vulnerabilities related to age, disability, or socio-economic circumstances”. However, the AI systems that are ‘prohibited’ are those that have these characteristics and cause (or are reasonably likely to cause) significant harm. 

There are also other AI systems which are ‘prohibited’ under Chapter II, Article 5 of the Act, which are not included in the AI tool’s response. These include AI systems “for the evaluation or classification of natural persons or groups of persons…based on their social behaviour or known, inferred or predicted personal or personality characteristics”. 

In relation to context, the responses state that all general-purpose AI (GPAI) model providers must “publish a training data summary”. This refers to a summary of the content that the model providers use to train the AI. AI systems used “to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons” are also ‘prohibited’ under the Act but are absent from the responses. It’s important to be aware that AI tool responses may omit contextual information that you may not notice without careful review and fact-checking.

 

Talk to us 

If you have any questions about how the EU Artificial Intelligence Act will impact you or your business, talk to us by giving us a call, sending us an email or completing our contact form below.

Dana Samatar

Dana is a Solicitor in our commercial and intellectual property law team.
