Author: Dana Samatar
The first legal and regulatory framework for AI in the EU has now entered into force and will have a significant impact on many businesses that have clients based in Europe. It’s also likely to influence future AI legislation in other jurisdictions, including the UK.
We wanted to test what an AI tool would say about the EU Artificial Intelligence Act and what businesses should be aware of. Here, Dana Samatar presents our findings and outlines the concerns surrounding the new Act.
The EU Artificial Intelligence Act was adopted by the European Parliament on 13 March 2024 and entered into force on 1 August 2024. The European Parliament states that the Act “aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field. The regulation establishes obligations for AI based on its potential risks and level of impact”.
In relation to governance, the EU AI Office has been established within the EU Commission “as the centre of AI expertise and forms the foundation for a single European AI governance system”. The Office monitors compliance with the rules for general-purpose AI models and enforces them.
The EU Artificial Intelligence Act has been met with a range of concerns and criticisms. One of the key concerns is that the Act’s definition of AI is too wide.
Article 3 of the Act sets out that “‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
If simpler systems are captured by this wide definition, the Act could become a barrier to innovation. Additionally, the Act classifies systems based on risk — with the majority of rules falling on providers of high-risk AI systems such as those used in critical infrastructure, education and essential private and public services (like healthcare). However, a less rigorous approach to low-risk uses of AI could undermine one of the aims of the Act — to protect fundamental rights.
There are also concerns around certainty — particularly as AI evolves — over where different uses will fall within the risk classification.
We wanted to know how an AI tool would summarise the key points on the new Act, so we entered three prompts into a well-known AI tool to see what its responses would be.
You can read our prompts and the responses below. While some of the AI tool’s responses contained footnotes, we haven’t reproduced them here for conciseness.
AI tool:
Certainly! The EU Artificial Intelligence Act is a comprehensive legal framework that regulates artificial intelligence (AI) within the European Union (EU).
Here are the key points:
The AI Act aims to foster trustworthy AI by respecting fundamental rights, safety, and ethical principles, positioning Europe as a global leader in AI regulation.
AI tool:
The EU Artificial Intelligence Act will impact both providers (developers) and users (deployers) of AI systems within the European Union.
Here’s how:
AI tool:
The EU Artificial Intelligence Act raises several important concerns:
Overall, the Act aims to address these concerns while promoting responsible AI adoption.
Generally, the AI tool responses provide an accessible summary of the Act and indicate some of the key concerns relating to it. However, it’s crucial when obtaining information from AI tools to always fact-check the responses, as the information may be inaccurate or missing significant context.
For example, the first response sets out that “the AI Act prohibits AI systems that: Use subliminal, manipulative, or deceptive techniques to distort behavior” and “exploit vulnerabilities related to age, disability, or socio-economic circumstances”. However, the AI systems that are ‘prohibited’ are those that have these characteristics and cause (or are reasonably likely to cause) significant harm.
Other AI systems are also ‘prohibited’ under Chapter II, Article 5 of the Act but aren’t included in the AI tool’s response. These include AI systems “for the evaluation or classification of natural persons or groups of persons…based on their social behaviour or known, inferred or predicted personal or personality characteristics”. AI systems used “to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons” are also ‘prohibited’ under the Act.
In relation to context, the responses state that all general-purpose AI (GPAI) model providers must “publish a training data summary”. This refers to a summary of the content that the model providers use to train the AI. It’s important to be aware that AI tool responses may omit contextual information that you might not notice until you review and fact-check them.
If you have any questions about how the EU Artificial Intelligence Act will impact you or your business, talk to us by giving us a call, sending us an email or completing our contact form below.