EU AI Act
The EU's AI Act represents a ground-breaking regulatory framework for AI technologies. Its aim is to ensure the safety of AI systems placed on the European Union market and to increase legal certainty for investment and innovation in AI, while also reducing the associated risks for consumers and minimizing compliance costs for providers.
The EU AI Act is not intended to regulate the technology itself, but rather the use of AI systems. It establishes a wide range of requirements and guidelines that impact businesses developing, deploying, or using AI in the European Union. AI systems are classified into four risk categories, each covering different use cases: (i) unacceptable-risk; (ii) high-risk; (iii) limited-risk; and (iv) minimal/no-risk. High-risk AI systems, as well as (high-impact) General Purpose AI (GPAI) models, are subject to a set of rules covering, among other things, their design, transparency, and governance. The Act has extra-territorial reach: it applies to providers, irrespective of whether they are established or located in the EU or in a third country, so long as they place AI systems or GPAI models on the EU market, put AI systems into service in the EU, or the output produced by their AI system is used in the EU. It also applies to deployers of AI systems who are either established in the EU or whose AI system produces output that is used in the EU.
Penalties for non-compliance follow a tiered approach: the highest fines for prohibited AI practices (up to €35m or 7% of global annual turnover, whichever is higher), mid-range fines for breaches of other obligations (up to €15m or 3%), and lower fines for supplying incorrect information to authorities (up to €7.5m or 1.5%).
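By way of illustration, the cap for a given breach is the higher of the fixed amount and the turnover percentage. A minimal sketch in Python (the figures are the statutory maxima; actual fines are set case by case by the competent authorities):

```python
# Statutory maxima per breach tier: (fixed amount in EUR, share of global
# annual turnover). The applicable ceiling is whichever is HIGHER.
TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def fine_ceiling(tier: str, global_turnover_eur: float) -> float:
    """Return the maximum possible fine for a breach tier, not the actual fine."""
    fixed, pct = TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# A company with EUR 2bn global turnover breaching a prohibition faces a
# ceiling of max(35m, 0.07 * 2bn) = EUR 140m.
print(fine_ceiling("prohibited_practice", 2_000_000_000))  # 140000000.0
```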
The AI Act entered into force on 1 August 2024 and is directly applicable, without the need for EU Member States to transpose it into national legislation. The majority of its provisions apply 24 months after entry into force. However, certain provisions apply earlier or later: the bans on prohibited practices (six months after entry into force), the codes of practice (nine months later), the rules and penalties for general-purpose AI (GPAI) (12 months later), and the obligations for high-risk AI systems embedded in products covered by existing EU product safety legislation (36 months later).
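The staggered dates can be derived from the entry-into-force date; the Act's fixed application dates fall one day after the simple month arithmetic (six months after 1 August 2024 lands on 2 February 2025, the date cited later in this briefing). A small sketch of that calculation:

```python
from datetime import date, timedelta

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months (safe here because the day is the 1st)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

MILESTONES = {  # months after entry into force
    "Prohibited practices and AI literacy": 6,     # 2 Feb 2025
    "Codes of practice ready": 9,                  # 2 May 2025
    "GPAI rules and penalties": 12,                # 2 Aug 2025
    "Majority of provisions": 24,                  # 2 Aug 2026
    "High-risk systems in regulated products": 36, # 2 Aug 2027
}

for label, months in MILESTONES.items():
    # The Act's fixed dates fall one day after the plain month shift.
    applies_from = add_months(ENTRY_INTO_FORCE, months) + timedelta(days=1)
    print(f"{applies_from:%d %B %Y}: {label}")
```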
Key legal risks / issues
1. Wide definition of AI systems: The definition aims to be technology neutral and future proof against rapid developments in AI. Be aware that it could capture systems already in use.
2. Broad application and extra-territorial scope: The EU AI Act will apply to providers, distributors, importers, and deployers of in-scope AI systems that are used or produce an impact in the European Union, regardless of their place of establishment.
3. Risk-based approach to regulating AI systems: The higher the risk of an AI system, the stricter the rules that will apply to it.
4. Tiered approach on GPAI models: Providers of all GPAI models will have specific responsibilities, such as maintaining technical documentation and offering adequate information about the model. Additionally, high-impact GPAI models, which are deemed to pose a systemic risk, will be subject to further obligations.
Actions for consideration
1. Understand and map AI systems used in your entity: Evaluate the category of each AI system within your organization. Given the wide definition of “AI system”, a prudent, risk-based approach is to assume that the EU AI Act applies until an assessment shows otherwise (see the inventory sketch after this list).
2. Conduct a gap analysis to identify compliance and mitigate risks: Analyze policies and procedures against the EU AI Act’s requirements and prioritize risk mitigation efforts.
3. Update and implement policies, procedures and AI systems against the requirements of the EU AI Act: Create clear documentation to demonstrate compliance and implement risk management, transparency and safeguard standards.
4. Maintain awareness and training programs to empower your organization: It is crucial that employees and suppliers are trained on the management of your AI systems and on the procedures and responsibilities attached to them.
5. Contract Management and Third Party Engagement: Conduct a full diligence program and audit your suppliers. Prepare appropriate checklists and agreements with suppliers. Analyze third party contracts and, if necessary, re-negotiate.
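As a hypothetical starting point for step 1, the inventory can be kept as structured records. The field names, roles, and the "assume in scope until assessed" default below are illustrative choices, not requirements of the Act:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable-risk"  # prohibited practices
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal/no-risk"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    role: Role                            # your entity's role for this system
    risk: Optional[RiskCategory] = None   # None = not yet assessed
    gaps: list[str] = field(default_factory=list)  # gap-analysis findings

inventory = [
    AISystemRecord("cv-screening-tool", "ranks job applicants", Role.DEPLOYER),
]

# Risk-based default: until a system is classified, treat it as in scope.
unassessed = [r for r in inventory if r.risk is None]
```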
What is an AI System under the AI Act?
The AI Act focuses on “AI Systems”. To understand the regulation, companies need to understand which of their systems could fall within the scope of the EU AI Act:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
The AI Act makes it clear that traditional software should not be covered by the definition, which was created to cover techniques such as machine learning. However, the grey area is vast: the definition is deliberately vague in order to be 'future proof' and adaptable over time. Guidance on the definition is expected, but in the meantime this vagueness creates significant uncertainty for businesses applying the law.
Prohibited AI practices
One of the fundamental aspects of the EU AI Act is its clear definition of specific practices that are strictly prohibited. These use cases are deemed unacceptable by the EU and are banned from the market. The prohibitions are among the first provisions of the AI Act to take effect, on 2 February 2025. Consequently, companies have a limited timeframe to create comprehensive inventories of their AI systems and ensure that none of them fall under any of the prohibited practices (a simple checklist sketch follows the list below).
The following AI practices are banned:
Subliminal Techniques: Use of AI systems that deploy subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm.
Exploiting Vulnerabilities: Exploitation of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
Social Scoring: Use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment in social contexts which are unrelated to the contexts in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to their social behaviour or its gravity (social scoring).
Predictive policing: Use of an AI system for making risk assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics.
Facial Databases: Use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.
Emotion Recognition Systems: Use of AI systems to infer the emotions of a natural person in the workplace and in education institutions.
Biometric Classification: Use of biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation.
Biometric Real-Time Identification: Use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement.
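A minimal checklist sketch for the inventory exercise described above; the short labels stand in for the full legal definitions, and a flag means "escalate to legal review", not a conclusive finding:

```python
PROHIBITED_PRACTICES = [
    "Subliminal techniques",
    "Exploiting vulnerabilities",
    "Social scoring",
    "Predictive policing",
    "Untargeted facial-image scraping",
    "Emotion recognition (workplace/education)",
    "Biometric categorisation of sensitive traits",
    "Real-time remote biometric identification (law enforcement)",
]

def review(system_name: str, answers: dict[str, bool]) -> list[str]:
    """Return the prohibited practices a human reviewer flagged for a system."""
    unknown = set(answers) - set(PROHIBITED_PRACTICES)
    if unknown:
        raise ValueError(f"Unrecognised checklist items for {system_name}: {unknown}")
    return [practice for practice, flagged in answers.items() if flagged]

# Example: a fully answered checklist with no flags; record the assessment
# even when nothing is flagged.
flags = review("cv-screening-tool", {p: False for p in PROHIBITED_PRACTICES})
assert flags == []
```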
If you need help with the assessment or want to understand potential exceptions, feel free to reach out to our EU AI team.
AI Literacy Training
From 2 February 2025, the AI Act requires every provider and deployer to take steps to promote 'AI literacy' among their employees and the third-party suppliers involved in their AI supply chain, regardless of how their AI systems are classified.
AI literacy means that people working with or affected by AI must have the skills, knowledge and understanding to make informed decisions about AI systems. This includes awareness of the opportunities and risks of AI and the potential harm it can cause.
Companies therefore need to provide training to their stakeholders. Any AI literacy training should include the provision of basic concepts and skills about AI systems and how they work, including the different types of products and applications, their risks and benefits.
The training should also take into account the specific role of the person dealing with the AI system and their technical knowledge, experience, education and training. A software developer creating AI systems will need access to different training than a product manager deciding on the use of an AI system.
We provide training for a variety of audiences and can help you tailor any training to meet your needs.