What is the EU AI Act?

The EU AI Act is a comprehensive regulatory framework, first proposed by the European Commission in April 2021, to govern the development, marketing, and use of artificial intelligence (AI) systems within the European Union (EU). Its primary goals are to ensure safety, transparency, and the protection of fundamental rights by applying a risk-based approach that imposes varying obligations based on the potential impact of AI systems. The EU AI Act applies to the entire lifecycle of AI systems and has extraterritorial reach, affecting companies worldwide that offer AI-related products or services impacting EU citizens.

Types of Risks According to the EU AI Act

Risk-Based Approach

The EU AI Act adopts a risk-based approach to regulation, categorizing AI systems into four risk levels:

  • Unacceptable Risk AI Systems (Article 5): These systems are banned outright due to their potential for significant harm. Examples include social scoring, subliminal manipulation, and real-time remote biometric identification for law enforcement purposes (subject to narrow exceptions).
  • High-Risk AI Systems (Articles 6-15): These systems are subject to strict oversight and pre-market conformity assessments. Examples include AI used in education, employment, law enforcement, and healthcare. High-risk AI must meet requirements for data governance, risk management, technical documentation, transparency, and human oversight.
  • Limited Risk AI: These systems are subject to transparency obligations when they interact directly with individuals, such as disclosing that users are communicating with an AI system or that content is AI-generated.
  • Minimal or No Risk: Systems such as AI-enabled video games or spam filters face the least regulation and may voluntarily adhere to codes of conduct.
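The four-tier scheme above can be pictured as a simple lookup from a use case to its tier. The sketch below is purely illustrative: the use-case names and tier assignments are simplified examples chosen for this article, not a legal classification tool.

```python
# Illustrative sketch of the EU AI Act's four-tier risk scheme.
# The use-case keys and tier assignments are simplified examples,
# not legal advice or an official mapping.
RISK_TIERS = {
    "social_scoring": "unacceptable",  # prohibited practice (Article 5)
    "cv_screening": "high",            # employment use case (high-risk)
    "customer_chatbot": "limited",     # transparency obligations apply
    "spam_filter": "minimal",          # no specific obligations
}

def risk_tier(use_case: str) -> str:
    """Return the illustrative risk tier for a known use case."""
    return RISK_TIERS.get(use_case, "unclassified")
```

In practice, classification requires a case-by-case legal analysis of the system's purpose and context of use, not a static lookup.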

General Purpose AI Models

The EU AI Act also introduces specific provisions for General Purpose AI (GPAI) models, including foundation models and generative AI systems. Because these models serve multiple purposes, they are subject to different risk assessments depending on their application and potential systemic impact.

EU AI Act Readiness Assessment

To assist organizations in navigating the complexities of the AI Act, Holistic AI has developed the EU AI Act Readiness Assessment. Our assessment aims to:

  • Guide organizations through the intricacies of regulatory requirements outlined in the AI Act.
  • Evaluate the use of AI systems within the organization and determine the extent to which the regulation applies.
  • Aid organizations in understanding their readiness to comply with the regulation and identify gaps requiring prioritized attention.
  • Conduct a detailed analysis of specific AI systems to prepare for legal requirements stipulated by the AI Act.

Compliance Timeline and Implications

  • July 2024: The EU AI Act was officially published in the Official Journal of the European Union.
  • August 1, 2024: The Act entered into force, initiating the countdown for its phased implementation.
  • August 2, 2026: Full enforcement of most provisions, including regulations concerning high-risk AI systems. Organizations using or developing high-risk AI systems must comply with stringent requirements such as data governance, risk management, and transparency obligations.
  • Penalties for Non-Compliance: Fines of up to €35 million or 7% of global annual turnover, whichever is higher, can be imposed for non-compliance, depending on the severity of the breach.
  • Ongoing Compliance Obligations: Organizations must continuously assess and manage the risks associated with AI systems and update compliance measures throughout the lifecycle of AI systems to avoid fines.

The EU AI Act readiness assessment encompasses a structured series of steps and evaluations. Initially, it is essential to accurately identify AI systems, their risk categories, and the roles of the entities involved. This initial step is crucial, as it determines the subsequent mapping of technical requirements and obligations for operators.

The readiness assessment not only examines the nature and requirements of AI systems but also provides solutions that support the fulfillment of these requirements and obligations, either fully or partially. This ensures that entities are not just informed about their duties under the AI Act but are also equipped with the necessary resources and strategies to align with the regulatory framework.

The EU AI Act's internal conformity assessment procedure hinges on examining whether an organization's AI systems align with the mapped requirements and obligations. In this context, the readiness assessment is specifically designed to prepare organizations not only for internal evaluations but also for meeting the standards set by the EU AI Act's conformity assessment.

Given that achieving compliance can be more challenging and costly once AI systems are operational, early preparation is essential. Start your journey towards AI Act compliance today with our AI Act Readiness Assessment. Ensure your organization is prepared for the future of AI regulation in the European Union.

Why Should Businesses Care?

The EU AI Act holds critical importance for enterprises, particularly those operating or planning to operate within the EU market. For global companies, the extraterritorial nature of the Act means compliance is necessary to avoid penalties and maintain market access. High-risk systems, such as those used in recruitment or healthcare, must undergo rigorous conformity assessments, data governance, and human oversight procedures.

Practical Steps for Compliance

  • Conduct an AI Risk Assessment: Evaluate all AI systems in use to determine their risk classification under the Act.
  • Develop Governance and Compliance Frameworks: Ensure data governance, risk management, and documentation practices meet regulatory requirements, especially for high-risk systems and GPAI models.
  • Stay Updated: Keep track of evolving guidance, such as conformity assessment requirements and transparency obligations.

FAQs related to the AI Act Assessment

1. Who needs to prepare for the EU AI Act assessment?

The Act applies to providers, deployers, distributors, and importers of AI systems that are placed on the market or put into service within the European Union. The level of preparedness required under the Act differs for each operator. The Act may also apply extraterritorially to providers and deployers, meaning that providers and deployers of AI systems may need to prepare for the EU AI Act even if they are based outside the European Union.

2. What are the different risk categories of AI systems under the EU AI Act?

The Act introduces separate risk-based classifications for AI systems and general-purpose AI (GPAI) models. There are three main risk levels for AI systems:

  1. Certain AI systems, such as manipulative systems or systems that perform real-time remote biometric identification, are considered to pose an unacceptable risk and are therefore prohibited.
  2. Another group of AI systems is considered high-risk, based either on their relationship to already-regulated products such as heavy machinery, vehicles, or medical devices, or on their use cases.
  3. The remaining systems are commonly referred to as low-risk or minimal-risk AI systems and are not subject to binding rules.

There is also a subset of AI systems that are associated with certain transparency obligations; these are commonly labeled "limited risk" systems in practice. However, this is not a separate tier within the classification above: both high-risk and minimal-risk AI systems may face these transparency obligations, depending on their functions.

As for GPAI models, the Act classifies those with high-impact capabilities as GPAI models with systemic risk. It further provides that a GPAI model is presumed to have high-impact capabilities when the cumulative amount of computing power used for its training exceeds 10^25 floating-point operations (FLOPs).
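The 10^25 FLOP presumption can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses the widely cited approximation that training compute is roughly 6 FLOPs per parameter per training token; the model sizes in the comments are hypothetical examples, not real systems, and actual regulatory assessment would rely on measured compute, not this estimate.

```python
# Rough sketch of the EU AI Act's 10^25 FLOP presumption for
# systemic-risk GPAI models. Training compute is estimated with the
# common ~6 * parameters * tokens rule of thumb; the example figures
# below are hypothetical, not real models.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute (~6 FLOPs per parameter per token)."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True when estimated training compute exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 2T tokens:
#   6 * 7e10 * 2e12 = 8.4e23 FLOPs -> below the threshold.
# A hypothetical 1T-parameter model trained on 15T tokens:
#   6 * 1e12 * 1.5e13 = 9e25 FLOPs -> above the threshold.
```

Note that the presumption is rebuttable and the Commission can also designate models with systemic risk on other grounds, so crossing (or staying under) this line is not the whole analysis.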

3. What are the key requirements for high-risk AI systems in the EU AI Act?

There are seven key design-related requirements for high-risk AI systems under the EU AI Act:

  1. Establishment of a risk management system
  2. Maintaining appropriate data governance and management practices
  3. Drawing up technical documentation
  4. Record-keeping
  5. Ensuring transparency and the provision of information to deployers
  6. Maintaining an appropriate level of human oversight
  7. Ensuring an appropriate level of accuracy, robustness, and cybersecurity

4. What are the potential consequences of non-compliance with the EU AI Act?

Non-compliance with the provisions of the EU AI Act is sanctioned with substantial administrative fines of up to €35 million or 7% of global annual turnover, whichever is higher.

5. When will the EU AI Act come into effect?

The Act completed the final stages of the EU legislative process with the Council's approval on 21 May 2024 and was officially published in the EU's Official Journal on 12 July 2024 as Regulation (EU) 2024/1689. It entered into force on 1 August 2024, and most of its provisions will start applying on 2 August 2026, 24 months after entry into force. However, certain provisions, such as those on prohibited AI practices, began applying on 2 February 2025, six months after entry into force.

6. What are the benefits of being compliant with the EU AI Act?

The AI Act is a comprehensive framework, and preparing for it cannot happen overnight; it will take time. While the general application date for most of the Act's provisions is 2 August 2026, it is important to note that some provisions, such as those on prohibited AI practices, began applying as early as 2 February 2025. Starting preparations early is crucial for entities aiming to gain a competitive advantage in the evolving regulatory landscape.

7. How can the EU AI Act impact my business operations?

The Act introduces a set of design-related requirements for AI systems and obligations for covered entities. Classifying AI systems, identifying the applicable obligations, and preparing for compliance all take time and resources. These could significantly affect the operations of AI developers, deployers, and users, or even require some AI systems to be taken out of operation.

Schedule a demo with us to get more information
