The EU AI Act is a comprehensive regulatory framework introduced by the European Commission in April 2021 to govern the development, marketing, and use of artificial intelligence (AI) systems within the European Union (EU). Its primary goals are to ensure the safety, transparency, and protection of fundamental rights by applying a risk-based approach that imposes varying obligations based on the potential impact of AI systems. The EU AI Act applies to the entire lifecycle of AI systems and has extraterritorial reach, affecting companies worldwide that offer AI-related products or services impacting EU citizens.
The EU AI Act adopts a risk-based approach to regulation, categorizing AI systems into three main risk levels:
The EU AI Act also introduces specific provisions for General Purpose AI (GPAI) models, including foundational and generative AI systems. These models, which serve multiple purposes, are subject to different risk assessments depending on their application and potential systemic impact.
To assist organizations in navigating the complexities of the AI Act, Holistic AI has developed the EU AI Act Readiness Assessment. Our assessment aims to:
The EU AI Act readiness assessment encompasses a structured series of steps and evaluations. Initially, it is essential to accurately identify AI systems, their risk categories, and the roles of the entities involved. This initial step is crucial, as it determines the subsequent mapping of technical requirements and obligations for operators.
The readiness assessment not only examines the nature and requirements of AI systems but also provides solutions that support the fulfillment of these requirements and obligations, either fully or partially. This ensures that entities are not just informed about their duties under the AI Act but are also equipped with the necessary resources and strategies to align with the regulatory framework.
The EU AI Act's internal conformity assessment procedure hinges on examining whether an organization's AI systems align with the mapped requirements and obligations. In this context, the readiness assessment is specifically designed to prepare organizations not only for internal evaluations but also for meeting the standards set by the EU AI Act's conformity assessment.
Given that achieving compliance can be more challenging and costly once AI systems are operational, early preparation is essential. Start your journey towards AI Act compliance today with our AI Act Readiness Assessment, and ensure your organization is prepared for the future of AI regulation in the European Union.
The EU AI Act holds critical importance for enterprises, particularly those operating or planning to operate within the EU market. For global companies, the extraterritorial nature of the Act means compliance is necessary to avoid penalties and maintain market access. High-risk systems, such as those used in recruitment or healthcare, must undergo rigorous conformity assessments, data governance, and human oversight procedures.
The Act applies to providers, deployers, distributors, and importers of AI systems that are placed on the market or put into service within the European Union. The level of preparedness required under the Act differs for each operator. For providers and deployers, the Act may also apply extraterritorially, meaning that providers and deployers of AI systems may need to prepare for the EU AI Act even if they are based outside the European Union.
The Act introduces separate risk-based classifications for AI systems and general-purpose AI (GPAI) models. There are three main risk levels for AI systems:
A subset of AI systems is subject to specific transparency obligations; in practice, these are commonly labeled "limited risk" systems. However, this is not a separate tier within the classification above: both high-risk and minimal-risk AI systems may face these transparency obligations, depending on their functions.
As for GPAI models, the Act classifies those with high-impact capabilities as GPAI models with systemic risk. It also provides that a GPAI model is presumed to have high-impact capabilities when the cumulative amount of computation used for its training, measured in floating-point operations (FLOPs), is greater than 10^25.
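As a minimal illustration of this compute-based presumption, the sketch below checks a model's reported training compute against the 10^25 FLOPs threshold. The function name and the example figures are our own illustrative assumptions, not terminology from the Act, and crossing the threshold only triggers a rebuttable presumption, not a final classification.

```python
# Illustrative sketch of the AI Act's presumption for GPAI models with
# "high-impact capabilities": cumulative training compute above 10^25 FLOPs
# (Article 51(2)). Names and compute figures below are hypothetical.

SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25


def presumed_high_impact(training_flops: float) -> bool:
    """Return True if a GPAI model is presumed to have high-impact
    capabilities under the Act's compute-based threshold."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical models with assumed training-compute estimates.
    for name, flops in [("model-a", 3e24), ("model-b", 2e25)]:
        status = "presumed systemic risk" if presumed_high_impact(flops) else "below threshold"
        print(f"{name}: {status}")
```

In practice, determining a model's cumulative training compute is itself a non-trivial estimation exercise, and the presumption can be rebutted or supplemented by the Commission's other designation criteria.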
There are seven key design-related requirements for high-risk AI systems under the EU AI Act:
Non-compliance with the provisions of the EU AI Act is sanctioned with hefty administrative fines.
The Act completed the final stages of the EU legislative process with the Council's approval on 21 May 2024 and was officially published in the EU's Official Journal on 12 July 2024 as Regulation (EU) 2024/1689. It entered into force on 1 August 2024, and most of its provisions will start applying on 2 August 2026, 24 months after entry into force. However, certain provisions, such as those on prohibited AI practices, began applying on 2 February 2025, six months after the Act entered into force.
The AI Act is a comprehensive framework, and preparing for it cannot happen overnight—it will take time. While the general application date for most of the Act's provisions is set for August 2026, it is important to note that some provisions, such as those on prohibited AI practices, began applying as early as 2 February 2025. Starting preparations early is crucial for entities aiming to gain a competitive advantage in the evolving regulatory landscape.
The Act introduces a set of design-related requirements for AI systems and obligations for covered entities. The classification of AI systems, identification of the respective obligations, and preparation for compliance take both time and resources. These could significantly affect the operations of AI developers, deployers, and users, or even require the termination of operations for some AI systems.