EU AI Act Readiness Assessment

Navigate the EU AI Act with confidence using our end-to-end Enterprise AI Governance Platform.

What is the EU AI Act?

The EU AI Act, proposed by the European Commission in April 2021, aims to regulate the use of AI in the EU by protecting users from AI-related harm and prioritizing human rights. Using a risk-based approach, the EU AI Act imposes obligations that are proportional to the risk posed by an AI system or a general purpose AI model.

After a lengthy consultation process that saw several amendments proposed, it passed in the European Parliament in June 2023, marking the start of a six-month Trilogue period. At the end of this process, a provisional agreement was reached in December 2023, before the Coreper I (Committee of the Permanent Representatives) reached a political agreement in February 2024. This agreement paved the way for the European Parliament AI Act plenary vote, which is scheduled for 13 March 2024.

This landmark legislation is set to become the global gold standard for AI legislation and will have important implications for organizations both within and outside of the EU due to its extraterritorial scope.

Types of Risks According to the EU AI Act

The EU AI Act adopts a risk-based approach to regulation, categorizing AI systems into three main risk levels:

  • Unacceptable risk AI systems (prohibited AI systems, Article 5): These include systems used in biometric identification or social scoring, deemed inherently risky and prohibited under the Act without prior evaluation.
  • High-risk AI systems (Article 6): Subject to stringent requirements (Articles 8-15), these systems cover specific use cases such as education, employment, and law enforcement.
  • Low-risk (or minimal-risk) AI systems: Systems not falling into the prohibited or high-risk categories are governed by voluntary codes of conduct (Article 69).

Additionally, there are AI systems posing limited transparency risks, such as emotion recognition or deepfake generation, which are addressed under Article 52.

New categories introduced by the Council and Parliament drafts include foundation models and general-purpose AI systems, each subject to distinct requirements. In the latest version of the text, these provisions are combined under a new chapter dedicated to general-purpose AI models, along with a stricter regime for high-impact general-purpose AI models that may pose systemic risk.

The classification of AI systems remains a crucial step for enterprises to ensure compliance, particularly if their systems are deemed prohibited or high-risk under the EU AI Act.

About AI Act Readiness Assessment

To assist organizations in navigating the complexities of the AI Act, Holistic AI has developed the AI Act Readiness Assessment. Our assessment aims to:

  • Guide organizations through the intricacies of regulatory requirements outlined in the AI Act.
  • Evaluate the use of AI systems within the organization and determine the extent to which the regulation applies.
  • Aid organizations in understanding their readiness to comply with the regulation and identify gaps requiring prioritized attention.
  • Conduct a detailed analysis of specific AI systems to prepare for legal requirements stipulated by the AI Act.

The EU AI Act readiness assessment encompasses a structured series of steps and evaluations. Initially, it is essential to accurately identify AI systems, their risk categories, and the roles of the entities involved. This initial step is crucial, as it determines the subsequent mapping of technical requirements and obligations for operators.

The readiness assessment not only examines the nature and requirements of AI systems but also provides solutions that support the fulfillment of these requirements and obligations, either fully or partially. This ensures that entities are not just informed about their duties under the AI Act but are also equipped with the necessary resources and strategies to align with the regulatory framework.

The EU AI Act's internal conformity assessment procedure hinges on an examination of how an organization's AI systems align with mapped requirements and obligations. In this context, the readiness assessment is specifically designed to prepare organizations not only for internal evaluations but also for meeting the standards set by the EU AI Act's conformity assessment.

Given that achieving compliance can be more challenging and costly once AI systems are operational, early preparation is essential. Start your journey towards AI Act compliance today with our AI Act Readiness Assessment. Ensure your organization is prepared for the future of AI regulation in the European Union.

FAQs related to the AI Act Assessment

1. Who needs to prepare for the EU AI Act assessment?

The Act applies to providers, deployers, distributors, and importers of AI systems that are placed on the market or put into service within the European Union. The level of preparedness required under the Act is different for each operator. For providers and deployers, the Act may also apply extraterritorially, meaning that providers and deployers of AI systems may need to prepare for the EU AI Act even if they are based outside the European Union.

2. What are the different risk categories of AI systems under the EU AI Act?

The Act introduces separate risk-based classifications for AI systems and general-purpose AI (GPAI) models. There are three main risk levels for AI systems:

  1. Certain AI systems, such as manipulative systems or systems that carry out real-time biometric identification, are considered to pose an unacceptable risk and are therefore prohibited.
  2. Another group of AI systems is considered high-risk, based either on their relationship to already-regulated areas such as heavy machinery, vehicles, or medical devices, or on their use cases.
  3. The remaining systems are commonly referred to as low-risk or minimal-risk AI systems and are not subject to binding rules.

There is a subset of AI systems that are associated with certain transparency obligations. These are commonly labeled as "limited risk" systems in practice. However, this is not a separate, mutually exclusive tier within the classification above: both high-risk and minimal-risk AI systems may face these transparency obligations depending on their functions.
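
The interaction between the risk tiers and the cross-cutting transparency obligations can be illustrated with a small sketch. The boolean flags and the `classify` helper below are hypothetical simplifications for illustration only; the actual classification turns on the detailed criteria of the Act, not on simple flags.

```python
from enum import Enum

class RiskLevel(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6)"
    MINIMAL = "low/minimal risk"

def classify(prohibited_practice: bool, high_risk_use_case: bool,
             transparency_trigger: bool) -> tuple[RiskLevel, bool]:
    """Return the risk tier plus whether transparency duties attach.

    Transparency obligations (e.g. for deepfakes or emotion
    recognition) are orthogonal to the tiers: they can attach to
    high-risk and minimal-risk systems alike.
    """
    if prohibited_practice:
        return RiskLevel.UNACCEPTABLE, transparency_trigger
    if high_risk_use_case:
        return RiskLevel.HIGH, transparency_trigger
    return RiskLevel.MINIMAL, transparency_trigger

# A deepfake generator in an otherwise minimal-risk product:
tier, needs_transparency = classify(False, False, True)
print(tier.value, needs_transparency)  # low/minimal risk True
```

The key design point mirrored here is that the transparency flag travels alongside the tier rather than replacing it.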

Regarding GPAI models, the Act classifies a subset of them as GPAI models with systemic risk, provided that these models have high-impact capabilities. It also sets forth that GPAI models trained using a cumulative amount of computing power greater than 10^25 floating point operations (FLOPs) shall be presumed to have high-impact capabilities.
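
A rough back-of-the-envelope check against the 10^25 FLOP presumption can be sketched as follows. Note that the 6 × parameters × training tokens rule of thumb is a common scaling-law approximation, not something the Act itself specifies, and the model sizes used here are made up for illustration.

```python
# The Act's presumption threshold for high-impact capabilities.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D heuristic."""
    return 6.0 * n_parameters * n_training_tokens

def presumed_high_impact(flops: float) -> bool:
    """True if the estimate exceeds the Act's 10^25 FLOP threshold."""
    return flops > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative (made-up) model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")               # 6.30e+24
print(presumed_high_impact(flops))  # False
```

A model of this hypothetical size would sit just under the threshold; a modestly larger training run would cross it and trigger the presumption.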

3. What are the key requirements for high-risk AI systems in the EU AI Act?

There are seven key design-related requirements for high-risk AI systems under the EU AI Act:

  1. Establishment of a risk management system
  2. Maintaining appropriate data governance and management practices
  3. Drawing up technical documentation
  4. Record-keeping
  5. Ensuring transparency and the provision of information to deployers
  6. Maintaining an appropriate level of human oversight
  7. Ensuring an appropriate level of accuracy, robustness, and cybersecurity
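
The seven requirements above can be tracked as a simple readiness checklist, in the spirit of a gap analysis. This is a minimal sketch; the requirement names follow the list above, while the status values and the `gap_report` helper are hypothetical.

```python
# The seven design-related requirements for high-risk AI systems,
# in the order the Act lists them.
HIGH_RISK_REQUIREMENTS = [
    "Risk management system",
    "Data governance and management",
    "Technical documentation",
    "Record-keeping",
    "Transparency and information to deployers",
    "Human oversight",
    "Accuracy, robustness, and cybersecurity",
]

def gap_report(status: dict[str, bool]) -> list[str]:
    """Return the requirements not yet met, preserving the Act's ordering."""
    return [req for req in HIGH_RISK_REQUIREMENTS if not status.get(req, False)]

# Example: everything in place except record-keeping.
status = {req: True for req in HIGH_RISK_REQUIREMENTS}
status["Record-keeping"] = False
print(gap_report(status))  # ['Record-keeping']
```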

4. What are the potential consequences of non-compliance with the EU AI Act?

Non-compliance with the provisions of the EU AI Act is sanctioned with hefty administrative fines, reaching up to €35 million or 7% of global annual turnover, whichever is higher, for violations of the prohibited practices.

5. When will the EU AI Act come into effect?

The Act is currently going through the final stages of the EU legislation-making procedure and is expected to be officially adopted in early 2024. Most of its provisions will start applying 24 months after its entry into force, with exceptions for certain provisions. The earliest application date belongs to the provisions on prohibited AI practices, which will start applying six months after the Act enters into force.

6. What are the benefits of being compliant with the EU AI Act?

The AI Act is a comprehensive framework, and getting ready for it cannot be done overnight. The general application date may not seem close, but some provisions of the Act are likely to start applying by the end of 2024. Additionally, getting ahead and starting preparations early gives entities a competitive advantage.

7. How can the EU AI Act impact my business operations?

The Act introduces a set of design-related requirements for AI systems and obligations for covered entities. The classification of AI systems, identification of the respective obligations, and preparation for compliance take both time and resources. These could significantly affect the operations of AI developers, deployers, and users, or even require some AI systems to be taken out of operation.

Schedule a demo with us to get more information.
