The European Union's Artificial Intelligence Act, adopted on 13.06.2024, entered into force on 01.08.2024. Its provisions apply in stages: the general provisions and the prohibitions on certain AI practices apply from 02.02.2025, and most of the remaining provisions from 02.08.2026. The rules on classifying high-risk AI systems under Article 6(1), together with the corresponding obligations, apply from 02.08.2027. The text of the Act itself has several noteworthy features and will undoubtedly serve as a model for future legislative developments.
We would like to emphasize that the provisions concerning high-risk artificial intelligence systems are particularly significant. For this reason, in this information note, we will focus specifically on the regulations introduced by Article 6 and Annex III.
Article 6: Classification of High-Risk Artificial Intelligence Systems
- Use as a Safety Component: Under Article 6(1), an AI system is classified as high-risk if it is intended to be used as a safety component of a product covered by the EU harmonisation legislation listed in Annex I, or is itself such a product, and if that product is required to undergo a third-party conformity assessment under that legislation.
- Systems Listed in Annex III: Article 6(2) provides that AI systems falling within the use cases listed in Annex III are also considered high-risk. These systems are used in areas where they pose significant risks to health, safety, or fundamental rights.
Annex III: List of High-Risk Artificial Intelligence Systems
- Biometric Systems: Remote biometric identification systems, biometric categorization systems, and emotion recognition systems are considered high-risk. These systems often pose significant risks to public safety and individual privacy.
- Critical Infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity fall into this category.
- Education and Vocational Training: AI systems used to assess student performance or make decisions regarding admission to educational institutions are classified as high-risk.
- Employment and Workforce Management: AI systems used in recruitment, employee performance evaluation, or decision-making processes in employment relationships are also included in this list.
- Access to Services: AI systems used to determine access to essential public services and benefits, or to essential private services, are considered high-risk.
Conclusion and Evaluation
Article 6 and Annex III of the Artificial Intelligence Act provide a comprehensive framework for identifying high-risk AI systems, taking into account their potential impact on human health, safety, and fundamental rights. These regulations aim to ensure that AI systems are used in a safe and ethical manner. However, classifying systems as high-risk also entails stricter monitoring and conformity requirements for those systems.
In this era of rapidly evolving artificial intelligence technologies, the EU's Artificial Intelligence Act aims to promote the ethical and safe development of technology while safeguarding both users and developers.