European Union Artificial Intelligence Act: Short Information Note on Article 6 and Annex III

The European Union's Artificial Intelligence Act, adopted on 13.06.2024, entered into force on 01.08.2024. However, many of its provisions will only begin to apply later: the first set of provisions, including the prohibitions, applies from 02.02.2025, while the classification rules for high-risk artificial intelligence systems under Article 6(1) and the corresponding obligations apply from 02.08.2027. The text of the Act has several noteworthy features and will undoubtedly serve as a model for future legislation.

We would like to emphasize that the provisions concerning high-risk artificial intelligence systems are particularly significant. For this reason, in this information note, we will focus specifically on the regulations introduced by Article 6 and Annex III.

Article 6: Classification of High-Risk Artificial Intelligence Systems

Article 6 sets out the fundamental rules for classifying artificial intelligence systems as high-risk. A system is classified as high-risk through one of two routes:
 
  1. Use as a Safety Component: Under Article 6(1), an AI system is considered high-risk if it is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation listed in Annex I, and that product is required to undergo a third-party conformity assessment under that legislation.
  2. Systems Listed in Annex III: Under Article 6(2), AI systems falling within the areas listed in Annex III are also considered high-risk. These are areas in which such systems pose significant risks to health, safety, or fundamental rights.

Article 6(3) provides that certain AI systems listed in Annex III are not classified as high-risk under specific conditions, namely where the system does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Even in that case, however, such systems remain subject to a registration requirement.

Annex III: List of High-Risk Artificial Intelligence Systems

Annex III provides a comprehensive list of areas where AI systems are classified as high-risk. This list includes the following areas:
 
  1. Biometric Systems: Remote biometric identification systems, biometric categorization, and emotion recognition systems are considered high-risk. These systems often pose significant risks to public safety and individual privacy.
  2. Critical Infrastructure: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity fall into this category.
  3. Education and Vocational Training: AI systems used to assess student performance or make decisions regarding admission to educational institutions are classified as high-risk.
  4. Employment and Workforce Management: AI systems used in recruitment, employee performance evaluation, or decision-making processes in employment relationships are also included in this list.
  5. Access to Essential Services: AI systems used to determine access to essential private services or essential public services and benefits are considered high-risk.

Conclusion and Assessment

Article 6 and Annex III of the Artificial Intelligence Act provide a comprehensive framework for identifying high-risk AI systems, taking into account their potential impact on human health, safety, and fundamental rights. These regulations aim to ensure that AI systems are used in a safe and ethical manner. However, classifying systems as high-risk also entails stricter monitoring and conformity requirements for those systems.

In this era of rapidly evolving artificial intelligence technologies, the EU’s Artificial Intelligence Act aims to promote the ethical and safe development of technology, ensuring the security of both users and developers.
