Introduction
Artificial intelligence (AI) is rapidly becoming an integral part of our lives, from virtual assistants and chatbots to self-driving cars and medical diagnosis systems. While AI has the potential to revolutionize many industries, it also poses significant risks and challenges, particularly in terms of safety, transparency, and fundamental rights. To address these concerns, in April 2021 the European Commission proposed a regulation on artificial intelligence (Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts), known as the Artificial Intelligence Act[1]. This article focuses on the personal and subject scope of the regulation, its main definitions, and its potential impact.
Personal and Subject Scope
The Artificial Intelligence Act would apply to AI systems placed on the market or put into service in the European Union, regardless of whether their providers are established inside or outside the EU, as well as to users of AI systems located in the EU. Any business or organization that develops, markets, or uses AI in the EU would therefore be subject to the regulation. Conversely, providers established outside the EU would fall outside its scope if they neither place their AI systems on the EU market nor intend them for use in the EU.
The regulation would create a classification system for AI systems, with four categories based on the level of risk they pose to safety and fundamental rights: unacceptable, high, limited, and minimal risk, with different requirements and obligations applying to each category. AI systems in the high-risk category, such as those used in critical infrastructure, healthcare, or law enforcement, would be subject to the most stringent requirements, including mandatory testing, documentation, and human oversight of the AI system, as well as strict requirements for data protection and cybersecurity.
Main Definitions
The proposal sets out several key definitions that are important for understanding the scope and requirements of the regulation. Some of the main definitions include:
AI system: any software that performs tasks with a degree of autonomy by processing data and making decisions that affect the physical or virtual environment.
High-risk AI system: an AI system shall be considered high-risk where both of the following conditions are fulfilled:
(a) the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in a separate annex (Annex II), which will allow for future additions to it given the necessity to address the fast-changing environment;
(b) the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in the annex.
The act also provides for expanding this definition by adding further AI systems as the need arises.
Human oversight: the ability for a human to intervene in an AI system’s decision-making process or to monitor the system’s performance.
Transparency: the ability for a user to understand how an AI system works and why it makes certain decisions.
These definitions are important for understanding the obligations and requirements that apply to different categories of AI systems, as well as the transparency and accountability measures that businesses and organizations must implement to ensure that their AI systems are safe, ethical, and respectful of fundamental rights.
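The two-condition test in the high-risk definition above can be sketched as a simple decision rule. This is purely an illustration: the class and field names below are hypothetical, and the Act itself states these conditions in legal prose (and separately lists further stand-alone high-risk cases), not as a data schema.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical model of an AI system under the proposed Act."""
    # Condition (a): the system is a safety component of a product (or is
    # itself a product) covered by the Union harmonisation legislation
    # listed in Annex II.
    covered_by_annex_ii: bool
    # Condition (b): that product must undergo a third-party conformity
    # assessment before being placed on the market or put into service.
    requires_third_party_assessment: bool


def is_high_risk(system: AISystem) -> bool:
    # Both conditions must be fulfilled for this route to high-risk status.
    return system.covered_by_annex_ii and system.requires_third_party_assessment


# Example: covered by Annex II legislation and subject to third-party
# conformity assessment -> high-risk under this route.
print(is_high_risk(AISystem(True, True)))   # True
print(is_high_risk(AISystem(True, False)))  # False: condition (b) fails
```

The conjunction matters in practice: an AI safety component covered by Annex II legislation that is only subject to self-assessment would not be high-risk under this particular route.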
Prohibited AI Practices
The regulation prohibits certain AI practices that are considered unacceptable. These include the use of AI for social scoring, the creation or deployment of AI systems that manipulate human behavior, and the use of “real-time” remote biometric identification in publicly accessible spaces for law enforcement purposes. The prohibitions cover practices with a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to exploit the vulnerabilities of specific groups such as children or persons with disabilities, in order to materially distort their behaviour in a manner likely to cause them or another person psychological or physical harm.
High-Risk AI Systems – Obligations of Providers and Users
The regulation identifies several categories of high-risk AI systems, such as those used in critical infrastructure, healthcare, and transportation. Providers of high-risk AI systems must ensure that their products meet certain requirements, such as having appropriate documentation, a risk management plan, and a human oversight system. They must also conduct conformity assessments and obtain certificates from notified bodies.
Providers and users of high-risk AI systems must comply with specific obligations, such as ensuring the safety, security, and accuracy of their systems, as well as taking measures to prevent or mitigate harm caused by their systems. Other parties, such as importers and distributors of AI systems must also comply with certain obligations, such as verifying the conformity of the systems they are importing or distributing.
Providers of high-risk AI systems must notify the relevant authorities of their intention to place their products on the market, and provide detailed information about the characteristics and intended use of their systems. Notified bodies are responsible for carrying out conformity assessments and issuing certificates for high-risk AI systems.
Standards and Conformity Assessment
The regulation establishes a framework for standards, conformity assessment, certificates, and registration for AI systems in the EU. It requires the development of harmonized standards for AI, the use of notified bodies for conformity assessment and certification, and the establishment of a centralized EU database for the registration of high-risk AI systems.
To ensure that AI systems placed on the market comply with the AI Regulation and meet the necessary requirements, standards for high-risk AI systems will be developed and updated by the European Commission. The Commission will also establish a system for conformity assessment, certification, and registration of high-risk AI systems. The system will be based on third-party conformity assessment bodies, which will evaluate the AI systems to ensure they meet the requirements of the AI Regulation.
The regulation imposes transparency obligations on certain AI systems, such as those used in deepfakes, chatbots, and voice assistants. Providers of these systems must disclose their use of AI and provide certain information to users, such as the fact that they are interacting with an AI system, the system’s capabilities and limitations, and any potential risks associated with its use.
Certain AI systems that interact with humans or that have an impact on individuals will be required to provide users with information about their characteristics and how they function. This includes information on the data used by the AI system, the parameters on which it operates, and the probability of the outcomes it generates. Additionally, the AI system must be designed in a way that ensures transparency and allows for human intervention when necessary.
European AI Board and National Competent Authorities
The AI Regulation establishes a European AI Board, which will be responsible for advising the Commission on matters related to AI. The Board will also assist the Commission in the development and updating of the list of high-risk AI systems and the standards for these systems. Additionally, the Board will be responsible for coordinating the work of the national competent authorities and notified bodies, as well as for promoting cooperation between the authorities of the Member States.
Each Member State will be required to designate a national competent authority responsible for implementing and enforcing the AI Regulation. The national competent authorities will be responsible for carrying out market surveillance, conducting inspections and investigations, and imposing penalties for non-compliance with the AI Regulation. The authorities will also be required to cooperate with each other and with the European AI Board to ensure consistent implementation of the Regulation across the EU.
Sharing of Information on Incidents and Malfunctions. Market Surveillance and Control.
To ensure a high level of safety for individuals and the environment, providers of AI systems will be required to report incidents and malfunctions related to their AI systems to the national competent authorities. Additionally, the AI Regulation establishes a system for the sharing of information on incidents and malfunctions between the national competent authorities and the European Commission, to enable the Commission to take appropriate measures to address any risks to health, safety, or the environment.
The AI Regulation establishes a framework for market surveillance and control of AI systems in the Union market. The national competent authorities will be responsible for carrying out market surveillance activities, including conducting checks on the conformity of AI systems with the AI Regulation, and imposing penalties for non-compliance. The AI Regulation also provides for the possibility of prohibiting or restricting the marketing or use of AI systems that pose a risk to health, safety, or the environment.
Codes of Conduct. Confidentiality and Penalties.
The AI Regulation provides for the development of voluntary codes of conduct by industry and other stakeholders to promote compliance with the AI Regulation and ethical principles related to AI. The codes of conduct will be subject to approval by the national competent authorities and may be used as evidence of compliance with the AI Regulation.
To ensure the protection of confidential information related to AI systems, the AI Regulation provides for the protection of trade secrets and other confidential information in accordance with EU law. The AI Regulation also establishes penalties for non-compliance with its provisions, including, for the most serious infringements, fines of up to €30 million or 6% of a company’s total worldwide annual turnover, whichever is higher.
The regulation establishes rules for protecting confidential information obtained in the context of conformity assessments and market surveillance activities. It also provides for penalties in case of non-compliance with the regulation, which can range from fines to the suspension or withdrawal of the product from the market.
Conclusion
The proposed regulation aims to ensure that AI is developed, deployed, and used in a way that is safe and respects fundamental rights. By setting out clear rules for AI providers and users, the regulation seeks to promote trust in AI systems and support the development of the European AI industry. The regulation is part of a broader EU strategy on AI, which aims to boost Europe’s competitiveness in this field while safeguarding the interests of citizens and promoting the development of AI.
[1] https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206