AI Act
The Artificial Intelligence Act (AI Act) is a European Union regulation concerning artificial intelligence (AI). It establishes a common regulatory and legal framework for AI within the European Union (EU). It came into force on 1 August 2024, with provisions coming into operation gradually over the following 6 to 36 months.

It covers all types of AI across a broad range of sectors, with exceptions for AI systems used solely for military, national security, research and non-professional purposes. As a piece of product regulation, it does not confer rights on individuals, but regulates the providers of AI systems and entities using AI in a professional context.

The Act classifies non-exempt AI applications by their risk of causing harm. There are four levels – unacceptable, high, limited, minimal – plus an additional category for general-purpose AI.
* Applications with unacceptable risks are banned.
* High-risk applications must comply with security, transparency and quality obligations, and undergo conformity assessments.
* Limited-risk applications only have transparency obligations.
* Minimal-risk applications are not regulated.
For general-purpose AI, transparency requirements are imposed, with reduced requirements for open-source models and additional evaluations for high-capability models.

The Act also creates a European Artificial Intelligence Board to promote national cooperation and ensure compliance with the regulation. Like the EU's General Data Protection Regulation (GDPR), the Act can apply extraterritorially to providers from outside the EU if they have users within the EU.

Proposed by the European Commission on 21 April 2021, it passed the European Parliament on 13 March 2024, and was unanimously approved by the EU Council on 21 May 2024. The draft Act was revised to address the rise in popularity of generative artificial intelligence systems, such as ChatGPT, whose general-purpose capabilities did not fit the main framework.


Provisions


Risk categories

There are different risk categories depending on the type of application, with a specific category dedicated to general-purpose generative AI:
* Unacceptable risk – AI applications in this category are banned, except for specific exemptions. When no exemption applies, this includes AI applications that manipulate human behaviour, those that use real-time remote biometric identification (such as facial recognition) in public spaces, and those used for social scoring (ranking individuals based on their personal characteristics, socio-economic status, or behaviour).
* High risk – AI applications that are expected to pose significant threats to health, safety, or the fundamental rights of persons, notably AI systems used in health, education, recruitment, critical infrastructure management, law enforcement or justice. They are subject to quality, transparency, human oversight and safety obligations, and in some cases require a "Fundamental Rights Impact Assessment" before deployment. They must be evaluated both before they are placed on the market and throughout their life cycle. The list of high-risk applications can be expanded over time, without the need to modify the AI Act itself.
* General-purpose AI – Added in 2023, this category includes in particular foundation models like ChatGPT. They are subject to transparency requirements, unless the weights and model architecture are released under a free and open-source licence, in which case only a training data summary and a copyright compliance policy are required. High-impact general-purpose AI systems, including free and open-source ones, that could pose systemic risks (notably those trained using a computational capability exceeding 10²⁵ FLOPS) must also undergo a thorough evaluation process (an illustrative compute check follows this list).
* Limited risk – AI systems in this category have transparency obligations, ensuring users are informed that they are interacting with an AI system and allowing them to make informed choices. This category includes, for example, AI applications that make it possible to generate or manipulate images, sound, or videos (such as deepfakes).
* Minimal risk – This category includes, for example, AI systems used for video games or spam filters. Most AI applications are expected to fall into this category. These systems are not regulated, and Member States cannot impose additional regulations due to maximum harmonisation rules. Existing national laws regarding the design or use of such systems are overridden. However, a voluntary code of conduct is suggested.
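The systemic-risk compute threshold lends itself to a quick back-of-the-envelope check. The Python sketch below is illustrative only: the 10²⁵ FLOP figure comes from the Act, but the "6 × parameters × training tokens" rule of thumb for estimating training compute is a common heuristic assumed here for illustration, not something the regulation prescribes, and the model size in the example is hypothetical.

# Illustrative sketch: the AI Act presumes systemic risk for general-purpose
# models whose training compute exceeds 10**25 FLOP. The 6 * parameters *
# training-tokens estimate is a common heuristic for dense transformer
# training, assumed here for illustration; it is not part of the regulation.

SYSTEMIC_RISK_THRESHOLD_FLOP = 10**25

def estimated_training_flop(n_parameters: float, n_tokens: float) -> float:
    """Rough training-compute estimate: about 6 FLOP per parameter per token."""
    return 6.0 * n_parameters * n_tokens

def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the Act's threshold."""
    return estimated_training_flop(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOP

if __name__ == "__main__":
    # Hypothetical model: 1e12 parameters trained on 1.5e13 tokens.
    compute = estimated_training_flop(1e12, 1.5e13)
    print(f"Estimated training compute: {compute:.2e} FLOP")
    print("Presumed systemic risk:", presumed_systemic_risk(1e12, 1.5e13))

Under these assumptions the hypothetical model lands around 9 × 10²⁵ FLOP, above the threshold, and would therefore fall under the additional evaluation obligations for high-impact general-purpose AI described above.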


Exemptions

Articles 2.3 and 2.6 exempt AI systems used for military or national security purposes, or for pure scientific research and development, from the AI Act. Article 5.2 bans algorithmic video surveillance of people ("the use of 'real-time' remote biometric identification systems in publicly accessible spaces") only if it is conducted in real time. Exceptions allowing real-time algorithmic video surveillance include policing aims such as "a real and present or real and foreseeable threat of terrorist attack".

Recital 31 of the Act states that it aims to prohibit "AI systems providing social scoring of natural persons by public or private actors", but allows for "lawful evaluation practices of natural persons that are carried out for a specific purpose in accordance with Union and national law". La Quadrature du Net interprets this exemption as permitting sector-specific social scoring systems, such as the suspicion score used by the French family payments agency, the Caisse d'allocations familiales.


Governance

The AI Act establishes various new bodies in Article 64 and the following articles. These bodies are tasked with implementing and enforcing the Act. The approach combines EU-level coordination with national implementation, involving both public authorities and private sector participation. The following new bodies will be established:
# AI Office: attached to the European Commission, this authority will coordinate the implementation of the AI Act in all Member States and oversee the compliance of general-purpose AI providers.
# European Artificial Intelligence Board: composed of one representative from each Member State, the Board will advise and assist the Commission and Member States to facilitate the consistent and effective application of the AI Act. Its tasks include gathering and sharing technical and regulatory expertise, providing recommendations, written opinions, and other advice.
# Advisory Forum: established to advise and provide technical expertise to the Board and the Commission, this forum will represent a balanced selection of stakeholders, including industry, start-ups, small and medium-sized enterprises, civil society, and academia, ensuring that a broad spectrum of opinions is represented during the implementation and application process.
# Scientific Panel of Independent Experts: this panel will provide technical advice and input to the AI Office and national authorities, enforce rules for general-purpose AI models (notably by launching qualified alerts of possible risks to the AI Office), and ensure that the rules and implementations of the AI Act correspond to the latest scientific findings.
While the establishment of new bodies is planned at the EU level, Member States will have to designate "national competent authorities". These authorities will be responsible for ensuring the application and implementation of the AI Act, and for conducting "market surveillance". They will verify that AI systems comply with the regulations, notably by checking the proper performance of conformity assessments and by appointing third parties to carry out external conformity assessments.


Enforcement

The Act regulates entry to the EU internal market using the New Legislative Framework. It contains essential requirements that all AI systems must meet to access the EU market. These essential requirements are passed on to European Standardisation Organisations, which develop technical standards that further detail them; these standards are drafted by CEN/CENELEC JTC 21.

The Act mandates that member states establish their own notifying bodies. Conformity assessments are conducted to verify whether AI systems comply with the standards set out in the AI Act. This assessment can be done in two ways: either through self-assessment, where the AI system provider checks conformity, or through third-party conformity assessment, where the notifying body conducts the assessment. Notifying bodies also have the authority to carry out audits to ensure proper conformity assessments.

Criticism has arisen regarding the fact that many high-risk AI systems do not require third-party conformity assessments. Some commentators argue that independent third-party assessments are necessary for high-risk AI systems to ensure safety before deployment. Legal scholars have suggested that AI systems capable of generating deepfakes for political misinformation or creating non-consensual intimate imagery should be classified as high-risk and subjected to stricter regulation.


Legislative procedure

In February 2020, the European Commission published the "White Paper on Artificial Intelligence – A European approach to excellence and trust". In October 2020, debates between EU leaders took place in the European Council. On 21 April 2021, the AI Act was officially proposed by the Commission. On 6 December 2022, the European Council adopted the general orientation, allowing negotiations to begin with the European Parliament. On 9 December 2023, after three days of "marathon" talks, the EU Council and Parliament concluded an agreement.

The law was passed in the European Parliament on 13 March 2024, by a vote of 523 for, 46 against, and 49 abstaining. It was approved by the EU Council on 21 May 2024. It entered into force on 1 August 2024, 20 days after being published in the ''Official Journal'' on 12 July 2024. After coming into force, there is a delay before it becomes applicable, which depends on the type of application: 6 months for bans on "unacceptable risk" AI systems, 9 months for codes of practice, 12 months for general-purpose AI systems, 36 months for some obligations related to "high-risk" AI systems, and 24 months for everything else.


Reactions

Experts have argued that though the jurisdiction of the law is European, it could have far-ranging implications for international companies that plan to expand to Europe. Anu Bradford at Columbia has argued that the law provides significant momentum to the worldwide movement to regulate AI technologies.

Amnesty International criticized the AI Act for not completely banning real-time facial recognition, which it said could damage "human rights, civil space and rule of law" in the European Union. It also criticized the absence of a ban on ''exporting'' AI technologies that can harm human rights.

Some tech watchdogs have argued that there were major loopholes in the law that would allow large tech monopolies to entrench their advantage in AI, or to lobby to weaken rules. Some startups welcomed the clarification the act provides, while others argued the additional regulation would make European startups uncompetitive compared to American and Chinese startups.

La Quadrature du Net (LQDN) described the AI Act as "tailor-made for the tech industry, European police forces as well as other large bureaucracies eager to automate social control". LQDN argued that the role of self-regulation and exemptions in the act rendered it "largely incapable of standing in the way of the social, political and environmental damage linked to the proliferation of AI".

Building on these critiques, scholars have raised concerns in particular about the Act's approach to regulating the secondary uses of trained AI models, which may have significant societal impacts. They argue that the Act's narrow focus on deployment contexts and reliance on providers to self-declare intended purposes creates opportunities for misinterpretation and insufficient oversight. Additionally, the Act often exempts open-source models and neglects critical lifecycle phases, such as the reuse of trained models. Trained models store decision-mappings as parameters that approximate patterns from the training data. This "model data" is distinct from the original training data and is typically classified as non-personal, as it often cannot be traced back to individual data subjects. Consequently, it falls outside the scope of other regulations like the GDPR. Some scholars also criticize the AI Act for not sufficiently regulating the reuse of model data, warning of potentially harmful consequences for individual privacy, social equity, and democratic processes.


See also

* Algorithmic bias
* Ethics of artificial intelligence
* Regulation of algorithms
* Regulation of artificial intelligence in the European Union
* Existential risk from artificial general intelligence

