Automating routine tasks, rapidly creating content, and national defense: these seemingly unrelated areas all make active use of artificial intelligence (AI). AI also plays a role in spreading disinformation, enabling fraud, and exerting unchecked influence on elections. It has become an invaluable tool for everyone, from conscientious users to fraudsters.
Following a well-established pattern, new technologies emerge first, followed by those who abuse them, thereby violating human rights. As a result, the legal community steps in to regulate these technologies and their outcomes. Active efforts to regulate AI began as early as 2018. One key result of these efforts, and a first step toward addressing the aforementioned issues, was the adoption of the Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law (hereinafter referred to as the Convention) and the Artificial Intelligence Act (AI Act).
In the Center for Democracy and Rule of Law’s analysis, we will examine European AI legislation in detail, focusing on the Convention and AI Act, how AI system regulations will change, and the mechanisms available to protect against the negative impacts of AI systems.
Framework Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law
Development Process
Over recent years, European states have adopted numerous documents that partially regulate the functioning of AI. However, the need remained for a unified act to establish a general standard for AI systems.
To initiate the drafting process, the Council of Europe established the Committee on Artificial Intelligence (hereinafter referred to as the Committee), which began working on the Convention in 2021. On May 17, 2024, the Committee of Ministers of the Council of Europe adopted the Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law. This became the first international treaty on AI, open to all countries worldwide.
During the negotiations and subsequent adoption of the Convention, the drafters considered several international legal and political instruments on artificial intelligence. Each of these documents establishes rules or recommendations for AI system operation. These include:
- Declaration of the Committee of Ministers of the Council of Europe on the Manipulative Capabilities of Algorithmic Processes, adopted on February 13, 2019. This declaration addresses the manipulation of human behavior and elections, the use of data to identify individual vulnerabilities, threats to the right to make decisions independently of automated systems, and other risks posed by automated systems. Member states agreed to focus on resolving these issues, consider the need for additional legal mechanisms to protect human rights from automated data processing, and promote digital literacy skills. They also highlighted the need to consider enhancing regulatory frameworks or taking other actions concerning algorithmic tools.
- Recommendation on Artificial Intelligence, adopted by the Council of the Organisation for Economic Co-operation and Development (OECD) on May 22, 2019. This recommendation outlines principles for responsible AI governance and addresses national policy issues, such as the obligation for AI system developers to provide information on data sets, processes, and decisions made by AI systems. It also suggests that governments encourage private investment in research and development, as part of international cooperation efforts.
- Recommendation of the Committee of Ministers of the Council of Europe to Member States on the Impact of Algorithmic Systems on Human Rights, adopted on April 8, 2020. This recommendation focuses on protecting human rights in the use of algorithmic systems. It calls on member states to review their legal frameworks and policies on algorithmic systems to ensure compliance with ethical principles and human rights, as well as to widely disseminate these recommendations. The document also emphasizes that the private sector, which designs, develops, and deploys algorithmic systems, must adhere to human rights and regional and international standards.
- UNESCO Recommendation on the Ethics of Artificial Intelligence, adopted on November 23, 2021. The document provides recommendations on prohibiting mass surveillance and social scoring systems, ensuring algorithm transparency, protecting personal data, and more.
- Resolutions and recommendations of the Parliamentary Assembly of the Council of Europe, which explore the opportunities and risks AI poses to human rights, democracy, and the rule of law and endorse a range of core ethical principles that should apply to AI systems.
- The G7 Hiroshima Process International Guiding Principles for organizations developing advanced AI systems and the International Code of Conduct of the Hiroshima Process for such organizations (adopted on October 30, 2023).
- The AI Act, which establishes a general regulatory framework for AI systems within the European Union market.
The following documents were also considered during the drafting process:
- Declaration of the Heads of State and Government, made at the 4th Council of Europe Summit in Reykjavik on May 16-17, 2023. This declaration established new standards for protecting human rights both online and offline, particularly regarding AI. The summit emphasized the importance of safeguarding human rights in the context of AI development, creating legal and regulatory frameworks aligned with democratic values, and implementing transparent and accountable algorithms. Leaders expressed support for international cooperation to ensure responsible AI development and highlighted the importance of education and training in this area.
- G7 Leaders’ Statements on the Hiroshima AI Process, issued on October 30 and December 6, 2023, stressed the importance of safe, responsible, and ethical AI development and the need for international cooperation to establish global standards and practices for AI.
- The Bletchley Declaration, adopted in November 2023. This document aims to enhance international cooperation in AI safety research by promoting the safe design, development, deployment, and use of AI systems. It covers public services such as education, healthcare, food security, science, energy, biodiversity, and climate. The declaration lists AI-related risks, and its signatories identified the creation of shared principles and codes of conduct as essential for ensuring AI’s safe functioning. A similar approach is reflected in the Artificial Intelligence Act (AI Act) regarding the development of codes of conduct. The declaration adopts a risk-based approach similar to the Convention and AI Act in terms of accountability: the higher the risk, the greater the responsibility placed on the AI system developer. Additionally, participants must ensure the transparency and accountability of their monitoring, measurement, and vulnerability mitigation plans. You can read more about the Bletchley Declaration here.
To Whom Does the Convention Apply?
The provisions of the Convention are mandatory for implementation into the national legislation of the Council of Europe member states, as well as for those non-member states that have ratified the Convention, and for the European Union.
The Convention establishes rules for the functioning of AI systems throughout their lifecycle to protect fundamental human rights and democratic processes. It covers the use of AI systems in both the public and private sectors.
Before delving into the details of the Convention, it is important to clarify how the document defines “artificial intelligence system.”
This concept is defined in Article 2 of the Convention: an “artificial intelligence system” is a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs, such as predictions, content, recommendations, or decisions, that may influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.
This definition provides the Parties to the Convention (hereinafter “the Parties”) with a shared understanding of what constitutes AI systems. However, the Parties may further specify this definition within their national legal frameworks. This allows for additional legal certainty and precision without limiting the scope of application.
Additionally, it is essential to understand that the “lifecycle of an artificial intelligence system” includes the design, development, testing, deployment, operation, monitoring, evaluation, and improvement of the AI system.
Scope of the Convention
The Convention covers the lifecycle of AI systems operating in the public sector, used either by public authorities or private entities acting on their behalf, as well as in the private sector.
Any activities delegated by the state to private companies must also comply with the standards of the Convention, for example, when a private enterprise handles procurement or contracting related to the state’s use of AI systems.
In the private sector, the Convention gives the Parties flexibility in how they fulfill its provisions. They may either apply the requirements set out in Chapters II-VI of the Convention directly to the activities of private actors or take other appropriate measures to fulfill the Convention’s obligations. In the latter case, the Party must submit a declaration to the Secretary General of the Council of Europe specifying how it will fulfill this obligation.
These provisions reflect a somewhat lighter regulatory regime for the private sector compared to the public sector. The Parliamentary Assembly of the Council of Europe (PACE) has expressed concern about the potential lowering of human rights protection standards due to the differentiated approach applied in the Convention to regulating the public and private sectors.
PACE specifically pointed out that insufficiently uniform application of the Convention’s provisions to private entities may create significant loopholes in its implementation, which could undermine the Convention’s effectiveness and lead to inequality in the protection of human rights between public authorities and the private sector.
The different levels of regulation for public and private sector activities under the Convention do pose potential risks. They could lead to unequal protection of human rights from the impact of AI systems, especially when private sector actions significantly influence societal processes. A case in point is Cambridge Analytica, which used data obtained from Facebook to influence the 2016 U.S. presidential election. The data was harvested through the “This Is Your Digital Life” app, which users perceived as a harmless personality quiz. As a result, data from up to 87 million users was collected and later used in election campaigns to influence voter behavior. You can read more about this here.
Another notable example of human rights violations through automated data processing was the Amazon hiring system scandal. The system, designed to assist in candidate selection using AI, was later found to discriminate against women by downgrading their résumés for technical positions.
However, there is another important aspect of this process. A flexible approach may encourage broader adoption of the Convention in countries where the private sector plays a significant role. Such an approach could be more appealing to states that do not want to overly restrict their technological development with strict regulations but still seek to ensure human rights protection by complying with the Convention’s provisions and attracting investments.
The Parties are not obligated to apply the Convention’s provisions to matters related to national security, provided such activities are conducted in accordance with applicable international law.
This provision aligns with Article 1, d of the Statute of the Council of Europe (ETS No. 1), which states that “matters relating to national defense are not within the scope of the Council of Europe’s competence.”
Scientific research activities are also not covered by the scope of the Convention. However, such activities must comply with human rights protection norms, national legislation, and recognized ethical and professional research standards. For example, requirements to reduce negative environmental impacts as outlined in the UNESCO Recommendation on the Ethics of Artificial Intelligence.
Artificial Intelligence Act
To ensure the development and application of reliable and safe AI systems in both the private and public sectors within the EU market, the European Union initiated work on the AI Act, which sets a global standard for regulating AI systems. The AI Act follows a risk-based approach: the higher the potential risk of an AI system to society, the stricter the rules that apply to it.
The AI Act does not cover systems used exclusively for military, defense, or research purposes. With the adoption of the AI Act, EU legislators aimed to emphasize the need for AI systems to operate in a manner that ensures transparency, accountability, and trust in new technologies.
Risk Levels of AI Systems According to the AI Act
The AI Act establishes strict regulatory frameworks for high-risk AI systems. Therefore, AI companies will soon need to determine whether their systems fall under the high-risk category to understand the requirements they must meet.
As mentioned, the AI Act regulates AI systems according to their risk level. Under the AI Act, AI systems fall into the following categories (a minimal sketch of these tiers follows the list):
- Prohibited AI systems (AI systems with unacceptable risk);
- High-risk AI systems;
- Limited-risk AI systems, such as general-purpose AI systems;
- Minimal-risk AI systems.
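To make the tiered structure concrete, the following minimal Python sketch models the four categories as an enumeration and assigns a tier from a purely illustrative set of prohibited practices and high-risk areas. The keyword sets and the `classify` helper are our own assumptions for illustration; the authoritative scope is defined by the AI Act itself (notably Article 5 and Annex III), not by this sketch.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the AI Act (labels are illustrative)."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical, non-exhaustive keyword sets; the binding lists live in the Act itself.
PROHIBITED_PRACTICES = {"social scoring", "workplace emotion recognition"}
HIGH_RISK_AREAS = {"critical infrastructure", "education", "employment",
                   "essential services", "law enforcement", "migration", "justice"}

def classify(practice: str, area: str) -> RiskTier:
    """Roughly assign a risk tier from a system's practice and area of use.
    Transparency-related (limited-risk) rules are omitted for brevity."""
    if practice in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

if __name__ == "__main__":
    print(classify("candidate ranking", "employment"))          # RiskTier.HIGH
    print(classify("social scoring", "public administration"))  # RiskTier.UNACCEPTABLE
```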
High-risk AI systems are permitted but must comply with a list of requirements and obligations to operate in the EU market.
High-risk AI systems include those used in the following areas:
- Critical infrastructure, such as water, gas, heating, and electricity supply;
- Education and training, such as monitoring and detecting prohibited behavior during testing;
- Employment, such as AI systems used for recruitment;
- Essential private and public services, such as the use of AI systems in banking or healthcare;
- Certain law enforcement systems, such as assessing evidence reliability during criminal investigations or evaluating the risk of crime or recidivism;
- Migration management and border-related matters, such as processing asylum, visa, or residency permit applications;
- Justice and democratic processes, such as AI systems that may impact elections.
High-risk AI systems are subject to a range of strict requirements, outlined in the subsections below: developers must assess and mitigate risks, maintain usage logs, ensure transparency and accuracy, and implement human oversight.
Training Data in High-Risk AI Systems
High-risk AI systems that are trained on data must be developed using training, validation, and testing datasets that meet quality criteria. These datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors (Article 10(3) of the AI Act).
Datasets must also be created with regard to the context and environment in which the AI system will be used, including the behavioral or functional setting within which it is intended to operate (Article 10(4) of the AI Act). Additionally, datasets must be examined to identify and mitigate possible biases.
The focus on data quality and representativeness in the AI Act is crucial, as it can help AI systems function more fairly and effectively. The case of Amazon, previously mentioned in relation to AI bias, serves as an illustrative example. The system was trained on résumés received over a decade, most of which came from men, reflecting the gender imbalance in the tech industry. As a result, the algorithm learned to favor male résumés for technical roles.
This case highlights the importance of avoiding biases in datasets used to train AI systems and demonstrates why such requirements in the AI Act are critical for high-risk AI systems.
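As an illustration of what a basic representativeness check could look like in practice, the sketch below computes each group’s share of a hypothetical training dataset for a sensitive attribute and flags groups that fall below a chosen threshold. The column name, threshold, and data are assumptions made for this example; the AI Act does not prescribe any particular statistical test.

```python
from collections import Counter

def group_shares(records, attribute):
    """Return each group's share of the dataset for the given attribute."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(shares, threshold=0.3):
    """Flag groups whose share falls below an illustrative threshold."""
    return [group for group, share in shares.items() if share < threshold]

# Hypothetical hiring dataset mirroring the Amazon example: heavily skewed toward men.
resumes = [{"gender": "male"}] * 900 + [{"gender": "female"}] * 100

shares = group_shares(resumes, "gender")
print(shares)                         # {'male': 0.9, 'female': 0.1}
print(flag_underrepresented(shares))  # ['female'] -> rebalance or review the dataset
```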
Technical Documentation for High-Risk AI Systems
Developers of high-risk AI systems must draw up technical documentation before the system is placed on the market or put into service and must keep it up to date (Article 11(1) of the AI Act).
Providing technical documentation plays a crucial role in ensuring that competent authorities have all the information they need about the AI system and can assess its compliance with the requirements. It also helps reduce the risk of negative consequences from deploying AI systems whose technical documentation was never submitted, especially where such a system fails to meet the Act’s requirements.
Record-Keeping
Developers of high-risk AI systems are required to implement technical measures for the automatic logging of events throughout the system’s entire lifecycle. This is necessary to ensure an adequate level of traceability of its functioning (Article 12). Furthermore, certain high-risk AI systems must record the duration of each system use, including the date and time of the start and end of each session, as well as log input data.
These requirements ensure transparency and the possibility of auditing high-risk AI systems, which are critically important for guaranteeing their safe and effective use.
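A minimal sketch of what such automatic event logging could look like is shown below, assuming a simple JSON-lines log file and a placeholder model call. These implementation details are our assumptions: Article 12 specifies what must be recorded (such as the start and end of each period of use and the input data), not how the log is built.

```python
import json
import uuid
from datetime import datetime, timezone

LOG_FILE = "ai_system_events.jsonl"  # illustrative log location

def log_event(event: dict) -> None:
    """Append a timestamped event record to the log file (one JSON object per line)."""
    event["timestamp"] = datetime.now(timezone.utc).isoformat()
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")

def run_session(input_data: str) -> str:
    """Run one use of the system, logging session start, the input data, and session end."""
    session_id = str(uuid.uuid4())
    log_event({"session": session_id, "event": "start"})
    log_event({"session": session_id, "event": "input", "data": input_data})
    result = input_data.upper()  # placeholder for the actual model call
    log_event({"session": session_id, "event": "end"})
    return result

if __name__ == "__main__":
    run_session("loan application #123")
```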
Transparency and Provision of Information on AI Systems
The AI Act also emphasizes the importance of transparency and informing users of high-risk AI systems. These systems must be designed to provide sufficient transparency in their operation, allowing users to correctly interpret the results of their activities. The systems must be accompanied by instructions with clear, concise, and accessible information in an appropriate digital or other format (Article 13, AI Act).
The instructions should include the following information:
- The identity and contact details of the provider and, where applicable, their authorized representative.
- The characteristics, capabilities, and limitations of the high-risk AI system, including its intended purpose and level of accuracy.
Additionally, information must be provided regarding human oversight, the resources required for the system’s operation, its expected operational lifetime, maintenance, and updates.
Comprehensive information about the AI system allows users and regulators to ensure its responsible use. This is critically important in areas where AI-driven decisions can have a significant impact on people’s lives and well-being, such as medical diagnosis or lending. For example, in AI systems used for disease diagnosis, it is vital to have clear guidelines on the system’s accuracy, limitations, and conditions under which its results may be inaccurate. This is necessary to minimize risks and avoid diagnostic errors that could affect patient treatment.
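The kind of information Article 13 expects to accompany a high-risk system can be collected in a simple structured record. The sketch below uses a Python dataclass with illustrative field names and example values; they are our assumptions for a hypothetical diagnostic system, not a format mandated by the Act.

```python
from dataclasses import dataclass, field

@dataclass
class InstructionsForUse:
    """Structured summary of the information expected to accompany a high-risk system.
    Field names and example values are illustrative, not a format mandated by the Act."""
    provider: str
    contact: str
    intended_purpose: str
    accuracy: str
    known_limitations: list = field(default_factory=list)
    human_oversight_measures: list = field(default_factory=list)
    expected_lifetime: str = ""
    maintenance: str = ""

example = InstructionsForUse(
    provider="Example Diagnostics Ltd.",           # hypothetical provider
    contact="compliance@example-diagnostics.eu",   # hypothetical contact
    intended_purpose="Decision support for skin-lesion triage",
    accuracy="92% sensitivity on an internal validation set",
    known_limitations=["Not validated for pediatric patients"],
    human_oversight_measures=["Final diagnosis confirmed by a clinician"],
    expected_lifetime="3 years",
    maintenance="Quarterly model and documentation review",
)
print(example.intended_purpose)
```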
Human Oversight
In high-risk AI systems, effective human oversight must be ensured throughout their design, development, and use (Article 14 of the AI Act).
High-risk AI systems must also meet accuracy and reliability requirements and ensure an adequate level of cybersecurity (Article 15 of the AI Act).
Chapter V of the AI Act regulates the functioning of general-purpose artificial intelligence (GPAI) models.
Obligations for GPAI Developers:
- Regularly update the model’s technical documentation (training processes, testing, and evaluation results) in accordance with the AI Act requirements.
- Create and regularly update information for other AI system providers planning to integrate GPAI into their systems. This documentation must help users understand the capabilities and limitations of GPAI.
- Develop a policy for compliance with EU copyright and related rights legislation.
- Draw up and make publicly available a sufficiently detailed summary of the content used to train the GPAI model (a minimal sketch of such a summary follows this list).
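By way of illustration, such a summary of training content might be generated from a provider’s internal records along the lines of the sketch below. The fields, data sources, and shares shown are hypothetical; the actual format of the public summary will follow the template foreseen under the AI Act, not this sketch.

```python
import json

# Hypothetical internal record of the data sources used to train a GPAI model.
training_sources = [
    {"name": "Public web crawl", "type": "text", "share_of_tokens": 0.70},
    {"name": "Licensed news archive", "type": "text", "share_of_tokens": 0.20},
    {"name": "Open-source code repositories", "type": "code", "share_of_tokens": 0.10},
]

def training_content_summary(sources):
    """Aggregate internal records into a publishable summary of training content."""
    return {
        "number_of_sources": len(sources),
        "data_types": sorted({s["type"] for s in sources}),
        "sources": [
            {"name": s["name"], "approximate_share": s["share_of_tokens"]}
            for s in sources
        ],
    }

print(json.dumps(training_content_summary(training_sources), indent=2))
```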
Additional obligations apply to GPAI models that pose systemic risks. To demonstrate compliance with their obligations, providers of such models are encouraged to adhere to codes of practice under the AI Act; those unwilling to do so must demonstrate alternative means of compliance with the AI Act.
Under the AI Act, citizens will have the right to file complaints about AI systems and receive explanations regarding decisions made by high-risk AI systems that affect their rights.
Before placing a General-Purpose AI model on the EU market, providers based in third countries must appoint an authorized representative within the EU by written mandate.
Unacceptable Risk AI Systems
These include biometric categorization systems that infer sensitive characteristics, systems that exploit the vulnerabilities of individuals or groups owing to specific characteristics in ways that cause significant harm, social scoring systems, emotion recognition systems in workplaces or educational institutions, and others.
However, AI-based emotion recognition may be allowed in specific contexts where it is critical for safety and efficiency—for example, monitoring the emotional state of pilots during flights. This could include systems that track fatigue, stress, or other psychological states that may affect a pilot’s ability to safely operate the aircraft. In such cases, emotion recognition AI would be classified as a high-risk system.
Entry into Force and Application of the AI Act
The AI Act will take effect 20 days after its publication in the Official Journal of the EU. It will become fully applicable 24 months after its entry into force, with the following exceptions:
- The ban on prohibited AI systems will apply 6 months after the Act enters into force.
- Provisions on codes of practice will apply after 9 months.
- Rules for General-Purpose AI, including governance, will apply after 12 months.
- Obligations for high-risk systems will apply after 36 months.
AI Regulation in Ukraine
AI regulation in Ukraine is actively evolving. In October 2023, the Ministry of Digital Transformation of Ukraine published a roadmap outlining a phased approach to AI regulation.
The first phase involves preparing businesses for future regulatory requirements, including participation in HUDERIA, developing recommendations for various AI application areas, and signing voluntary codes of conduct. A key element is the “White Paper”, intended to familiarize businesses and citizens with the regulatory approach and future implementation stages.
The second phase includes the adoption of legislation similar to the AI Act, which will create legal regimes aligned with the European Union and simplify cooperation with European partners.
This approach is significant in the context of Ukraine’s European integration efforts. It helps reconcile the interests of key stakeholders and strike a balance between business needs and the protection of citizens’ rights.
Conclusions
The AI Act and the Convention together form a comprehensive regulatory framework that public and private entities must consider when designing, testing, training, and implementing AI systems. Effective compliance requires a deep understanding of both legal and technical aspects.
While the Convention opens new horizons for AI regulation, it also raises concerns about the full inclusion of the private sector. This underscores the need for close monitoring of national declarations submitted by future Parties to the Convention.
During the drafting process, many discussions revealed different perspectives and approaches to AI regulation. These debates played a crucial role in shaping the final provisions of the legislation, reflecting compromises and consensus among stakeholders.
Moreover, in the context of the Convention’s regulation of the private sector, states and civil society representatives must monitor and assess how various countries adapt its provisions at the national level. This should include verifying how states formulate their obligations in the AI sphere and whether this is done in line with the Convention’s objectives.
Transparency and openness are key to ensuring that AI regulation promotes the common good and protects human rights. Active participation by the expert community in projects like HUDERIA is also critical to ensuring the ethical functioning of AI systems.
With the adoption of regulatory acts, monitoring and analyzing how different countries implement these frameworks in their national systems will be essential—especially regarding private sector regulation under the AI Act and its implementation in Ukraine.
It is important to note that the AI Act is just one part of a broader legislative framework for AI. While it establishes basic principles and requirements for the safety and reliability of AI systems, other regulatory acts addressing data protection, copyright, and ethical issues also influence AI development and use.
Today, the world is at a stage where legal regulation and technological development are both progressing at remarkable speed. Modern technologies present new challenges for lawyers at the international and national levels, and artificial intelligence has not only impressed the world with its capabilities but also sparked new ideas in the legal sphere.
The regulation of AI and the protection of human rights are becoming increasingly important in the context of rapid technological progress. The AI Act and the Convention aim to strike a balance between innovation and legal norms, ensuring the protection and rights of every individual. AI regulation should include mechanisms for accountability and transparency to ensure that technologies serve the common good and do not pose a threat to fundamental rights and freedoms.