Rights and freedoms in digital reality: what was discussed at the CEDEM forum on artificial intelligence

July 26, 2021

100+ participants online and offline and four thematic blocks on artificial intelligence: from its general impact on human rights to its application in court cases. This was the forum “Artificial Intelligence: Rights and Freedoms in Digital Reality” held by the Centre for Democracy and Rule of Law on July 21.

With this event, CEDEM launched a professional dialogue on artificial intelligence (AI) and its importance. After all, such systems are already used in Ukraine: in particular, analytical systems that recognize faces and objects through cameras operate in a number of cities across the country.

Read about the start of the Forum at the link. The Centre for Democracy has also collected key theses from AI experts in each of the blocks.

Session I: General influence of artificial intelligence on human rights

The first session of the Forum on Artificial Intelligence was one small step for mankind, one giant leap for the Ukrainian experts. During this session, participants discussed how artificial intelligence should be treated, whether there are certain risks in its development and which areas should be considered the most and least risky. The discussion was moderated by Tetiana Avdieieva, media lawyer and CEDEM artificial intelligence project manager.

Vitalii Honcharuk, founder of Augmented Pixels and chairman of the expert committee on artificial intelligence at the Ministry of Digital Transformation of Ukraine, stressed that artificial intelligence should not be perceived as a “holy spirit” or something supernatural – it is an ordinary technology. The expert noted that “such technologies are created by specific people and specific people are responsible for them”.

Ihor Rozkladai, CEDEM Deputy Director and Chief Media Expert, emphasized the inconspicuous but very significant impact of artificial intelligence on every process of our lives – vivid examples are deepfakes and the ability to change an actor’s voice in the film industry. Ihor stressed that “technology is neutral, and it is important how it is used and who is responsible for it”. That is, artificial intelligence is rather a tool for solving certain problems.

Artificial intelligence can have a significant impact regardless of how complex or simple its algorithms are. Yevhenii Aizenberh, head of the human rights project in the field of artificial intelligence at Delft University of Technology, stressed that artificial intelligence can often replace humans, so we should carefully consider whether it is appropriate to use the technology in each situation and whether it can meet our needs fully and properly. Yevhenii also emphasized the importance of determining the level of human involvement in the process of using artificial intelligence: “one should study the social context before making certain decisions”.

However, it is not always possible to consider human rights at the very beginning of technology development. Vitalii Moroz, head of the new media program at Internews Ukraine, emphasized that “when someone creates a startup, they think about bringing some benefit to people rather than about the risks”. It is therefore important that human rights are not forgotten later on, and that artificial intelligence has as few hidden effects as possible, the kind that neither users nor developers even think about.

Among the safest and most promising areas, the speakers singled out education, healthcare, agriculture and anti-corruption, while the military-industrial sphere, justice and social media were seen as the most risky. The influence of algorithms on human autonomy and self-representation is also considered dangerous. Finally, the speakers agreed that fundamental changes in the development of artificial intelligence technologies can hardly be expected in the next 5-7 years.

Session II: The need for regulation: law, ethics or their successful combination

During the second session of the forum, participants discussed the feasibility and nature of artificial intelligence regulation. In particular, they discussed the advantages and disadvantages of national regulation, international acts, as well as ethical norms in this area. 

According to Oleksandr Kompaniiets, Director of the Department of Digital Economy of the Ministry of Finance, artificial intelligence is essentially a neutral technology. What matters is how it will be used: its use will determine whether AI becomes a universal evil or a universal good.

According to other experts, when it comes to the threat to humanity, one should not generalize by treating AI technology itself as the threat: everything depends on the specific areas of application, languages, societies, policies, and so on. Olha Kudina, Junior Professor at Delft University of Technology, Suazik Peniko, Head of Capacity Development at Etalab, and Vidushi Marda, Senior Head of Digital Team Programs at Article 19, agreed with this view.

On the issue of ethical regulation of artificial intelligence, Olha Kudina noted that the European community builds its recommendations on the principles of respect for and observance of human rights. That is why the field of artificial intelligence is permeated with ethical standards and rules, which also matter at the corporate level and in organizations’ communication with their users.

Maksym Dvorovyi, lawyer at the Digital Security Laboratory, added that although the field of AI is permeated with ethical issues, the industry cannot do without internal regulation and mandatory rules.

ECNL Senior Legal Advisor Francesca Fanucci also advocated for the regulation of artificial intelligence: “We need to dispel the myth that regulation hinders innovation. We should not see it as black and white. Regulation is needed if it makes artificial intelligence safe, reliable and not violating human rights.” At the Council of Europe level, regulatory frameworks, both binding and voluntary, can be drafted.

Then the experts talked about the features of international and national regulation of artificial intelligence. 

According to Suazik Peniko, even though countries differ in the pace and level of AI development, international regulation of this area is needed. Without it, the countries that have achieved greater success would be able to impose their standards on everyone else; international standards are needed to maintain equality.

Oleksandr Kompaniiets, in turn, noted that Ukraine is actively participating in the global discussion on artificial intelligence development. In particular, the development concept proposed by Ukraine has been taken into account by international organizations and is now being studied by both the Council of Europe and UNESCO.

“As for the legislation. Do we need to develop a separate law on artificial intelligence?… There is a discussion here, and it is taking place not only within the country, but also on international platforms… I will share some insider information: the bloc of countries that consider strict regulation of AI technology to be inexpedient, at least at this stage, includes such leading countries in this field as Japan, the United States, Canada, Israel…” added Kompaniiets.

Maksym Dvorovyi characterizes healthy regulation of the field of artificial intelligence with the word “multistakeholderism”. That is, it must come from both the government and businesses, and a dialogue with NGOs is also necessary.

As for whether national or international law should take precedence in the field of AI, the best option could be a combination of the two, said Suazik Peniko, adding that this is also a question for discussion. “Artificial intelligence is more of an international issue than a national one, because the Internet, for example, is not limited to a certain territory. So here is the question: how do we combine these things? It is worth considering this problem at different levels.”

Session III: Face recognition and surveillance systems and human rights

The third session of the Forum focused on face recognition and surveillance systems. During this session, experts tried to find out how risky it is to install cameras with face recognition function everywhere, whether this technology can be used at all and under what conditions. The discussion was moderated by Ihor Rozkladai, Deputy Director and Chief Media Expert at CEDEM.

Marlena Wisniak, ECNL’s Senior Legal Consultant, said that her organization, along with many other civil society representatives, advocated a total ban on face recognition cameras. She stressed that “people should not be afraid to go to a protest or hide in spaces where they will not be seen” – such surveillance has a chilling effect on rights and freedoms. It is unrealistic to mitigate such damage, so there should be no precedent or room for human rights abuses. The expert noted, however, that this position applies only to cameras with the face recognition function, while in other areas the potential use of such systems still has a right to exist.

Vladyslav Vlasiuk, managing partner of EPravo, called the cameras “an inevitable evil” whose presence in our lives we will have to accept, because in addition to the obvious risks we should remember their advantages: “cameras will always be there, many will not like them, but there will always be arguments about security, peace, investigation opportunities”. According to Vladyslav, a face recognition algorithm is neither good nor bad, and the algorithm itself cannot be banned. The question, therefore, is who uses such systems and how, not whether they should be banned altogether.

There is no single answer; it always depends on the circumstances under which the cameras are used, argued Tetiana Avdieieva, media lawyer and project manager for artificial intelligence at CEDEM. Conventional cameras can be legal, but “face recognition systems raise not only the issue of surveillance itself, but also issues of personal data storage and processing, discrimination, etc.”, and this, according to the expert, makes them high-risk systems. At the same time, this does not mean that face recognition systems should be banned everywhere – they can be used on social media or to unlock a phone.

Jason Hsu, Chief Initiative Director at Taiwan AI Labs, shared his experience of working in the Taiwanese Parliament and developing personal data protection policies. He stressed that artificial intelligence development should be regulated where personal data is concerned, because the risks are significant. The state can abuse its capabilities – in China, “the streets where protests are taking place are filled with cameras”, which is wrong. Technology should not be out of control; it is necessary to establish basic rules of the game.

In conclusion, the experts agreed that while it is impossible to completely and comprehensively ban cameras with face recognition systems, such technologies should be strictly regulated. This will help reduce human rights risks and protect citizens from abuse by states.

Session IV: Justice and artificial intelligence

During the fourth session, the experts discussed whether artificial intelligence technologies can be used in justice, the risks of their use, as well as public policy in this area. The discussion was moderated by Vasyl Babych, head of the multisectoral group on public policy at the Center for Economic Recovery.

Serhii Chornutskyi, Deputy Head of the State Judicial Administration, noted that “artificial intelligence can be used in the justice system, not directly in the administration of justice, but as an auxiliary tool to assist judges”. That is, artificial intelligence does not replace judges, but only simplifies the judicial process and assists judges in making decisions, especially when there is established case law.

According to Ivan Piatak, CEDEM’s legal adviser to the “Chesno. Filter the Judiciary!” campaign, the biggest challenge for artificial intelligence in the field of justice is making impartial and unbiased decisions. “Artificial intelligence is created by people who have their own prejudices, and this is the reason for unjust decisions. Such cases exist in the United States, where systems that assess the risk of reoffending by a person who has already committed a crime are being criticized. There is also the issue of racial bias or prejudice against a person on the basis of his or her property status.”

More pitfalls are described in the Ethical Charter on the Use of Artificial Intelligence in the Judicial Systems and Their Environment. Due to the risks, artificial intelligence is currently used only to predict decisions based on existing practice, and decision-making is left to judges.

Regarding public policy in this area, the initiative comes from the private sector. Roman Kuibida, Deputy Chairman of the Board of the Centre of Policy and Legal Reform, pointed out that it is private companies such as LIGA:ZAKON and Court on a Palm that offer systems for predicting court decisions. “All of these are very useful things that can give potential plaintiffs information: what they can count on, given the practice. It can also simplify the process for judges,” stressed Roman.

Serhii Suchenko, Legal Adviser at the USAID New Justice Program, agreed that the state would not play a leading role in the development of artificial intelligence in the field of justice, and that cooperation between the state and businesses is needed. “In order for systems based on algorithmic data processing to work, the data should be available and of proper quality,” said Serhii. “Synergy is needed between businesses and the state: on the one hand, the state provides information in the form that is needed and convenient, and on the other, businesses provide the opportunity to use their solutions, perhaps on preferential terms.”

Finally, the experts noted that the introduction of artificial intelligence in Ukrainian courts is possible only after digitalization (replacing paper proceedings with electronic courts) and only with proper organizational support for the courts.