On September 24, 2023, an additional event, “Artificial Intelligence and Civil Society Organizations”, was held as part of the annual Forum “Artificial Intelligence 2.0: Regulations and Work in the Times of War”. The expert discussion was organized as part of the Ukraine Civil Society Sectoral Support Activity project, implemented by the Initiative Center to Support Social Action “Ednannia” in partnership with the Ukrainian Center for Independent Political Research (UCIPR) and the Centre for Democracy and Rule of Law (CEDEM), with the sincere support of the American people through the United States Agency for International Development (USAID).
The event brought together lawyers, international experts, business representatives, and civil society leaders to discuss the impact of artificial intelligence (AI) on civil society development.
See the video broadcast of the event here
THE MAIN TOPICS FOR DISCUSSION:
- Protecting the reputations of CSO leaders against the backdrop of deepfakes and shallowfakes and the right to reply;
- International and Ukrainian approaches to the legislative regulation of the use of CCTV cameras;
- Artificial intelligence for CSOs in interaction with partners;
- Peculiarities of regulating copyright in works created by ChatGPT in the context of implementing donor-funded projects, and other issues.
The future is already here: artificial intelligence tools have already become our reality. Experts note, however, that truly general artificial intelligence can be expected only in approximately 30 years. At its current stage of evolution, AI cannot operate without humans who “validate” its results.
There is no doubt that AI should be used in the work of NGOs. Right now, we should learn to use AI tools responsibly and reasonably, and shape a vision of what society in general, and civil society in particular, will look like amid the rapid development and growing diversity of technologies. Snapchat, deepfakes, shallowfakes, digital fundraising, content creation, and CCTV cameras with facial recognition are just the tip of the iceberg of AI tools that can affect the public sector.
It is important for CSOs to recognize opportunities in time and integrate them into their operations, and to identify potential threats and develop mechanisms to prevent them. More importantly, they should think about what legislation should be in place to regulate all this in a CSO-friendly way.
The discussion during the event therefore revolved around helping civil society organizations get their bearings in time, whether to simplify their work with the help of AI or to strengthen their digital resilience.
“Artificial intelligence is both about opportunities and risks for Ukrainian civil society organizations. It is also important to raise the issue of legal regulation: what it is like today, what it should be like tomorrow,” Olesia Kholopik, CEDEM Director.
“We will no longer be able to live without artificial intelligence technologies, we just need to find the right way to live with them,” Maria Heletii, Deputy Executive Director of ISAR Ednannia.
“Challenges have been evolving for years, and we really need to think about where the ethical boundary lies and where the legal one does. Just a few years ago, AI was not mentioned in legal acts, but the situation is starting to change,” Ihor Rozkladai, Chief Media Lawyer, Deputy Director at CEDEM.
Ihor reviewed the main regulatory acts that, to some extent, govern the use of AI: the Ministry of Digital Transformation has approved a dedicated strategy, and there is a new version of the Law of Ukraine “On Copyright and Related Rights”. However, the expert emphasized that society cannot yet imagine the scale of AI’s impact on privacy, particularly when law enforcement agencies use facial recognition cameras (for example, during peaceful assemblies and protests), since abuse of personal data by law enforcement officers remains possible.
The expert analyzed deepfakes and shallowfakes in more detail and looked at cases of both positive and negative use of these technologies.
Thus, while such fakes can ruin (or build) the reputations of politicians and public figures, they are also of great value to the art and creative industries.
When reputation-damaging deepfakes and shallowfakes spread, the only currently effective way to deal with the consequences is an appropriate response in crisis-communication mode. The right to reply, or legal defense in the traditional sense, cannot be exercised effectively: there is no guarantee that the information refuting a fake will be seen by the same audience that saw the fake.
The search for legal solutions should continue. In particular, legislative barriers should be established against profiling that violates human rights. This requires changes to personal data protection legislation, as well as regulation of facial recognition devices. The expert advised listeners to strengthen their culture of handling personal data.
Francesca Fanucci, ECNL’s Senior Legal Advisor, spoke about the legal and political standards for the use of cameras with an AI component (Council of Europe, OSCE/ODIHR, EU, Draft Regulation on Artificial Intelligence), as well as derogations from general norms during martial law and states of emergency.
The expert explained in detail how facial recognition technology (which includes identification, verification, and classification) works – in real time and retrospectively.
According to her, the main risks are “false positives” (the system “finds” the wrong person), the use of the collected data for other purposes, and bias in the results.
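The distinction between verification (1:1 matching) and identification (1:N search), and how a loose matching threshold produces “false positives”, can be sketched with toy cosine-similarity matching. All embeddings, names, and threshold values below are illustrative inventions, not taken from any real system:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(probe, reference, threshold=0.8):
    """Verification (1:1): does the probe match one claimed identity?"""
    return cosine_similarity(probe, reference) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification (1:N): search a gallery for the best match above the threshold."""
    best_name, best_score = None, threshold
    for name, reference in gallery.items():
        score = cosine_similarity(probe, reference)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name  # None means "no match found"

# Toy 3-dimensional "embeddings" (real systems use hundreds of dimensions).
gallery = {"alice": [0.9, 0.1, 0.0], "bob": [0.0, 0.9, 0.2]}
probe = [0.85, 0.2, 0.05]  # close to "alice"

print(verify(probe, gallery["alice"]))   # True
print(identify(probe, gallery))          # alice
# Lowering the threshold raises the false-positive risk: this unrelated
# probe now "matches" bob even though it belongs to neither person.
print(identify([0.5, 0.6, 0.3], gallery, threshold=0.5))  # bob
```

The last call illustrates the risk she described: with a permissive threshold the system confidently “finds” the wrong person, which is exactly why accuracy guarantees and legal safeguards matter.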
In addition, facial recognition systems can have a chilling effect: people may avoid public events so that information about them is not collected.
From a legal standpoint, the main risks specifically of CCTV cameras that recognize faces are as follows: violation of the right to privacy, data protection, freedom of speech, expression and peaceful assembly, non-discrimination, right to dignity, right to a human decision, right to appeal/access to an effective remedy.
The expert talked about specific court decisions in human rights cases that arose when the government or local authorities abused AI technologies.
Tetiana Avdieieva, a lawyer at the Digital Security Laboratory, spoke about the challenges and opportunities facing the public sector in the context of war, including the use of cameras to identify war criminals and the proportionality of such use.
She argued that even in wartime, minimum standards are needed at the legislative level: they will help prevent abuses and violations of human rights. Minimum technical standards are becoming especially relevant now, to protect video surveillance systems from hacker attacks and from data “leaks” to the enemy.
According to the expert, there is simply no national legislation regulating these issues. The exception is the Concept of Artificial Intelligence Development, which mentions human rights three or four times. “Accordingly, there are neither technical standards for AI technologies that affect human rights, nor legal safeguards, and furthermore, there is no strict mandatory regulation.”
The expert also cited the findings of an analytical study previously conducted by CEDEM on the legality of using CCTV cameras. CSOs should therefore work now to eliminate these risks and to introduce appropriate legislation and procedures.
Veronika Boiko, Head of Social Division at YouControl and an expert of the Open Data Association, analyzed the possibilities of using AI in the work of CSOs. She explained why YouControl currently considers AI elements unsuitable for proper and accurate verification of counterparties, identifying the key factors behind this “unreadiness”.
Next, the expert shared practical advice on using artificial intelligence tools (specifically for content generation) to improve communication with partners and counterparties. According to an internal survey, 20% of the IT company’s staff use AI tools (in particular for recruiting, writing grant applications, preparing content, organizing events, writing code, creating product strategies, communicating with partners, and in trade and law), so the expert shared some tips during the discussion.
In addition, the speaker reviewed the tools her team members have used or tested, and identified six main applications of AI that have proven effective for them: preparing content for articles, presentations, and publications; correspondence with partners; generating applications, tables, and checklists; coding elementary functions; and searching for new ideas.
Olha Petriv, a CEDEM lawyer, spoke about how AI speeds up the search for information about various regulatory and legal acts and how it can speed up the process of developing legislation.
So, when generating AI works for the implementation of donor-funded projects, pay attention to the following: avoid problems with plagiarism, and protect copyright and related rights in the content. In particular, this can be done with software that watermarks visual materials, preventing AI from using these works for training.
The expert emphasized that not all materials are protected from AI; therefore, in all cases a human should check on what basis and from which sources the AI created a particular product. The expert also commented on the proposed moratorium on ChatGPT, which has divided people into two camps.
In conclusion, it is worth noting that the worst has not yet happened: AI is currently unable to replace humans, because its output needs to be carefully checked for reliability and accuracy.
At the same time, the sector needs to pay close attention to the legislative regulation of AI elements and tools: to ensure respect for human rights, to regulate the ethical and procedural aspects of technology use, and to protect personal data and information critical for democracy and defense capability (regardless of war or crisis).
Among other things, CEDEM is preparing informational materials to raise awareness of AI and its potential as part of the Ukraine Civil Society Sectoral Support Activity.