As part of the “Safe City” program, more than 4,000 CCTV cameras have been deployed in the Ukrainian capital. Currently, 52 underground stations have 195 CCTV cameras, some of which are equipped with a facial recognition function – the new technology is concentrated at transfer, terminus and the busiest stations. In addition, some cameras measure the temperature of people entering the Kyiv underground. The data from the cameras is received by the police, the municipal guard and the State Emergency Service, and serves as a source of statistical information for the health department of the Kyiv City State Administration.
Key crossroads, railway stations, central streets and squares of Lviv have been equipped with video surveillance cameras as part of the “Reforms without Illusions” project, and it is also planned to install such cameras at the city's bus stations. The technology can recognize faces and vehicle registration plates, delivering the information to law enforcement agencies. The police and the Security Service of Ukraine have used 450 videos to search for persons, only 18 of which produced a positive result. The cameras operate as a single network, while access to the recordings is provided to the City Security Center Lviv Municipal Enterprise, the National Police and the Patrol Police Situation Room. ATP-1 (Motor Transport Enterprise), Lviv Municipal Enterprise “Lvivavtodor” and the Department of Transport of the Lviv City Council request data from the cameras when necessary.
In Zaporizhzhia, the “Safe City” program has likewise become a platform for installing cameras with face and vehicle registration plate recognition. Moreover, in July 2019, the city council even adopted the Regulations for the use and operation of the video surveillance system of Zaporizhzhia. The document provides, in particular, for recording the entry of persons into restricted areas, detecting overcrowding (as a violation of quarantine requirements), counting the number of people and monitoring compliance with certain traffic rules. The Regulations state that the Laws of Ukraine “On Information” and “On Personal Data Protection” were taken into account during their development. However, it later turned out that the cameras' image quality is far lower than necessary for the full performance of the tasks assigned to them. Even so, the police use this imperfect data to investigate committed crimes.
Uzhhorod has picked up the baton, purchasing cameras with facial recognition and a video server to service these devices. The technology is capable of recognizing up to 30 people at a time, has a library of 30,000 faces and an alarm function triggered by specified characteristics (persons, vehicles, vehicle registration plates). Law enforcement agencies have access to the data, the stated purpose being crime prevention.
The Vinnytsia Region is introducing the “Vezha” video analytics system, which will recognize and evaluate objects using artificial intelligence. The system brings together video streams from about 800 cameras, which the police use to monitor and quickly locate possible violators (by faces, cars and vehicle registration plates). The system is supervised by two employees: one handles day-to-day operations, while the other monitors and analyzes footage from mass events.
Currently, law enforcement agencies have access to about 19,000 cameras in 25 regions, of which more than 2,000 are equipped with analytical systems based on artificial intelligence. Such technologies are used not only in the cities mentioned above but also in Dnipro, Odesa, Chernivtsi, Chernihiv and many other locations. Police say almost 30 percent of crimes are investigated with the help of such cameras. According to the 2018 poll “Artificial Intelligence: Ukrainian Dimension”, 38.2% of Ukrainians believe that the “Smart City” system should first be implemented in the field of security and video surveillance.
However, the legality of such systems remains a key issue: whether a legal basis for their operation exists in Ukrainian legislation, the conditions and necessity of intervention in specific circumstances, the predictability of such intervention for ordinary citizens, and appropriate safeguards against abuse.
Smile, you’re on camera!
Smart cameras installed in many Ukrainian cities under “Smart City” programs not only record video, but can also recognize faces and vehicle registration plates, analyze road traffic and calculate the occupancy of public transport in line with quarantine restrictions. Video surveillance systems come in an almost endless variety, so the mechanisms of their operation and information processing can differ significantly.
In particular, the Kyiv system contains data on offenders and, if needed, is supplemented with information on missing persons (including children). When the necessary data – for example, a photo of a suspect – is uploaded to the system, the software begins to analyze video from surveillance cameras located in crowded places. According to TSN studies, the systems can identify a missing child with an accuracy of up to 95%. The description of the capital's video surveillance systems using facial recognition technologies mentions not only a module that allows searching for offenders in a live stream, but also the comparison of images against a database of offenders created by law enforcement agencies. In the underground, a camera takes 10-30 shots per second, then selects the top five and sends them to the server. The information from the devices is sent to the Data Processing Center, where it is stored for a month. The system is administered by KP “Informatika” of the Kyiv City State Administration, while the algorithms are developed by the Ministry of Internal Affairs and the Security Service of Ukraine.
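How the “top five” shots are selected is not publicly documented, but the described pipeline – capture many frames each second, keep only the best few – can be illustrated with a minimal sketch. The quality_score function below is a hypothetical stand-in (a simple sharpness proxy); a real system would likely weigh blur, face size and pose angle.

```python
import numpy as np

def quality_score(frame: np.ndarray) -> float:
    """Toy sharpness proxy: variance of pixel intensities.

    Purely illustrative - the actual selection criteria of the Kyiv
    underground system have not been disclosed.
    """
    return float(frame.var())

def select_best_shots(frames: list[np.ndarray], top_n: int = 5) -> list[np.ndarray]:
    """Out of the 10-30 frames captured per second, keep only the
    top_n highest-quality shots to send to the server."""
    return sorted(frames, key=quality_score, reverse=True)[:top_n]

# Example: one second of capture at 25 frames per second (simulated).
frames = [np.random.rand(64, 64) for _ in range(25)]
best_five = select_best_shots(frames)
```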
In general, it is important to remember that facial recognition technologies can perform different functions. Specifically, according to research by the EU Agency for Fundamental Rights, information from cameras is used to perform three main tasks (a minimal sketch of the first two operations follows the list):
- Verification involves “one-to-one” matching – comparing data from two biometric templates to check whether they belong to the same person;
- Identification involves “one-to-many” comparison – matching a person's biometric template against other templates in a database to determine whether it contains information about that person;
- Categorization involves grouping data according to certain characteristics of all available biometric templates – most often the categories include gender (although gender and sexual orientation recognition is considered a flawed and discriminatory technology), age and ethnicity, and less often hair color, eye color, etc.
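To make the first two operations concrete, here is a minimal sketch assuming faces have already been converted into fixed-length embedding vectors, as is usual in such systems; the cosine-similarity metric and the 0.6 threshold are illustrative assumptions, not details of any deployed system.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two biometric templates (embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(template_a: np.ndarray, template_b: np.ndarray,
           threshold: float = 0.6) -> bool:
    """One-to-one matching: do two templates belong to the same person?"""
    return cosine_similarity(template_a, template_b) >= threshold

def identify(probe: np.ndarray, database: dict[str, np.ndarray],
             threshold: float = 0.6):
    """One-to-many matching: return the identity of the closest template
    in the database, or None if no template clears the threshold."""
    best_id, best_score = None, threshold
    for identity, template in database.items():
        score = cosine_similarity(probe, template)
        if score >= best_score:
            best_id, best_score = identity, score
    return best_id
```

Note that both operations reduce to a similarity score compared against a designer-chosen threshold, which is exactly why the error rates discussed below depend so heavily on who sets that threshold and how.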
The technical gaps of facial recognition cameras include low image quality in low light and, accordingly, the inefficiency of such technology at certain times of the day or in poorly lit rooms. Moreover, because of the low quality of the cameras themselves, the images they capture are often too coarse to recognize when magnified. A further problem is that individuals rarely face the surveillance camera directly, instead looking down or at their phones. Thus, the overall efficiency of such technologies remains quite low, as the Lviv example confirms: only 18 out of 450 cases of using the facial recognition function led to a positive result. The lack of diverse datasets, biases introduced while labelling data or image probes, and numerous other factors also make these systems unreliable and ineffective.
Thus, the system tends to make mistakes when the data is fuzzy or analyzed incorrectly. One source of inaccuracy is aging – when an old photo of a person no longer matches their current appearance. Another pitfall is gender and racial stereotypes built into the system, causing it to malfunction. The resulting errors are known as “false positives” and “false negatives”. The first type concerns cases where the system declares a match between a person's face and the photofit of a probable perpetrator, although in reality they are completely different people. The second type occurs when a person's image actually matches a photofit in the database, but the system fails to detect that match. Since the designers and developers themselves define the admissible error rate, the systems lack transparency and reliability. The EU Agency for Fundamental Rights notes in this regard that the algorithms in such technologies do not give a clear-cut result, but only establish a percentage of similarity. In addition, attention should be paid to the number of persons passing such cameras. If the false positive rate is 8%, and the faces of 50 thousand people are analyzed in the underground during a day, the camera can be wrong about 4 thousand people. Therefore, relying solely on cameras equipped with facial recognition when searching for missing children or wanted persons is quite unsound.
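The arithmetic behind that estimate is simple enough to check directly; the back-of-the-envelope sketch below uses the 8% rate and 50,000 daily passengers cited above. Real systems quote false-positive rates per comparison against a watchlist, so this is an illustration of scale only.

```python
# Figures from the text above; illustrative only.
false_positive_rate = 0.08
faces_scanned_per_day = 50_000

expected_false_alarms = false_positive_rate * faces_scanned_per_day
print(expected_false_alarms)  # 4000.0 people wrongly flagged per day
```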
Why does the error rate matter so much? First of all, because of pre-programmed prejudice, through the prism of which the entire system operates, discriminating against already vulnerable and marginalised groups. For instance, according to numerous estimates, police often target people of colour on the basis of damaging racialised stereotypes. The same applies to migrants, people with disabilities, women, gender non-conforming persons, LGBTQ+ people, people of lower socioeconomic status and virtually any other group that can be targeted via a facial recognition system. In practice, tests conducted in the US have already shown that the error rate of facial recognition systems differs significantly according to demographic characteristics, including age, sex and country of origin. In conclusion, such systems cannot legitimately be deployed until institutional discrimination is eradicated and algorithms are trained on genuinely neutral and unbiased data.
Big Brother is still watching you…
Cameras with facial recognition functions are common in many countries. The goals of their installation, consequences, challenges and legal regulation differ significantly depending on the jurisdiction. However, assessing the practice of foreign countries, we can identify trends in the legal qualification of such technologies and the level of protection of human rights by States that use such systems.
Human Rights Watch has criticized Russia's plans to expand the use of facial recognition cameras, given the frequent breaches of privacy resulting from their application. As a reminder, the cameras were proposed to be installed in a quarter of underground trains, at the entrances to apartment buildings and in other public places to search for quarantine violators and persons qualified by the Russian Interior Ministry as criminals. Human Rights Watch experts stressed the potential for abuse of the latest technology by the Russian authorities in view of the State's regular rights violations. Activists and human rights defenders have repeatedly stressed the lack of any mechanism for legal oversight or review of surveillance system decisions. Roman Zakharov v Russia was a logical outgrowth of this debate: in that case, the European Court of Human Rights (ECtHR) found that Russian law violated the principle of adequate and effective safeguards against abuse. Information from the capital's cameras equipped with facial recognition became a commodity on the “black market” of personal data, which led to numerous lawsuits from human rights activists. They have also accused the State of using facial recognition cameras to obstruct peaceful demonstrations and rallies, and to persecute their participants afterwards. Subsequently, the activists filed a complaint to the ECtHR claiming a violation of their rights to privacy and peaceful assembly, protected by Articles 8 and 11 of the Convention respectively. Simultaneously with the application to the ECtHR, human rights activists began collecting signatures on a petition to ban cameras with the facial recognition function.
The situation in China is even more complicated. Currently, the large-scale “Sharp Eyes” project involves the installation of about 600 million cameras with facial recognition functions. Studies show that placing so many cameras across the country allows the system to scan 1.4 billion faces in one second. Responding quickly to the pandemic, China has also developed a system that can recognize masked faces. However, in China the technology is used for far more than protecting public order. In particular, using cameras and facial recognition technology, the government identifies members of the Uighur minority and persecutes and represses them; any new developments are effectively tested on members of this minority. Images and personal data of violators of rules of conduct are placed on large billboards for public condemnation. Moreover, facial recognition cameras are even used to ration toilet paper in public toilets: the faces of people who use the service are scanned and the amount of paper they take is checked. In addition, the facial recognition system is currently used to compile a citizen good standing index, which in turn is used to issue permits for travelling abroad and public activities, and to persecute the political opposition. Another Chinese practice, strongly criticised by Article 19, is emotion recognition systems, which deeply interfere with privacy and can even violate the right against self-incrimination. Undoubtedly, these measures amount not only to the collection of confidential information without the person's consent, but also to significant psychological pressure on citizens in public places.
Given that China is today one of the largest developers of facial recognition technology, video surveillance devices with such functions are often manufactured for export. However, not all states agree to cooperate with the authoritarian Chinese government. For example, in the United States, the Chinese company Hikvision was included in the sanctions list due to threats to national security and the possibility of third parties collecting data from the cameras. In addition, an American human rights organization recently addressed an open letter to President Biden urging the American government to reject the use of facial recognition technology. The main argument was that such technologies “disproportionately misidentify and misclassify people on the basis of race, gender, sexual orientation” and discriminate against members of other marginalized groups. Meanwhile, in San Francisco, California, local authorities have banned the use of facial recognition technology anywhere except crowded places such as train stations, airports and bus stations. Many other states, including Massachusetts and Oregon, have also banned such systems, while an initiative to prohibit facial recognition at the federal level is gaining popularity. Thus, in the United States there is currently a rather heated debate about the legality of cameras with the facial recognition function.
The situation is somewhat different in Britain, where the discussion around the legal regulation of such systems has reached the courts. In R (Bridges) v Chief Constable of South Wales Police and Others, the court found that the use of such cameras directly affects the exercise of a person's right to privacy and constitutes an interference with that right. Moreover, the court stressed that the use of such technologies requires substantial and detailed legal regulation, as well as compliance with the principle of proportionality: it should be prohibited to collect more information, about more people, than is necessary to achieve a legitimate aim. In another case, a British court noted that the question of who may be included in the camera database for further search and analysis is also debatable. In particular, the court stressed that the law should clearly regulate this issue, because otherwise the police would have too much discretion to interfere with private life.
International standards: old norms – new reality
What does international law say about such technologies? First of all, reference should be made to the case law of the ECtHR, which in Szabó and Vissy v Hungary recognized that the image of a person's face constitutes sensitive biometric information. A similar position is taken by the Court of Justice of the EU, which in Michael Schwarz v Stadt Bochum stressed the prohibition on disseminating such data without the consent of the individual and the need for detailed regulation of the collection of such data in public places. In addition, in S and Marper v the United Kingdom, the ECtHR emphasized that sensitive biometric data, even in exceptional cases, cannot be disseminated more widely than an individual would expect. The period for which the data are collected plays no role here: illegal collection of data even for a short period is also a violation of Article 8 of the Convention. In Peck v the United Kingdom, the police used street video surveillance cameras to identify a person who had attempted suicide; subsequently, clear images showing the individual's face appeared in the media – a local newspaper and a television channel. The ECtHR found a violation of Article 8 in the dissemination and use of such information without the consent of the person concerned.
The situation in Perry v the United Kingdom is even more interesting: there the applicant was aware of the presence of cameras, but was not warned that recordings were being made and could later be used for identification. The Court therefore found a violation of Article 8 of the Convention. In another case against Britain, Catt v the United Kingdom, the Court emphasized that for data retention to be lawful after collection, the State should develop clear procedural rules and guarantees for the disclosure, use and destruction of such information. The ECtHR also emphasized that the intention to create a database for the police should be clearly enshrined in national law and properly regulated. In Gaughran v the United Kingdom, the Court found a violation in the fact that a photograph was stored in a local database and could then be uploaded to a police database and used by a facial recognition system without any possibility of review or oversight by a special authority.
At the same time, two recent ECtHR cases, Big Brother Watch and Others v the United Kingdom and Centrum för Rättvisa v Sweden, have somewhat clarified approaches to assessing mass surveillance at the international level. In these cases, the Court found that mass data interception systems do not in themselves violate human rights, but there are a number of restrictions that governments must comply with when implementing such technologies. These include the obligation to set up an independent body to verify surveillance permits, and to oblige law enforcement agencies to assess the need for surveillance and to determine the object and the amount of information collected in each case. In case of violations, citizens should be able to challenge the independent body's permission in court, which serves as an additional guarantee and protection against abuse. In addition, the right of journalists to protect their sources (for example, when journalists meet a person for an interview in a public place) may be violated by surveillance for unauthorized purposes, which would have a “chilling effect” on freedom of expression. Similar concerns arise when facial recognition technologies are used for surveillance of demonstrations, where activists, journalists, political dissidents and opposition members may abstain from participating for fear of identification and subsequent persecution. However, one of the most important novelties of the ECtHR's practice is the obligation to take privacy into account when data obtained from such systems is transferred to foreign partners (for example, to law enforcement agencies of neighboring countries in cases of illegal border crossing). In particular, this concerns guarantees of the correct use of personal data, a procedure for appealing against illegal actions by foreign authorities, and the possibility of deleting data once the purpose of its collection has been achieved or it has become irrelevant.
The purpose of collecting biometric data also matters. For example, in Lupker and Others v the Netherlands, personal photos were used solely to identify offenders, without any possibility of using the images for other purposes. In Friedl v Austria, the Court considered the lawfulness of using a photo of the applicant's participation in a demonstration, taken to verify compliance with the rules and sanitary conditions during the public event. The ECtHR ruled that the authorities had not unduly interfered with the applicant's privacy, as they had not entered his apartment or recorded him in places where confidentiality was expected, the street being a public place. However, in that situation there was no mass and permanent recording, and the purpose of collecting the images was clearly defined in advance: to verify compliance with the necessary rules by the participants of a particular event. When cameras with facial recognition are used on a regular basis, it is unlikely that specific goals and limits on the continued use of images can be identified. Thus, the generic legitimate aim of “crime prevention and protection of public order” can neither establish the period of time for which information will be stored, nor justify collecting data on an unlimited number of persons.
Convention 108+ stipulates that the processing of data categories such as biometric data is only possible under law and with appropriate safeguards against abuse. The CAHAI feasibility study stresses that facial recognition systems may seriously endanger the rights to privacy and freedom of expression and the prohibition of discriminatory treatment; a regulatory framework for such technologies is therefore strictly required. In its guidelines, the Council of Europe emphasized that legislation for facial recognition systems should take into account the phases of the use of such technologies (including the creation of databases and applications), the areas where the technologies are used, and the level of privacy intrusion. In addition, different standards apply to law enforcement and other government agencies. The main requirements for regulations in this area are:
- Detailed explanation of the purpose of using specific data;
- Minimum guarantees of reliability and accuracy of the used algorithm;
- Determined duration of storage of used images;
- Ability to review these criteria;
- Transparency and traceability of the processes;
- Availability of precautionary measures.
According to the European Commission's proposals for the regulation of artificial intelligence, facial recognition systems that perform biometric identification are classified as risky. Their use in public places, including for the benefit of law enforcement agencies, is expressly prohibited except in strictly defined circumstances, which include the search for missing children, the prevention of terrorist activities and the search for particularly dangerous criminals. Each of these cases requires the permission of a court or other authorized body. At the same time, the European Data Protection Board has stated that biometric surveillance technologies should be banned under any circumstances, regardless of the legitimacy of the goals pursued by their application. Resolution 1604 (2008) of the Parliamentary Assembly of the Council of Europe, “Video Surveillance in Public Places”, emphasizes that everyone who enters a video surveillance area must be aware of this and have access to all recordings of their image.
In addition, Human Rights Watch noted that the government should publish statistics on the effectiveness of facial recognition systems and invite civil society experts to participate in the debate on the necessity, proportionality and legality of the use of such technologies. Other civil society initiatives go even further, calling for an absolute ban on biometric surveillance and facial recognition systems, while within the EU there is an open petition for a new law prohibiting such technologies. According to the results of the CAHAI survey, facial recognition systems top the list of the most dangerous AI systems, and information obtained from them should always be subject to human review. In short, international human rights law imposes significant restrictions on law enforcement's use of artificial intelligence systems for facial recognition.
Being one’s own producer – Ukrainian style
Ukrainian law does not explicitly treat facial images as personal data protected at the legislative level. Therefore, unlike with passport data, video surveillance system administrators have no significant incentive to protect such information. In addition, the State Service of Special Communications and Information Protection notes that this lack of personal data status means that the system certification requirement does not apply to systems collecting images of a person's face or vehicle registration plate. However, research by the Ombudsperson of Ukraine indicates that the Law of Ukraine “On Personal Data Protection” should still apply to the operation of cameras capable of recognizing faces. In particular, a recent report described numerous violations in the processing and protection of personal data, failures to notify the data subject of information collection, and the absence of consent to such processing. Such actions by administrators directly violate Article 8 of the Law, which requires that a person be informed of the mechanisms of automated processing of personal data and protected against automated decisions that have legal consequences for them (for example, the matching of a person's photo against that of an alleged offender).
Currently, cameras equipped with facial recognition technology are being installed by local authorities, yet no legal act empowers these bodies to equip cities with such systems. Even regulations approved by local councils are not based on the provisions of any law and, accordingly, cannot create legal rights and obligations either for information administrators or for the persons whose data is fed into the database. Since no document actually regulating the operation of such cameras can be identified, the question of a person's access to information about themselves in such systems is unlikely to be answered. This amounts to a violation of Ukraine's international legal obligations, and the situation can serve as a springboard for numerous applications to the ECtHR alleging violations of Article 8 of the Convention.
In addition, Ukrainian regulation desperately needs safeguards against the misuse of such technologies. Experts cite as one of the most striking examples of system insecurity the case in which a citizen of Kyrgyzstan tried to plant an explosive device on the car of Kirill Budanov, an employee of the Chief Intelligence Directorate of the Ministry of Defense. During the trial, it was established that the criminal learned the location of the car from the “Smart City” database, which he accessed through a relative who was a police officer. Another case cited in the context of absent guarantees of rights is a system operator's leak of information about the movements of the first deputy director of the State Bureau of Investigation. Thus, there are currently neither technical nor legal safeguards against abuse.
Moreover, in the above-mentioned cases of Perry, Catt and Gaughran, the ECtHR emphasized the need for data processing warnings, a clear procedure, and oversight of the facial recognition system. The Kyiv underground, for example, warns about the fact of video surveillance but fails to mention the facial recognition function and the further use of such information. Ukrainian video surveillance systems are hardly equipped with a review function, nor do they have a clear procedural guide that anyone can consult to find out what is happening with their biometric data. Ukrainian law also provides no procedures for creating and maintaining a police database. The logical conclusion is that the technology is highly likely to violate the requirements of Article 8 of the Convention.
Finally, many questions arise about the overall proportionality of interfering with human rights via cameras with facial recognition. The principle of proportionality requires applying the least restrictive yet most effective measure. In the case of such technologies, effectiveness is often highly questionable due to the numerous technical gaps that make it difficult to identify individuals with cameras and cause errors in the analysis of information from the database. Another fundamental challenge is societal inequality – institutional racism, sexism and other forms of discriminatory treatment – which is often encoded into the systems' algorithms. Thus, the probable harm from such systems currently appears greater than their effectiveness; accordingly, their use is hardly proportionate, and their existence hardly permissible in a democratic society.
Is it possible to make the illegal legal?
The lack of any specific regulation makes the procedure for collecting and processing biometric information an excessive interference with the right to privacy and other human rights. In particular, this violates the requirement of legality crystallized in the case law of the ECtHR (see Ćosić v Croatia). Moreover, in Ukraine, the entities collecting and processing biometric data do not comply even with the general rule of Article 8 of the Law of Ukraine “On Personal Data Protection”, given the absence of consent, of notification about the collection of such data, and of access to information in the system (there is no legal mechanism specifying to whom inquiries must be sent and who is obliged to answer them). Therefore, until proper legislation is developed, the use of such systems is illegal under any circumstances.
In addition, even if special regulations are developed, it is important to remember the absolute ban on mass indiscriminate surveillance, which has been repeatedly emphasized by both the UN Human Rights Committee and the ECtHR. In particular, in Gillan and Quinton v the United Kingdom, the Court emphasized that monitoring to prevent terrorist activities is illegal without a reasonable suspicion of wrongdoing or the imminent possibility of a violation. That is, surveillance may be established only over persons who have the status of a suspect, and it cannot be large-scale in crowded places such as the underground. Moreover, the duration of the surveillance and the type of information collected depend directly on the nature and gravity of the alleged offense (see Weber and Saravia v Germany). Large-scale untargeted surveillance, without an identified object and without authorization, does not meet this standard under any circumstances, in particular because of the disproportionate invasion of the privacy of many other people who are not suspected of any violation. Surveillance is particularly dangerous in the absence of safeguards such as the removal of irrelevant information, regulation of the duration of storage of biometric data, and restriction of access (access only on the basis of a decision of a court or other responsible authority). A special procedure should also be developed for the exchange of such information with foreign partners.
However, in addition to the issue of personal data protection, one should pay attention to many other problems that arise due to the use of cameras with facial recognition functions. The key ones include:
- Possibility of illegal commercial use of data (such as for targeted advertising, in particular political advertising);
- Risks of discrimination against persons due to false positives/negatives of the system, the presence of institutional discrimination in society and its conversion into system algorithms;
- Risks of illegal prosecution of persons due to incorrect matching of a person against a photofit by the system (which has already happened in practice in the US);
- Problem of the status of such information as evidence in court (in particular, due to the lack of legal regulation);
- Likelihood of a violation of the right to freedom of expression in terms of protection of journalistic sources;
- Threats to the proper exercise of the right to peaceful assembly (including the creation of a “chilling effect” for activists given fear of being prosecuted for participating in rallies, demonstrations, protests);
- Possibility of abuse and use of such systems for political persecution by the state, harassment of vulnerable and marginalized groups etc.
An alternative to cameras with a facial recognition feature today is systems that identify a missing person or a potential offender by other parameters, without face analysis: clothing, stature, gait, the presence of distinctive items, and so on. Such technologies are also being offered by Ukrainian manufacturers; they process anonymized data, reducing the system's potential to discriminate against individuals on any grounds. However, the capabilities of such devices, as well as their legal assessment, are a separate area for reflection and analysis.
This policy brief was developed as part of the Technical Assistance Support in Ukraine, managed by the European Center for Not-for-Profit Law Stichting (ECNL). The project is made possible by the International Center for Not-for-Profit Law (ICNL) through the Civic Space Initiative.
This publication is wholly financed by the Government of Sweden. The Government of Sweden does not necessarily share the opinions expressed herein. The author bears sole responsibility for the content.