New sticker sets on major social media, 3D avatars in messengers, “resurrected” movie stars of the last century, video game characters that look a hell of a lot like the reflection in the mirror – these are just a few items from the huge list of digital avatars, holograms and deepfakes that surround us in daily life. It seems no sphere of life remains untouched by artificial intelligence and its influence on our rights. Any doubts left? Here are some examples of digital personalities created in the past few years:
- In China, instead of hiring an expensive TV host, a channel created a hologram that broadcasts the news, looking like a real TV presenter. The technology allows any person’s image to serve as the basis for such a system;
- Facebook, Snapchat and Apple devices introduced the digital avatar feature (the first countries to receive it were Australia, New Zealand, the UK and Ireland, followed by other European states and Canada). Avatar owners now use them as stickers, profile pictures, or emojis. Facebook has also announced that it is about to create digital avatars in a virtual reality format that will function by tracking eye movements and changes in facial expressions; the technology relies on a special headset. In the meantime, Snapchat offers an option to set your own 3D image on a profile, with more than 1,200 combinations of bodies, facial expressions, gestures and backgrounds;
- The ABBA group got 30 years younger by building an entire show around holograms and giving a digital concert where the deepfakes were almost indistinguishable from the real band members. Similar technology is used by other stars, including Cardi B, Russell Westbrook, Justin Bieber, J Balvin, Rihanna and Shawn Mendes. And the list goes on and on!
- Speaking of music: you can only take part in the Alter Ego song contest if you convert your identity into a digital avatar. The performance itself takes place on a real stage with live dancers;
- Digital avatars are worth a lot of money and constitute a lucrative trade item (notably, Visa Inc. bought a digital avatar for $150,000, demonstrating support for an increasing flow of investment in cryptocurrencies and blockchain technology);
- Digital avatars are also used for medical purposes: to create an interactive patient profile and test the effects of certain medications or procedures – first in a virtual space, and only then in real life. Digital avatars are also considered useful for the treatment of mental disorders (including personality disorders), autism, hyperactivity and dementia.
These are just a few examples of the use of digital avatars in completely different areas. For example, ordinary photos can be converted into 3D images or avatars using Wolf3D and many other applications. Until recently, the biggest problem was the inability to reproduce deepfakes in real time: the technology required dozens of hours, if not days, of work by programmers and designers. However, NVIDIA researchers have already made a significant breakthrough, making it possible to use digital avatars for video conferencing, storytelling, virtual assistants and many other areas. Another example is a Ukrainian development, the Reface application, which is also capable of creating deepfakes in real time.
If you think the technology is very new, think of the band Gorillaz, whose videos have been full of animated characters since the band’s debut around the turn of the millennium. Virtual singers like Hatsune Miku emerged in the late 2000s, and a little later, bloggers with faces that turned out to be deepfakes began to gain popularity. These cases have given rise to widespread debate: is it legal and ethical to create digital avatars? Should rules be developed for their use, taking into account all possible risks? This is why we should assess the impact of this phenomenon on human rights and consider possible red lines for such technologies.
Before planting a flower, prepare the soil
CNBC research indicates that from 2018 to 2023, financial investment in the development of virtual reality avatars was projected to grow from $829 million to $4.26 billion. However, the creation of a digital avatar is a complex process that affects not only business, but also a fairly wide range of human rights. In particular, it affects the right to an image, the right to express one’s identity and engage in self-development, and personal data protection, as well as a number of other rights that are affected occasionally (e.g. protection from discrimination, free elections). CNBC also notes that by 2030 about 23.5 million jobs worldwide will involve virtual and augmented reality technologies. That is why it is crucial to set the applicable standards now and assess the risks to human rights.
The European Court of Human Rights (hereinafter – ECtHR) has repeatedly stated that everyone has the inherent right to develop their personality, in particular by establishing and maintaining relations with other persons and the outside world (Niemietz v Germany, §29; El Masri v the former Yugoslav Republic of Macedonia, §§248-250). This right also implies the freedom of the individual to shape his or her own appearance according to personal choice rather than ingrained standards of conduct, as one of the ways of expressing personal identity. For example, a person has the right to choose a hairstyle to their liking (Popa v Romania, §§32-33), to decide whether to wear a beard (Tığ v Turkey), to wear a burqa (S.A.S. v France, §§106-107) or to appear naked in public (Gough v the United Kingdom, §§182-18). Any prohibitions or restrictions on a person’s physical appearance must be justified by a strong public need (Biržietis v Lithuania, §§54, 57-58); otherwise, they violate a person’s right to express their identity in a form acceptable to them. It is quite logical to presume that these standards also apply in the online space – when creating digital avatars, holograms or deepfakes.
Any prohibitions or restrictions on a person’s physical appearance must be justified by a strong public need.
In this context, the right to an image is also important, as an image reveals the unique characteristics of a person. Whether photographs depict intimate details of the life of an individual and his or her family (Von Hannover v Germany (no 2), §103) or are neutral and merely illustrate the content of the material (Rodina v Latvia, §131), the person’s desire (or lack thereof) to gain publicity prevails. Other key factors include the degree of similarity between the person and the created hologram, as well as the person’s level of fame (which affects the extent to which the image may be used for artistic or critical purposes). Foreign practice already offers basic regulation in this regard (predominantly for computer games featuring deepfakes):
- The US. Under Hart v Electronic Arts, Inc. and Keller v Electronic Arts, Inc., the most important aspect is the similarity between the person and the hologram, which serves as the key basis for the duty to obtain consent to create such an image. It also matters whether a famous person’s image is used for commercial or artistic purposes (see John Doe v TCI Cablevision), and whether an advertisement creates the impression that a famous figure endorses a certain product when his or her image is used (Rogers v Grimaldi).
- The UK. A movie using a hologram of a deceased celebrity in a way that harms the dignity of the deceased would not, in itself, be a sufficient ground for a lawsuit. Cartoons and satire enjoy similar protection.
- Spain. In the Zarzuela case, images of a deceased Spanish opera actress could be legally used to promote opera, because the goal was to promote art. In other words, the image of a deceased person may be used without consent only if it serves the public interest.
- Germany. The human dignity of the deceased is a value superior to the freedom of expression of the work’s author (BVerfGE 30, 173). The degree of creative transformation or artistic alteration of the character compared to the real figure is crucial. In Kahn v Electronic Arts GmbH, the court noted that the use of a remotely similar image must not become a monopolization of the image and identity of the person. Distribution of a holographic image of a deceased person requires permission from relatives for 10 years after death.
- France. According to Philippe Le Gallou v Fodé Sylla, material created as a parody does not require the consent of the depicted person. Such material does not violate the right to use a trademark (if the person’s name is registered as one), but it may violate the right to an image (if it damages the person’s reputation). In Scarlett Johansson v Grégoire Delacourt, the court also emphasized that if a hologram is realistic and the actress is recognizable, consent to create the deepfake is mandatory.
In fact, foreign practice demonstrates that the use of an image without consent generally constitutes a violation of the right to privacy. How do these classic rules apply to digital avatars, holograms, and deepfakes?
First, their creation and use by developers for any purpose requires the consent of the depicted person. For example, it is unlawful to use someone’s biometric data (facial features, physique or anatomical features) in a hologram for advertising purposes unless the person has provided explicit and informed consent. This rule applies even though a hologram is not an image in the classical sense (it can move, may be only a remote copy of a person, etc.).
Creation and use of digital avatars by developers for any purpose requires the mandatory consent of the depicted person.
Second, digital avatars and holograms are now as common a way of showing one’s identity as a photograph. Often, social media pages contain no real images or videos at all – only ones artificially created with special technologies (digital avatars or deepfakes). They can take the form of a hologram or look like cartoon characters, as on the Genies and Itsme platforms. Some apps make it clear that the created image is a fake, while the developers of others try to make it as similar to the original as possible.
According to users, the advantage of digital avatars and holograms is the ability to tailor images to their own needs, reflecting style, culture and preferences. And while people are more likely to create avatars close to their true appearance, the technology itself allows for the construction of images totally different from reality.
After all, some users seek self-determination, while others need the opportunity to remain anonymous online (e.g. to provide support, conduct political discourse, etc.). In both cases, however, the options for creating a digital avatar should be broad enough to avoid discrimination on any grounds (gender, race, religion, disability, etc.). Companies should also take measures to prevent the misuse of holograms and digital avatars for bullying and discrimination. For example, it is wrong to create female avatars that objectify the body or appearance, to give fewer customization options to people of color, or to fail to develop labels for people with disabilities – this only reinforces institutional discrimination.
The set of options for creating a digital avatar should be wide enough to avoid discrimination on any grounds.
Some companies are already developing holograms that not only move in GIF format, but can serve as an interlocutor capable of fully representing an individual – essentially becoming a replica. These technologies are designed to overcome society’s fear of human-like deepfakes (the “uncanny valley” phenomenon, where almost-human avatars are perceived as creepy or repulsive). They can be used to send messages to others, for advertising purposes, and so on. Yet if a person doesn’t know that a digital avatar is artificial, and it looks as close to its prototype as possible, the illusion is seamless – and this can create certain problems.
In particular, the use of deepfakes without the knowledge of the person depicted is extremely dangerous. This applies not only to living people, but also to deceased celebrities, famous public figures and politicians. Negative consequences worth mentioning include revenge porn and the spread of fake propaganda through digital avatars of bloggers or journalists. Just imagine what would happen in case of a data leak from a company representing thousands of influential people!
That is why, in addition to assessing the applicable standards, one should analyze in more detail what risks the latest technologies pose to human rights, and how to prevent harm without stifling the development of the industry as a whole.
The rose is beautiful. Beware of the thorns, though!
While personal data theft used to be dangerous, today it can turn into identity theft. Often, by taking someone’s photo from the Internet, popular companies create numerous advertising posters and banners based on it – as a result, a person’s face or body is used to monetize a product without his or her consent. This is not just the theft of a particular photo from social media – it is identity theft, because it gives the impression that the person participated in an activity he or she really has nothing to do with.
While personal data theft used to be dangerous, today it can turn into identity theft.
The threat is becoming especially relevant because of significant technological advances: a high-quality fake no longer requires significant effort. For example, Walt Disney used CGI for The Mandalorian TV series (Luke Skywalker is not getting any younger, after all). Can you spot the fake among the originals at a glance? Unlikely. However, no one can assure us that technology used for entertainment purposes will not one day turn against us.
Another example is the creation of Iron Man’s image in the movie of the same name – some developers consider it one of the most successful uses of AI to create a deepfake. Does it take long to develop a truly high-quality digital avatar? It does. For reference: to design the character of Gollum in The Lord of the Rings, the film crew had to create a suit for the actor with 64 control points and a facial mask with 964 control points. Without them, the image fell apart – every facial gesture caused a rush of blood to the face, and an attentive viewer would notice the lack of natural reactions to movement. That is why 50 to 100 sequences had to be shot and superimposed to get the desired frame. Working with digital hair is even more difficult, because every detail must match the movement of the body, the wind, gravity, etc. Today, even though the technology has reached unprecedented levels, a high-quality deepfake still requires time, resources and a lot of skill.
However, not all digital avatars qualify for the Palme d’Or or an Oscar. High-quality deepfakes can be made not only by companies such as Walt Disney or New Line Cinema, but also by ordinary people. For example, the Ukrainian application Reface allows you to attach your face to the body of a celebrity or politician in real time. In the future, its authors plan to add Reface Studio – a full-fledged video editor based on rapid processing of synthesized video – and Full Body Swap, which replaces not only the face but the user’s whole body. Similar applications exist abroad as well: Lend Me Your Face, DeepFaceLab, Faceswap, Deep Art Effects, Morphin and many others, which can be chosen depending on the desired result (research, memes, video production, etc.).
The technology is of such high quality that the average viewer believes the content while scrolling the newsfeed at the usual pace, without going into detail. The ease of creating deepfakes has its downsides. For example, the creation of digital avatars for revenge porn is destructive to a person’s reputation, dignity and mental state. It is an extremely unpleasant and dangerous phenomenon, because it is difficult to prove that the material was edited or to quickly remove it from the network, while the quality of the fake leaves the audience with a negative impression of the person. The technology is used not only for revenge, but also for commercial purposes – it turns out the demand for porn featuring celebrities is huge! For example, a YouTube blogger in Taiwan was recently arrested for selling pornographic videos featuring local celebrities – they were deepfakes.
The technology is of such high quality that the average viewer believes the content while scrolling the newsfeed at the usual pace, without going into detail.
Deepfakes have infiltrated television as well – last year, the sports channel ESPN created a commercial in which a supposed 1998 anchor tells us what to expect from basketball in 2020. Fans of Henry Cavill created a deepfake casting him as Agent 007, which was actively distributed by popular media. Tom Cruise’s fans created a fake TikTok account for him, and a new app allows users to receive personalized messages from football stars. H&M uses Maisie Williams’ digital avatar to promote a clothes recycling campaign. And all of these examples look incredibly close to the original. It would seem that soon, instead of being filmed for an ad, a person will only be required to agree to the use of their image, and their physical presence on set will not be required.
And while false basketball forecasts or dreams about Bond’s future are relatively harmless, what about elections? In 2019, then-US President Donald Trump shared a doctored video of Nancy Pelosi, Speaker of the US House of Representatives, slowed down to make her appear drunk. Another example is the famous video of Barack Obama describing Trump in obscene terms – there were so many pictures and videos of Obama online that creating a deepfake was fairly easy. This is why famous people are in a more vulnerable position.
Hundreds of similar cases have made social media realize that deepfakes pose a significant threat to the electoral process. For example, they can make a jailed dissident appear relaxed in court footage when he is really being starved to death. The use of the technology during election periods therefore often leads to new content restrictions online. Some opt for outright bans rather than restrictions – Reddit, as well as states like China, have prohibited deepfakes. To define the limits of companies’ powers and avoid excessive restrictions, attempts have been made to regulate such technologies at the legislative level – by establishing mandatory disclaimers for manipulated media, banning virtual avatars during election periods, and developing a sanction mechanism for revenge porn.
It is good when the owners of a social media page or another resource warn that an image is an unrealistic digital avatar – as do, for example, the authors of the popular Aliza Rexx, a fictional character with tens of thousands of followers. It is bad when such a warning is omitted, accidentally or deliberately. In that case, the potential abuse is the creation of a new identity, which is quite likely to be endowed with full rights and responsibilities. For example, the Planeta literary prize was won by the writer Carmen Mola, behind whose name three famous screenwriters were “hiding”. If this situation seems more ridiculous than dangerous, remember the creation of fake identities for election campaigning. Deepfake technology will only take such abuses to a new level, because people are more inclined to believe high-quality images or videos than text messages.
Similar situations are quite common on Instagram (including the Reels service) and TikTok – social media full of bloggers who often monetize content. In particular, designers and programmers, having studied the latest trends, create realistic videos, images and audio in the style of popular bloggers. They then launch fundraising campaigns or simply exploit established social media algorithms (ads are superimposed on the video and the owner receives the funds), misleading subscribers. The situation can be even more dangerous – for example, when perpetrators create a false identity on Facebook or Tinder to commit violence, rape or kidnapping. The ability to generate fake videos with full-fledged gestures and to use virtual masks during calls makes it increasingly difficult to verify the identity of the person you are talking to. Technology developers point out that avatars do not always convey a person “as is” in the real world. Therefore, one should be very careful about friend requests on social media and avoid accepting them from complete strangers: the best that can happen is disappointment; the worst is falling victim to fraud or an even more severe crime.
One should be very careful about friend requests on social media and avoid accepting them from complete strangers.
Individuals often create avatars that are completely different from their real prototype. However, numerous studies confirm that even this is no obstacle to obtaining personal data: real-life preferences have a significant impact on the choice of a digital avatar’s appearance, hair and clothing style, backgrounds, religious symbols, etc. For example, those who chose big eyes for their avatar were seen as more extroverted and open to communication, and their accounts were considered more attractive to audiences. On the one hand, this, just like deepfakes, can be exploited by perpetrators on dating sites and by bloggers seeking to increase income. On the other hand, it reveals a lot of information about a person. Hardly anyone is interested in a person’s real date of birth, but the favorite color of a hoodie or dress would certainly be useful to those who set up targeted advertising. Even such a seemingly indirect connection between a cartoonish digital avatar and a person provides a lot of useful data about users.
The situation is worse if you upload your own photo or video to apps that create deepfakes. Many developers use information from volunteers to test and train their algorithms, which means such data will remain in the system forever. For example, AI Foundation notes that this way people “won’t be forgotten” after they die – that is, neither the person nor their relatives will be able to get the data back. Similarly, Hour One uses the facial expressions from a short video recorded through its app to create realistic digital avatars. If you give the company permission to use the data further for advertising purposes, you can even earn good money – people are now literally trading their faces! At the same time, companies may have poor security, creating a risk of personal data leakage or, even worse, of deepfakes created for entertainment being used to spread misinformation.
Some companies treat the possibility of creating deepfakes quite responsibly – the Ukrainian Reface has introduced a marking for its products (a digital watermark), so you can always check whether a video is real or AI-generated. Developers also advise using other video validators, especially when users doubt a video’s nature – for example, Microsoft’s deepfake detection tool. Ursula von der Leyen, President of the European Commission, also speaks of the need for regulation and in-depth research into AI technologies in this area, noting that the risks associated with digital technologies should be carefully examined and the right regulatory solution found for the problems at hand.
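To make the idea of a digital watermark concrete, here is a minimal illustrative sketch of the simplest possible scheme – hiding a marker in the least significant bits of pixel data. This is purely a toy example: the `SYNTHETIC` signature and the LSB approach are assumptions for illustration, not Reface’s actual (proprietary and far more robust) watermarking method.

```python
# Toy least-significant-bit (LSB) watermark: hide a marker identifying
# AI-generated media inside the pixel bytes, invisibly to the viewer.
# Real-world watermarks survive compression and cropping; this one does not.

SIGNATURE = b"SYNTHETIC"  # hypothetical marker for generated content


def embed_watermark(pixels: bytearray, signature: bytes = SIGNATURE) -> bytearray:
    """Write each bit of the signature into the lowest bit of successive bytes."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in signature for i in range(8)]
    if len(bits) > len(out):
        raise ValueError("image too small to hold the signature")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # overwrite only the LSB
    return out


def extract_watermark(pixels: bytes, length: int = len(SIGNATURE)) -> bytes:
    """Read the hidden bits back out and reassemble the signature bytes."""
    sig = bytearray()
    for b in range(length):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        sig.append(byte)
    return bytes(sig)


# Usage: mark a fake "frame" (stand-in for raw pixel data) and verify it.
frame = bytearray(range(256))
marked = embed_watermark(frame)
assert extract_watermark(marked) == SIGNATURE  # marker is detectable
assert all((a ^ b) <= 1 for a, b in zip(frame, marked))  # pixels barely change
```

Because only the lowest bit of each byte changes, the marked frame is visually identical to the original – which is exactly why a validator tool, rather than the naked eye, is needed to tell synthetic media apart.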
The majority of researchers, lawmakers and human rights advocates see the solution to these problems primarily in adapting existing human rights standards to today’s digital realities. In particular, special attention should be paid to the General Data Protection Regulation (better known as the GDPR), which establishes detailed rules for the online processing of personal data. However, it cannot always be applied to social media and websites outside the EU because of jurisdictional limits. So, which standards should be adopted first to keep users safe and prevent abuse? Let’s find out.
A flower without a stem is just a set of petals
In general, the requirements for the development and application of such technologies are fairly standard: mandatory consent, notification of interactions with AI, and the ability to opt out. On the one hand, these requirements seem easy enough to implement, but in practice there are many pitfalls.
(Non)obligatory consent. The rule of consent concerns its voluntariness, its unambiguity, and prior notification of any processing of the individual’s personal data. Regulatory gaps are less likely to arise in relation to living individuals, who can themselves dispose of their photos, their name and the technical material needed to create deepfakes. For example, in the above-mentioned Alter Ego contest, all performers consent to the use of their personal data; the same happens when celebrity images are used for advertising. The regulation becomes somewhat blurred when a person uses a digital celebrity avatar to illustrate their own profile with no commercial purpose – here, the key factors are similarity and damage to the reputation of the depicted person, so the evaluation criteria are more or less clear. The real problem arises with the data of deceased persons – used for the promotion of cultural events, advertising, historical references, etc.
The rule of consent concerns its voluntariness, its unambiguity, and prior notification of any processing of the individual’s personal data.
For some, the opportunity to see deceased relatives and hear their voices again is among the most unattainable dreams. It is already possible to reproduce the timbre, intonation and volume of a voice using special modulations, and the creation of a full-fledged hologram of a person from photos and input data is also a reality – Kim Kardashian received a video of her deceased father wishing her a happy birthday. In that situation there were no objections from the deceased’s family, because the purpose of creating the deepfake was to meet the deceased father, at least for a moment. As noted above, the purpose of use matters greatly. But can a lofty goal outweigh, for example, relatives’ objections to the use of a person’s name and image? And whose permission should be obtained when there are no relatives?
Research by the Thomson Reuters Foundation indicates that most countries do not yet have regulation protecting the personal data of deceased individuals, which means that someone’s image could become the basis for a hologram after death. And while special regulation is slowly beginning to appear, its quality is far from perfect. For example, New York law covers the concepts of “digital replicas”, “deceased performers” and “deceased personalities”. In the case of famous persons, what matters is the artistic contribution of the deepfake’s author and its similarity to the original: while The Beatles playing heavy rock would hardly seem realistic to anyone, The Sex Pistols playing new jazz with Tony Williams might look quite organic. Open questions remain: does the permissive rule apply; will it be necessary, for 40 years after a person’s death, to ask their descendants for permission every time their image is used; and would every video be considered image theft? In short, the act is hardly a perfect regulation of deepfakes. Nice try, though!
After the introduction of quarantine, the digitalization of the cultural sphere became popular – digital theater performances, online concerts by famous singers, and digital museum tours. Developers quickly realized, however, that digital technology could improve not only the virtual model of museums but also their physical space. This includes both “smart museums” with automated narration of exhibits and the creation of digital avatars. For example, the Dalí Museum in Florida created a hologram of Salvador Dalí – inherently a deepfake. The technology runs on AI, communicates with visitors and tells the story of the artist’s life. In such a case, one can no longer claim there is no commercial purpose at all.
Moreover, such an approach can facilitate the commercialization of deceased persons’ data. If celebrities are at least linked to their fields of activity, ordinary users run a much greater risk: app developers will use their personal data for advertising. This is why legislation should be developed to allow relatives to adequately dispose of information about a deceased person – including a list of those entitled to dispose of it, reasonable time limits, the purpose and scope of the information used, the public interest, etc.
Artificial intelligence speaking, say hello! Sometimes we talk to a digital avatar or a deepfake without even knowing it. The issue is becoming ever more relevant given the idea of using such technologies for avatars in online business negotiations. Even today, it is not always possible to distinguish an artificially generated image from a real person – a hologram will move freely and smoothly, while a real person may mumble or make abrupt movements. What’s more, video communication programs already offer filters that significantly alter the face. If a person uses filters to hide his or her appearance while talking to an employer during an interview, can such an interview be considered fair? Should the person warn others about the use of the technology?
Soul Machines – a leader in creating AI with an “emotional” component – has developed Nadia, a virtual assistant voiced by Cate Blanchett, and Sophie, a pilot project for customer interaction in New Zealand. Both digital avatars are expected to offer a wide range of features. Samsung has long been working on the Neon project, a computer-animated humanoid program that can act similarly to Siri or Google Assistant. Ironically, the first teasers of the program that leaked online turned out to be fakes, themselves generated by digital avatar apps. Still, the developers promise that the future program will show “emotions and intelligence” when talking to users, and that the software will be able to perform the roles of actors, financiers, TV presenters, health care providers or police officers. A similar scheme is already in use for border crossings in Luxembourg, Greece, Britain, Poland, Spain, Hungary, Germany, Latvia and Cyprus: the iBorderCtrl system requires a person to undergo an interview with an AI-based digital avatar programmed to detect false information (through analysis of facial movements and expressions). Additional verification methods, such as fingerprints and facial recognition, are then applied.
In this context, a debate has arisen in the human rights community: can a program assess the legality of our actions? Similarly, it has been noted that a digital avatar cannot feel despair when firing a person – which is why it should not be empowered to fire people, deliver a diagnosis in a hospital, or pass a sentence in court. The broader question, though, is: aren’t there too many algorithms in our lives? At what point will we begin to confuse the generated with the real? It will definitely happen at some point, and it is the responsibility of companies to prevent such cases.
In particular, the European Commission has published a proposal for the regulation of AI that categorizes technologies by the risk their uses pose. The unacceptable risk level covers technologies capable of manipulating human behavior (whether all deepfakes fall into that category is debatable), while technologies like chatbots carry only a limited risk level. Regardless of the risk level, however, a person should be warned that they are interacting with an AI and not a real interlocutor. In addition, the Council of Europe emphasizes that discriminatory practices are unacceptable in the design of automated systems. If the impact of avatars or holograms on human rights is significant, a person has the right to demand that a service not be performed by AI but provided by a real person, and, under the General Data Protection Regulation, to request a review of such a decision. This argument looks extremely relevant given the use of digital avatars to interview people at state borders!
Petals of ethics in a bouquet of violated rights
The issue of ethical standards has always lived in philosophical discourse without a definitive answer. Is it ethical to “revive” people, even with the consent of their relatives? Whatever the answer, such practices already exist – MyHeritage’s Deep Nostalgia feature lets you animate old family photos, so that long-dead relatives can smile at you. Wouldn’t such animation lead to serious mental problems for those who have just lost a loved one? Moreover, there have already been proposals to create full-fledged holograms of deceased persons – a kind of digital copy of a person’s identity that could talk, react to events, discuss the news and interact, thanks to machine learning technologies. Of course, it would be nice if Anthony Hopkins movies could be made forever, but are we ready for the “eternal” life of real people, not just movie characters?
Wouldn’t animation of images of the dead lead to serious mental problems for those who have just lost a loved one?
Researchers believe that people tend to trust realistic videos. Multiply that by the desire to believe digital avatars are real, and we get a genuine risk of a nervous breakdown for a person who constantly brings dead loved ones “to life”. Developers have repeatedly noted that the ability to make a hologram or replica of literally anyone threatens to open “a Pandora’s box full of ethical problems”. A South Korean TV station showed an extremely emotional meeting between a mother and her deceased 7-year-old daughter, reproduced as a digital avatar from photos and the memories of loved ones. A Russian woman trained a neural network chatbot with a digital avatar to resemble her husband, who died in a car accident. A journalist interviewed his terminally ill father in order to create a digital clone of him after his death. Is a person willing to resist the temptation to keep interacting with an avatar, drifting away from the real world and only deepening the trauma? Do we have the right to put such a burden on anyone? And more generally, considering the experiments in using digital avatars to treat mental disorders – is it ethical to mislead a person, even for treatment?
Another issue is the creation of digital avatars of children – who can control the use of this technology by minors? Won’t the use of holograms affect a child’s ability to adequately express their personality? Currently, most deepfake apps have no age restrictions, yet there are already cases of children identifying themselves with the image on the screen. How healthy is it to reinforce such a deceptive self-image? We are unlikely to know until we see long-term psychological research on the development of individuals who have grown up with such apps. The risk is there, though, and it cannot be denied.
A global ethical issue is the possibility of using the data of deceased persons from the public domain – their social media pages, news reports in the media, etc. There have already been sketches of projects that plan to collect the remnants of deceased persons’ personal data to accumulate their knowledge and experience. Some even dream of resurrecting Stephen Hawking and other geniuses of our time. How ethical is this? So far, no legislation regulates this issue, and companies like Google and Facebook only allow you to designate a person who will manage the account after the owner’s death. Other issues would have to be regulated by legislation, which does not exist… That is why, in the absence of prohibitions, the only thing holding people back from the widespread use of other people’s personal data is ethical standards. And those standards should clearly indicate the limits of what is allowed in this matter.
In the absence of prohibitions, the only thing holding people back from the widespread use of other people’s personal data is ethical standards.
Researchers at Oxford University have recently proposed ethical guidelines for the use of digital avatars and holograms of dead people. They apply not only to social media, but also to other places where a person may have left a digital footprint. Developers themselves are not lagging behind the academics. For example, the above-mentioned company Soul Machines has developed its own ethical guidelines. The key principles include adherence to human rights standards, a focus on human welfare, the principle of personal data protection, efficiency, transparency, accountability, reporting abuse and developer competence. And these ethical guidelines actually resonate quite strongly with the proposed “red lines” that were supposed to regulate the use of AI and protect against abuse.
Existential question: a rose or a couch grass?
Technology is evolving very rapidly and it is unlikely that any legal regulation or ethical codes will keep up with it. On the one hand, the trends are inspiring, because a large number of problems can soon be put on the digital shoulders of holograms and avatars – they are predicted to replace most classic professions and become a good advisor for everyone.
On the other hand, the field of risks is too multidimensional: deepfakes are used for revenge porn, data theft, or even identity theft. Fake videos are distributed during sensitive periods: before elections, during riots or social crises. As a consequence, many rights are violated and the fine line between truth and lies is blurred. Beyond that, the development of such technologies raises even harder dilemmas: is it worth “bringing dead people back to life”? Will such technologies negatively affect children’s development? What should guide the development of applications: the thirst for progress or fundamental ethics? Some have even suggested creating a Declaration of Avatar Rights – so is it only a matter of time before AI is granted a status equal to humans?
Today, most questions have no unambiguous answer – neither among developers nor among scientists. And the solution is definitely not to be found in regulations or statements by human rights activists, at least not in the near future. However, this does not mean that we should remain silent about the problems or stop looking for answers. We can still distinguish the real from the synthesized, but technology improves every day – and it is better to draw clear “red lines” before the line between them is erased completely. In the meantime, we can enjoy a great conversation with the world’s geniuses!
This analytics was developed by CEDEM as part of Technical Assistance Support in Ukraine, implemented by the European Center for Not-for-Profit Law Stichting (ECNL). The project is made possible by the International Center for Not-for-Profit Law (ICNL) through the Civic Space Initiative.
This publication is fully funded by the Government of Sweden. The Government of Sweden does not necessarily share the opinions expressed herein. The author bears sole responsibility for the content.