Freedom of expression: strangers’ commentaries, personal responsibility and ECtHR’s judgement

December 21, 2021

Over the last ten years human communication has taken on new forms, shifting to the online space. Public discourse has grown in quality and engaged more individuals, while the Internet has become a suitable place not only for expressing one’s opinions, but also for public gatherings, assemblies and even for eliciting binding international obligations from State officials. The UN Special Rapporteur has recently stated that online demonstrations deserve special protection, [1] especially in view of COVID-19 restrictions on offline events. Apart from that, public debates around the most crucial social changes, political proposals and other contentious issues are conducted by both public and private individuals on social media, using group functions, personal accounts and even the comment sections on other individuals’ pages.

Most interaction is aimed at facilitating constructive and substantive discourse, contributing to debates of general interest. Admittedly, expressions delivered in a low register of style are common on the Internet [2], and such forms of speech are largely unavoidable. In this respect, the ECtHR has repeatedly treated offending, shocking and disturbing content as protected, [3] along with sceptical or sarcastic emotional disapproval, [4] vulgar phrases [5], exaggerations [6] and acerbic passages of hostile tone [7] that serve merely stylistic purposes [8]. However, numerous contrary examples exist, making the online environment quite a dangerous space containing hate speech, unlawful incitement and other types of manifestly illegal content. And social media algorithms only assist in the speedy distribution of such material, even to non-subscribers of the pages concerned.

Accordingly, the rapid development of modern technologies has created a need for a general regulatory framework and standards for interaction in the online environment. This manifests itself in State regulation of platforms (such as the German Network Enforcement Act) [9], in co-regulatory documents (e.g. the EU Code of Conduct on countering illegal hate speech online), [10] in self-regulation through platforms’ community standards (those of Facebook [11], Twitter [12] and TikTok [13] are the most detailed) and in the practice of international courts and tribunals. Back in 2015 the ECtHR addressed the issue of Internet intermediaries’ liability for third-party content in comment sections (the fundamental judgement in Delfi v Estonia), [14] establishing guidance on who moderates online information and who bears responsibility for it. From that time it seemed that providers of space for communication and owners of websites remained responsible for malicious activity within their services. However, everything changed drastically with the ECtHR’s new judgement in Sanchez v France [15], which went far beyond the old limits, imposing liability for third-party comments on the owners of social media accounts.

This judgement has already caused significant resonance among human rights defenders, media lawyers and social media influencers – those who became the primary target of the new standards. It has also reasonably raised numerous questions regarding the judgement’s enforceability, its compliance with other human rights standards and the disruption of the integrity of the ECtHR’s practice on the matter. Within the scope of this research, we will try to establish whether the new judgement brings more clarity to the regulation of the online climate or, vice versa, places an excessive burden on private individuals, undermining the basic rights to free expression, to a fair trial and other relevant freedoms.

The Sanchez judgement within the ECtHR’s practice on intermediary liability

By six votes to one the Chamber decided that Sanchez could be held liable for hateful remarks appearing in the comments under his publication. To assess the impact of this judgement, a brief overview of the factual background and of the ECtHR’s reasoning is needed. In 2011 the French politician Julien Sanchez, who at the time was a parliamentary candidate for the city of Nîmes (he later became mayor of the French city of Beaucaire and President of the National Front group in Occitania), made a publication on his Facebook page [16]. The publication targeted his political opponent, criticising his failure to launch an official website by the established deadline. The publication itself can be considered entirely lawful, a piece of political criticism contributing to legitimate public debate. Yet two separate comments subsequently appeared from other users, describing Sanchez’s political opponent in the following words: “this great man turned Nîmes into Algeria, there is no street without a kebab shop and a mosque; dominated by drug dealers and prostitutes, it is not surprising that he chose Brussels, the capital of the new Sharia world order…”. Another commentator wrote: “hookah bars are everywhere in the city center and even hidden drug trafficking run by Muslims on the street for many years…”. The following day the partner of Sanchez’s opponent contacted one of the commentators, demanding that the remark be removed, which was done immediately. The other comment remained available, since no one contacted its author.

Afterwards, a criminal case was initiated by a local prosecutor before the French courts. The following day, Sanchez encouraged people to monitor the substance of their comments, yet failed to take any action with regard to the existing ones. During the court proceedings he explained this by a lack of time to look through the comments on his page given his parliamentary campaign. The domestic courts ordered both Sanchez and the commentators to pay a €4,000 fine each, together with €1,000 in moral damages. Following this domestic decision Sanchez complained to the ECtHR that imposing liability on him for third-party content had violated his rights.

The ECtHR considered both comments to be discriminatory towards the Muslim community, which was quite predictable in view of its previous practice. Phrases such as “they will start to burn, slaughter, rape, rob and enslave” [17] and the “brazenness of this demonstrable Gypsy banditry” [18], which equated vulnerable groups with criminals, provoking disgust [19] and negative stereotypes [20] towards them, had likewise been considered unprotected. Thus, labelling Muslims as drug dealers and prostitutes evidently violated their rights, even in the context of public debate during a pre-election period (Sanchez v France, §89). The more interesting part came when the ECtHR analysed Sanchez’s potential liability for the third-party content. First, the Court applied the test that has been used for Internet intermediaries since Delfi v Estonia. The criteria involve the context of the comments, the measures taken by the intermediary, the liability of the actual authors, and the consequences of the restriction for the applicant. If the content-and-context criterion is satisfied given the comments’ hateful nature, the question arises whether it is reasonable to apply the remaining requirements to a private individual…

As mentioned above, the criteria were developed and first applied in Delfi v Estonia [21] – a case concerning the liability of a news website for third-party comments under a news article. In that case, the main reasons for imposing liability on Delfi were the manifestly illegal nature of the comments (evident unlawfulness), the editorial control exercised over the website and the materials on it, the failure to react expeditiously once notified of the comments’ illegality, Delfi’s capacity to oversee the entire volume of comments on the website, the primarily economic purpose of its activity, and the anonymous commenting allowed by the website (liability of the primary authors was impossible to establish). However, even this relatively classic case was heavily criticised by numerous experts, who viewed the liability as an excessive burden, bringing inconsistency to EU and ECtHR case law and approaches [22].

Similar situations arose in MTE v Hungary [23], Pihl v Sweden [24], Tamiz v the UK [25] and Høiness v Norway [26]. In the first case the characteristics of the intermediary were essentially the same; however, the content was defamatory and thus not prima facie unlawful. Moreover, the intermediary reacted diligently, removing the expressions directly after receiving the complaint. Accordingly, the ECtHR held that imposing liability violated Article 10 of the ECHR. In Pihl v Sweden, the intermediary was a non-profit blog with few followers, which acted expeditiously in removing offensive (not manifestly illegal) expressions, so its liability would have breached Article 10. As follows from Tamiz v the UK, blogging services may act within an even broader time frame if the content is not manifestly unlawful: it took three days to react to the complaint, which was still considered expeditious. In Høiness v Norway, potentially offensive expressions were removed 13 minutes after the complaint. Interestingly, in all the cases apart from Delfi the websites and their owners faced defamatory content, which is not evidently illegal. Accordingly, the ECtHR did not elaborate on any obligation to proactively monitor the comments on the resource. A further common feature was that the websites were created and maintained for the purpose of news sharing, representing a resource in the broad sense rather than a private individual, so the public’s expectations regarding content moderation and editorial control were quite significant. Moreover, in most cases anonymous comments were possible, so the primary liability of the original authors could not be established. Lastly, the ECtHR itself stated that a different model applies to Internet intermediaries with other forms of editorial control (such as social media platforms). Nothing was said, however, about the accounts of private individuals on them.

A slight shift in the practice occurred in 2020 with Jezior v Poland, where the Polish domestic courts had imposed liability for defamatory comments posted on the applicant’s blog. Jezior was a candidate in municipal elections and ran a blog devoted to news about the town and his election campaign. He encouraged individuals to publish only tolerant and well-balanced comments, and the blog was administered by Jezior’s son. Two weeks before the election a comment criticising Jezior’s opponent in the elections appeared, claiming that he “comes from an old gangster family”, that “the mayor’s sons became drug dealers”, that “K’s children continue the criminal tradition”, and so on. The applicant’s son immediately deleted the comment, deleted it again a few minutes later and switched on an access control and registration system requiring commenters to register via e-mail. He turned the system off the next day, but the comment was soon published again. Immediately upon notification of the comment, the moderator deleted it and reinstated the blog’s access control system for three months. Nevertheless, Jezior faced a defamation claim; before the local courts he invoked intermediary immunity, but was ordered to pay zł5,000 to charity. Jezior then applied to the ECtHR, and the Court indeed applied the criteria developed in the Delfi case, assessing his liability as that of an intermediary. Specifically, the ECtHR stressed that the blog was free of charge and contributed to public debate, while the malicious comments were removed directly after notification and additional preventive steps were even taken to preclude their reappearance. Holding him liable could therefore create a chilling effect on bloggers, and the Court found a violation of Article 10.

At the same time, in paragraph 58 of the Jezior judgement the ECtHR underlined an important point: requiring the applicant “to start from the principle that certain unfiltered comments could be contrary to the law would amount to requiring of him an excessive and unrealistic capacity for anticipation”. By stating this, the ECtHR de facto precluded any potential obligation to monitor all activity on personal pages, thereby distinguishing professional publishers from private bloggers. A detailed comparison of the intermediaries’ characteristics and the ECtHR’s positions can be found in the table below. As this overview shows, the practice was relatively consistent until the fateful judgement in Sanchez v France… So what happened in that case?

Comparative Table on Characteristics of the Intermediaries in the ECtHR’s practice

| Case | Type of speech | Type of II* | Measures taken by II | ECtHR decision |
|---|---|---|---|---|
| Delfi AS v Estonia (2015) | MIC** (hate speech) | news portal | comments removed 6 weeks after publication, 1 day after complaint | II had to monitor and expeditiously remove MIC; II failed |
| MTE v Hungary (2016) | non-MIC (offensive, potentially defamatory expressions) | news portal | content removed immediately after complaint | II had to react expeditiously to complaints about non-MIC; II succeeded |
| Pihl v Sweden (2017) | non-MIC (offensive expressions) | blog | comment removed and apology published immediately after complaint | II had to react expeditiously to complaints about non-MIC; II succeeded |
| Tamiz v UK (2017) | non-MIC (defamation) | service for blogging | comments removed 3 days after complaint | II had to react expeditiously to complaints about non-MIC; II succeeded |
| Høiness v Norway (2019) | non-MIC (potentially offensive expressions) | news portal | comment removed 13 minutes after complaint | II had to react expeditiously to complaints about non-MIC; II succeeded |
| Jezior v Poland (2020) | non-MIC (defamation) | blog | comment removed immediately after complaint | II had to react expeditiously to complaints about non-MIC; II succeeded |
| Sanchez v France (2021) | MIC (hate speech) | private Facebook page | comment not removed, no complaint | II had to monitor and expeditiously remove MIC; II failed |

* II – Internet intermediary (a platform hosting speech, with varying levels of editorial control)

** MIC – manifestly illegal content

While considering Sanchez’s role on Facebook, the ECtHR surprisingly treated him as an intermediary for the sharing of information by other individuals. Specifically, in paragraph 90 the Court criticised him for “his lack of vigilance and reaction to certain published comments”, adding that the more than 1,800 friends on his personal list should have led him to expect heated debates in the comments. Moreover, the Court rejected his arguments about the impossibility of monitoring the activity on his Facebook page, since he had found time to publish a post exhorting his followers to refrain from offensive comments but did not remove the impugned ones. Furthermore, the ECtHR also noted his unwillingness to remove those comments throughout the six weeks of the domestic court proceedings. It thus stressed that responsibility should be shared between Facebook and Sanchez as intermediaries, while failing to elaborate on the liability of the commentators, all of whom had been identified. The main reason behind this decision was that the applicant had knowingly opened his profile and comments to the public, enabling strangers’ activity on his page. Does this mean that from now on all individuals are required to track the comments on their pages? And is that at all possible?

Pitfalls, dangers and a pinch of unreasonableness on the side of the ECtHR

The decision was not unanimous: Judge Mourou-Vikström delivered a dissenting opinion [27] in which she expressed dissatisfaction with the general position and condemned the excessive obligations imposed on private individuals. She also raised concerns about the applicant’s actual knowledge of the comments, especially given that some of them were removed within 24 hours. In addition, the Judge stressed the risk of over-moderation: account holders, fearing excessive penalties, will strive to remove even dubious content to avoid potential liability. However, these are only a couple of the risks that may stem from the recent judgement.

In fact, analysing this decision step by step, we can find numerous pitfalls. First of all, it may bring inconsistency to the already existing practice on intermediary liability. In L’Oréal SA v eBay, the CJEU held that the mere fact that a platform stores information on its server, sets the terms of its service, is remunerated for it and provides general information to its users does not deprive it of the immunity [28]. It also gave an example of an intermediary’s active involvement in relation to content, namely “optimising the presentation of the offers for sale or promoting those offers” [29]. As the case law indicates, social networking platforms like Facebook [30], MySpace [31] or Netlog [32], which serve as means for disseminating information through user-created profiles and can remove any posts incompatible with their terms and policies [33], enjoy “safe harbor” protection. Thus, intermediaries which do not exercise editorial control over third-party content must react to unlawful materials after notification, not proactively monitor them on the resource. Meanwhile, account holders cannot edit comments at all, so at most they could be expected to act after being notified of the content’s unlawfulness. Yet where there is a suspicion that content is illegal, complaints go to the platform’s content moderators, not to the owners of private pages. Accordingly, account holders are notified of the existence of comments, but not of other users’ dissatisfaction with their substance. And since people sometimes turn off notifications about new comments, shares or reactions, especially if their posts regularly attract a lot of public attention, even the notification procedure is not always effective. The so-called notice-and-takedown mechanism is therefore inapplicable in such a case. Hence, adding private individuals to the framework of intermediary liability raises many concerns about the enforceability of such mechanisms, given their different approach to moderating content and their limited capacity to receive complaints.

It is also worth mentioning that classic intermediaries and private actors on Facebook differ in size, capacity, obligations and the technological responses available to them for addressing unlawful content. For example, Delfi and MTE themselves designed their public websites, equipping them with functions suited to their work and their vision of Internet communication. In the case of communication on Facebook, the functionality is defined by Facebook itself, not by private individuals [34]. In such a situation, private users can only adjust the privacy settings of publications and comments within the framework enabled by the social network. The above example of the complaint procedure is emblematic of how differently networks and private actors operate within the online environment. Furthermore, complaints are sometimes received about absolutely lawful content [35], while in other cases illegal materials may remain unnoticed by the general public. So which information should be removed, and based on which criteria? Why, in the case of shared liability between the account holder and Facebook, do the complaints reach only the latter? And is it fair to hold private individuals responsible when their technical capacities on the platform are so limited?

Becoming a judge in five minutes: a quick guide

Another problem concerns the very process of moderating the comments. Even assuming that an obligation to monitor is compatible with free speech standards, the question remains how private persons are supposed to do it. This is especially relevant for pages with numerous followers. In this respect, the ECtHR regarded a person with 47 followers as a non-influential figure [36]. By contrast, bloggers with 74,967 [37], 140,000 [38] or 428,000 [39] subscribers are considered authoritative [40]. In the same vein, Narendra Modi [41], Joko Widodo [42], Jair Bolsonaro [43] and Volodymyr Zelensky [44] gather more than 6 million reactions under publications on debatable topics, implying hundreds of thousands of readers. Some of them hire dedicated moderators to administer their pages or public channels, while others do it themselves. In such a situation, questions arise as to the need to hire a moderator at all, the number of such moderators and their qualifications. An obligation to hire moderators might place an excessive financial burden on an individual, especially one who does not profit from their page (via advertisements [45], peer-to-peer financial services [46] or sponsorships [47]). Moreover, there may be bot attacks, when thousands of fake accounts flood the comments with intentionally hateful remarks, triggering the liability of a specific individual. Who can assess an individual’s capacity to cope with such large volumes of content? Lastly, it is unreasonable and physically impossible to oblige one individual to monitor all third-party activity on their account: the person might be on holiday without an Internet connection, in a working meeting, facing technical problems or simply unwilling to use social media for a couple of days. None of these circumstances should become grounds for liability, since individuals should not be obliged “to live on the Internet”.

The crucial issue is also whether one private individual should be able to assess the (il)legality of materials posted by other private users. On large platforms, content is usually assessed by specifically trained staff and, in the most complicated cases, by media lawyers. There is already a serious problem with social media replacing legislators and judges by developing and enforcing their own standards of free speech [48]. However, there is an even bigger problem when private users can label other individuals as offenders. For example, since hate speech is criminally punishable, the removal of comments on that ground amounts to a conclusion that the author has committed a criminal offence. And such a decision is made not by a court or an independent review body, but by a private person of the same status as the author of the comment. In such cases the fair trial standards are simply destroyed. Moreover, even intermediaries are unable to properly assess speech – the best examples are the case of Myanmar with its incitements to genocide [49] and the unawareness of the prohibition of communist symbols in the Baltic countries, Ukraine, Poland and certain other States [50]. If large entities presumably employing qualified professionals cannot address such problems, how can we lay similar obligations upon private individuals? In Belkacem v Belgium, for instance, the applicant was unable to properly assess even his own speech under the free speech standards (it actually turned out to be hateful) [51], so what can we say about obliging the same person to moderate the words of others?

No hate speech. No criticism. No reasonable assessment.

Cases of the misqualification of speech may arise, as may the intentional removal of critical remarks by politicians. To illustrate, in Koç and Tambas v Turkey, the Minister of Justice was accused of creating inappropriate conditions for prisoners, which led to several deaths during a hunger strike connected with the Kurdish problem. One of the articles against him was entitled “The butcher of justice is once again at work”. However, even virulent passages and a particularly provocative title were not construed as exposing the public official to a significant risk of physical violence, as they contained no advocacy of revenge, massacre or armed resistance [52]. That qualification was ultimately given by the ECtHR, while the domestic courts had reached the absolutely opposite conclusion, showing how difficult it sometimes is to qualify speech properly. At the same time, in Yordanova and Toshev v Bulgaria, the ECtHR reiterated that the use of expressions designed to attract the public’s attention in articles or captions does not in itself present a problem [53]. According to National Media Ltd and Others v Bogoshi, ordinary members of the public should not be treated on the same footing as professional journalists [54]. The ECtHR has stressed that the established media often dispose of greater means of verifying criticism than those who report on the irregularities of public officials on the basis of what they have personally observed [55]. Given the limited number of sources available for verification, such individuals often provide the audience with the best obtainable version of the truth [56]. Consequently, criticism may take different forms and still remain protected, and attempts to combat it significantly undermine free speech.

Accordingly, the comments under the public posts of State officials often serve as a great platform for public debate on matters of general interest, and steps towards removing such comments are often viewed as suppression of that debate. A telling example can be found far beyond Europe’s borders. The former US president repeatedly blocked his followers, justifying this by allegedly unlawful activities and hateful messages in the comments to his tweets. Such arbitrariness, however, did not last long. The US courts decided that “if a public official speaks on a platform that automatically permits others to comment, then the official is responsible for creating a public forum” [57]. Even personal accounts must not be used for promoting one’s opinions in an official capacity, suppressing opposition views or creating a closed information environment. A related approach was taken by the CJEU in Eva Glawischnig-Piesczek v Facebook Ireland [58], where an Austrian Member of Parliament sued Facebook to obtain the deletion of comments posted by a user which were allegedly damaging to her honour. The Court stated that a service provider has no obligation to monitor malicious content unless there is a notification of its alleged illegality or a court order. Such an order may require the provider to find and delete comments ‘identical’ and ‘equivalent’ to the illegal defamatory one, in substance leading towards a very broad content-monitoring obligation [59]. On the one hand, this might be considered an excessive obligation; on the other, at least a court provides a proper assessment of the content – in contrast to random ordinary individuals on Facebook.

Why are we speaking of defamatory content at all if it is not manifestly illegal? First of all, the Australian High Court has already obliged the media to monitor third-party defamatory comments on their Facebook pages [60]. Unsurprisingly, leading media and human rights experts have expressed concerns about the overly intrusive nature of such an obligation, which will impair the free and effective operation of different media [61]. More importantly, if such a discourse has emerged regarding public media pages, one day it may reach private figures, who were de facto equated with Internet intermediaries by the ECtHR ruling in Sanchez v France. The logic may go even further: if account holders can be obliged to watch for defamatory content, why should this duty not extend to disinformation and similar categories? In that case, they would be forced not only to subjectively assess third-party comments, but also to conduct additional research and fact-checking to establish whether a comment amounts to disinformation or defamation. Combined with their other duties and responsibilities, this brings us one step closer to studying rocket science on Facebook. A brief conclusion at this point, however, is that politicians are not allowed simply to block or remove the comments under their publications, while the list of content that must be specifically monitored remains undefined, leaving space for abusive practices and additional odd obligations for private individuals.

The best option is to do nothing. Literally.

Account owners are thus placed in a situation where they have several options: to agree to monitor the comments, to block the comment function, to stop publishing, or to deactivate the page and never have similar problems again. Since the first two options are unfeasible and incompatible with free speech standards, as argued above, we shall review the latter two.

The option of refraining from publishing is quite unreasonable where politicians are concerned. For them, Facebook, Twitter and other platforms often serve as the primary tool for fast and effective communication with their voters. Since platforms are considered vehicles for debating politics and directly engaging with elected representatives, [62] politicians cannot cease communicating important narratives or engaging with their supporters and critics. Finally, social media sometimes remain the only opportunity for opposition politicians to express their opinions. Thus, they are obliged to inform their voters in the best available way, including via social networks. The other option faces the same problem of killing any communication between voters and elected representatives, candidates or simply public influencers. Fearing fines or even criminal liability, however, individuals will be more inclined to restrict their speech than to risk punishment. And this is called the chilling effect.

The chilling effect implies a negative impact [63] of restrictions, discouraging people from expressing themselves [64]. It forces users to self-censor protected speech, [65] fearing repression after any publication [66]. Overblocking adversely affects public discourse, [67] decreasing individual users’ activity on social media [68]. According to Navalnyy v Russia, the chilling effect is amplified when well-known public figures are targeted [69]. For instance, the US Republican Moore deactivated his Twitter account, citing “the censorship of conservative voices he saw happening”, [70] which impaired public discourse within society as a whole [71]. Socially engaged influencers experience the chilling effect out of fear of losing viewership and revenue, [72] as has repeatedly happened in China [73]. Finally, it affects vulnerable groups disproportionately, [74] having a significant potential to dissuade opposition supporters. By diminishing political discussion, [75] self-censorship produces a less informed democratic public [76]. Accordingly, the self-censorship of politicians can hardly be considered a desirable outcome of imposing an obligation to monitor the comments on their Facebook pages. Yet that is precisely where the present discussion leads.

Conclusions

The analysis of Sanchez v France in the light of recent tendencies in the field of intermediary liability offers a rather pessimistic outlook. First and foremost, imposing responsibility for third-party content on a person who neither exercises editorial control nor provides services for speech is in itself rather odd. Yet even if we consider it acceptable under freedom of expression, numerous pitfalls remain: the obligation to qualify content in place of a court, potential mistakes in that process, misuse of the opportunity to remove comments, and the absence of the physical capacity to do so. Moreover, dangers also arise for the consistency of the approach to intermediary issues in international human rights law. The judgement now differs significantly from the approach taken by the CJEU, bringing a dose of chaos to the standards in this sphere.

In fact, this judgement has been heavily criticised by numerous NGOs and independent experts, and even drew a dissenting opinion from within the ECtHR itself. Although this cannot produce a fundamental shift in the unreasonable position, such a prompt and consolidated societal response is at least emblematic of the wrong path chosen by the Court. And further reaction is to be expected.

In conclusion, there is a strong need to harmonise the approaches, avoiding excessive obligations for actors who are physically and technically unable to cope with them. It is hard to imagine how the ECtHR envisaged the enforcement of such a decision when applied to public figures with even more followers. Moreover, the Court also abstained from assessing the risks for account holders in cases of intentional attacks, dubious content or massive flows of information. And this has ultimately led to the very thing the ECtHR has always tried to avoid – a chilling effect upon free speech. There is indeed a strong hope that future decisions will overrule this approach and change the narrative; for now, however, the only thing left to say is: watch your comments!

[1] Voule C N, ‘UN expert welcomes landmark protection for online assembly’ <https://cutt.ly/HYOPIKm> accessed 11 December 2021

[2] Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App no 22947/13 (ECtHR, 2 February 2016), para 77

[3] Handyside v the United Kingdom App no 5493/72 (ECtHR, 7 December 1976), para 49

[4] Savva Terentyev v Russia App no 10692/09 (ECtHR, 28 August 2018), para 71; Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App no 22947/13 (ECtHR, 2 February 2016), para 77

[5] Grebneva and Alisimchik v Russia App no 8918/05 (ECtHR, 22 November 2016), para 52

[6] Prager and Oberschlick v Austria App no 15974/90 (ECtHR, 26 April 1995), para 38; Thoma v Luxembourg App no 38432/97 (ECtHR, 29 March 2001), para 45-46; Perna v Italy App no 48898/99 (ECtHR, 6 May 2003), para 39

[7] Dicle v Turkey App no 48621/07 (ECtHR, 16 June 2015), para 17

[8] Savva Terentyev v Russia App no 10692/09 (ECtHR, 28 August 2018), para 68

[9] German Act to Improve Enforcement of the Law in Social Networks 2017 (Network Enforcement Act)

[10] The EU Code of conduct on countering illegal hate speech online (May 2016) <https://cutt.ly/xYOGzkR> accessed 11 December 2021

[11] Facebook Platform Terms on Meta for Developers (31 August 2020) <https://developers.facebook.com/terms/dfc_platform_terms/> accessed 11 December 2021

[12] Developer Agreement and Policy Twitter (10 March 2010) <https://developer.twitter.com/en/developer-terms/agreement-and-policy> accessed 11 December 2021

[13] ‘General restrictions’ TikTok Developer Terms of Service (5 November 2021) <https://www.tiktok.com/legal/tik-tok-developer-terms-of-service?lang=en > accessed 11 December 2021

[14] Delfi AS v Estonia App no 64569/09 (ECtHR, 16 June 2015), paras 140-143

[15] Sanchez v France App no 45581/15 (ECtHR, 2 September 2021)

[16] Post of Julien Sanchez (24 October 2011) <https://cutt.ly/pYONyVJ> accessed 11 December 2021

[17] Atamanchuk v Russia App no 4439/11 (ECtHR, 11 February 2020), paras 8, 62, 70

[18] Budinova and Chaprazov v Bulgaria App no 12567/13 (ECtHR, 4 March 2021), paras 65, 93-94

[19] Savva Terentyev v Russia App no 10692/09 (ECtHR, 28 August 2018), para 76; Faurisson v France Communication no 550/1993 UN Doc CCPR/C/58/D/550/1993 (1996), para 9.6

[20] Ibragim Ibragimov and Others v Russia App nos 1413/08 and 28621/11 (ECtHR, 28 August 2018), para 94

[21] Delfi AS v Estonia App no 64569/09 (ECtHR, 16 June 2015), paras 140-143

[22] Ulvik M and Pavli D, ‘Case Watch: A Strasbourg Setback for Freedom of Expression in Europe’ (22 October 2013) <https://cutt.ly/pYONyVJ> accessed 11 December 2021

[23] Magyar Tartalomszolgáltatók Egyesülete and Index.hu Zrt v Hungary App no 22947/13 (ECtHR, 2 February 2016), para 69

[24] Pihl v Sweden App no 74742/14 (ECtHR, 7 February 2017), paras 31, 37

[25] Tamiz v The United Kingdom App no 3877/14 (ECtHR, 19 September 2017)

[26] Høiness v Norway App no 43624/14 (ECtHR, 19 March 2019), para 67

[27] Sanchez v France App no 45581/15 (ECtHR, 2 September 2021), Dissenting Opinion of Judge Mourou-Vikström

[28] C-324/09 L’Oréal SA and Others v eBay International AG and Others [2010] OJ C269/3, para 115

[29] Ibid, para 116

[30] CG v Facebook Ireland Ltd and McCloskey [2015] NIQB 11; Finkel v Facebook, Inc No 102578/09 (2009); Gaston v Facebook, Inc No 3:12-cv-0063 (2012); Klayman v Mark Zuckerberg and Facebook, Inc 753 F 3d 1354 (2014); Tetreau v Facebook, Inc No 10-4558-CZ (2011)

[31] Doe v MySpace, Inc 528 F.3d 413, 415 (5th Cir 2008)

[32] C-360/10 SABAM v Netlog NV [2012] OJ C98/6, para 27

[33] CG v Facebook Ireland Ltd and McCloskey [2015] NIQB 11

[34] Avdieieva T, ‘ A history of one blocking’ (25 January 2021) <https://cutt.ly/qYO4MWj> accessed 11 December 2021

[35] Wong Q, ‘Facebook faces complaints from more former content moderators in lawsuit’ (1 March 2019) <https://cutt.ly/GYPwU2T> accessed 11 December 2021

[36] Kilin v Russia App no 10271/12 (ECtHR, 11 May 2021), para 91

[37] Stepina A, ‘Four Russian bloggers who gained enough followers to compete with traditional media’ (October 2019) <https://cutt.ly/uYPevNx> accessed 11 December 2021

[38] B 8211-19 (Swedish Court of Appeal, 2020)

[39] ‘Top 10 US Social Media Influencers in Politics’ (July 2021) <https://cutt.ly/AYPrQAt> accessed 11 December 2021

[40]‘Top 100 Political Blogs and Websites’ (1 December 2021) <https://cutt.ly/dYPrFS5> accessed 11 December 2021; Goodwin A, Joseff K and Woolley S C, ‘Social Media Influencers and the 2020 U.S. Election: Paying ‘Regular People’ for Digital Campaign Communication’ (October 2020) <https://cutt.ly/0YPr3xh> accessed 11 December 2021

[41] Instagram post of Narendra Modi <https://cutt.ly/kYPr67m> accessed 11 December 2021

[42] Instagram post of Joko Widodo <https://cutt.ly/WYPtpij> accessed 11 December 2021

[43] Instagram post of Jair Bolsonaro <https://cutt.ly/NYPtdAV> accessed 11 December 2021

[44] Instagram post of Volodymyr Zelensky <https://cutt.ly/BYPtjXo> accessed 11 December 2021

[45]Jardine E, ‘ Online content moderation and the dark web: Policy responses to radicalizing hate speech and malicious content on the darknet’ (2019) <https://cutt.ly/dYSTWfc> accessed 11 December 2021; Ma R and Kou Y, ‘”How advertiser-friendly is my video?”: YouTuber’s Socioeconomic Interactions with Algorithmic Content Moderation’ (18 October 2021) <https://cutt.ly/ZYST229> accessed 11 December 2021

[46] Jardine E, ‘Online content moderation and the dark web: Policy responses to radicalizing hate speech and malicious content on the darknet’ (2019) <https://cutt.ly/dYSTWfc> accessed 11 December 2021; Caplan R and Gillespie T, ‘Tiered Governance and Demonetization: The Shifting Terms of Labor and Compensation in the Platform Economy’ (2020) <https://cutt.ly/dYSYOwS> accessed 11 December 2021, 1; Fredenburg J, ‘Youtube as an ally of convenience: the platform’s building and breaking with the LGBTQ+ community’ (21 April 2020) <https://cutt.ly/jYSYDTF> accessed 11 December 2021, 1

[47] Jakupovic R, ‘YouTube as a Career and a Marketing Tool’ (2019) <https://cutt.ly/iYSUpCj> accessed 11 December 2021, 9; Christin A, ‘The Drama of Metrics: Status, Spectacle, and Resistance Among YouTube Drama Creators’ (15 March 2021) <https://cutt.ly/sYSUEeY> accessed 11 December 2021, 3

[48] Avdieieva T, ‘ A history of one blocking’ (25 January 2021) <https://cutt.ly/qYO4MWj> accessed 11 December 2021

[49] Choudhury A, ‘How Facebook Is Complicit in Myanmar’s Attacks on Minorities’ (25 August 2020) <https://cutt.ly/lYSUV8w> accessed 11 December 2021

[50] Coynash H, ‘Decommunization trials in Lithuania and Ukraine, while Russia defends Soviet past’ (1 November 2017) <https://cutt.ly/rYSIVVm> accessed 11 December 2021

[51] Belkacem v Belgium App no 34367/14 (ECtHR, 27 June 2017)

[52] Koç and Tambas v Turkey App no 50934/99 (ECtHR, 21 March 2006), para 38

[53] Yordanova and Toshev v Bulgaria App no 5126/05 (ECtHR, 2 October 2012), para 52

[54] National Media Ltd and Others v Bogoshi, South Africa SCA (29 September 1998), para 35

[55] Medžlis Islamske Zajednice Brčko and Others v Bosnia and Herzegovina App no 17224/11 (ECtHR, 27 June 2017), para 87; Wojtas-Kaleta v Poland App no 20436/02 (ECtHR, 16 July 2009), para 46

[56] Kovach and Rosenstiel, The Elements of Journalism: What Newspeople Should Know and the Public Should Expect (1st ed, 2001) 37, paras 44-45; Wesley G. Peppet, An Ethics of News: A Reporter’s Search for Truth (Washington D.C.: 1989) 5

[57] Knight First Amendment Institute, et al. v Donald J. Trump, et al. F.3d 232, 237 (2nd Cir 2018)

[58] C‑18/18 Eva Glawischnig-Piesczek v Facebook Ireland Limited [2019] OJ 413, para 109

[59] Ibid, paras 55-56, 66-67

[60] Vincent J, ‘News sites are liable for defamatory Facebook comments, rules Australia’s High Court’ (8 September 2021) <https://cutt.ly/sYS0D24> accessed 11 December 2021

[61] Byrne E, ‘High Court finds media outlets are responsible for Facebook comments in Dylan Voller defamation case’ (8 September 2021) <https://cutt.ly/jYS2yE0> accessed 11 December 2021

[62] Grutzmacher v Howard County F.3d 332 (4th Cir 2017)

[63] Opinion on articles 216, 299, 301 and 314 of the Penal Code of Turkey, adopted by the Venice Commission at its 106th plenary session (CDL-AD(2016)002-e)(Venice, 11-12 March 2016), para 27

[64] Lingens v Austria App no 9815/82 (ECtHR, 8 July 1986), para 44; Karanicolas M, ‘Privatized Censorship – Developing Solutions to the Increasing Role of Platforms in Moderating Global Freedom of Expression’ (2019) <https://cutt.ly/9YS9T1o> accessed 11 December 2021, 8; Pech L, ‘The Concept of Chilling Effect: Its Untapped Potential to Better Protect Democracy, the Rule of Law, and Fundamental Rights in the EU’ (2021) <https://cutt.ly/RYS9Vff> accessed 11 December 2021, 4

[65] Vajnai v Hungary App no 33629/06 (ECtHR, 8 July 2008), para 54; Opinion on articles 216, 299, 301 and 314 of the Penal Code of Turkey, adopted by the Venice Commission at its 106th plenary session (CDL-AD(2016)002-e)(Venice, 11-12 March 2016) 31; PEN America, ‘Forbidden Feeds: Government Controls on Social Media’ (2018) <https://cutt.ly/UYS6rFD> accessed 11 December 2021, 24; 409

[66] Ong E, ‘Online Repression and Self-Censorship: Evidence from Southeast Asia’ (2019) <https://cutt.ly/jYDq8tm> accessed 11 December 2021, 7

[67] Husovec M, ‘(Ir)Responsible Legislature? Speech Risks under the EU’s Rules on Delegated Digital Enforcement’ (2021) <https://cutt.ly/cYDqXyv> accessed 11 December 2021, 3

[68] Matias J N, ‘Do Automated Legal Threats Reduce Freedom of Expression Online? Results from a Natural Experiment’ (2021) <https://cutt.ly/BYS31cK> accessed 11 December 2021, 27-28

[69] Navalnyy v Russia App nos 29580/12, 36847/12, 11252/13, 12317/13 and 43746/14 (ECtHR, 15 November 2018), para 152

[70] ‘Elected officials suspended or banned from social media platforms’ (2021) <https://cutt.ly/wYS6LKj> accessed 11 December 2021

[71] Matias J N, ‘Do Automated Legal Threats Reduce Freedom of Expression Online? Results from a Natural Experiment’ (2021) <https://cutt.ly/BYS31cK> accessed 11 December 2021, 1

[72] Ross Ph, ‘Demonetization on YouTube and the Visibility of News Produced by Non-Mainstream News Commentators’ (2020) <https://cutt.ly/SYS70cL> accessed 11 December 2021, 27; Kumar S, ‘The algorithmic dance: YouTube’s Adpocalypse and the gatekeeping of cultural content on digital platforms’ (2019) <https://cutt.ly/1YS3xjg> accessed 11 December 2021, 15

[73] PEN America, ‘Forbidden Feeds: Government Controls on Social Media’ (2018) <https://cutt.ly/UYS6rFD> accessed 11 December 2021, 24

[74] Matias J N, ‘Do Automated Legal Threats Reduce Freedom of Expression Online? Results from a Natural Experiment’ (2021) <https://cutt.ly/BYS31cK> accessed 11 December 2021, 3

[75] Cumpănă and Mazăre v Romania App no 33348/96 (ECtHR, 17 December 2004), paras 113, 114; Lingens v Austria App no 9815/82 (ECtHR, 8 July 1986), para 44; Altuğ Taner Akçam v Turkey App no 27520/07 (ECtHR, 25 January 2012); Lombardo and Others v Malta App no 7333/06 (ECtHR, 24 April 2007), para 61; Vajnai v Hungary App no 33629/06 (ECtHR, 8 July 2008) para 54; Karanicolas M, ‘Privatized Censorship – Developing Solutions to the Increasing Role of Platforms in Moderating Global Freedom of Expression’ (2019) <https://cutt.ly/9YS9T1o> accessed 11 December 2021, 8

[76] Matias J N, ‘Do Automated Legal Threats Reduce Freedom of Expression Online? Results from a Natural Experiment’ (2021) <https://cutt.ly/BYS31cK> accessed 11 December 2021, 3; Kumar S, ‘The algorithmic dance: YouTube’s Adpocalypse and the gatekeeping of cultural content on digital platforms’ (2019) <https://cutt.ly/1YS3xjg> accessed 11 December 2021, 15