Satire on Facebook: The Secret to Not Having Your Post Removed

June 13, 2023

“Your post has been removed for violating Community Standards.” You can get this kind of message from Facebook for a satirical post that mocks russia’s totalitarian leadership.

Ukrainians have a great sense of humor, and it helps them cope, at least in part, with the stress of the war. That is why memes about “good russians” and “death to executioners” spread endlessly across the Internet.

But while Ukrainians mock the occupiers in posts, Facebook removes them for “hate speech,” “incitement to hatred” and “violence,” citing its Community Standards. Most users never read these Standards. To understand how they work in practice, however, you need to read them from Facebook’s point of view: what its logic and motives are, and why it considers satire dangerous.

Experts of the Centre for Democracy and Rule of Law have repeatedly examined the logic behind Facebook’s actions. We have already covered the problem of blocking on Facebook and hate speech, analyzed the terms of use of social media, and reviewed the case of a removed post with a photo of a person killed in Bucha.

In this analysis, the Centre for Democracy and Rule of Law answers whether Facebook really fails to understand satire, identifies the pattern behind its removal of satirical posts, and explains how to reduce the likelihood of sanctions and how to appeal Facebook’s decisions.

What is satire and why is it important?

Right now, memories from high school and a teacher’s attempts to explain the difference between satire, sarcasm, irony and humor come to mind. Facebook is not going to explain that. It will delete the post right away or show a warning screen. So let’s briefly explain why Facebook is so afraid of satire.

Among the comedic genres, satire stands out in that it has certain boundaries and mocks people, society, and reality in a sociopolitical context. Sarcasm, by contrast, has no boundaries: it painfully ridicules a specific addressee and is openly negative and malicious.

That is why satirical posts on Facebook are not only the author’s personal opinion, but also a way of exposing a public problem and drawing attention to it. Journalists, politicians (especially opposition politicians), public activists and others like satire. It makes the thought livelier and sharper. But sometimes, where a journalist mocks the politics of a tyrant, Facebook sees hate speech and a call to violence and reacts accordingly.

“You have violated Community Standards…” or how Facebook maintains neutrality

On social media, recognizing satire is difficult, and understanding its intent without knowing the context is harder still. Meanwhile, Facebook removes dozens of posts almost every day, citing violations of its Community Standards.

A question arises: does Facebook really not understand satire in posts, or does it just want to remain a neutral platform? Explaining the logic behind why Facebook deletes such posts is difficult, but let’s try to find a pattern using examples.

On September 16, 2023, Facebook deleted several images from the humorous page Durdom (Nuthouse).

The first one shows the leaders of the aggressor states – Putin, Lukashenko and Medvediev – holding a nuclear bomb, with halos around their heads. The image is a satire on the tyrants’ speeches about peace, innocence, and the “right” religion. After the page appealed to Facebook and provided context, the post was restored.

Image taken from https://durdom.in.ua/uk/main/photo/photo_id/90905.phtml 

The second image shows Hitler addressing Joseph Stalin with a proposal to inspect Soviet military positions near Stalingrad. The author, however, did not intend to promote Nazism: the sarcasm was a response to proposals voiced by official Rashist representatives.

But this post was not recovered after an appeal, because Facebook had long ago put Hitler on the list of dangerous individuals and removes content with him by default.

Image taken from https://durdom.in.ua/uk/

Another example is the deleted post by journalist Vadym Karpiak with a sharp satire about the number of dead russian occupiers. Facebook mistakenly read the post as showing the author’s propensity for suicide. Ihor Rozkladai, chief expert on media law at CEDEM, appealed the sanction, and the post was restored.

“The complaint had to explain to Facebook the context and circumstances that preceded the post. The author published a sharp satire on the number of dead russian occupiers. In doing so, he alluded to Ivan Karpenko-Karyi’s classic play “One Hundred Thousand” and in no way glorified or encouraged suicide,” explained Ihor Rozkladai.

Image taken from https://www.facebook.com/vkarpiak 

Because of the high risk that Facebook will misread the satire and delete the post (as has happened before), a so-called “chilling effect” arises. That is, social media users are likely to stop posting satirical thoughts altogether – and this threatens freedom of speech and expression, which are necessary for the development of a democratic society and for the ability to hold important public discussions.

The European Court of Human Rights (hereafter, the ECHR) defends satire, even if it is somewhat offensive. Back in 2007, the ECHR considered the case Vereinigung Bildender Künstler v. Austria, which was not related to social media, but gave insight into the phenomenon of satire.

The case concerned an exhibition of paintings that depicted naked public figures involved in sexual activities. The images were caricatures of these figures using satirical elements. One of them filed a lawsuit seeking a ban on this and subsequent exhibitions and, at the same time, on the publication of the painting. The Austrian courts granted the claim. The author of the paintings appealed to the ECHR.

The ECHR found that “the prohibition on the display of the satirical picture amounted to an interference with the applicant’s right to freedom of expression”. In addition, the court made the following findings:

  • Satire is a form of artistic expression and social commentary and, by its inherent features of exaggeration and distortion of reality, naturally aims to provoke and agitate.
  • Satire should be tolerated as long as it does not concern any detail of a person’s private life (otherwise, it is no longer satire).
  • When prohibiting the expression of views (including by posting a satirical image), one must adhere to the principle of proportionality: Is the action so dangerous that it should be banned? Is it possible to achieve the goal without banning it?

Another case, Nikowitz and Verlagsgruppe News GmbH v. Austria, concerned an article about Austrian skiing champion Hermann Maier, who suffered a leg injury in a traffic accident. The article was conceived as a satirical essay on the response of Austrian society and media to the incident and ran under the headline ‘Hermann Maier. Austria is limping’. It was accompanied by a portrait of Mr. Maier and the caption ‘Hero Hermann’s leg is causing millions of Austrians pain’. In the article, the author used fictitious quotations from famous people about the situation – for example, an alleged quote from Maier’s competitor, the Austrian skier Stefan Eberharter: ‘Great, now I’ll win something at last. Hopefully the rotten dog will slip over on his crutches and break his other leg too’.

This quote created a negative image of Eberharter and threatened him with a loss of reputation. So he took his case to the Austrian courts. The author of the article was accused of defamation and fined. The author appealed to the ECHR to protect his rights.

The ECHR found that the author’s right to freedom of expression had been violated, because:

  • The Austrian courts’ interference with the author’s right to freedom of expression was unnecessary and inappropriate.
  • The ECHR examined the context of the article, its satirical nature and its social significance, namely the development of the society’s attitude towards sports stars.
  • The ECHR took into account that Profil Magazine had always been dealing with society matters, and that the author Rainer Nikowitz was known for his satirical style of columns.
  • One must take into account the criterion of a “reasonable reader,” who was able to understand from the context of the article and the author’s style that it was written satirically, was intended as a humorous commentary and amounted to a value judgment.

These findings can and should be applied today when it comes to removing satirical posts on Facebook for violating Community Standards. Now is the time to analyze them and look at the situation from the social network’s perspective.

Community Standards and their scope

The goal of the Facebook Standards is to create a safe environment for self-expression and to give everyone, regardless of social standing or worldview, the opportunity to express themselves through written comments, photos, music or other creative means, even if others do not share that opinion or find it controversial.

If we look at the situation from the perspective of Facebook (or more precisely, its owner company Meta), the platform wants to remain a neutral, safe and peaceful place, not a platform for heated political discussions that can degenerate into hate speech, violence and intimidation. As of today, Facebook has somewhat relaxed its policy on removing posts and has recognized legitimate exceptions when a post does not fully comply with the Standards, but is “worth publishing and in the public interest”.

This “relaxation” of the policy is associated with the removal of the “Napalm Girl” photo.

“Napalm Girl,” photo by Nick Ut. Source: https://aam.com.ua/2022/05/30/foto-yake-spynylo-vijnu/

The photo was taken in 1972 in the Vietnamese village of Trảng Bàng, where aircraft attacked civilians with napalm bombs. One of the residents was a 9-year-old girl, Kim Phuc, who, in pain, threw off her burning clothes as she was running. Photojournalist Nick Ut captured the moment immediately after the attack. Almost 45 years after the event, the photo was published on Facebook – and in October 2016, the social network removed it because it violated the Standards (for which it received criticism from around the world). After that, Facebook revised its policy and introduced a system for evaluating the significance of the news.

In order to assess whether a post is “worthy of publication and in the public interest,” a rigorous review is required. Facebook experts weigh the public interest on the one hand and the risk of harm on the other. Of course, the assessment of significance can be quite subjective. Users often disagree with one decision or another. But in order to ensure the safety of the community and its openness to self-expression, it is necessary to assess:

  • Whether the post poses an immediate threat to public health or safety;
  • Whether it reflects sentiments currently being debated in the political process;
  • The circumstances in a particular country (for example, whether an election is ongoing or the country is at war);
  • The political system of the state, in particular whether it has a free press.

Therefore, if a post could create a risk of harm (physical, emotional and financial) or a direct threat to public safety – it will be deleted, even if it has a certain degree of significance.

At the same time, Facebook can allow “sensitive” and “disturbing” content, but with a warning screen. The social network provides an explanation using the example of a post on the page of the Ministry of Defence of Ukraine about the number of dead occupiers, which includes a video of an unidentified charred body.

Image borrowed from https://transparency.fb.com/uk-ua/features/approach-to-newsworthy-content/ 

Although Facebook usually removes such content (in accordance with its violence policy), this video meets the criteria for newsworthiness because it documents an ongoing armed conflict. The video is covered with a warning screen, however, and access is restricted to users over 18.

Therefore, the social significance of a post is its best safeguard against deletion. However, another question arises: who evaluates significance, and to whom should we explain what the post is about?

Identifying the context of the post: a procedure that does not exist

The answer seems obvious: to satisfy the interests of both users and Facebook, it is enough to understand the context of the post and the motives of the author. To avoid sanctions, then, the parties need to be “heard.” However, it is difficult to imagine at what point this should happen and whether the platform has the resources to consider the context of every satirical post.

The Facebook Oversight Board indicated that the Meta penalty system needs to be reformed because there is a problem: users are not always given the opportunity to explain the context of their post when addressing Meta.  

The Oversight Board’s analysis showed that nearly 80% of users with few warnings do not violate Facebook’s policy in the next 60 days. That is, most users respond well to warnings and explanations because they don’t want to violate the Standards.

To make context clearer, Facebook introduced a specific tool: satirical page tagging. As Meta explained, satirical pages are an opportunity for people to share social comments using humor, exaggeration and absurdity to prove a point.

Meta’s logic is that the tag helps people understand the context of the page, keeps them from taking the information at face value, lets them immediately see that a post contains satire, and stops them from spreading it as verified information.

The Facebook page image was borrowed from https://twitter.com/MetaNewsroom/status/1379887135998763014/photo/1 

Facebook is thus trying to make satire easier to identify, but its algorithms have not yet learned to do it reliably.

Scientists have taught artificial intelligence (hereinafter referred to as AI) to detect sarcasm in texts from social media. When analyzing a text message, the AI identifies keywords typical of sarcastic statements (such as “just,” “again,” “absolutely,” etc.), as well as the relationships between them. The AI then estimates the probability that a particular message is sarcastic.

Developing such a system is super difficult. After all, it is almost impossible to predict the development of thought, the cause-and-effect relationships, the associative series, the context in a particular society and specific trends – and indeed the intention that emerges in a particular person’s mind. Scientists themselves have noted that this AI is not perfect and cannot understand sarcasm in all cases.
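
To illustrate the keyword-based approach the researchers describe, here is a minimal, self-contained sketch in Python. The cue words come from the article’s own examples, but the weights, the bias and the whole model are invented for illustration; a real system would learn them from labelled data.

```python
import math
import re

# Hypothetical cue words and weights: the words ("just", "again",
# "absolutely") come from the article's examples, the weights are invented.
CUE_WEIGHTS = {"just": 0.6, "again": 0.5, "absolutely": 0.8}
BIAS = -1.0  # prior assumption: most messages are not sarcastic

def sarcasm_probability(message: str) -> float:
    """Sum the weights of cue words found in the message,
    then squash the score into a probability with a sigmoid."""
    tokens = re.findall(r"[a-z']+", message.lower())
    score = BIAS + sum(CUE_WEIGHTS.get(token, 0.0) for token in tokens)
    return 1.0 / (1.0 + math.exp(-score))

print(sarcasm_probability("Absolutely, just what we needed again"))  # ~0.71
print(sarcasm_probability("The meeting starts at noon"))             # ~0.27
```

Even this toy version shows why the task is hard: the same cue words appear in perfectly literal messages, so keyword counts alone can never settle intent.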

Although Facebook does not yet know how to detect satire, it does have algorithms to detect:

  • “Hate speech,” prohibiting, for example, derogatory remarks or images with unconditional statements about the behavior of violent criminals (particularly terrorists, murderers, members of hate or criminal organizations);
  • “Glorification or promotion of suicide or self-harm,” prohibiting the publication of content containing graphic images of self-harm.

So, to reduce the risk of a post being deleted, users should make their intentions clear so that the context is clear.

For example, to prevent your content from being labeled “hate speech,” Facebook asks that you give it as much context as possible in the post and explain why the post is relevant and what issue it raises.

In doing so, Facebook even emphasizes that humor and satire are allowed. At the same time, however thorough the explanations, Facebook is unambiguous about the prohibition of content that directly offends people based on race, ethnicity, religion, national origin, sexual orientation, sex, gender, gender identity, serious disability, disease or content that compares people to animals in an offensive way.

So the question remains: whose evaluation of a post does Facebook rely on when applying sanctions? And can AI solve the problem of identifying genuine violations and reading context?

Today, Facebook’s breach detection model rests on two pillars: technology and a team of reviewers.

Technology is an important part of how Facebook tries to balance the interests of the post author and the protection of Community Standards.

For example, a special filter is being developed that will help identify toxic statements and prevent such posts from being published. Facebook is also using AI to identify images and text identical to content already removed for violating the Community Standards.
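
The matching of re-uploaded content can be illustrated with hashing. The sketch below is only an assumption about how such a check could work in principle, not a description of Meta’s actual pipeline: removed posts are stored as hashes of normalized text, and new posts are compared against that set.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize case and whitespace, then hash, so that exact
    re-uploads of the same text produce the same fingerprint."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Hashes of posts previously removed for violating the Standards
# (the example post is invented).
removed_fingerprints = {fingerprint("Example post removed earlier")}

def is_known_violation(text: str) -> bool:
    return fingerprint(text) in removed_fingerprints

print(is_known_violation("Example   POST removed earlier"))  # True
print(is_known_violation("A completely different post"))     # False
```

For images, exact hashes break as soon as a single pixel changes; Meta has open-sourced perceptual hashing algorithms such as PDQ that tolerate small edits, which is closer to what identifying “identical” images requires in practice.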

AI continues to learn and become more efficient. So the likelihood that it will soon identify satire in posts and analyze whether it violates the standards is very high.

In his commentary for CEDEM, Vitalii Miniailo, CEO of EON+, explains:

“To use AI for satire detection, Facebook needs to take the following steps:

  1. Data collection: It’s necessary to collect a large amount of data, including both satirical and non-satirical messages for model training.
  2. Model training: Based on this data, the GPT model can be trained to identify satirical content.
  3. Validation and adaptation: After training, the model should be tested on new data that it has not previously seen.
  4. Implementation: If validation results show that the model accurately identifies satire, it can be implemented for message analysis on Facebook.

It’s important to remember that even the most advanced AI models can make mistakes, so they should serve as auxiliary tools, not as the final arbiter.”
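
As an illustration of steps 1–4, here is a minimal sketch in Python that substitutes a simple bag-of-words classifier for the GPT model Vitalii Miniailo mentions; the tiny inline dataset is invented purely for demonstration.

```python
# A toy version of the four steps: collect labelled data, train a model,
# validate it on held-out examples, then use it only as an auxiliary signal.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1. Data collection: satirical (1) and non-satirical (0) messages,
#    invented here and repeated so the toy split has enough samples.
texts = [
    "Another flawless day of glorious leadership",
    "The city opens a new metro station tomorrow",
    "Sure, the economy is doing absolutely great again",
    "The report was published on Monday",
] * 10
labels = [1, 0, 1, 0] * 10

# 2. Model training on one part of the data.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0
)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# 3. Validation on data the model has not previously seen.
print("held-out accuracy:", model.score(X_test, y_test))

# 4. Implementation: score a new message; a human should still decide.
print(model.predict_proba(["Absolutely great news again"])[0][1])
```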

Alongside the artificial intelligence that detects violations automatically, there are human reviewers who can catch what the technology misses. These experts have extensive language and subject-matter expertise. When the technology misses a violation, thousands of reviewers take on the role of “overseeing” compliance with the Community Standards and examine the context (the specifics of relationships in a particular country, the political environment, etc.).

How do you prove your right to satire to Facebook?

If a user knows the Community Standards, understands their essence and published a satirical post with context, yet it was still deleted, it is worth exercising the right to appeal. This option was added relatively recently; previously, only the deletion of accounts, groups and pages could be appealed.

Facebook first notifies the user that the content will be removed, and the user has the option to file a complaint. The initial deletion notice has an “Appeal” option, after which the complaint will go to one of Facebook’s reviewers. The user has the right to an explanation as to why the post was identified as violating the Standards.

While the appeal process involving a Facebook reviewer is a positive step, do not expect a full hearing or extensive explanation. Most likely, the expert will review the request and let you know if the content complies with the community rules or not. The user may get different answers, but a rough idea is as follows:

  1. “Hi,

Thank you for bringing this to our attention.

We have checked the post content and found that it was removed by mistake. We have now restored the material.

Don’t hesitate to let us know if you have any questions.

Best regards,

Facebook team.”

  2. “Hi,

Thank you for bringing this content to our attention.

Our team has thoroughly investigated this content and found that it does violate our Community Standards: www.facebook.com/communitystandards 

If you have any additional information or would like us to review individual elements of the content, please reply to this message.

Best regards,

Facebook team.”

If the review team has reviewed the content twice and the user still does not agree with the decision, it can be appealed to the Oversight Board. Only the Oversight Board can render a final verdict. However, not all content and not all decisions regarding content can be appealed to the Oversight Board. It chooses the cases on which it considers it necessary to give its verdict. Therefore, although the appeal procedure does exist, its procedural stages and case assessment criteria are not disclosed.

Conclusions

In live communication, satire is fairly easy to recognize. But its use in Facebook posts can be interpreted by the platform as “hate speech,” “incitement to hatred,” “suicide propaganda,” and the like. Therefore, the removal of satirical posts is common.

Security concerns, however, must coexist harmoniously with the right to self-expression. Satirical self-expression receives special scrutiny from Facebook, because the platform does not always recognize context and prefers to delete such posts.

Facebook would do a better job of regulating satirical posts if it could:

  • Analyze the controversial post in combination with other posts on the user’s page. This would help determine whether a person is actually violating the Community Standards systematically or shows no tendency to harm themselves or others;
  • Involve more reviewers who understand certain specific political, social, cultural backgrounds of the satirical post;
  • Continue to train artificial intelligence so that it can objectively evaluate the intent behind a satirical post and the threats it may pose;
  • Give users the opportunity to explain the context of a post before it is removed, not just during the appeal process.

For their part, to make satirical posts less likely to be deleted, users should:

  • Know the Community Standards, their rationale and prohibited topics and try to avoid violating them;
  • In the post, try to give Facebook more context and the circumstances of the situation being described in a satirical style, explain why the post is worth publishing and is in the public interest;
  • Not neglect the right to appeal, building a body of practice that can push Facebook to revise and relax its Standards. You can appeal a deletion on your own or through CEDEM. Contact us at sn@cedem.org.ua.

The message should include the following information:

  • A link to the profile where the blocking occurred
  • Screenshots of messages from the support team regarding problematic content
  • If possible, the full text of the post/image

CEDEM will analyze the circumstances, and if it finds that there is a problem indicating algorithm vulnerability, incorrect decision, or a decision that may affect other users, your case will be sent to Meta via a special communication channel.

Following these tips will help establish effective communication between the platform and users and help strike a balance between freedom of expression and protection of Community Standards.