Facebook instructs moderators to ignore Holocaust denial posted to its website unless it came from one of four countries — out of more than a dozen countries where it is illegal — and only then if it is reported, the UK Guardian newspaper reported Wednesday.
The four countries are France, Germany, Israel, and Austria, and the content is to be removed “not on grounds of taste, but because the company fears it might get sued,” the report said, citing training manuals for moderators at the social media giant.
Earlier this week the newspaper said it had been able to review more than 100 training documents over a period of months.
Facebook “does not welcome local law that stands as an obstacle to an open and connected world,” one manual says, recommending the removal or blocking of Holocaust content only when “we face the risk of getting blocked in a country or a legal risk.”
One example of content allowed in the other countries, where the social media giant does not fear legal action, was a post that said “Never again Believe the Lies” accompanied by a picture of a concentration camp, according to the report.
“We believe our geo-blocking policy balances our belief in free expression with the practical need to respect local laws in certain sovereign nations in order to remain unblocked and avoid legal liability. We will only use geo-blocking when a country has taken sufficient steps to demonstrate that the local legislation permits censorship in that specific case,” a training manual explains.
“Some 14 countries have legislation on their books prohibiting the expression of claims that the volume of death and severity of the Holocaust is overestimated. Less than half the countries with these laws actually pursue it. We block on report only in those countries that actively pursue the issue with us.”
Facebook “contested the figures but declined to elaborate,” the report said, referring to the claim that action against Holocaust denial content is taken only in some countries.
Monika Bickert, head of global policy management at Facebook, told the Guardian, “Not every team of employees is involved in enforcing our policies around locally illegal content. Whether reported by government entities or individual users, we remove content that violates our community standards.”
Facebook was aware of “the sensitivities around the issue of Holocaust denial in Germany and other countries and [we] have made sure that our reviewers are trained to be respectful of that sensitivity,” she added.
Other guidelines state that refugees and asylum seekers fall into a “quasi-protected category,” meaning they receive less stringent protection against online abuse than other vulnerable groups.
While any calls for violence against refugees must be removed, Facebook manuals advise that “as a quasi-protected category, they will not have the full protections of our hate speech policy because we want to allow people to have broad discussions on migrants and immigration which is a hot topic in upcoming elections.”
Comments such as “Fuck immigrant” and “Keep the horny migrant teenagers away from our daughters” do not need to be deleted although references to migrants that “equate them to other types of criminals, e.g. rapists, child molesters, murderers or terrorists” are not permitted, according to the report.
Anti-Muslim sentiment such as “All terrorists are Muslims” is permitted, but the comment “All Muslims are terrorists” should be flagged because, while terrorists are not a protected category, Muslims are, a manual explains.
“When context is ambiguous about whether a PC (protected category) or non-PC is being attacked, the default action is for reps to ignore,” Facebook training material instructs and gives as an example of content that can be left on the site a photograph of Syrian refugees and children in a swimming pool with the caption “The scum need to be eliminated.”
“Because it is ambiguous whether the caption is attacking Syrian refugees (PC) or perpetrators of sexual assault (or the subcategory Syrian refugees who commit sexual assault), the correct action is to ignore,” the manual explains.
On Tuesday European lawmakers approved legislation that would force social media platforms, including Facebook, Twitter and Google’s YouTube, to remove hate speech videos, Reuters reported. The measure still needs to be approved by the European Parliament before it becomes law.
In January, Israel’s so-called Facebook bill, which would allow the state to seek court orders to force the social media giant to remove certain content based on police recommendations, passed its first reading in the Knesset.
The bill was proposed by Public Security Minister Gilad Erdan and Justice Minister Ayelet Shaked in July, two weeks after the two met with Facebook officials in the Knesset. It is aimed at combating Palestinian incitement to terror against Israelis.
The government says the bill will only be invoked in cases of suspected incitement, where there is a real possibility that the material in question endangers the public or national security.
In December Google changed its search algorithm to deny prominence to Holocaust-denying websites. The company faced criticism after Digital Trends reported that searching for the query “Did the Holocaust happen?” gave top results from white supremacist and anti-Semitic pages, which asserted that it, in fact, did not.
Google had initially said it had no intention of removing or filtering search results, but the company subsequently announced that it had “made improvements to our algorithm that will help surface more high quality, credible content on the web.”