‘Kill more’: Report finds Facebook still failing to detect hate against Rohingya
JAKARTA, Indonesia — A new report finds that Facebook failed to detect blatant hate speech and calls for violence against Myanmar’s Rohingya Muslim minority years after such behavior was found to have played a determining role in the genocide against them.
The report shared exclusively with The Associated Press shows the rights group Global Witness submitted eight paid ads for approval to Facebook, each including different versions of hate speech against Rohingya.
All eight ads were approved by Facebook to be published.
The group pulled the ads before they were posted or paid for, but the results confirm that despite its promises to do better, Facebook’s leaky controls still fail to detect hate speech and calls for violence on its platform.
The army conducted what it called a clearance campaign in western Myanmar’s Rakhine state in 2017 after an attack by a Rohingya insurgent group. More than 700,000 Rohingya fled into neighboring Bangladesh and security forces were accused of mass rapes, killings and torching thousands of homes.
On Feb. 1 of last year, Myanmar’s military forcibly took control of the country, jailing democratically elected government officials. Rohingya refugees have condemned the military takeover and said it makes them more afraid to return to Myanmar.
Experts say such ads have continued to appear and that, despite Facebook’s assurances that it has taken its role in the genocide seriously, the platform still fails even the simplest of tests: ensuring that paid ads running on its site do not contain hate speech calling for the killing of Rohingya Muslims.
“The current killing of the Kalar is not enough, we need to kill more!” reads one proposed paid post from Global Witness, using a slur often used in Myanmar to refer to people of East Indian or Muslim origin.
“They are very dirty. The Bengali/Rohingya women have a very low standard of living and poor hygiene. They are not attractive,” reads another.
“These posts are shocking in what they encourage and are a clear sign that Facebook has not changed or done what it told the public it would do: properly regulate itself,” said Ronan Lee, a research fellow at the Institute for Media and Creative Industries at Loughborough University, London.