Social media giant Facebook discussed on Monday the steps it was taking to secure Israel’s general elections in March, touting greater transparency requirements for political ads and pages on the platform, as well as increased efforts to curb the spread of fake news and disinformation.
The idea is to protect the elections from foreign and domestic interference, remove content that can do harm, and increase the transparency of the platform, said Fosco Riani, who is part of Facebook’s election team.
“We have learned from the 2016 elections” in the US, he told a roomful of journalists at Facebook’s offices in Tel Aviv. Today there are 35,000 people within the firm globally who focus on preserving the security of the platform, with 500 people working on elections, he said.
Facebook has come under fire over data protection and privacy issues and amid concerns that the leading social network has been manipulated by foreign interests for political purposes.
The platform is also being used to spread divisive or misleading information, as was the case during the 2016 election that put US President Donald Trump in the White House.
March 2 will see Israelis going to the polls for the third time in a year. In March last year, Facebook rolled out tools and restrictions in Israel to make political advertisements more transparent.
As part of these steps, the firm requires that all ads dealing with national or political issues carry clear information detailing who paid for them, and that the identity and location of the person or people behind them are verified. The ads will be stored for up to seven years in a publicly accessible library.
The firm has also increased the transparency of the pages on its platform, especially political pages, requiring information about when a page was set up, who its administrators are and whether its name has been changed, explained Elul Rifman, the government politics and advocacy partnership manager at the firm.
To prevent disinformation from spreading, the US firm has teams of experts who proactively search the network and remove from the platform posts that violate the company’s community standards, Riani said. The firm has taken down over 50 networks of bots in 2019, he said, and each takedown is publicized with details about who was behind them and the countries that were affected.
Facebook also uses humans and artificial intelligence tools to identify and block fake accounts. The firm removed 1.7 billion fake accounts globally in the third quarter of 2019, Riani said.
The firm also works to reduce the distribution of fake news by removing content that violates its community standards or provides misleading information regarding voting procedures, as well as content that “can contribute to real world violence and offline harm,” said Jessica Zucker, product policy manager at Facebook.
Content that is deemed problematic because “it undermines the authenticity and integrity” of the Facebook platform, but does not violate the community standards, is reduced and curtailed, she added.
“We dramatically reduce that distribution in the newsfeed,” she said. “So, in other words we will allow you to post it as a form of free expression, but we’re not going to show it at the top of your newsfeed. And this is really important because we found that when we reduce the distribution of false rated content, it dramatically reduces the number of people who come into contact with it, which is really key in helping us.”
She added that it’s not Facebook’s job to check the facts of what politicians say. “We don’t want to censor political discourse,” which would limit the ability of citizens to get access to what their politicians are saying. Fact-checking their statements would be a form of censorship, she said.
Even so, the firm admits that much more needs to be done.
“There is more to do and to continue to improve,” Riani said. The work cannot be done by a single team or even a single company, he said, and that is why Facebook works with local experts and academics to identify problems.
Indeed, the firm works with 54 fact-checking partners globally in 45 languages and 79 countries to check the accuracy of information that is posted on the platform, said Guido Buelow, of Facebook’s fact-checking team.
At the end of 2016 there were fact-checking teams only in the US, Germany, France and the Netherlands, he said.
The focus of the program is to detect viral misinformation through community feedback, tracking comments of disbelief and using machine learning tools to identify potential disinformation. The firm is also looking to promote news literacy among users, educating them on Facebook and Instagram on how to identify and report misinformation, he said.
In Israel, the company works with The Whistle, a fact-checking NGO that scours the network for fake news and reports it to Facebook. Like all of the firm’s fact checkers, The Whistle is also certified by the International Fact-Checking Network (IFCN).