This summer, in the highly charged political environment of the US election campaign, a video showing House of Representatives Speaker Nancy Pelosi slurring her speech quickly went viral. It turned out the video was doctored — just one example of how technology can be abused to mislead the public.
The global reach of social networks means that brands, governments and individuals are now under constant threat from bad actors spreading malicious content and so-called fake news that can shatter reputations in minutes.
Inauthentic videos, false information and fake social media profiles are playing an increasingly disturbing role in politics and the shaping of public opinion. Some 82 percent of Americans said they were worried that fake news could influence the outcome of the 2020 presidential election, according to a Pew Research Center poll.
Israeli startup Cyabra is among a new crop of companies pioneering solutions to find and stop disinformation online. Cyabra, whose team includes veterans of the Israeli military’s cyber warfare unit, relies on image recognition, artificial intelligence, machine learning and algorithms that comb through millions of social media posts and profiles to identify suspect information online.
Ahead of recent elections in one country, Facebook and Twitter said they had identified several hundred fake accounts. Cyabra identified some 150,000 bad actors on those platforms trying to influence those same voters.
“We are creating the tools to try to root out these campaigns that snowball disinformation,” explained co-founder and CEO Dan Brahmy. “We want to reverse-engineer this problem of people putting information out there that is not real.”
The company, founded in 2018, has been commissioned by several governments — including the US State Department — major media organizations and multinational corporations to identify information that could cause harm or confusion.
Clients search for keywords, and within minutes Cyabra’s software examines around 250 metrics that identify bots, sock puppets and trolls, producing a report on what percentage of social media posts on the topic are associated with suspected fake profiles and other bad actors. It also identifies the key players orchestrating coordinated attacks by analyzing their authenticity and measuring their influence.
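Cyabra's actual metrics and models are proprietary, but the general idea of scoring profiles against behavioral signals and reporting what share of accounts look fake can be illustrated with a minimal sketch. The metrics, thresholds and weights below are hypothetical examples, not Cyabra's:

```python
# Hypothetical metric-based bot scoring, in the spirit of the approach
# described above. All signals, thresholds and weights are illustrative.
from dataclasses import dataclass


@dataclass
class Profile:
    account_age_days: int
    posts_per_day: float
    followers: int
    following: int
    has_profile_photo: bool


def bot_score(p: Profile) -> float:
    """Return a 0-1 suspicion score from a few illustrative signals."""
    score = 0.0
    if p.account_age_days < 30:        # very new account
        score += 0.3
    if p.posts_per_day > 50:           # inhuman posting rate
        score += 0.3
    if p.following > 0 and p.followers / p.following < 0.01:
        score += 0.2                   # follows many, followed by few
    if not p.has_profile_photo:
        score += 0.2
    return min(score, 1.0)


def fake_share(profiles: list[Profile], threshold: float = 0.5) -> float:
    """Percentage of profiles whose suspicion score meets the threshold."""
    flagged = sum(1 for p in profiles if bot_score(p) >= threshold)
    return 100.0 * flagged / len(profiles)
```

A real system would combine hundreds of such signals with machine-learned weights rather than hand-set rules, but the output shape is the same: a per-account score and an aggregate percentage of suspected fakes, like the figures Cyabra reports to clients.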
The company’s name, a combination of the words “cyber” and the magician’s expression “abracadabra,” emphasizes the immediacy of the results it produces.
Cyabra can also identify visual manipulations and detect deep-fake threats. Based on the information it gathers, the company can also suggest ways to counter the attack.
“For example, it can alert a news organization to the fact that 95 percent of posts about a certain topic are coming from accounts associated with trolls or bots, so the news organization can choose how to handle, or how to not handle, the topic,” Brahmy said. “We produce this quickly, because with such a short news cycle, there is only a short timeframe in which to react.”
In a recent run of its software, Cyabra found that more than 30 percent of the Twitter accounts associated with support for Azerbaijan in its ongoing conflict with Armenia were fake, and also traced many of these accounts to Turkey, a long-time political enemy of Armenia.
The service is also relevant to consumer brands. It can identify if misleading or damaging reviews about products are coming from real people or fake accounts.
Cyabra is not based on fact-checking statements, but on the patterns and origins of posts, Brahmy explained.
“We aren’t looking at words,” he said. “We are relying on a dataset of past and present engagement to identify suspected false or unreliable information that is not coming from real people.”
Beyond political disinformation, Brahmy’s team can advise commercial companies about the sources of online discussions about their products or brands — especially those that might damage their reputation.
The startup is now in the process of expanding its team from 10 to 30 people as it ramps up efforts to reach new clients around the world. Cyabra says it is just one of a range of tools clients can use to understand and evaluate online information.
“Our job is to help people wear better glasses so they can make decisions,” Brahmy said.
Social media companies have recently stepped up efforts to combat and prevent disinformation and fake news campaigns, often working alongside governments and tech startups. In the days following the tense 2020 US presidential election, platforms like Twitter and Facebook flagged and removed large amounts of content, sparking accusations of censorship from those who had posted it. And last summer, in its Deepfake Detection Challenge, Facebook produced and released thousands of deepfake videos — computer-generated clips of people doing and saying things they never actually did or said — so startup companies could use them as a dataset to train algorithms and develop systems to recognize future deepfakes.
Deepfakes, bots and other forms of disinformation remain one of society’s biggest challenges worldwide, with more voices calling for solutions.
“This is a threat that is dismantling our democracy,” Nina Jankowicz, disinformation fellow at The Wilson Center, a nonpartisan think tank in Washington, DC, and author of “How to Lose the Information War: Russia, Fake News and the Future of Conflict,” told a recent US House of Representatives hearing on the threat of online disinformation. “Disinformation is a threat to democracy, no matter what political party it benefits or whether it is foreign or domestic in its source, and it is long past due that the United States began to address this challenge to the very foundation of our country and its values.”
Brahmy admits the problem is a huge challenge, but an exciting one. “Today the disinformation world is still a blue ocean; there is a long way to go,” he said. “But we are trying to make a dent, and bring some clarity to the world.”