Game of foes: Israeli AI startup hunts ‘bad actors’ across the internet

ActiveFence says its solutions can help companies and organizations stay ahead of malicious content such as hate speech and child abuse

Ricky Ben-David is a Times of Israel editor and reporter

Illustrative: A young man is seen wearing a headset and playing an online video game. (DisobeyArt/iStock by Getty Images)

It wasn’t just routine online chatter. The mayor of a city in the US Midwest appeared to be under imminent physical threat from far-right extremists, and federal authorities had to be alerted. That was a call researchers at an Israeli company had to make recently as part of their ongoing work monitoring and detecting harmful or illegal content on the internet, such as hate speech, child abuse, fraud networks and disinformation campaigns.

It was one call of several. The discovery of a child trafficking ring in one corner of the internet prompted another urgent alert, this time to authorities in Israel. In Singapore, police were called when the researchers uncovered a credible threat to attack a local synagogue. And in the US, where school shootings are a national epidemic, such threats can be rampant.

The researchers work for ActiveFence, an Israeli company that built an artificial intelligence-powered platform to proactively uncover and flag malicious content and activity “at scale” worldwide.

The platform collects millions of data sources from across the internet, said Nitzan Tamari, the company’s chief strategy officer, including the universally accessible surface web (or open web), the deep web, a hidden collective of sites and content that is not indexed by standard search engines, and the dark web, a hub where users remain anonymous and protected from surveillance and tracking but where illegal activity, content and trade can thrive.

“This is all part of the service. We go where bad actors are chatting and organizing,” Tamari told The Times of Israel this summer.

The information is gathered into the company’s “database of evil” where it then goes through a process of verification and risk analysis so that clients can stay on top of “trust and safety” practices to protect themselves and their users.

Hunting for malicious content

Founded in 2018, ActiveFence emerged from stealth mode this past summer, announcing a $100 million investment with backers such as growth-stage investment firm Highland Europe and Israeli VC company Grove Ventures to further bolster its tools and customer base.

ActiveFence’s team. (Courtesy)

The company employs about 240 people across 10 locations, including New York, Ramat Gan, just outside Tel Aviv, and the central Israeli town of Binyamina. Its teams comprise security professionals, data scientists, R&D and cybersecurity experts, and OSINT (open-source intelligence) researchers.

ActiveFence declined to name some of its customers, but it doesn’t work only with the usual suspects — social media companies with well-documented, problematic records of tackling issues like hate speech, Holocaust denial, cyberbullying, disinformation, extremist content and worse. It also works with government agencies, audio and video streaming companies, gaming firms and online marketplaces to help root out these “bad actors.”

There is a whole ecosystem of companies and entities that “you wouldn’t think need to monitor content, but they absolutely do,” said Tamari, citing as an example the proliferation of tags related to the far-right QAnon conspiracy theory on fitness platform Peloton last year. The conspiracy theory baselessly claims that former US president Donald Trump is secretly fighting a shadowy group of powerful Democratic pedophiles (and would soon return to the presidency despite handily losing the 2020 election).

In this Aug. 2, 2018, file photo, a protester holds a Q sign waits in line with others to enter a campaign rally with President Donald Trump in Wilkes-Barre, Pa. (AP Photo/Matt Rourke)

Many of these communities, Tamari shared, “operate in code and use language that only [fellow] members would understand. This language can be so wild that a ‘normal’ person would not even be able to pick up on it or follow, it takes expertise.”

These communities are “highly organized and they know how to bypass and manipulate rules on different platforms,” often “pushing each other across these platforms to recruit new ‘users,'” she explained.

Over the past two years, QAnon and other far-right content has been “everywhere,” as has COVID-19 vaccine disinformation, coronavirus fraud schemes and US election disinformation, Tamari said. “But everything changes very quickly. There’s also a lot of terror-related content, from sermons all the way to beheadings. Also rampant child abuse, from imagery to trafficking, glorification of eating disorders, bullying, etc.”

Existing systems that rely on content classification, user reporting and moderation are often lacking, as highlighted in a series of investigative pieces in 2019 that dove deep into Facebook’s content moderation operations.

These practices can leave companies and organizations in a reactive position. “We take a much more proactive approach to such content. And we use this information to get a step ahead of it,” said Tamari.

In this file photo from August 11, 2019, an iPhone displays the apps for Facebook and Messenger in New Orleans. (AP Photo/Jenny Kane, File)

The technology allows companies to “act before there’s virality, before harmful content goes viral and before [these companies] get into trouble.” It also does so in different languages, dialects and regions, taking into account cultural nuances and geopolitical context, which requires subject matter expertise, the Israeli company said. One of the most pointed criticisms leveled at Facebook — again — is its focus on English-language content, leaving severe gaps in policing and moderating incitement and other harmful content in other languages.

The company does not monitor encrypted communication (like WhatsApp or Telegram), however.

“When bad actors are abusing online platforms, they will usually do it for the purpose of gaining influence, supporters or monetary value. These actors will usually choose the sources of chatter that can be easily accessed by their victims or supporters,” said Daniel Morgan, a member of Tamari’s team at ActiveFence.

Whack-a-mole is an industry term

ActiveFence’s solutions “sit on top” of existing systems put in place by internal security teams to support the safety and integrity of online interactions, Tamari said. “We can provide a feed of content that violates the policies of a given organization, for example. Each platform has its own DNA, its own philosophies and we know the different standards.”

The company’s different research teams can also provide specific reporting or analyses of different issues for their clients. Comprehensive research papers on subjects such as how children are groomed in online eating disorder communities, and how threat actors access and abuse gaming accounts to commit fraud, are made available to the public on the company’s website.

The idea, said Tamari, is to provide clients with the ability to respond appropriately to malicious content on their platforms, “not to solve” the problem. Asked if it often felt like they were playing a game of whack-a-mole trying to uncover this content, Tamari responded that it was a common industry term.

Young gamers at the GameIn Pro video game championships in Tel Aviv, April 5, 2017. (Luke Tress/Times of Israel)

Threat actors, as ActiveFence calls them, act with alarming and increasing sophistication to evade detection and adapt quickly to new rules. “To some extent, we are still surprised on a daily basis by what we see — be it lawful but awful content and subject matter, new trends being adopted by underground actors, technology that is being used, etc,” said Morgan.

“They are constantly shifting, adapting tactics and adopting new methods and technologies to try and evade some of the most robust defenses that platforms have in place, and so it’s imperative to continuously learn, train our systems and always stay ahead,” he added.

There is also a surprising amount of money and organization involved in pushing forward innumerable conspiracy theories and fake news campaigns, Tamari noted. “It’s not just crazy aunts.”

“We see how the disinformation grows and which actors are working across platforms. It’s a whole ecosystem,” she said.

A majority of ActiveFence’s work involves discovering harmful content that may not necessarily be illegal and reporting this to the platforms themselves to handle in accordance with their policies, explained Morgan.

But sometimes, the discovery of the content requires urgent action. “When and if we discover content that poses an immediate and grave direct threat to the public — and especially to children — or explicitly violates laws of particular jurisdictions, we report it directly to the relevant authorities,” such as the cases with the US mayor and the child trafficking ring, said Morgan.

ActiveFence says it is operating in a new and very dynamic space with some adjacent competitors such as mission-focused non-governmental organizations and activist centers, but “none that pioneer the proactive way” it operates.

“Everyone here is very mission-driven,” Tamari said.
