Terrorists try to sway opinion with events that didn't happen

These Israelis are fighting Hamas on the war’s emerging ‘deepfake’ cyberfront

Generated by artificial intelligence, false footage of the Israel-Gaza war has been inundating media outlets around the world since the October 7 Hamas massacres

Reporter at The Times of Israel

Founders of Israeli AI cybersecurity startup Clarity from left to right: CEO Michael Matias; Natalie Fridman, chief technology officer; and Gil Avriel, chief strategy officer. (Courtesy)

NEW YORK — When CBS News staff in Manhattan sifted through 1,000 videos submitted by people supposedly on the ground in Israel or Gaza last week, only 10 percent of the submissions were found to be usable.

Some of the 900 videos rejected by the staff at CBS were produced by so-called “deepfake” artificial intelligence (AI), a technology that has made headlines this year for its ability to fool people into believing that someone said something they did not.

CEO Wendy McMahon said in a statement that CBS was “quietly building out” capabilities to handle the deepfake pandemic, adding that some creators of deepfakes are doing so for the purpose of “misinformation.”

As the Israel Defense Forces prepare to enter Gaza, cyberspace will again be a key battleground for real-time decision-making and public opinion. This time, however, deepfakes pose a new threat that was not a factor in Israel’s recent conflicts with Hamas terrorists in Gaza.

To gain some sense of the issue, The Times of Israel spoke with Michael Matias, CEO of Clarity. The startup was founded a year ago to tackle the deepfake challenge by developing an “AI Collective Intelligence Engine” for use around the world. According to Matias, this line of warfare should not be underestimated.

“A dramatic super-evolution of misinformation is taking us by storm, and especially at war, it has a dramatic impact,” said Matias.

“Online platforms and publishers are not equipped to deal with this because they are using traditional moderation techniques, predominantly human moderators,” said Matias.

He spoke with The Times of Israel on Thursday night as he was about to board a plane for Israel from New York. For Matias, this war is deeply personal: Among the over 1,300 people murdered in last weekend’s Hamas massacres were several of his and his colleagues’ friends who were attending the Supernova festival. In addition, his girlfriend’s three brothers are currently deployed for battle with an elite unit.

An under-the-radar threat

Deepfakes have been making headlines for less than a year, so there is still confusion among the public about the AI technology behind them and how it can be used by bad actors, said Matias.

“Deepfakes are impersonations where you take somebody’s identity and make them do things and say things that they have not said or done,” explained Matias, who pointed to a recent deepfake clip of Ukraine’s president urging troops to put down their arms as one example of the AI-generated threat.

Earlier this year, deepfake footage was used to tamper with elections in Brazil and now in Britain, said Matias, and the technology has also been deployed in the US to “deepfake political leaders in order to change public perception,” he said.

In the assessment of one of Israel’s leading cybersecurity experts, veteran chief information security officer Eyal Sasson, the “attack vector of deepfakes is evolving at the pace of AI and requires AI capabilities to deal with it.”

“We’re already seeing financial fraud in banks and attempts to create social engineering at scale in political systems. It’s going to be everywhere,” Sasson said.

“It is imperative that organizations begin adopting tools to firewall from these advanced social engineering and phishing scams,” said Udi Mokady, chairman of information security company Cyberark.

No costs and no technological obstacles

Founded with partners at Stanford University and MIT, and with R&D headquartered in Israel, Clarity was planning to focus much of its attention on election systems around the world, the integrity of critical infrastructure, and the threat posed by deepfake manipulation, said Matias.

Needless to say, however, the sudden Israel-Gaza war changed the equation for Clarity, said Matias, whose training and professional experience are in offensive cybersecurity and AI.

One frightening aspect of deepfake technology is that anyone can create deepfakes from their own home, said Matias. In other words, there are no costs or technological obstacles to stop people from swaying public opinion by forging events that never took place.

Illustrative: OpenAI CEO Sam Altman speaks before a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence, Tuesday, May 16, 2023, on Capitol Hill in Washington. (AP/Patrick Semansky)

As acknowledged by McMahon of CBS on Thursday, media outlets are ill-prepared to distinguish between real footage and footage generated for “misinformation” purposes.

Although CBS is developing its capabilities to deal with deepfake challenges, said Matias, most media outlets underestimate the extent of the threat posed by AI-generated footage.

‘A deepfake revolution’

Hamas and its supporters have already used deepfakes in the current war, Matias said, and it will be tempting for pro-Palestinian deepfake creators to try to manipulate the political situation in the weeks ahead. (For security reasons, Matias said he could not disclose any examples of deepfakes that have been used.)

“When it comes to negotiations and hostages and these very delicate, identity-based situations, deepfake is a method in which you alter reality and make people think through social-engineering that something happened that really didn’t happen,” said Matias.

Illustrative: An artificial intelligence tool identifies elements of a human face for analysis (vchal via iStock by Getty Images)

Right now, said Matias, “The vast majority of deepfake videos come from user-generated content groups in platforms such as Telegram.”

“The videos reach millions of people and are highly viral. You can change the public discourse of what happens,” he said.

Matias said that the New York- and California-based Clarity has R&D staff in Israel that includes several of his friends from his IDF service in the army’s Unit 8200. The unit — the army’s largest — is comparable in function to the US National Security Agency.

Although the October 7 surprise massacres shook Israelis’ confidence in the army and security services, the Jewish state has been playing an outsized role in protecting cyberspace in general, Matias said.

“Israel has the best technology in the world to tackle the threat posed by deepfake,” said Matias. His company will soon announce a partnership to authenticate sensitive content and protect the public from falling victim to deepfake manipulators.

When asked if there was one thing readers should know about the deepfake threat, Matias said everyone who views footage of the Israel-Gaza war is responsible for making sure the media has not been manipulated.

“The cost of sharing a deepfake is part of the war, and it’s a global war on terrorism,” said Matias. “We are at the start of a deepfake revolution.”
