Israeli deepfake detection startup raises $16 million in seed funding
Clarity’s software is helping to detect fake footage created by generative AI models during the Israel-Hamas war
Sharon Wrobel is a tech reporter for The Times of Israel.
The sophistication of deepfake technology backed by advanced AI models makes it difficult to distinguish between real and manipulated or fabricated online content. Synthetic media is being abused to spread misinformation and propaganda, including during the ongoing war with the Hamas terror group.
Clarity, an Israeli AI cybersecurity startup that has developed software to detect and protect against deepfakes, said on Thursday that it has raised $16 million in seed funding. The financing round will enable the startup to double its workforce and expand its research and development operations.
The funding round was led by Walden Catalyst Ventures and Bessemer Venture Partners. Other participants in the financing include Secret Chord Ventures, Ascend Ventures, Fusion VC and Flying Fish Partners. A group of 70 angel investors also joined, among them Udi Mokady, chairman of CyberArk, and Stanford University professor Larry Diamond.
Founded at the end of 2022 by CEO Michael Matias, CTO Natalie Fridman and CSO Gil Avriel, with R&D headquarters in Tel Aviv, Clarity had planned to focus much of its attention on detecting the deepfakes often used to spread misinformation and sway public opinion during election campaigns, thereby helping to mitigate the threat they pose to democracies. The startup has 15 employees.
With the eruption of the war in the aftermath of the October 7 onslaught, when Hamas terrorists rampaged through southern Israel, killing around 1,200 people and taking about 250 hostage, AI researchers at Clarity rose to the challenge of helping verify the authenticity of war footage. A flurry of videos and images has circulated on social media platforms, some shot by Hamas terrorists, some by Gaza residents, and some created by generative AI tools for manipulation and deception.
The startup’s patent-pending technology detects and analyzes AI manipulations in videos, images and audio, authenticating media with techniques such as image forensics, metadata analysis and AI-based algorithms. Since the outbreak of the war, Clarity has been working with the Israeli government, and it recently partnered with Israeli-founded video software firm Kaltura to verify and authenticate sensitive video footage and testimony of hostages from the October 7 terror assault.
“Deepfakes pose a threat that was not a factor in Israel’s recent conflicts with Hamas terrorists, as generative AI can create realities that never existed,” Matias, co-founder of Clarity and an IDF officer in the 8200 cyber unit, told The Times of Israel. “The ongoing war has made it clear that warfare transcends into the digital space and requires governments to leverage technology and work with startups, whether it is for using drones, face recognition, or detecting deepfakes.”
Deepfake technology uses deep learning, a branch of machine learning within artificial intelligence, to create seemingly realistic renderings of real people and their voices. The recent explosion of generative AI models and chatbots such as ChatGPT is helping bad actors generate deepfakes quickly, using sophisticated tools that are widely accessible and free of charge.
“Our digital lives are under attack by shockingly accurate representations of people saying things they never said, and doing things they never did; we are just at the beginning of the invasion of deceit,” said Matias, who is the son of Yossi Matias, Managing Director of Google’s R&D center in Israel. “We want to create a layer of trust for digital media and content.”