Israeli startup tracks behavior to outsmart hacker bots
Having won Tel Aviv U competition, cybersecurity firm Unbotify battles automated bandits to keep data safe and the news real
You might think of hackers as people sitting at computers, but custom software applications, or bots, can be the ones doing the dirty work. Bots automate the business of hacking, tearing through massive troves of stolen account data, for example, or bombarding website login pages with passwords, probing for hits.
Enter Unbotify, an Israeli tech startup that analyzes human behavior patterns to differentiate between bots and humans and weed out the fakers.
“Our claim is we are not raising the bar a little bit and waiting for the fraudsters to catch up” — as others do — said Eran Magril, vice president of product and operations. “We are looking at the data points which are the hardest for them to fake in order to go undetected.”
The company took first place at the 2017 Cyberstorm competition last month at Tel Aviv University. It was also ranked first among Israel’s most innovative companies in 2017 by Fast Company magazine. Its product uses behavioral biometrics — like how long keys are held down, how a mouse is moved and how a device is held — to determine whether the user is a person or a bot.
“We know if you are holding your device at a specific angle, and what happens if you tap your mobile device, how does this angle change?” Magril said. “This is a very granular kind of data: even if you’re just putting your phone on the table, it will still be sending data about the x, y, z [axes] of your device and how they change all the time from very small vibrations in the room.”
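The article does not detail Unbotify’s implementation, but a minimal sketch of the kind of telemetry Magril describes, key hold times, mouse movement and device-orientation readings, might look like the following; every name and field here is illustrative, not the company’s actual schema:

```python
# Hypothetical sketch of the behavioral signals described above: key hold
# times, mouse movement and device-orientation readings. Field and function
# names are illustrative, not Unbotify's actual schema.
from dataclasses import dataclass
from statistics import mean, stdev
from typing import List, Tuple


@dataclass
class SessionTelemetry:
    key_hold_ms: List[float]                     # how long each key was held down
    mouse_path: List[Tuple[float, float]]        # sampled (x, y) cursor positions
    accel_xyz: List[Tuple[float, float, float]]  # accelerometer samples, even at rest


def extract_features(t: SessionTelemetry) -> dict:
    """Turn raw telemetry into simple summary features a classifier could use."""
    # Human typing shows natural variation; scripted key events tend to be uniform.
    hold_jitter = stdev(t.key_hold_ms) if len(t.key_hold_ms) > 1 else 0.0

    # Total cursor travel: bots often jump straight to targets or skip the mouse entirely.
    path_len = sum(
        ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        for (x1, y1), (x2, y2) in zip(t.mouse_path, t.mouse_path[1:])
    )

    # Even a phone lying on a table picks up tiny vibrations on the x/y/z axes.
    accel_var = mean(
        stdev(axis) if len(axis) > 1 else 0.0
        for axis in zip(*t.accel_xyz)
    ) if t.accel_xyz else 0.0

    return {
        "key_hold_mean_ms": mean(t.key_hold_ms) if t.key_hold_ms else 0.0,
        "key_hold_jitter": hold_jitter,
        "mouse_path_len": path_len,
        "accel_variation": accel_var,
    }
```

In practice a detection model would weigh many such features at once, which is part of what makes the combined signal hard for a bot to spoof.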
The power of automating fraud
Bots are the preferred method for committing the most common kinds of online fraud, which can cost industries millions of dollars or sway public opinion on important issues.
Account data stolen in attacks on major corporations can be bought on the dark web and used to take over other accounts that use the same credentials. Those accounts can then be abused in myriad ways to cash out, including buying products with saved payment methods and stealing stored gift cards or air miles.
In one case, a bot was attempting to register new accounts with an online retailer. It continuously entered email addresses to see if any were already registered and built a database of those that were. It then tested common passwords against each registered address, taking over any account it managed to crack.
With an average success rate of two percent, Magril said, a hacker with one million sets of credentials can take over 20,000 accounts. “That’s the power of automation for fraudsters,” he said. “If they have automation they can operate on a big scale.”
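The arithmetic behind that claim is straightforward, as a quick back-of-the-envelope check shows:

```python
# Back-of-the-envelope check of the scale Magril describes: a 2% average
# success rate over one million stolen credential pairs.
stolen_credentials = 1_000_000
success_rate = 0.02  # share of credentials that still unlock an account elsewhere

print(int(stolen_credentials * success_rate))  # 20000 account takeovers
```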
Other common tactics include content scraping and advertising fraud. In scraping, a company uses bots to scan a competitor’s website for price changes and deals to gain an unfair advantage, or to copy content, such as an airline’s flight prices and availability, in order to sell tickets on a separate platform, diverting valuable traffic from the original seller’s site.
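Such a scraper can be little more than a scheduled fetch-and-parse loop. A hypothetical sketch, with the URL, timing and parsing as placeholders rather than any real bot’s code:

```python
# Hypothetical sketch of a price-scraping bot of the kind described above:
# it periodically fetches a competitor's public page and records the
# advertised price. The URL and the parsing regex are placeholders.
import re
import time
from typing import Optional

import requests

COMPETITOR_URL = "https://example.com/flights/TLV-JFK"  # placeholder target


def fetch_price() -> Optional[float]:
    resp = requests.get(COMPETITOR_URL, timeout=10)
    # Naive extraction: grab the first dollar amount on the page.
    match = re.search(r"\$(\d+(?:\.\d{2})?)", resp.text)
    return float(match.group(1)) if match else None


if __name__ == "__main__":
    while True:
        print("observed price:", fetch_price())
        time.sleep(3600)  # poll hourly; real scraping operations run at far larger scale
```

A loop like this produces none of the keystroke or sensor signals described above, which is the gap behavioral detection aims to exploit.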
Online ad fraud takes many forms, including bots that simulate traffic to websites, leaving advertisers paying to run ads that no real person sees or clicks. Some bots will even download and install the games and programs that advertisers pay platforms for. Such tactics cost the industry billions of dollars each year.
That money goes to hackers instead, who keep getting more sophisticated, said Magril. “This is also where the funding comes for developing new attack tools, for developing new bots,” he said. “Bots are always evolving because they have the incentive to evolve.”
Fake profiles and fake news
Bots are also used to create fake social media profiles that can flood specific countries and locales with legitimate or hoax news stories to influence public opinion. Fake profiles can ratchet up a public figure’s or company’s popularity on a given platform, then disappear on command, creating the illusion that the subject lost support.
“It’s a huge problem and everyone is talking about it, especially in the last year with the elections in the United States and France and other places,” Magril said.
Unbotify’s technology goes well beyond the leading detection and protection measures, he said, because machines can’t fake human behavior “in all its diversity and complexity.” The company’s 12 employees are also constantly adding new characteristics to what they analyze to keep hackers from knowing what needs to be mimicked.
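As a toy illustration of that idea, building on the telemetry sketch above: scripted input tends to be far more uniform than human input, so even a single variance check can expose a crude bot. The threshold here is invented purely for illustration:

```python
# Toy illustration of the "humans are hard to fake" idea: scripted keystrokes
# tend to be unnaturally uniform, so even a crude jitter threshold separates
# them from real typing. The threshold is invented for illustration only.
from statistics import stdev
from typing import List


def looks_automated(key_hold_ms: List[float], min_jitter_ms: float = 5.0) -> bool:
    """Flag a session whose key-hold times vary less than human typing ever does."""
    if len(key_hold_ms) < 2:
        return False  # not enough data to judge
    return stdev(key_hold_ms) < min_jitter_ms


# A replayed script holds every key for exactly 80 ms; a person never does.
print(looks_automated([80.0, 80.0, 80.0, 80.0]))         # True  -> likely bot
print(looks_automated([74.2, 91.7, 66.3, 103.9, 88.1]))  # False -> human-like
```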
Founded two years ago by Yaron Oliker and Alon Dayan, the company has raised some $2 million from Israel-based Maverick Ventures. Its chief data scientist is Yaacov Fernandess, whom Magril called a “world-class expert in machine learning,” one of only a handful in the world, he said. The company is headquartered in the northern Israeli town of Ramat Yishai.
While the current product targets automation only, the company has noticed that there are specific behavioral indicators that can identify a person who is creating fake accounts. Certain keystroke habits, for instance, might be common among people who repeatedly register new accounts, without the help of a bot. “We saw that analysis of behavioral biometrics can also be used to differentiate between different groups of people with different intentions,” Magril said.
For now, though, the company is focused on its core technology while looking to break into new markets. It has customers in the US and Europe and wants to expand its clientele to China.