'Hardware neural network' to make AI tech faster, cheaper

All-in-one chips seen boosting computer power for artificial intelligence needs

Researchers at Technion and chipmaker TowerJazz turn commercial chip into ‘revolutionary’ device with memory plus computing ability, emulating the brain

Shoshanna Solomon was The Times of Israel's Startups and Business reporter

Illustrative image of a robot, Artificial Intelligence (AI); (PhonlamaiPhoto; iStock by Getty Images)

Researchers at the Technion-Israel Institute of Technology and Israeli chipmaker TowerJazz said they have developed a “revolutionary” technology that transforms a commercial flash memory chip into a device that contains both memory and computing ability.

This will help provide the computing power needed for artificial intelligence-based applications, the researchers said.

The new device enables the creation of a “hardware neural network” inspired by the operation of the human brain, and will “significantly” accelerate the operation of AI-based computing, the Technion said in a statement.

“We have made a big jump forward” with just a small change, Prof. Shahar Kvatinsky of the Andrew & Erna Viterbi Faculty of Electrical Engineering at the Technion, who led the project, said in a phone interview. “We have taken an existing commercial technology and made a small change, transforming it into something that is very much needed.”

Published in Nature Electronics, the research was led by doctoral student Loai Danial and Prof. Kvatinsky in collaboration with Prof. Yakov Roizin and Dr. Evgeny Pikhay from TowerJazz and Prof. Ramez Daniel of the Faculty of Biomedical Engineering at the Technion.

Prof. Shahar Kvatinsky, left, and doctoral student Loai Danial of the Technion (Rami Shlush, Technion Spokesperson Department)

Computers’ ability to solve computational problems has always been superior to that of humans. Yet for decades, when it came to identifying images, classifying their attributes and making decisions, computers lagged behind.

In recent years, artificial intelligence has begun to narrow this gap: computers have been taught to carry out complex operations by training mathematical models on examples drawn from vast amounts of data, so-called big data. The investment of vast resources has generated a huge leap in the effectiveness of AI-based software developed for a variety of fields, including medicine, intelligent transportation, robotics and agriculture.

However, even as AI-based software performance has made strides forward, the hardware that enables these computers to perform the required tasks has lagged behind. Major breakthroughs in the field of AI now depend on dramatic improvements in computing power, in terms of speed, power consumption, accuracy and cost, among other factors, the researchers said in a statement.

Existing hardware not up to speed

Researchers and companies working with AI technologies have had to make do with off-the-shelf hardware that was not originally built for the massive amount of computational power required to perform these tasks. The available standard digital platforms, the chips and the processors, are “not suited for this,” Kvatinsky said. “The hardware available today is not good enough.”

Existing hardware can provide great accuracy for AI tasks, but only at the cost of extremely high energy consumption and a huge amount of time, the researchers said in the statement.

The “major challenges” faced by hardware engineers when addressing the needs of artificial intelligence computing, explained Kvatinsky, include how to implement complex algorithms that require storage of massive amounts of data; how to enable a rapid retrieval from memory; how to perform many computations in parallel; and how to maintain a high accuracy.

In their work, the researchers took one of TowerJazz’s commercial memory transistors, and by connecting two of the three “legs” of the component, they managed to transform it into a so-called memristor, a device that has both computing power and memory. This can be efficiently used as a “synapse” to connect artificial neurons in AI systems, Kvatinsky said.
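To illustrate the synapse analogy in general terms (this is not the researchers’ actual device model), a memristor’s conductance can be thought of as a stored weight: applying an input voltage produces an output current equal to the weight times the input, by Ohm’s law, so the same device both stores the weight and computes with it. A minimal sketch in Python, with all names and values hypothetical:

```python
# Minimal sketch of a memristor acting as an artificial synapse.
# All names and numbers are illustrative, not taken from the study.

class MemristorSynapse:
    def __init__(self, conductance):
        # The stored "weight" is the device's conductance (in siemens).
        self.conductance = conductance

    def apply(self, voltage):
        # Ohm's law: output current = conductance * input voltage.
        # The multiplication happens inside the device itself.
        return self.conductance * voltage

    def update(self, delta):
        # Training nudges the conductance, rewriting the weight in place.
        self.conductance = max(0.0, self.conductance + delta)

synapse = MemristorSynapse(conductance=2e-6)   # 2 microsiemens
print(synapse.apply(0.5))                      # current for a 0.5 V input
```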

Researchers globally are trying to develop memristors that make the computing process more efficient and suitable for AI processing, he said. However, these devices are still very expensive and exist only in labs. The Technion and TowerJazz study has shown that memristors can be made in a commercial chip-making plant, with currently available technologies, at no additional cost, he said.

In their study, the researchers then took their newly developed memristors and connected them, creating a hardware neural network that emulates the function of the brain.

“Just as the brain can perform millions of operations in parallel, our hardware is also capable of performing many operations in parallel,” said Kvatinsky.
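The parallelism comes from the way memristors can be wired into an array: every junction multiplies its stored weight by its input voltage at the same instant, and each output line sums the resulting currents, so a whole matrix-vector product is computed in one step. The NumPy sketch below only simulates that behaviour in software; the array sizes and values are arbitrary and hypothetical.

```python
# Software simulation of the parallel multiply-and-sum performed by a
# memristor array. In hardware, all junctions operate simultaneously;
# here the dot product merely emulates that behaviour.
import numpy as np

rng = np.random.default_rng(0)
conductances = rng.uniform(0.0, 1e-6, size=(4, 3))  # 4 inputs x 3 outputs, one memristor per junction
input_voltages = np.array([0.2, 0.5, 0.1, 0.8])      # voltages applied to the input rows

# Each output column collects the sum of (conductance * voltage) from
# every row: a full matrix-vector multiplication in a single step.
output_currents = input_voltages @ conductances
print(output_currents)
```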

“The new technology is easy to implement and transforms TowerJazz’s transistors, originally designed to store data only, into memristors — units that contain not only memory but also computing ability,” said Prof. Roizin of TowerJazz, in the statement.

Because the memristors are situated on existing TowerJazz transistors, they immediately interface with all the devices the transistors work with, he said.

“The new technology has been tested under real conditions, demonstrating that it can be implemented in building neural hardware networks, thus significantly improving the performance of commercial artificial intelligence systems. Like the brain, the improved system excels in its ability to store data over the long term and in its very low energy consumption.”
