Military sees surge in AI use, but not yet for critical missions

Artificial intelligence technology is still in its infancy, biased and unreliable; though its growth is explosive, it’s not ready for life-and-death decisions, experts warn

Illustrative 3D illustration of futuristic robotic walking drones in a desert (richard eppedio; iStock by Getty Images)

Military and defense systems are expected to see a huge surge in the use of artificial intelligence tools in the coming years to make sense of the massive amounts of data available and to make better-informed decisions. But experts are also warning that the technology is still in its infancy, and it will be a long time before it can be deployed completely safely for critical missions.

That was one of the key messages delivered by military and artificial intelligence experts at a conference on “Creating Insights into the Flood of Data,” held by Elta Systems Ltd., a unit of Israel Aerospace Industries Ltd., in Ashdod last week.

Artificial intelligence is a field that gives computers the ability to learn, and although the concept has been around since the 1950s, it is only now enjoying a resurgence made possible by the higher computational power of chips. The field has been growing at a compound annual rate of almost 63 percent since 2016 and is expected to be a $16 billion market by 2022, according to MarketsandMarkets, a research firm.

Artificial intelligence and machine learning are used today for a wide range of applications, from facial recognition to detection of diseases in medical images to global competitions in games such as chess and Go.

Israel Lupa, the VP and General Manager of Land and Naval Radars Systems Division at Israel Aerospace Industries, speaking at a conference on AI; Oct. 24, 2018 (Courtesy)

It will also play a huge role in the military and defense industries, said Israel Lupa, the VP and general manager of Land and Naval Radars Systems Division at IAI.

“We will see a massive surge in the deployment of AI technology in defense systems,” said Lupa at the conference.

“AI is at the heart of our operations,” he told the audience. Some 20% of the products IAI currently makes incorporate the technology; “it is growing at a crazy pace” and will be included in most systems within two years, he said.

The technology can be used in a variety of military systems for the detection, identification and classification of objects and people, he said, helping commanders make more informed decisions.

What IAI cannot develop in-house, he said, it plans to get via joint development with startups, mainly Israeli ones. “We cannot develop everything ourselves, behind our fences. We need to acquire or do joint ventures with startups.”

There are, however, problems inherent in the technology, he said, and it is crucial for these issues to be resolved for a more advanced stage of the AI revolution to take place.

Illustrative image of artificial intelligence (PhonlamaiPhoto; iStock by Getty Images)

The technology is highly sensitive to the information it is fed: if the data it is given is wrong, its conclusions will be wrong, making the machine biased. In addition, the technology cannot explain how it reaches its conclusions, which makes it harder for a human to trust it. Because of these two factors, the technology still needs to improve.

“AI is not good for life-and-death decisions yet,” he said. “We cannot fully trust it. When the machine is able to explain how it reached a decision, that will give it authority.”

In addition, machines still need enormous quantities of data to learn to identify objects or perform other tasks. “It takes a child one or two pictures to correctly identify a dog, but a machine requires many more,” he said.

Also, machines are not yet able to learn from experience once they are deployed in the field. “For this to happen, machines need to be able to think much more like humans,” he said.

Isaac Finkelshtein, the Imagery Intelligence manager at IAI’s Elta Systems, who organized the conference, said the growth in AI is “exponential,” yet the field is still in its very early stages, and efforts need to focus on getting the technology to provide data processing and analysis in real time, when the information is needed.

Prof. Shie Mannor, an expert in machine learning at the Technion Israel Institute of Technology (Courtesy)

The AI “ecosystem is ripe,” Prof. Shie Mannor, an expert in machine learning at the Technion Israel Institute of Technology, said at the conference and in emailed comments to The Times of Israel. But the deployment of the technology “still requires safeguards.”

“Artificial intelligence has huge potential to be implemented everywhere, but it is still at a very early stage,” said Mannor. Teaching a machine to recognize objects and tag them is time-consuming, and often there is not enough data available to feed into the system to make it work accurately.

Mannor gave as an example a picture of a banana, which a machine correctly recognized as such. But when a small sticker was placed near the banana, that same machine identified the banana and the sticker as a toaster. “This can open up opportunities for malicious attacks on machine learning algorithms,” he said in emailed comments to The Times of Israel.

“We have machine learning at our fingertips, but methodology is key and verification and validation are critical and needed throughout the process,” he said.

“We still cannot trust the machine to make that crucial distinction between terrorist and non-terrorist… We will see these problems in autonomous vehicles as they are deployed, with perhaps lawsuits to follow — and wherever the technology is brought to operate in the field.”
