Israel signs global treaty to address risks of artificial intelligence

The treaty, spearheaded by the Council of Europe and signed by 57 states, including the US, is a legal framework to tackle harmful and discriminatory outcomes of AI systems

Sharon Wrobel is a tech reporter for The Times of Israel.

Illustrative image of innovation, AI, machine learning and robots. (ipopba; iStock by Getty Images)

Israel has joined the United States, the United Kingdom, and several European Union countries in signing an international treaty on harnessing artificial intelligence for the common good while ensuring that democracy, human rights, and the rule of law are not undermined.

“Israel’s signing of the first global convention for artificial intelligence emphasizes our commitment to responsible innovation and human rights values,” said Innovation, Science and Technology Minister Gila Gamliel. “This convention places Israel as a full partner in the creation of international policy in the field in the coming years and will place us at the forefront of the world’s advanced countries.”

It is the first binding international treaty on AI. Signed by 57 countries, including the 46 Council of Europe member states, it now awaits ratification. Once it is ratified, signatory countries will be accountable for harmful and discriminatory outcomes of AI systems. The treaty comes as lawmakers around the world weigh how to regulate the technology and address the safety issues, misuse, and dangers it already poses.

The Council of Europe’s convention on AI, signed on Thursday, was created to set out global fundamental principles, rules, and standards to ensure that the deployment of the groundbreaking and fast-developing technology is compatible with human rights, democracy, and the rule of law.

The move comes amid concerns that Israel, in the midst of the ongoing war with the Hamas terror group, is falling behind the US and Europe in attracting investment in startups that develop AI-based technologies.

Artificial intelligence, the technology that gives computers the ability to learn from data, has been around since the 1950s. But over the last decade it has enjoyed a renaissance, made possible by the higher computational power of chips and the vast amount of data available online.

For illustration: The OpenAI logo is seen displayed on a cell phone with an image on a computer monitor generated by ChatGPT’s Dall-E text-to-image model, December 8, 2023, in Boston. (AP Photo/Michael Dwyer)

Advances in the field have enabled computers to analyze datasets and find useful patterns to solve problems, with the machine at times outwitting the human brain. AI and machine learning are already used for a wide range of applications, from facial recognition to the detection of diseases in medical images to global competitions in games and warfare.

Speaking at a conference in Tel Aviv this week, AI expert Prof. Amnon Shashua, co-founder of Israeli startup AI21 Labs, said that advances in generative AI for creating images, improving data research, and generating text are useful but do not yet constitute a revolution. AI21 Labs is a natural language processing (NLP) company that aims to bring generative AI to the masses and competes with OpenAI.

“It is a technological wave that has the potential to imitate human intelligence in solving complex problems in all areas of human knowledge, instead of human experts,” said Shashua. “Among those who are involved in working on AI advanced models, there is a sentiment that it is coming around the corner.”

“The very fast advancement of these models creates a sentiment that in the near future, these systems will not only be able to summarize articles but solve problems like a human expert, and then everything changes meaning that the relationship between us and machines is going to undergo a dramatic change,” he added.

Meanwhile, nations are investing huge amounts of money in the field, as it is expected to be the heart of technology going forward and a key to economic growth worldwide.

While AI technology is touted as having the potential to transform the world we live in over the coming years, many experts behind the development of AI systems have been warning that the big promise also carries catastrophic risks: it could do significant harm and undermine democracy.

Founder and CEO of US artificial intelligence company OpenAI Sam Altman speaks at the Tel Aviv University in the eponymous Israeli city, on June 5, 2023. (Jack Guez/AFP)

OpenAI CEO Sam Altman has been urging that government intervention will be critical to mitigating the risks of increasingly powerful AI systems. He has proposed an international monitoring regulator, along the lines of the United Nations nuclear agency, to crack down on harmful AI use.

Among the concerns is the potential abuse of AI in critical infrastructure, education, human resources, and public order. The rise of generative AI (GenAI), which can create complex content resembling human creativity, in tools like OpenAI’s ChatGPT, has already led to its use to create deepfakes, disseminate false and misleading claims about the Holocaust online, cheat on homework assignments, spread falsehoods, and violate copyright protections. Beyond that, the concern is that advanced AI systems could in the future manipulate humans into ceding control.

“Despite the current war and all the pain and challenges that it brings, we need to remain attuned to, and involved with, international processes and organizations,” said Deputy Attorney General Dr. Gilad Noam. “Just as importantly, we want to signal to our allies and friends across the globe that we share a common desire, namely: to foster innovation while protecting human rights.”

The treaty obligates the signatories to ensure that AI systems are not used to “undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice.”

The legal framework also imposes a general duty on states to monitor the outcomes of AI systems and take measures to protect human rights in accordance with international and national law, including ensuring that the privacy rights of individuals and their personal data are legally protected.

To comply with the standards of the treaty, the signatory states will be required to identify, evaluate, and minimize risks posed by AI systems, based on a risk assessment that weighs the practical and potential consequences for human rights, democracy, and the rule of law. However, the treaty does not list any sanctions or fines for non-compliance, raising questions about its enforceability.
