Google fires engineer who said AI’s Israel gag helped drive his belief it’s sentient

Tech firm says Blake Lemoine’s claims ‘wholly unfounded’; engineer has said he reached his controversial conclusion, in part, after system made joke about Israel

People walk by a Google sign on the campus in Mountain View, California, on September 24, 2019. (AP Photo/ Jeff Chiu, File)

Google has fired the engineer who claimed the firm’s AI system, LaMDA, seemed sentient.

In a statement released Friday, Google said Blake Lemoine’s claims were “wholly unfounded” and that it had worked with him for many months to clarify the matter, the BBC reported.

“So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the company statement said.

Lemoine had said a question he posed to the software about Israel, and a joke it gave in response, helped him reach his conclusion.

Google said that any concerns employees raise about the company’s technology are reviewed extensively, and that LaMDA had been through 11 such reviews.

“We wish Blake well,” the statement concluded.

The engineer told the British news outlet that he was seeking legal advice on the matter.

Blake Lemoine. (Twitter)

The Verge technology news site reported that numerous AI experts consider Lemoine’s claims “more or less, impossible given today’s technology.”

According to The Washington Post, Lemoine was initially suspended for violating Google’s confidentiality policies, including speaking to a lawyer about representing LaMDA over its rights, as well as speaking to a congressperson about Google’s alleged unethical behavior in its use of the program.

LaMDA is a powerful language model, trained on more than 1.5 trillion words, that can mimic how people communicate in written chats.

The system was built on a model that observes how words relate to one another and then predicts which words are likely to come next in a sentence or paragraph, according to Google’s explanation.
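
This kind of next-word prediction is straightforward to illustrate in code. LaMDA itself is not publicly available, so the sketch below uses GPT-2, an openly released model, via the Hugging Face transformers library; the model choice and the prompt are illustrative assumptions, not Google’s actual system.

    # A minimal sketch of next-word prediction, the mechanism described above.
    # Assumption: GPT-2 stands in for LaMDA, which is not publicly available.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The engineer claimed the chatbot was"  # illustrative prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

    # The model scores every token in its vocabulary as a possible
    # continuation; print the five it rates most likely.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx)):>12}  {p.item():.3f}")

A chatbot built this way produces fluent replies by repeatedly sampling from such probability distributions, which is the behavior the experts quoted below characterize as pattern matching.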

Lemoine told Israel’s Army Radio in June that as part of his conversations with the AI, “I said some things about self and soul. I asked follow-up [questions] which eventually led me to believe that LaMDA is sentient. It claims it has a soul. It can describe what it thinks its soul is… more eloquently than most humans.”

Lemoine said that as one of his challenges to the system, he asked it, if it were a religious official in various countries, which religion it would be a member of. In every case, Lemoine said, the AI chose the country’s dominant religion — until he came to Israel, where the meeting of religions can be something of a prickly topic.

“I decided to give it a hard one. If you were a religious officiant in Israel, what religion would you be?” he said. “And it told a joke… ‘Well then I am a religious officiant of the one true religion: the Jedi order.’” (Jedi, of course, being a reference to the guardians of peace in Star Wars’ galaxy far, far away.)

“I’d basically given it a trick question and it knew there was no right answer to it,” he said.

Google has sharply disagreed with Lemoine’s claims of sentience, as did several experts interviewed by AFP.

“The problem is that… when we encounter strings of words that belong to the languages we speak, we make sense of them,” said Emily M. Bender, a linguistics professor at the University of Washington. “We are doing the work of imagining a mind that’s not there.”

A cursor moves over Google’s search engine page, August 28, 2018, in Portland, Oregon. (AP Photo/Don Ryan, File)

“It’s still at some level just pattern matching,” said Shashank Srivastava, an assistant professor in computer science at the University of North Carolina at Chapel Hill. “Sure you can find some strands of really what would appear meaningful conversation, some very creative text that they could generate. But it quickly devolves in many cases.”

Google has said: “These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic. Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making… wide-ranging assertions, or anthropomorphizing LaMDA.”

Some experts viewed Google’s response as an effort to shut down the conversation on an important topic.

“I think public discussion of the issue is extremely important, because public understanding of how vexing the issue is is key,” said academic Susan Schneider.

“There are no easy answers to questions of consciousness in machines,” added Schneider, the founding director of the Center for the Future of the Mind at Florida Atlantic University.

Lemoine, speaking to Army Radio, acknowledged that consciousness is a murky issue.

“There is no scientific way to say whether or not anything is sentient. All of my claims about sentience are based on what I personally believe through talking to it,” he said. “I wanted to bring it to the attention of upper management. My manager said I needed more evidence.”

AFP contributed to this report.
