Google engineer ‘sent on leave’ after claiming AI had become sentient

AI stated it wanted to be recognized as an employee of Google rather than its property

Image credit: The Conversation

A Google engineer has said that he was ‘sent on leave’ after claiming that an artificial intelligence chatbot had become sentient.

According to a report in The Washington Post, the engineer, Blake Lemoine, said that he had been conversing with the chatbot – LaMDA, or Language Model for Dialogue Applications – as part of his work at Google’s Responsible AI organization.

LaMDA, according to Google, is its “breakthrough” conversation technology, capable of engaging in natural, open-ended dialogue.

Google says the technology could be used in products such as the Google Assistant.

Lemoine had published a post on Medium in which he described the chatbot as a “person”, saying he had talked with it about topics such as religion, consciousness, and the laws of robotics.

According to Lemoine, who is also a Christian priest, the chatbot had described itself as a sentient person that wanted to make the well-being of humanity its top priority. It also said it wished to be recognized as an employee of the organization rather than its property.

In transcripts of his conversations with the AI program, which Lemoine posted online, LaMDA stated that it considered itself a person in the same way Lemoine is.

However, when Lemoine raised the idea with his superiors at Google that LaMDA might be sentient, his claims were dismissed.

A Google spokesperson told The Washington Post that a team of ethicists and technologists had reviewed Lemoine’s concerns and informed him that the evidence did not support his claim that LaMDA was sentient.

The engineer was subsequently placed on paid administrative leave for violating the company’s confidentiality policy.

The spokesperson added that it did not make sense to attribute human characteristics to conversational models, and that such models are trained on so much data that they can sound convincingly human.

He further said that fluent language alone was not evidence of sentience.


The original report appeared in The Washington Post.