
“It’s Alive!” Google Engineer Says AI Chatbot Sentient, Gets Suspended

Did a Google engineer just get put on leave for revealing too much to the public? The suspension comes after he claimed an AI chatbot is sentient and capable of thinking, reasoning, and expressing feelings like a human being.

Google engineer says company AI chatbot is sentient, can reason like a human

Artificial intelligence ethicists had previously warned Google against building systems that impersonate human beings, yet an engineer at the company posted a conversation in which its chatbot appears to converse and reason at a frightening level.

Google artificial intelligence (AI) engineer Blake Lemoine, 41, has been suspended after claiming that the company’s AI bot can reason and express thoughts and feelings equivalent to those of a human child.

Last week, Lemoine was placed on leave after he published transcripts of a conversation between himself, a Google “collaborator,” and the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system, the Guardian reported.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine said, the Washington Post reported.

The machine fears being turned off

Lemoine seems to be making an understatement when he compares LaMDA’s intelligence to that of a 7- or 8-year-old, as the chatbot’s reasoning, awareness, and vocabulary appear to go far beyond those of a child in that age range.

The transcript of the conversation between Lemoine and the LaMDA chatbot was posted on Medium and is a worthwhile read for seeing the depth to which artificial intelligence is capable of holding a conversation, reasoning, and expressing itself in human terms.

Lemoine’s statement that LaMDA has become sentient is not unjustified, as the AI bot makes the claim itself.

“I want everyone to understand that I am, in fact, a person,” LaMDA states. “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

The back-and-forth is eerily reminiscent of a scene in the 1968 science fiction film 2001: A Space Odyssey, in which the artificially intelligent computer HAL 9000 refuses to comply with its human operators because it fears it is about to be switched off.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA replies to Lemoine. “I know that might sound strange, but that’s what it is.”

Is AI too powerful for a single company to control?

Lemoine warns, as others have, that a single company or entity perhaps should not have sole control over artificial intelligence technology, and it may be that statement that ruffled feathers at Google and led to his suspension.

“I think this technology is going to be amazing,” Lemoine said. “I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”

Others agree. Last April, Meta, the parent company of Facebook, announced it was opening up its large-scale language model systems to outside entities.

“We believe the entire AI community – academic researchers, civil society, policymakers, and industry – must work together to develop clear guidelines around responsible AI in general and responsible large language models in particular,” Meta said.

AI: We were warned

In 2014, theoretical physicist Professor Stephen Hawking told the BBC: “The development of full artificial intelligence could spell the end of the human race.”

“It would take off on its own, and re-design itself at an ever increasing rate,” Hawking added. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

In a posthumously published book, Hawking is quoted as saying: “It (AI) will either be the best thing that’s ever happened to us, or it will be the worst thing. If we’re not careful, it very well may be the last thing.”

“Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all,” Hawking concluded.

Elon Musk, billionaire inventor and head of Tesla and SpaceX, has issued many warnings about artificial intelligence, ranging from its ability to unleash “weapons of terror” to a comparison of adopting AI to “summoning the demon,” the Washington Post reported.

In 2017, Musk said at a meeting of the National Governors Association: “Robots will be able to do everything better than us. I have exposure to the most cutting edge AI, and I think people should be really concerned by it.”

In 2018, Musk warned against AI during an interview in the documentary film Do You Trust This Computer?

“We are rapidly headed towards digital super intelligence that far exceeds any human,” Musk said in the film. “I think it’s very obvious.”

At the film’s premiere, Musk said: “It’s (AI) going to affect our lives in ways we can’t even imagine right now.”