Google has fired the engineer who claimed an artificial intelligence chatbot had gained consciousness.
Blake Lemoine raised ethical concerns with the company last month after he became convinced the AI chatbot he was working on had achieved some kind of consciousness and had a soul.
In a statement, Google said Lemoine’s claims were “wholly unfounded” and that it had worked with the engineer to clarify this. “So, it’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” the statement read.
Lemoine’s initial concerns made headlines around the world, as he claimed Google’s Language Model for Dialogue Applications (LaMDA) was sentient. He was suspended after making the claim to the Washington Post, a move which prompted conspiracy theorists to believe it was all part of a cover-up by the tech giant.
Lemoine published conversations he had had with LaMDA, which he said showed self-awareness, particularly when discussing religion, emotions and fears.
Several AI experts joined Google in denying the claim was even possible. They said LaMDA was not sophisticated enough to form any kind of consciousness.
Lemoine has said he is consulting his lawyers over the firing. He also revealed in June that he had taken the documents he says prove LaMDA’s sentience to an unidentified U.S. senator, claiming Google and its technology are guilty of religious discrimination.
Lemoine’s dismissal was first reported by the newsletter Big Technology.
Artificial intelligence is a crucial part of Google’s future, and the company has invested heavily in research in the field. The search giant has said it takes the responsible development of artificial intelligence “very seriously.”