(Image: Adobe)

A Google software engineer has been suspended after going public with his claim that an artificial intelligence (AI) has become sentient. It sounds like the premise of a science fiction movie -- and the evidence supporting his claim is about as flimsy as a film plot, too.

Engineer Blake Lemoine has spectacularly alleged that a Google chatbot, LaMDA (short for Language Model for Dialogue Applications), has gained sentience, and he has set out to do something about what he sees as its “unethical” treatment. After trying to hire a lawyer to represent the AI, speaking to a US representative about it and, finally, publishing a transcript of his conversations with the system, Lemoine has been placed on paid administrative leave by Google for violating the company’s confidentiality policy.

Google said its team of ethicists and technologists had reviewed and dismissed the claim that LaMDA is sentient: “The evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”