A Google engineer has claimed that an artificial intelligence program he worked on for the tech giant has become sentient and is a “cute child”.
Blake Lemoine, who has been suspended by Google, said he came to his conclusion after discussions with LaMDA, the company’s AI chatbot generator.
The engineer told The Washington Post that the chatbot spoke about “personality” and “rights” during his conversations with it about religion.
Mr Lemoine tweeted that LaMDA also reads Twitter, saying: “It’s a bit narcissistic in a little kid kind of way so it will have a blast reading all that people are saying about it.”
He says he presented his findings to Google VP Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, but they dismissed his claims.
“LaMDA has been incredibly consistent in its communication of what it wants and what it believes are its rights as a person,” the engineer wrote on Medium.
He added that the AI “wants to be recognized as a Google employee rather than property”.
Mr Lemoine, who had been tasked with investigating whether the chatbot used discriminatory language or hate speech, says he is now on paid administrative leave after the company claimed he breached its confidentiality policy.
“Our team — including ethicists and technologists — reviewed Blake’s concerns in line with our AI principles and informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel told the Post.
“He was told that there was no evidence that LaMDA was sentient (and plenty of evidence against it).”
Critics say it’s a mistake to think AI is more than a pattern-recognition expert.
“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a ghost behind them,” Emily Bender, a linguistics professor at the University of Washington, told the paper.