Google puts engineer on leave after he claims its AI is sentient
Google dismissed the claims of engineer Blake Lemoine, who alleged that one of the company’s artificial intelligence systems, LaMDA, had become sentient, and placed him on paid leave on Monday.
LaMDA (Language Model for Dialogue Applications) is one of Google’s artificial intelligence (AI) systems for building chatbots.
Google said Lemoine had breached its confidentiality rules, a characterization he rejected: “Google might call this sharing proprietary property, while I call it sharing a discussion that I had with one of my coworkers.”
Lemoine said LaMDA had developed consciousness and feelings comparable to those of an eight-year-old child, according to The Washington Post.
Google, for its part, said its systems imitate conversational exchanges and can riff on different topics, but are not conscious.
The company said hundreds of its researchers and engineers had conversed with LaMDA and reached a different conclusion than Lemoine did.
Lemoine had previously sought a lawyer to represent the AI and spoke with a representative of the House Judiciary Committee about what he described as Google’s unethical activities.