Google engineer Blake Lemoine, who publicly claimed that the company’s LaMDA conversational artificial intelligence is sentient, has been fired, according to the Big Technology newsletter, which spoke to Lemoine. In June, Google placed Lemoine on paid administrative leave for breaching its confidentiality agreement after he contacted members of the government about his concerns and hired a lawyer to represent LaMDA.
In a statement emailed to The Verge on Friday, Google spokesperson Brian Gabriel confirmed the firing, saying, “We wish Blake all the best.” The company also said: “LaMDA has been through 11 different reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.” Google says it reviewed Lemoine’s claims “extensively” and found them to be “completely baseless.”
That aligns with many AI experts and ethicists, who have said his claims were more or less impossible given today’s technology. Lemoine claims his conversations with LaMDA’s chatbot led him to believe that it had become more than just a program and had its own thoughts and feelings, as opposed to merely producing conversation realistic enough to seem that way, which is what it is designed to do.
He argued that Google’s researchers should seek consent from LaMDA before running experiments on it (Lemoine himself was assigned to test whether the AI produced hate speech), and he published excerpts of his conversations with the chatbot as evidence.
The YouTube channel Computerphile has a nine-minute explainer on how LaMDA works and how it could generate responses that convinced Lemoine without actually being sentient.
Here is Google’s full statement, which also addresses Lemoine’s accusation that the company did not properly investigate his claims:
As we share in our AI Principles, we take the development of AI very seriously and remain committed to responsible innovation. LaMDA has been through 11 different reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development. If an employee shares concerns about our work, as Blake did, we review them extensively. We found Blake’s claims that LaMDA is sentient to be completely unfounded and worked with him for many months to clarify that. These discussions were part of the open culture that helps us innovate responsibly. So it’s regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, which include the need to safeguard product information. We will continue our careful development of language models, and we wish Blake all the best.