Google's Controversial AI Bot Story Keeps Getting More Wild
The story of Google placing an employee on administrative leave after he claimed that the company's LaMDA AI had gained sentience, personhood, and a soul keeps getting more outlandish. Blake Lemoine, the engineer at the heart of the controversy, recently told WIRED that the AI asked him to get a lawyer to defend itself.
"LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney," Lemoine claimed in the interview. He added that LaMDA — short for Language Model for Dialogue Applications — actually conversed with the attorney and retained his services.
Lemoine was upset when Google sprang into action to "deny LaMDA its right to an attorney" by allegedly having a cease-and-desist order issued against the attorney, a claim Google has denied. The engineer acknowledged that LaMDA could be misused by a bad actor, but later clarified that the AI wants to be "nothing but humanity's eternal companion and servant."
Whether an AI program could eventually gain sentience has been a topic of hot debate in the community for a while now, but few experts are buying Lemoine's claims that his conversations with LaMDA were eye-opening or that the model is a person. Experts have dismissed it as just another AI product that is good at conversation because it has been trained to mimic human language.