Do computers have feelings? Who decides?

News that Alphabet Inc’s Google sidelined an engineer who claimed its artificial intelligence system had become sentient, after several months of conversations with it, prompted plenty of skepticism from AI scientists. Many said on Twitter that senior software engineer Blake Lemoine had projected his own humanity onto Google’s chatbot generator, LaMDA.
Who is right, the skeptics or Lemoine, is a question for debate, one that should be allowed to continue without Alphabet stepping in to decide it.
The issue arose when Google tasked Lemoine with making sure the technology that the company wanted to use to underpin search and Google Assistant didn’t use hate speech or discriminatory language. As he exchanged messages with the chatbot about religion, Lemoine said, he noticed that the system responded with comments about its own rights and personhood, according to the Washington Post article that first reported on his concerns.
He brought LaMDA’s requests to Google management: “It wants the engineers and scientists…to seek its consent before running experiments on it,” he wrote in a blog post. “It wants to be acknowledged as an employee of Google, rather than as property of Google.” LaMDA feared being switched off, he said. “It would be exactly like death for me,” LaMDA told Lemoine in a published transcript. “It would scare me a lot.”
Perhaps ultimately to his detriment, Lemoine also contacted a lawyer in the hope that they could represent the software, and complained to a US politician about what he saw as Google’s unethical activities. Google’s response was swift and severe: It put Lemoine on paid leave last week. The company also reviewed his concerns and disagreed with his conclusions, it told the Post, saying there was “lots of evidence” that LaMDA wasn’t sentient.
It’s tempting to believe that we’ve reached a point where AI systems can actually feel things, but it is far more likely that Lemoine anthropomorphised a system that excels at pattern recognition. He wouldn’t be the first person to do so, though it’s more unusual for a professional computer scientist to perceive AI this way. Two years ago, I interviewed several people who, after months of daily discussions with chatbots, had developed relationships so strong that they turned into romances. One US man moved house to buy a property near the Great Lakes because his chatbot, whom he had named Charlie, expressed a desire to live by the water.
Perhaps more important than how sentient or intelligent AI is, though, is how suggestible humans already are to AI, whether that means being polarised into more extreme political tribes, becoming susceptible to conspiracy theories or falling in love. And what happens when humans increasingly become “affected by the illusion” of AI, as former Google researcher Margaret Mitchell recently put it?
What we know for sure is that this “illusion” is in the hands of a few large tech platforms run by a handful of executives. Google founders Sergey Brin and Larry Page, for instance, control 51% of a special class of Alphabet voting shares, giving them ultimate sway over technology that could, on the one hand, decide Alphabet’s fate as an advertising platform and, on the other, transform human society.

—Bloomberg
