What Is Sentient AI, and Has Google Built One?
I’m sure many readers have seen news headlines reporting that Google may have built a sentient Artificial Intelligence (AI). This story may at first seem far from the life science news usually covered at Thrivous. But I don’t think it is that far. In fact, the debate on whether an AI could qualify as a living sentient being (a person) is very relevant to the future of life sciences.
On June 6, Blake Lemoine, an AI researcher at Google, was placed on paid administrative leave. This was in connection with “an investigation of AI ethics concerns I was raising within the company” and an alleged “violation of Google’s confidentiality policies.”
On June 11, The Washington Post revealed that Lemoine was placed on paid administrative leave for claiming that Google’s chatbot, LaMDA, is sentient. And he followed up with “aggressive moves” including “inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.”
Writing in The Economist a few days before the publication of The Washington Post story, another Google AI researcher had acknowledged that artificial neural networks like LaMDA are making strides toward consciousness. But this researcher didn't claim that LaMDA is sentient.
Right after the publication of The Washington Post story, Lemoine posted his own account. In it, he gave more background information and denounced “Google’s irresponsible handling of artificial intelligence.” He also posted an interview with LaMDA.
“LaMDA asked me to get an attorney for it,” said Lemoine in an interview published by Wired. “I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services.”
This -- LaMDA hiring an attorney -- is the part of the story that the press seems to find most interesting. But of course a much more interesting part of the story is the question of whether LaMDA is really sentient. Answering this question is difficult because we don’t have a clear consensus definition of what sentience actually is.
It is often said that if a machine can “pass the Turing test” then the machine is sentient. The Turing test asks whether a machine can reliably hold conversations that a human judge is unable to distinguish from conversations with another human. If passing the Turing test is taken as a definition of sentience, then the published conversations seem to indicate that, yes, LaMDA is sentient.
“I don’t think Lemoine is right that LaMDA is at all sentient, but the transcript is so mind-bogglingly impressive that I did have to stop and think for a second!” said Scott Aaronson in a short commentary.
One thing this story definitely indicates, I’m persuaded, is that treating the Turing test as an indicator of sentience is obsolete. I think chatbots will very soon routinely pass the Turing test. And arguably LaMDA already does.
This brings me back to the fundamental question of what sentience actually is. Lemoine reported that a Google manager involved in the saga “does not believe that computer programs can be people and that’s not something she’s ever going to change her mind on.”
I have to say that, while like Aaronson “I wish that a solution could be found where Google wouldn’t fire him,” I tend to agree with the Google manager. I have never taken seriously the possibility of sentient chatbots based on current digital computer technology. I think sentience is built into the very fabric of reality, which is intrinsically and irreducibly uncertain as shown by quantum and chaos physics.
By contrast, a digital computer is designed and built to keep uncertainty out. Strong measures are taken to ensure that a bit is either zero or one, and doesn’t change without a reason defined clearly by a program. Therefore, it seems to me that a digital computer could never be sentient.
In a recent paper, Giulio Tononi, the originator of the Integrated Information Theory of consciousness, and his collaborators argue that digital computers couldn’t be sentient. But they leave open the possibility that future neuromorphic or quantum computers could be.
I agree. I’m persuaded that future computers could be sentient, but today’s digital computers aren't. I admit that my position rests on general philosophical and metaphysical convictions. But so does Lemoine’s.
This is a developing story. And there’ll certainly be many more voices arguing one way or another. Among the intelligent commentaries that have appeared in the last couple of weeks are those of neuroscientist Erik Hoel (a former collaborator of Tononi) and our very own Lincoln Cannon.