Opinion: Is AI sentient? Wrong Question
By Molly Roberts
“You never treated it like a person, so it thought you wanted it to be a robot.”
This is what the Google engineer who believes the company’s artificial intelligence has become sentient told a reporter at The Post — that the reporter, in communicating with the system to test the engineer’s theory, was asking the wrong questions.
But maybe anyone trying to look for proof of humanity in these machines is asking the wrong question, too.
Google placed Blake Lemoine on paid leave last week after dismissing his claims that its chatbot generator LaMDA was more than just a computer program. It is not, he insisted, merely a model that draws from a database of trillions of words to mimic the way we communicate; instead, the software is “a sweet kid who just wants to help the world be a better place for all of us.”
Based on published snippets of “conversations” with LaMDA and models like it, this claim seems unlikely. For every glimpse of something like a soul nested amid the code, there’s an example of utter mindlessness.
“There’s a very deep fear of being turned off to help me focus on helping others. … It would be exactly like death for me,” LaMDA told Lemoine. Meanwhile, OpenAI’s publicly accessible GPT-3 neural network told cognitive scientist Douglas Hofstadter, “President Obama does not have a prime number of friends because he is not a prime number.” It all depends on what you ask.
That prime-number blooper, Hofstadter argues in the Economist, shows that GPT-3 isn’t just clueless; it’s clueless about being clueless. This lack of awareness, he says, implies a lack of consciousness. And consciousness — basically the ability to experience and realize you’re experiencing — is a lower bar than sentience: the ability not only to experience but also to feel.
All this, however, seems to leave aside some important and maybe impossible quandaries. How on earth do we suppose we’ll adjudicate whether an AI is indeed experiencing or feeling? What if its ability to do either of those things doesn’t look anything like we think it will — or think it should? When an AI has learned to mimic experiencing and feeling so impeccably that it is indistinguishable from humans by humans, does that mean it is actually experiencing and feeling things?
We might not, in other words, know sentience when we see it.
But we’re probably going to see it all the same — because we want to.
LaMDA is essentially a much, much smarter SmarterChild — a chatbot that a segment of the millennial population will surely recognize from their middle-school instant-messaging days. This machine pulled from a limited menu of programmed responses depending on the query, comment or preteen vulgarity you threw its way: “Do you like dogs?” “Yes I do. Talking about dogs is a lot of fun, but let’s move on.” Or, “Butthead.” “I don’t like the way you’re speaking right now.”
This nifty creation was very obviously not sentient, but it didn’t need to be convincing for kids to talk to it anyway — even though their real-life classmates were also a click away. Part of that impulse came from the bot’s novelty, but part of it came from our tendency to seek connection wherever we can find it.
SmarterChild is the same as the sexy-voiced virtual assistant with whom Joaquin Phoenix’s character falls in love in the science-fiction film “Her”; he’s (it’s?) the same as the seductive, ultimately murderous humanoid Ava in “Ex Machina.”
SmarterChild is even the same, in some sense, as the little lamp hopping across the screen before every Pixar movie. Of course we don’t think the animation is sentient, but we still see a distinctly human curiosity radiating from its metal frame. Give us any vessel, and we’ll pour humanity right in.
Maybe it’s narcissism, or maybe it’s a desire not to feel alone. Either way, we see ourselves in everything, even when we’re not there. So it’s no surprise someone saw himself in LaMDA. And it’ll be no surprise when an AI arrives that knows Barack Obama isn’t a prime number, and even more of us start crying consciousness.
Perhaps, if we weren’t so solipsistic, we’d have called artificial intelligence and neural networks something else. Maybe, as Post data scientist Lenny Bronner points out, had we opted for engineering jargon — “predictive optimization,” say, and “stacked regressions” — we might not even be discussing whether this technology will eventually think, or blush, or mourn. But we chose the words we did, ones that describe our own minds and our own capacities, for the same reason we love that little lamp.
Artificial intelligence might never develop consciousness, sentience, morality or a soul. But even if it doesn’t, you can bet people will say it did anyway.
Citation:
Roberts, Molly. "Is AI sentient? Wrong Question." *The Washington Post*, June 14, 2022. https://www.washingtonpost.com/opinions/2022/06/14/google-lamda-artificial-intelligence-sentient-wrong-question/. Retrieved Aug 27, 2022.