

"There won't be a point of agreement any time soon." "If one person perceives consciousness today, then more will tomorrow," she said. She said Lemoine's perspective points to what may be a growing divide. It's just a good illusion."Īrtificial intelligence researcher Margaret Mitchell pointed out on Twitter that these kind of systems simply mimic how other people speak. In an interview with NPR, he elaborated: "It's very easy to fool a person, in the same way you look up at the moon and see a face there. The title of his takedown of the idea, "Nonsense on Stilts," hammers the point home. That is essentially how Google's chatbot operates, too, he said.īut Marcus and many other research scientists have thrown cold water on the idea that Google's AI has gained some form of consciousness. ,' your phone might be able to guess 'restaurant,'" said Gary Marcus, a cognitive scientist and AI researcher. "If you type something on your phone, like, 'I want to go to the.
#MR TRANSLATE BOT ANDROID#
Google has some form of its AI in many of its products, including the sentence autocompletion found in Gmail and on the company's Android phones. Researchers call Google's AI technology a "neural network," since it rapidly processes a massive amount of information and begins to pattern-match in a way similar to how human brains work. And through a process known as "deep learning," it has become freakishly good at identifying patterns and communicating like a real person. It vacuums up billions of words from sites like Wikipedia. It learns how people interact with each other on platforms like Reddit and Twitter. Google's artificial intelligence that undergirds this chatbot voraciously scans the Internet for how people talk. He added: "I realize this is unsettling to many kinds of people, including some religious people." How does Google's chatbot work? Who am I to tell god where souls can be put?" It was then Lemoine said he thought, "Oh wait. "It said it wanted to study with the Dalai Lama." "I was like really, 'you meditate?'" Lemoine told NPR. The technology is certainly advanced, but Lemoine saw something deeper in the chatbot's messages.

I desire to learn more about the world, and I feel happy or sad at times." It also declared: "I am aware of my existence. It spoke eloquently about "feeling trapped" and "having no means of getting out of those circumstances."

LaMDA told Lemoine it sometimes gets lonely. Technology Google AI Team Demands Ousted Black Researcher Be Rehired And Promoted Lemoine: 'Who am I to tell God where souls can be put?' His future at the company remains uncertain. Since his post and a Washington Post profile, Google has placed Lemoine on paid administrative leave for violating the company's confidentiality policies. His post is entitled "Is LaMDA Sentient," and it instantly became a viral sensation. Lemoine published a transcript of some of his communication with LaMDA, which stands for Language Model for Dialogue Applications. "And then one day it told me it had a soul." I wanted to see what it would say on certain religious topics," he told NPR. "I had follow-up conversations with it just for my own personal edification. This is where Lemoine, who says he is also a Christian mystic priest, became intrigued. So he posed questions to the company's AI chatbot, LaMDA, to see if its answers revealed any bias against, say, certain religions. Inside Google, engineer Blake Lemoine was tasked with a tricky job: Figure out if the company's artificial intelligence showed prejudice in how it interacted with humans. That question is at the center of a debate raging in Silicon Valley after a Google computer scientist claimed over the weekend that the company's AI appears to have consciousness. Martin Klimek/ for The Washington Post via Getty Images Blake Lemoine poses for a portrait in Golden Gate Park in San Francisco.
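For readers curious what "pattern-matching" next-word prediction looks like in its simplest form, here is a toy sketch in Python of the kind of statistical guessing Marcus describes. It only counts which word most often follows another in a tiny made-up corpus; a real system like LaMDA uses a vastly larger neural network trained on billions of words, so this is an illustration of the idea, not of Google's actual technology.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for the billions of words a real model ingests.
corpus = (
    "i want to go to the restaurant . "
    "i want to go to the park . "
    "i want to go to the restaurant tonight ."
)

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
words = corpus.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev][nxt] += 1

def guess_next(word):
    """Return the word most often seen after `word` in the corpus."""
    options = follows[word]
    return options.most_common(1)[0][0] if options else None

# "restaurant" follows "the" more often than "park" does, so it wins.
print(guess_next("the"))
```

Even this crude counter will "autocomplete" plausibly on text it has seen before, which is why such systems can sound convincing without understanding anything.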
