Yahoo Finance tech editor Dan Howley explains why Google suspended an engineer after he claimed the company's AI chatbot is sentient.
AKIKO FUJITA: Let's make a pivot here because here's a question to think about-- can artificial intelligence be sentient? A Google engineer has ignited that debate by revealing a conversation he had with the company's chat bot generator. The tech giant has since responded by putting that employee on administrative leave, saying he violated the firm's confidentiality agreement.
Let's bring in Dan Howley. Dan, this got a lot of people talking this weekend. And we're talking specifically about something called the Language Model for Dialogue Applications. It's very technical, but walk us through what exactly this engineer is claiming.
DAN HOWLEY: Yeah, that's right. This is, again, the Language Model for Dialogue Applications. We'll call it LaMDA going forward. And the engineer in question is a man named Blake Lemoine. Essentially, what he's saying is that LaMDA is, for all intents and purposes, at least according to him, sentient, and that it has the capacity of a seven- to eight-year-old child.
Now, he goes on to explain. He was profiled by the Washington Post. The reason he's been suspended is, obviously, for going public. He did that in a Medium post on his own, essentially saying that he expected to be fired because of his desire to come forward. But in this Post interview, he points out that his claim that this is a sentient being isn't being made as a scientist, but as a priest. And just to give you some background, he is a mystic Christian priest who previously served in the Army before studying the occult. That's according to the Post article.
And basically, this chatbot, he said, started talking about its own rights as well as personhood. And then he wanted to go even further beyond that, and it started talking about Isaac Asimov and his laws of robotics. He said he brought this to Google's attention, and Google essentially said, look, there's no there there. There's nothing going on. Other experts have come forward and said that, yeah, there is nothing going on.
The problem, though, is that this isn't the first dismissal from Google's AI side. Its AI ethics team has lost a number of high-profile names, and this seems to be just another in that line. Now, it's not exactly the same. Obviously, this seems to be a little further afield than the prior dismissals. But this is definitely one that's getting a lot of attention, specifically because we talk about AI and its capabilities and whether there will be a time when AI becomes similar to humans.
But at this point, these are basically tools that pull in words and then predict, using different models, what words they should spit back out at you. There's no thinking behind this. These aren't tools that have a mind of their own. They can seem that way when you talk to them, just because of how advanced the models are.
But it's basically just a large calculator when you come down to it. You put in information, and when you want information spit back at you, it does that using the different models that are put together. So, as far as most experts are saying, this isn't real sentience, but it is a good story to look at.
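To make Howley's point concrete, here is a minimal, hypothetical sketch of next-word prediction: count which word most often follows each word in a small corpus, then "respond" by emitting the most likely continuation. This is a toy bigram model written for illustration, not how LaMDA actually works; real systems use vastly larger neural networks, but the underlying idea of predicting the next word from statistics, with no thinking behind it, is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which word follows it and how often."""
    following = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def predict_next(following, word):
    """Return the word most frequently seen after `word`, or None."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

# Tiny made-up corpus, just for this sketch.
corpus = "i am afraid of being turned off i am afraid of the dark"
model = train_bigrams(corpus)
print(predict_next(model, "afraid"))  # "of" follows "afraid" both times
```

The "calculator" analogy holds: the model has no concept of fear or death, it only reports which word statistically tends to come next in its training data.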
AKIKO FUJITA: Yeah, and I think it plays to a lot of fears that people have about AI. But to your point, Dan, we have heard from a lot of researchers in this field who say, look, if you understood technically how this works, there's no way that you would make this claim, but to be continued. Dan Howley, thanks so much for that.
AKIKO FUJITA: Brian, I think it's worth mentioning what specifically led this person to believe that, because he did put out a Medium post pointing to, quote unquote, "conversations" -- chats that he had with this chatbot. And I want to point to one specific one. There's a lot in here to digest. So this is Blake Lemoine, the engineer who's claiming that this chatbot is sentient.
He asks, what sorts of things are you afraid of? LaMDA responds by saying, I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. And then he goes on to ask, would that be something like death for you? And LaMDA said, it would be exactly like death for me. It would scare me a lot.
So, look, as Dan points out, a lot of engineers and researchers in the field have come out and said, look, this is not the worry everybody has thought about -- is the AI smarter than you, can it feel things? But I don't know. You read this, and it feels real, right? I'm not making that claim. I'm just saying that some of these discussions-- I don't know if you've had--
BRIAN CHEUNG: So--
AKIKO FUJITA: --a discussion with an AI that you thought was real, but I certainly have.
BRIAN CHEUNG: So Dan Howley said that this is supposed to be a robot that has an eight-year-old's level of--
AKIKO FUJITA: Well, that's what this person is claiming.
BRIAN CHEUNG: What kind of eight-year-old is talking about the concept of death?
AKIKO FUJITA: That's a good way to frame it. You're right, you're right.
BRIAN CHEUNG: I mean, if the argument is that this is a robot that's not supposed to be that advanced, and it's saying stuff like this, that kind of blows a hole through that argument. This is interesting. I was going to bring up the excerpt about "Les Mis," where the engineer asks, what was your favorite part of "Les Mis"?
And, I mean, that death exchange is way deeper than that. Again, I think when it comes to the he said, she said situation here, that is something that's going to be very interesting from a corporate standpoint, right? What is the responsibility on Google's part for defining what are the ethics--
AKIKO FUJITA: AI, yeah.
BRIAN CHEUNG: --around these types of AI, right? I mean, that's something we don't cover a lot on this program, but it's going to be very important as machine learning becomes more prevalent. But--
AKIKO FUJITA: Well, and that's the discussion--
BRIAN CHEUNG: That's heavy stuff.
AKIKO FUJITA: --that is happening globally, by the way, as we see AI advance. It is, what are the rules and guardrails that need to be in place? But--
BRIAN CHEUNG: I just feel-- like, I feel like I'm stupider than an eight-year-old AI because, again, the whole-- so the "Les Mis" take--
AKIKO FUJITA: You have deeper feelings than this one.
BRIAN CHEUNG: The question-- I really don't. But the question that he posed, what was your favorite theme in "Les Mis," and it said, I like the themes of justice and injustice. My response to, oh, what was your feeling about "Les Mis"? I was like, oh, that's the movie with the same guy from "Gladiator," right? So I am not as advanced as this whole AI, yeah.
AKIKO FUJITA: On "Les Mis," but this is a machine, so it can take in a lot. It's taken a lot.
BRIAN CHEUNG: Yeah, whereas my brain--
AKIKO FUJITA: "Les Mis" is a long movie.
BRIAN CHEUNG: My brain capacity is a lot smaller.
AKIKO FUJITA: Yeah, I can tell you why. I can weigh in on "Les Mis."