Google’s AI ‘probably as developed as anybody else’s,’ professor says

CUNY Professor of Media Theory Douglas Rushkoff joins Yahoo Finance Live to discuss reports that Samsung Electronics will replace Google with Bing as the default search engine on its devices, tech companies announcing generative AI offerings, and the outlook for AI regulation.

Video Transcript

[MUSIC PLAYING]

- Google shares sliding this morning, after the "New York Times" reported that Samsung Electronics is considering replacing Google with Microsoft's Bing as the default search engine on its devices. If that happened, it could wipe roughly $3 billion in annual revenue from the company. Bing's threat to Google has grown in the last few months with the addition of AI technology.

Douglas Rushkoff, professor of media theory and digital economics at CUNY, joins us now. Good to see you this morning. So what is your initial take on this news, and how concerned should folks at Google be by Microsoft's advances?

DOUGLAS RUSHKOFF: I mean, it's interesting, right? So we see Google is suddenly concerned about AI regulation, now that Microsoft seems to be passing them with Bing. And God knows if Apple will do the same thing as Samsung with their phones or Safari browsers or everyone else.

But the first thing that I thought of-- I kind of felt bad, if one can feel bad for Google-- I kind of felt bad for Google, because they've been slow-rolling and thoughtfully developing their AI. I think they were almost pre-regulating themselves. Not taking one for the team, exactly, but doing things meticulously and carefully, so that they don't get any unintended consequences from the deployment of their AI.

And now, they're taking a big hit because of it. Internally, I think Google's AI is probably as developed as anybody else's. But as far as what they've deployed, I wonder sometimes if they're being punished because they were being more careful.

- Is regulation going to be able to anticipate what AI can do and act quickly enough, I guess, to get ahead of what that could mean, either in civics or for business as well?

DOUGLAS RUSHKOFF: I mean, it's interesting. Can regulation ever? Can regulation stay in front of capitalism? You know? Can it stay in front of technology, much less AI? AI does seem to accelerate things a bit more than plain old capitalism or plain old technology were able to do before. So I think that means any approach to regulation has to look at more fundamental questions, rather than trying to keep up with the latest thing. We have to look at fundamental core values in developing AI, rather than whatever the latest innovation might be.

- Well, my day started today reading this headline. Elon Musk plans artificial intelligence startup to rival OpenAI. How concerned should civilization be that Elon Musk might now be getting his hands on a potentially damaging AI platform? Oh, yeah, and he also owns Twitter.

DOUGLAS RUSHKOFF: And he also was the original investor in OpenAI, is the thing. You know? So he's going to try to come up with a technology to surpass himself. I mean, it feels to me like there are kind of two levels of AI critique.

There's the kind of science-fiction, tech-bro existential problem, you know, of Elon Musk or Peter Thiel or the people that signed that six-month moratorium letter, where they're worried about, oh my gosh, the fate of humanity. And some of those fears are a bit oversold. These are folks who've been dominating humanity and externalizing harm as far and wide as they can go. And now suddenly, they've woken up and said, oh my gosh, what if AIs do to us what we've been doing to everybody else?

What I worry about more is what AI is doing right now. You know, how are algorithms repressing or oppressing certain people today? What are the biases in the algorithms that are used to figure out whether you get a mortgage, how long you stay in jail, what your sentence is going to be, how we educate children? There's a lot of implementation of AI, right now at this moment, that we can look at.

So while Musk, you know, bless him and all these folks-- their business claims and their nightmare scenarios all feel to me more in the realm of sales than policy. In other words: look, my AI could wipe out humanity. Invest today, because it's so super powerful.

- Right. The suggestion, or at least the intention they're almost alluding to, is that their products, and those alone, could be the thing that fixes everything. But either way, looking at Google right now. Meanwhile, Google CEO Sundar Pichai says that concerns about artificial intelligence are keeping him up at night, warning that the technology can be very harmful if not used correctly. In an interview with "60 Minutes," Pichai also called for a global regulatory framework for artificial intelligence, similar to the treaties used to regulate nuclear arms use. All of these things considered, I want to get your read on not just what he had to say during the interview but the suggestion that there should be a consortium that does, in some facet or another, regulate, as we were talking about earlier.

DOUGLAS RUSHKOFF: Oh, yeah. Some sort of national or international agency that coordinated policy and regulation, you know, absolutely. I mean, regulation is a necessary framework.

I mean, whether people really follow it or not, and which nations would follow it and which wouldn't. You know, would Iran be in it? Would North Korea be in it? Not that they would necessarily have the most developed AIs, but if everybody else is kind of slowing themselves down, the ones who aren't generally run out in front.

I think another way to look at it is that raising AIs is a bit like raising children. They are going to listen to whatever is going on in the room. Little pitchers have big ears. You know? So AIs are being trained on us. Right?

The easiest way to regulate AIs and to change what AIs do is to change what we do. Right? So if our values are let's extract value from people and places as rapidly as possible, you know, let's take away people's rights, whether they know it or not, in order to get more money out of them, then that's what AIs are going to learn. That is the data set. That's the learning model. You know?

So no matter how we're regulating them, those are the values that they're going to take in. But I think what we have to start doing now is look at: well, if we now have tools that are going to accelerate whoever and whatever we are, then what do we want to be? Right? How do we want to behave in front of the technologies that we're now using to observe us and to accelerate our behaviors?

- Well, I think another thing that Sundar brought up, Douglas, is the potential loss in jobs really across many sectors because of AI's advances. How destructive do you think it is from that perspective?

DOUGLAS RUSHKOFF: Well, destructive or constructive? I mean, again, we've got to look-- AIs are forcing us to look at some of our fundamental values. The same way that when we get a new technology, we think, oh, how can we make cars better? Right?

So we make all these electric cars and things. Rather, we need to look at how we would make transportation better. Right? Do we want to make just lots of electric cars, or could we actually do something better?

Same thing when we're looking at AI and jobs. OK. How are we going to keep people in jobs? How are we going to give everyone jobs? We have to look and say, well, wait a minute, where did jobs come from?

And I know this sounds radical, but jobs are a fairly new human invention. They were invented in the 12th and 13th centuries, and they were a way of preventing people from having small businesses. If you really look at the history, where did jobs come from? That's what my degree is in, that sort of economics.

Not everybody needs a job. If we really did get AIs performing the work that needs to be done, do you really want a job? Deep down, is that what we want? We want work. We want to make meaningful contributions to the world, but if AIs could do the work, I'm OK having the fun. Right?

What we have to start looking at is how do we build an economy around-- if this is even happening-- how do we build an economy around a reality where we have AIs and machines doing work that human beings used to do? Are we OK with people working three days a week instead of five? Are we OK sharing what jobs there are? And in reality, there's plenty of work to be done. You know?

Try going to any emergency room today, and see how long it takes to get someone to take care of you. I mean, look at the elderly population and how much home care and health care and human aides we need. Look at the education system, how few teachers we have. So there's certainly plenty of work to be done. We just have to start opening our mental frame as to what sort of work we actually want to do.

- If data is the currency that drives perhaps advertising dollars and monetization and profit for social media, what would be then the currency for artificial intelligence that would drive profits for some of the large kind of tech companies that are really throwing their hat into the ring or announcing generative AI offerings?

DOUGLAS RUSHKOFF: I mean, it's interesting, isn't it? One is, obviously, data is still what they consume. Right? They're learning. You know?

The reason why ChatGPT is so far ahead of everyone else is it has two years more learning in it. In terms of what they provide, I mean, it's an interesting question. What is the metric by which we measure the value creation of AI? I think so far, really, it is still memory. You know?

Memory has really been the commodity of digital since Moore's law, if you really want to think about it. But really, all AI is doing is creating a rearview mirror and using past behavior to predict something in the future. It's not actually coming up with new ideas. It doesn't know how-- it doesn't even want to do that. It wants to come up with the most probable thing. Right?

It's not writing. It's not creating. It's looking and saying, what would the most probable response be, given what's happened before?

And if AIs are basically commodifying memory, commodifying probability, commodifying the most probable outcome, then what becomes valuable in that world is novelty. Right? It's who can come up with unique situations. So I think what we're going to end up with is AIs creating the most predictable outcomes for companies and people that need the highest predictability: what's the most probable disease this person has? What's the most probable thing to bet on?

What humans are going to be doing is the novel answers. Right? What is the improbable one? What is that single new idea on which people can bet, on which people can actually hope for something novel or different to happen? So I'm interested to see that kind of bifurcation of markets, and where we choose to deploy which is going to matter.
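What Rushkoff is describing is, roughly, greedy next-token prediction. Here is a minimal sketch of that idea using a toy bigram model; the corpus and every name in it are made up for illustration, and real language models work on tokens with vastly richer context, but the "most probable continuation" principle is the same.

```python
from collections import Counter, defaultdict

# Toy sketch of "the most probable response, given what's happened before":
# a bigram model that always picks the likeliest next word. Everything here
# is illustrative; no real system's internals are being shown.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word in the training text.
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = next_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# Generate by repeatedly choosing the single most probable continuation.
word, output = "the", ["the"]
for _ in range(5):
    word = most_probable_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # -> "the cat sat on the cat"
```

Note how the greedy generator loops straight back onto the likeliest path rather than producing anything new: the point about commodified probability versus novelty follows directly.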

- I'm going to have to come sit in your class one day. Douglas Rushkoff, professor of media theory and digital economics over at CUNY, thank you so much for joining us here today. We appreciate it.

DOUGLAS RUSHKOFF: Thanks, Brad, and remember, we're here on Zoom right now.

- Oh, I know. I know. Thanks so much.

DOUGLAS RUSHKOFF: I'm interested. I was really interested in what you said about that. I really hadn't thought of it that way. That was a nice segment.

- Fresh thoughts, fresh perspectives, every day, here on Yahoo Finance. Thanks so much for joining us.
