Influencers with Andy Serwer: Kai-Fu Lee

In this episode of Influencers, Andy is joined by Sinovation Ventures CEO Kai-Fu Lee as they discuss the future of robotics and how artificial intelligence will change the world over the next 20 years.

Video Transcript

[MUSIC PLAYING]

ANDY SERWER: Artificial intelligence is already reshaping our lives, from the way we drive, to how we fight wars, to how we develop vaccines. And there's a lot more to come. Kai-Fu Lee has been at the center of AI development for decades, with stints as an executive at Apple, Microsoft, and Google. He's now the CEO of Sinovation Ventures, a venture capital firm in China with over $2.5 billion in assets under management. His new book, "AI 2041: Ten Visions for Our Future," helps us understand the promise and peril of that technology over the next two decades.

KAI-FU LEE: It will disrupt health care, improve education, and pretty much impact, disrupt, or enable every imaginable industry.

ANDY SERWER: On this episode of "Influencers," Kai-Fu joined me to talk about the jobs that robots will replace, the way AI factors into plans for the so-called metaverse, and what it means for winners and losers in the tech industry.

Hello, everyone. And welcome to "Influencers." I'm Andy Serwer.

And welcome to our guest Kai-Fu Lee, CEO of Sinovation Ventures, former president of Google China, and author of the new book "AI 2041: Ten Visions for Our Future." Kai-Fu, welcome.

KAI-FU LEE: Well, thank you. Thanks for having me.

ANDY SERWER: So I want to ask you about your book, which uses fiction to explore how AI will transform society not in some far off time, but over the next 20 years. Why did you use fiction? And why such a relatively short time window?

KAI-FU LEE: Because AI is such an important technology that I think everyone should try to understand it. But yet, it sounds like rocket science to some people. And it's very hard to understand.

I wanted to make it as accessible and even entertaining as possible. So I have a co-author, who is a well-known science fiction writer, [INAUDIBLE]. And he wrote the stories based on my roadmap of what technologies will mature in the next 20-year time frame.

ANDY SERWER: So how significant will the impact of AI be over the next two decades, and in what specific ways?

KAI-FU LEE: It's quite significant. 20 years is actually quite a long period of time. Think about 20 years ago. If I went back in time to show people the world we have today, with iPhones and apps and Netflix and Zoom, none of which existed back then, it would seem almost like science fiction.

And that's-- and I think going into the future, things will change even more, because AI is now gaining many aspects of intelligence: able to converse with us, able to understand text, language, images, and video. And autonomous vehicles and robots will work. It will disrupt health care, improve education, and pretty much impact, disrupt, or enable every imaginable industry. So quite a large amount of change.

ANDY SERWER: Yeah, we'll talk about some maybe even more specific examples coming up. But first, I've got to ask you about the pandemic. Has this influenced the trajectory of AI?

KAI-FU LEE: I think it accelerated AI a little, partly because people work from home and the workload is being digitized. And then AI can take over or improve parts of that work. And we are already seeing that.

And also partly because of social distancing, robotics is improving at a faster speed. For example, COVID tests in China are done by robots. So it's 100 times faster than people. And such robots can also be used in drug discovery and growing organoids. And this is not only '41. This is 2021. So it's accelerated robotics as well.

ANDY SERWER: Kai-Fu, let me ask you a really fundamental question. What is AI?

KAI-FU LEE: AI is generally considered the study of intelligence and the use of technologies that perform tasks that could only be done with, quote, unquote, "intelligence." So that's the general field. And then within AI, there is a subfield called machine learning, and sometimes called deep learning, which is one aspect of machine learning, which specifically has to do with developing software technologies that use data to learn some task that requires intelligence. So it's a data-driven way to deliver the appearance of intelligence. And currently, AI, machine learning, and deep learning are sometimes used interchangeably. So I wanted to clarify that.

ANDY SERWER: And so an example would be a machine that performs a task but then learns from the task and improves its functionality.

KAI-FU LEE: Yes, from data. And it's important to note that the human programmer does not program every piece of logic into that AI. The human programmer says, go learn this. And then the AI takes all the data and learns from it.

It goes all the way from very simple examples, like Facebook watching what you click and what you buy and what you watch, and then deciding what content to target you with so that you watch the most content and stay on Facebook, all the way up to an autonomous vehicle watching the road, knowing where you want to get to, planning the route, avoiding hitting a pedestrian, and getting you there as quickly as possible, and also safely. So that, I think, is the range of what AI can do today.
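The "go learn this" idea Lee describes can be sketched in a few lines. The example below is an illustrative toy, not anything from the interview: a tiny gradient-descent learner recovers the rule y = 2x + 1 purely from example pairs. The rule itself never appears in the learning code, only in the data.

```python
# Toy sketch of learning from data: the programmer writes a generic
# learner, not the rule. The rule is recovered from examples.

def learn_line(data, lr=0.01, steps=5000):
    """Fit y = w*x + b to (x, y) pairs by gradient descent on squared error."""
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        # Gradients of mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in data) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# The hidden rule is y = 2x + 1; the learner only ever sees the pairs.
examples = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = learn_line(examples)
print(round(w, 2), round(b, 2))  # recovers roughly w=2, b=1
```

Real systems use far richer models and data, but the division of labor is the same: the human specifies the objective, and the optimizer extracts the pattern.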

ANDY SERWER: Another one that I love, of course, is the music streaming services that learn your taste, and then serve you more and more choices that delight you if it works, right?

KAI-FU LEE: So the more it knows about you, the better it does. So a music service knows other people like you, and knows what you have liked. And what you liked is indicated not only by what you listen to a lot, but also by what you don't listen to, and by what you start listening to and then stop. So it's learning all that as it watches you.
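The "other people like you" idea is the core of collaborative filtering. Here is a minimal, hypothetical sketch (the listener names and tracks are invented for illustration): recommend an unheard track from the listener whose taste overlaps most with yours.

```python
# Toy user-based collaborative filtering: find the most similar listener,
# then suggest something they liked that you haven't heard yet.

def similarity(a, b):
    """Simple overlap score: how many tracks two listeners both liked."""
    return len(a & b)

def recommend(you, others):
    """Pick an unheard track from the most similar other listener."""
    best = max(others.values(), key=lambda liked: similarity(you, liked))
    new_tracks = best - you
    return min(new_tracks) if new_tracks else None  # deterministic pick

you = {"track_a", "track_b", "track_c"}
others = {
    "listener_1": {"track_a", "track_b", "track_d"},  # 2 tracks in common
    "listener_2": {"track_e", "track_f"},             # 0 in common
}
print(recommend(you, others))  # suggests track_d, via listener_1
```

Production recommenders weight signals like skips and repeat plays, exactly the behaviors Lee mentions, but the shape of the computation is the same.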

And if a service actually knows other aspects about you, then it can become even more intelligent. One of the stories in the book is about a super app developed by an insurance company, which also develops many other applications: social applications, dating applications, e-commerce, coupons. And then it collects all the patterns about someone. And it does such a good job minimizing your insurance costs and improving your health.

But the interesting thing in the story is that while it optimizes all these good things for you, it also unintentionally does something bad. In the story, it interferes with the main character's love life, because the person she is dating is believed to be dragging down her social status and health. And that relates to a caste issue: the story takes place in India, where the caste system is already basically gone, but there are still remnants that AI can pick up when it knows too much about you.

ANDY SERWER: Yes, so you talk about those negative consequences that people are concerned about. And of course, they see the movies where the robots take over; there are a million movies like that. So how realistic is it to be concerned about these potential consequences?

KAI-FU LEE: Yeah, the one thing that most people are most concerned about won't happen. But there are many other concerns people should have. The dystopian scenario where the robots take over presumes that AI has self-awareness and emotion and desire and intentionality. And it doesn't have any of that.

AI today is a giant optimizer. Humans say, go optimize this. And it takes all the data and optimizes whatever the human tells it to optimize.

And it doesn't have any desire or belief. If you shut the program off, if you unplug the computer, it's gone. So the belief that it has intentions, desires, and bad motives is simply not true. And that probably won't be developed, or even developable.

We don't even know how to develop it. We don't know how the human brain works and why we have self-awareness and desires and emotions. So it's going to-- it may happen one day. But certainly very, very unlikely in the next 20 years.

ANDY SERWER: So with negative consequences again, going back to the music example, which is maybe very mundane. But it would negate serendipity, the chances of me just discovering something that is completely unrelated to my previous listening patterns that might in fact be something I want to hear. Is that a potential consequence for instance?

KAI-FU LEE: That's possible. But there can be cleverer AI. So if a higher-level AI learns that you like serendipity, because you really enjoyed reading something out of the blue that didn't match your usual preferences, it could then infer that you might want that in music.

And it might try to find what that might be. What we regard as serendipity might actually be predictable. So I think that would be possibly solvable and certainly not the worst outcome of AI.

ANDY SERWER: OK, well, there have got to be some concerns that are potentially serious. What might those be?

KAI-FU LEE: OK, so in the book, there is one set that we call externalities. Externalities happen when AI is told to do something, and it's so good at doing that thing that it forgets, or actually ignores, other externalities or negative impacts that it may cause.

So when YouTube keeps sending us videos that we're most likely to click on, it's not only not thinking about serendipity, it's also potentially sending me very negative views, or very one-sided views, that might shape my thinking. So that would be one form of externality: an unintended consequence for the user, because the AI maniacally tries to optimize something else.

Another is personal data being compromised. Another is bias and fairness. Another is whether AI can explain to us why it made the decisions it made for key things, like driving autonomous vehicles, the trolley problem, medical decision making, surgeries. It gets serious. But the single largest danger, as I describe in the book, is autonomous weapons.

And that's when AI can be trained to kill, and more specifically, trained to assassinate. Imagine a drone that can fly itself and seek specific people out, either with facial recognition or cell signals or whatever. And then it has a bullet, a small piece of dynamite, that it can shoot point-blank at a person's forehead. And you know how fast drones move.

So the danger is that this targeted assassination weapon can be built by an experienced hobbyist for $1,000. And I think that changes the future of terrorism because no longer are terrorists potentially losing their lives to do something bad. It also allows a terrorist group to use 10,000 of these drones to perform something as terrible as genocide.

And of course, it changes the future of warfare, because between country and country, this can create havoc and damage, but perhaps anonymously, so people don't know who did the attack. It's also quite different from the nuclear arms race, which at least has deterrence built in: you don't attack someone for fear of retaliation and annihilation.

But autonomous weapons might be doable as a surprise attack. And people might not even know who did it. So I think that is from, my perspective, the ultimate greatest danger that AI can be a part of. And we need to be cautious and figure out how to ban or regulate it.

ANDY SERWER: Yeah, that is scary. And I read an article fairly recently about how the future of warfare is terrifying; it described various weapons and scenarios where these weapons were used. So just to drill down on that a little bit, how would we prevent these types of weapons from being deployed, or even developed?

KAI-FU LEE: So one example is to look at history, at how chemical weapons and biological weapons were banned. There could be a global treaty that is enforced. Among the weapons possible today, the easiest and cheapest to build is a drone, not a robot. Robots are much more expensive, clumsier, and harder to control.

Drones are the most dangerous. So perhaps we need stronger laws of the air, governing where and how drones can be deployed, and perhaps some defensive mechanisms, in places where there are a lot of people or a lot of government functions, that would basically shoot down drones in areas where they aren't permitted. I'm not an expert in this domain. But just to brainstorm, these are some ideas. I'm sure there are other, better ideas.

ANDY SERWER: Some of the other problems you talked about sounded very close to the problems that the big social media companies in the United States are already running into, in terms of privacy, or in terms of sending an abundance of negative signals, unintentionally perhaps. So I want to ask you about the big tech companies. Will AI offer a meaningful avenue of disruption? Or is it just another development that big companies will co-opt and take hold of?

KAI-FU LEE: I think there will be giants developing in many domains. I don't think the current internet companies will easily move across domains. So the most likely outcome, if unchecked, is that the internet and social media companies will grow ever more powerful in their own domains. And antitrust will probably prevent them from moving across into other domains, which are also tricky and different.

But most likely, there will be other giants that emerge: giants in health care and insurance, giants in transportation and automotive, giants in robotic manufacturing. So I think this is truly a disruptive force that will enable every industry to have new giants emerge. The natural course of technology development is to have large companies build platforms that are at the same time beneficial, because they allow an industry to be reborn, but also dangerous, because of how much power they have. With the power of the data they're gathering from users, they know a lot about individuals. And this data makes their AI and technology work better than other companies', thereby allowing them to extend the longevity of their monopoly. So it's both a great thing and a dangerous thing.

ANDY SERWER: When you talked about crossing domains, it sounds to me a little bit like the metaverse. And I'm wondering how AI and the metaverse are connected, or if they are at all in fact.

KAI-FU LEE: Absolutely, I think in the metaverse, there will be other beings. And there will be people who are themselves. But there will also be other beings, pets and aliens and games and other people. And I think it's a lot more interesting and fun if there are a mixture of real people and virtual people.

So I think AI will be a part of that. And then in the truly natural metaverse, we will be conversing using our language and our body language. And AI can, of course, provide an ability to understand that.

And in the metaverse, here is a tricky, and maybe a little bit scary, question: will the programmer of the metaverse, the company that builds it, actually listen in on every conversation and watch every person? On the one hand, that can make the experience very exciting, because it can see what makes you happy and give you more of that. But then what is the notion of privacy in a metaverse? So I think there's a lot of excitement in combining these two technologies.

ANDY SERWER: You recently said, quote, "AI will disrupt every imaginable industry." Why will its effects be so pervasive? And will it really touch all facets of our lives, do you think?

KAI-FU LEE: Because of two properties: this ability to essentially deliver human intelligence merely by observing data, and the fact that the more data you have, the more powerful AI gets. These properties make it all-encompassing. So let's take health care as an example.

AI can take so much more of our data and consider it in order to make us healthy, help us live longer, and treat our illnesses. This data includes our family history, our health records, our wearables, our blood pressure on a 24/7 basis, and also all of our imaging, radiology reports, our genetic sequencing and multi-omics output, and of course, blood tests.

So with all of this combined and fed into an AI, it can make a much more accurate and precise diagnosis when we're ill, but also give us health hints on how to become healthier. Just as an example, I am using an AI longevity software, along with a professional doctor who interprets the output. And I am measuring all those things that I mentioned to you.

And in the past year, my data shows that I am now six years younger than I was one year ago. The advice it has been able to give me about my lifestyle, as well as the nutrients and medicines to take, has been great. And this is just the very beginning.

And also AI will help people discover more drugs at 1/10 the cost. Rare diseases will become treatable. And AI will be able to help monitor older people and keep them healthy and watch if they have a fall or didn't take their medicine. So I think the entire health care industry will change. And people will not only live longer, but healthier. So that's just one example in one industry.

ANDY SERWER: I want to ask you a little bit about economic implications, Kai-Fu, because you said that AI will lead to a world in which some tycoons will make a lot of money while jobs will be lost. Why could AI exacerbate wealth inequality? And what can we do about that?

KAI-FU LEE: Yes, we can already see this with all the internet companies. I think without AI, they would probably be worth only half of what they're worth, because AI helped them monetize. And that will extend into other industries. So the tycoons will be more numerous.

And they will be even richer. At the same time, because AI is developing human-intelligence equivalents, it can do many of the tasks and jobs that we do today. And in particular, AI will first do jobs that are routine.

So white-collar jobs like telemarketing, customer service, and desk jobs where people copy and paste and file expense reports, those will be gone first, because AI can do them purely in software. You don't even need robotics. And then blue-collar work: visual inspection, assembly-line work, many waiters and waitresses, many of the jobs in factories and warehouses, the pickers at Amazon, the cashiers at the grocery store, and of course, in about 15 to 20 years, all the drivers, all the people who drive for a living. When you add all that up, it's a substantial number of jobs, while simultaneously a small number of people become ultra rich and many people become jobless. That is the wealth-inequality problem that AI will exacerbate.

ANDY SERWER: It sounds like the only jobs left are going to be the people who program and code AI. I mean, I know that's an exaggeration. But is it probably the case that AI will be a net job killer? Or would it possibly be a net job creator?

KAI-FU LEE: I think ultimately it will be a net job creator as every technology has been. But I think the next 20 years, it will take away more jobs than it creates. But over time, it will create many jobs.

Think about the internet, right? It's created many jobs that we did not think it would. 20 years ago, none of us could have predicted Uber drivers would be an interesting new and sizable profession. And AI will do similar things.

Also, there are many things that AI cannot do in 20 years, or maybe even longer. It won't have creativity. When I say that it has human-level intelligence, I mean for simple, routine, one-domain-at-a-time things, like driving, like answering a call. It does not have the general analytical and creative capabilities that we have, nor does AI have any self-awareness, emotion, compassion, empathy, or the ability to win trust from other people.

So there will also be many service-type jobs that have to do with human connection and trust, for example, health care services, where I think we will see more jobs emerging. So ultimately, I think we'll figure this out. There will be new professions created. There will be more creative and service-level jobs. But there will be a challenging period in which job destruction is larger than job creation.

ANDY SERWER: With all this change and all these implications, a huge question, what role should governments play in regulating AI?

KAI-FU LEE: Oh, AI clearly has to be regulated. There are just so many things that can go wrong with AI companies and engineers that don't take care: for example, protecting the personal data of individuals; for example, ensuring that there is no built-in, inherent bias or unfairness. And new ways need to be developed to regulate internet companies, and also future holders of big data.

This is happening throughout the world. I think it can be a couple of things that will be developed further. One, I think is just a very serious punishment for companies that compromise personal data in some very bad way, like selling it to people without the user's consent, like the Facebook Cambridge Analytica situation.

Another idea that I find very interesting is the AI audit, because it's so expensive and challenging to go lift the hood on some company's AI and see what went wrong with fairness issues. I think we might think of AI audits the way we think of financial audits. The IRS audits a tiny percentage of people, but it's a strong deterrent against evading taxes.

So maybe there could be a similar process: when a company gets too many complaints, it gets audited, and then it needs to comply on fairness and other aspects. All this has to be worked out.

I really don't think much of the current general thinking of looking at a big internet company that's abusing data and saying, let's break it up into several companies. That, I think, is too brute force. And it's too 19th century. We're not in the 19th century anymore, dealing with the old issues of Standard Oil and the like, or the 20th century with AT&T. This is something that needs a finer grain, something that really helps push companies into greater alignment with what users want. More delicate approaches are needed than just brute-force breakups.

ANDY SERWER: Interesting. You've argued in recent years that China has taken the lead over the US in the development of AI. Do you still feel that China is a better home for innovation in this area? And if so, what does the US need to do to catch up?

KAI-FU LEE: Yeah, in my previous book, "AI Superpowers," I talked about the rise of Chinese AI, which I think is proving to be true. I don't think I quite stated that China is taking over from the US. Each country has companies that are strong in different aspects.

I think Chinese companies are pushing forward, for example, robotics, because China is strong in manufacturing. In the US, AI companies are pushing forward enterprise AI. Companies like Palantir and C3.ai are leading the world. So I think there are strong examples in both countries. And academic research is also quite strong in both countries.

So my day job is venture capital investor. In the last three years, we've invested in a lot of robotics companies that build robots and smart technologies for factories, companies that basically take over some work from people and reduce costs. I think that is a driver because China is the factory of the world, and automation and robotics are the best way to reduce the cost of manufacturing. So that's something I believe in, not only in my books, but in my day job.

ANDY SERWER: Kai-Fu, I want to change gears here a little bit and ask you about you because you have such an interesting career. For instance, decades ago your research was central to the development of speech recognition and automated speech technology. Did you ever imagine that that technology would come this far in such a short time?

KAI-FU LEE: Yes, I wrote my PhD thesis in 1988. And actually, to tell you the truth, at the time I thought speech would become pervasive certainly by the year 2000. I was too much of an optimist, because things that work in the laboratory actually have a lot of fragility that ultimately gets fixed by being put in the market and getting feedback from users.

But I think we finally reached that moment. This was the moment that I had dreamed that AI would take off. And it would liberate humans from routine work. And it would have a certain level of intelligence that becomes our companion.

And it does things that we don't want to do. So this is all my dreams come true. Honestly, it came later than I thought, but it's great. I'm still able to catch it, at maybe the tail end of my career, but able to catch it nevertheless.

ANDY SERWER: Well, speaking of your career, I mean, it's remarkable that you worked at Apple and Microsoft and Google in the 1990s and 2000s. What were some of the differences in how those companies approached, say, technological innovation?

KAI-FU LEE: Yeah, that's a great question. I learned so much from these three companies. And I now strongly believe that companies have their DNA.

And they need to stay strong with their DNA to be the best they can be. It's hard to shift into something they're not. For example, at Apple, I learned about being incredibly focused on the user and building things that wow people.

And it's a double-edged sword. That's why they build great products. And that's why they're pretty expensive.

At Microsoft, I learned that a gigantic team can work together and build an unbelievably large product like Windows. Teams of tens of thousands of people were actually able to get organized and build software that works interconnectedly. And the process of developing this gigantic, monolithic software is a real marvel. I learned so much at Microsoft.

And I think Google is more the believer that small teams of incredibly smart people can outdo large organizations, and that the internet changes everything. That's what I learned at Google: small projects, from Google Maps and Gmail all the way to Google Brain and some of the AI efforts, were just a handful of people who could make so much difference because they're so smart.

And it's a non-hierarchical organization. People can all have brilliant ideas, and there's minimal hierarchy and bureaucracy. So I learned so much from each one. And each company has done great things since I left, continuing to do the best it can when it focuses on the parts of its DNA that are its essence and its strength.

ANDY SERWER: And last question, Kai-Fu, what legacy do you want to leave behind?

KAI-FU LEE: I'd like to be remembered as someone who played a small role in making complex technologies usable by everyone.

ANDY SERWER: Short and sweet, but a lot of depth behind that, no doubt. Kai-Fu Lee, CEO of Sinovation Ventures. Thank you so much for your time.

KAI-FU LEE: Thank you. Thanks for having me.

ANDY SERWER: You've been watching "Influencers." I'm Andy Serwer. We'll see you next time.