
This is the 'single largest danger' of A.I. according to expert Kai-Fu Lee


Sinovation Ventures CEO Kai-Fu Lee joins 'Influencers with Andy Serwer' to explain the top 4 dangers of artificial intelligence.

Video Transcript

ANDY SERWER: There have got to be some concerns that are potentially serious. What might those be?

KAI-FU LEE: OK, so in the book, there is one set that we call externalities. Externalities happen when A.I. is told to do something, and it's so good at doing that thing that it forgets, or actually ignores, other externalities or negative impacts that it may cause. So when YouTube keeps sending us videos that we're most likely to click on, it's not only not thinking about serendipity, it's also potentially sending me very negative views, or very one-sided views, that might shape my thinking. So that would be one form of externality: an unintended consequence for the user, because the system maniacally tries to optimize something else.

Another is personal data, if that's possibly compromised. Another is bias and fairness. Another is, can A.I. explain to us why it made the decisions that it made? For critical applications like driving autonomous vehicles or medical decision-making in surgeries, it gets serious. But the single largest danger, as I describe in the book, is autonomous weapons. And that's when A.I. can be trained to kill, and more specifically, trained to assassinate.

Imagine a drone that can fly itself and seek specific people out, either with facial recognition, or cell signals, or whatever, and then it has a bullet, a small piece of dynamite, that it can shoot point-blank at the person's forehead. And you know how fast drones move. So the danger is that this targeted assassination weapon can be built by an experienced hobbyist for $1,000. And I think that changes the future of terrorism, because no longer are terrorists potentially losing their lives to do something bad.

It also allows a terrorist group to use 10,000 of these drones to perform something as terrible as genocide. And, of course, it changes the future of warfare, because between country and country this can create havoc and damage, but perhaps anonymously, so people don't know who did the attack. So it's also quite different from the nuclear arms race, which at least has deterrence built in: you don't attack someone for fear of retaliation and annihilation.

But autonomous weapons might be doable as a surprise attack, and people might not even know who did it. So I think that is, from my perspective, the ultimate greatest danger, and we need to be cautious and figure out how to ban or regulate it.

ANDY SERWER: Yeah, that is scary. And I think I read an article about that fairly recently, about the future of warfare. It's terrifying, and it described various weapons and scenarios where these weapons were used. So just to drill down on that a little bit, how would we prevent these types of weapons from being deployed, or even developed?

KAI-FU LEE: So one example is to look at history: how chemical weapons and biological weapons were banned. There could be a global treaty that is enforced. Of the weapons possible today, the easiest, cheapest one to build is a drone, not a robot. Robots are much more expensive, and more clumsy, and harder to control. Drones are the most dangerous. So perhaps having some stronger laws of the air, governing where and how drones can be deployed.

And perhaps having some defensive mechanisms in places where there are a lot of people, or a lot of government functions, that would basically shoot down drones in areas where they aren't permitted. So I'm not an expert in the domain, but just to brainstorm, these are some ideas. I'm sure there are other, better ideas.