Yahoo Finance Presents: Nvidia Founder & CEO Jensen Huang

Yahoo Finance's Daniel Howley and Julie Hyman speak with Nvidia Founder & CEO Jensen Huang about the company's new CPU and GPU tech, as well as what is in store for the future of artificial intelligence and autonomous vehicles.

Video Transcript


JULIE HYMAN: NVIDIA has been holding its GTC Developers Conference, in which it, as always, has rolled out a lot of new developments, a lot of new products, a lot of exciting new technologies. And among other things, this time around, has announced an architecture named for the computer science pioneer Grace Hopper. So there is a Hopper GPU and there is a Grace CPU now that the company has rolled out.

Let's bring in the CEO, Jensen Huang, to talk more about all of this. Jensen, it is great to talk to you again. Thanks for being here. So as we look at, in particular, these designs named for Grace Hopper, both parts of it, can you talk to me about the main applications that you're looking for in terms of how these chips are going to be used?

JENSEN HUANG: The Hopper use case is artificial intelligence. Artificial intelligence, as everybody knows, has really taken off. And at the foundation, it's a new way of writing software--software that no humans can write, and software that is written in a very different way. Whereas we used to write software--we still do--programmers would write software on a laptop, it would create a program, and we would test it and ship it.

But with artificial intelligence, this new form of software takes a ton of data, just a ton of data, finds patterns and relationships among the data, and automatically learns the structure of that data. It's called representation. How to represent, for example, a cat, or how to represent language, words, certain words. And it has the ability to learn that automatically from the data that you give it. It's called deep learning.
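The idea Huang describes--software whose behavior comes from data rather than from a programmer--can be sketched in a few lines. This is a toy illustration in plain Python, not NVIDIA's stack: gradient descent discovers the rule y = 3x from examples instead of it being hand-coded.

```python
# A toy version of "software written from data": instead of hand-coding
# the rule y = 3x, we let gradient descent discover it from examples.
data = [(x, 3.0 * x) for x in range(1, 11)]  # examples of the hidden pattern

w = 0.0      # the program's single learnable parameter
lr = 0.001   # learning rate
for _ in range(2000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(round(w, 3))  # the learned parameter converges to 3.0
```

The same principle, scaled up to billions of parameters and mounds of data, is what the deep learning Huang describes does.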

And so this new way of developing software requires a lot of computation. And over the years, the last 10 years or so, it's really taken off. And we've created a brand new generation called Grace Hopper, where Hopper is the GPU and Grace is the CPU. The new Hopper GPU makes it possible for us to learn using this new form of artificial intelligence model, called a transformer, that has made it possible to learn natural language and computer vision in an incredible way. And in the future, robotics and other types of artificial intelligence will be made possible because of it.
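The core operation of the transformer Huang mentions is attention: each position in a sequence scores its relationship to every other position and mixes their representations accordingly. A pedagogical sketch in plain Python (the textbook operation, nothing like Hopper's actual transformer engine):

```python
import math

def attention(q, k, v):
    """Scaled dot-product attention for one head, on plain lists.

    q, k, v: lists of vectors (lists of floats), one per sequence position.
    Returns one output vector per position: a softmax-weighted mix of the
    value vectors, weighted by query-key similarity.
    """
    d = len(q[0])
    out = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)                      # subtract max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]      # attention weights sum to 1
        out.append([sum(w * vj[t] for w, vj in zip(weights, v))
                    for t in range(len(v[0]))])
    return out

# Two positions, 2-d one-hot vectors: each position attends mostly to itself.
x = [[1.0, 0.0], [0.0, 1.0]]
y = attention(x, x, x)
```

Because every position attends to every other, the computation grows quickly with sequence length and model size--which is why this model family demands the kind of computation Hopper is built for.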

DANIEL HOWLEY: Jensen, I want to ask about the GPU. And you have 80 billion transistors packed into this thing. I guess, at what point does the silicon become outdated to a degree, where you have to start moving on to new types of technologies? Is that something that NVIDIA is actively looking at? I'm sure it is. But at what point do you think the advancements that are able to be made on this kind of hardware become too great for that and you have to move on to something different?

JENSEN HUANG: Plenty of time. Plenty of time. It is absolutely the case that transistor scaling is slowing. We're getting more transistors, but the effectiveness of the transistors--the performance you get for the money that you pay--that pace of advance has slowed tremendously. And so we're going to have to design computers in a very different way.

Now, of course, the very good circumstance of the world using cloud computing has made it possible for us to build very large computers. And so if you think about a lot of these wonderful services, artificial intelligence capabilities like language understanding or speech recognition, we're putting all of those AI models up in the cloud. And in the cloud, you can make computers as big as you like.

And in fact, if you look at the computers that we announced today, they have incredible scale. For example, 80 billion transistors. We have eight of those chips in one system. And then we take 32 of those systems and put them together into one giant GPU. They work like one giant GPU. So instead of just 80 billion transistors, look at the scalability we get. Now, we had to go invent a whole bunch of new technology to make that possible--software to make that possible, new types of interconnects to make that possible. But we can keep scaling computing for quite some time because of the technologies that we have.
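The scaling arithmetic in that answer is worth making explicit: 80 billion transistors per Hopper GPU, eight GPUs per system, 32 systems linked into one logical GPU.

```python
transistors_per_gpu = 80_000_000_000  # 80 billion, per the new Hopper GPU
gpus_per_system = 8                   # eight chips in one system
systems_linked = 32                   # 32 systems joined into one giant GPU

total = transistors_per_gpu * gpus_per_system * systems_linked
chips = gpus_per_system * systems_linked

print(f"{chips} chips, {total:,} transistors acting as one GPU")
# 256 chips, 20,480,000,000,000 transistors acting as one GPU
```

So the "one giant GPU" spans 256 chips and roughly 20.5 trillion transistors--the scalability Huang is pointing at.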

JULIE HYMAN: And when we talk about the sort of two parts that make up that Grace Hopper suite, if you will, the other part, the Grace part, is a CPU, which is sort of a new kind of offering or a newer kind of offering for you guys. Does this open you up to new clients or are you selling it sort of to existing clients just in different applications?

JENSEN HUANG: This is our first discrete CPU product. And we designed it for a very new time. If you look at the way the CPUs were designed in the past, they were designed for what is called single-threaded applications and applications that were written by humans. And so it has a particular style, nature to it. The type of software that's written by computers starts with a lot of data, not a normal amount of data, I mean mounds of data. And from that mound of data, it will-- Grace will go and discover the patterns and relationships.

And so what Grace is really incredibly great at--it's incredibly great at single-threaded applications, but it's also incredibly great at moving data around. And so the data bandwidth, the memory bandwidth, of Grace is many, many times higher than what's available on normal CPUs designed for data centers. So it's designed for the era of artificial intelligence. And it's really designed to be the CPU of AI.
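A back-of-the-envelope way to see why memory bandwidth dominates for this kind of workload: if a job mostly has to stream a mound of data through the CPU, its minimum runtime is just bytes divided by bandwidth. The numbers below are illustrative assumptions, not Grace's actual specifications.

```python
def min_stream_time_s(data_bytes, bandwidth_bytes_per_s):
    """Lower bound on runtime for a purely bandwidth-bound pass over the data."""
    return data_bytes / bandwidth_bytes_per_s

dataset = 10 * 10**12       # 10 TB of training data (illustrative)
typical = 200 * 10**9       # ~200 GB/s, a typical server CPU (illustrative)
higher = 1000 * 10**9       # a several-times-higher-bandwidth design (illustrative)

print(min_stream_time_s(dataset, typical))  # 50.0 seconds per pass
print(min_stream_time_s(dataset, higher))   # 10.0 seconds per pass
```

With these assumed numbers, a 5x bandwidth advantage is a 5x reduction in the floor on every pass over the data--no amount of extra compute recovers that time.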

JULIE HYMAN: And so, again, sort of not just as a specific product, but what does this symbolize for you in terms of opening up new and different markets?

JENSEN HUANG: For the very first time, we're selling CPUs. Today we connect our GPUs to available CPUs in the market. And we'll continue to do that. The market's really big. There are a lot of different segments. And the different segments of the market have different characteristics. There's a lot of software in the world to run. And so we'll work with whatever is the best CPU, or the CPU that the market likes best, for all of the different markets that we serve.

However, for this one area in artificial intelligence or scientific computing, the amount of data that we have to move around is so much. So this gives us the opportunity to offer a revolutionary type of product to an existing marketplace for a new type of application that's really sweeping computer science that's called artificial intelligence. So this is a new growth market for us.

However, if you think about our company today, it's really a data center scale company. We offer GPUs, and systems, and software, and networking, and switches. And so the entire data center, the entire data center, whether it's for scientific computing or for artificial intelligence, training, or inference, the deployment of AI, or data centers out at the edge, or all the way out to an autonomous system, like a self-driving car, we have data center scale products and technologies for all of that. And so the CPU now adds to that data center scale strategy and gives us another technology component, if you will. Where we used to have two, now we have three-- the three essential pillars of computing-- we have all three pillars to be able to make and configure data centers and computers for the market.

DANIEL HOWLEY: Jensen, I want to ask about the auto business. You guys have been moving into that for some time now. And at this GTC, you announced Hyperion 9. You discussed some of the partnerships that you have. And I know that you previously had Hyperion 8. So can you kind of break down to us the big difference between Hyperion 9 and Hyperion 8? And what are kind of the main differentials between them?

JENSEN HUANG: In both cases, what Hyperion is about, what NVIDIA's architecture is about, is really centralized computing. Instead of hundreds of little embedded controllers, each with their specialized functionality, we're going to centralize the computing into a few chips, and we're going to make it possible to do things like artificial intelligence and autonomous driving. And so, number one, it's centralized. The second, it has capabilities for AI and AV.

And the third is, because it's completely programmable, it is software-defined. So now the car is, if you will, a connected device like a phone. It's a connected device like a smart TV. You deploy it on day one, and it's useful on day one. But over the entire life of that product--in the case of a car, that's 20 years; it stays on the road for 20 years--for that entire time on the road, the software gets better and better.

And so on the first day of deploying the car, you have to put a lot of computation into it, because it has to stay on the road for so long, and that gives us the opportunity to develop software for it for two decades. So number one is that the architecture of Hyperion, the architecture of NVIDIA DRIVE, is really about software-defined, programmable AV and AI that's centralized in computing.

The second, what makes 9 different from 8, is that we have more sensors, more cameras, more ultrasonics, higher resolution cameras, more LiDARs. And so we now have the ability to cocoon the entire car with even higher fidelity and higher sample-rate sensors. We used to sample at 30 frames per second, or every 33 milliseconds. In the future, we'll sample at 60 frames per second, or every 16 milliseconds. And if you can sample faster, meaning you can see faster, you can respond faster.
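The payoff of the doubled sample rate can be made concrete: the distance the car travels while waiting for its next frame. (Illustrative arithmetic only; the frame rates are from the interview, the speed is assumed.)

```python
def blind_distance_m(speed_kmh, fps):
    """Distance traveled during one frame interval at the given speed."""
    speed_ms = speed_kmh / 3.6   # convert km/h to m/s
    return speed_ms / fps        # meters covered between consecutive samples

highway = 120  # km/h, an assumed highway speed
print(round(blind_distance_m(highway, 30), 2))  # ~1.11 m between 30 fps samples
print(round(blind_distance_m(highway, 60), 2))  # ~0.56 m between 60 fps samples
```

At highway speed, doubling the frame rate roughly halves the distance covered between samples--the "see faster, respond faster" point in numbers.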

And so the difference is really expanding the operating domain of autonomous capabilities. And whenever we are in autonomous capability, we can enhance the safety of it.

DANIEL HOWLEY: I know that the capability is there for it to go up to level 4, with level 5 being "The Jetsons" future of cars driving themselves. Maybe "Total Recall" is probably the better example. But I guess at what point do you see that happening with level 4? Do you see that coming sooner rather than later? And what will need to happen for the technologies to get to that point? Is it, as you said, that kind of learning that they need to do to pick up the abilities to do that?

JENSEN HUANG: We designed Hyperion with two foundational pillars for continued improvement in ODD--the operational design domain--and safety. The first one is redundancy and diversity. For any system to be resilient or robust, you want redundancy, meaning you do the same thing using multiple configurations, and you want diversity, meaning you do it in a different way.

And so organizations have it. Society has it. Large systems have it. They naturally become redundant and diverse.

In the case of redundancy and diversity, we use radar, surround radar. We have surround camera. And wherever we need very long distances, we back it up with LiDAR in some configurations. And so we'll add more and more of those three types, and ultrasonics. So between those four sensors, and on top of that, the HD map, where our fleet of cars are out there mapping the road, you have basically five sensors that are covering for each one, and providing you diversity and redundancy.

The second part is to put as much computation as you can into the car. And by doing so, your software can get better and better and better and better. And so the car itself has the necessary hardware on day one for diversity and redundancy. And the software will just get better and better over the course of 20 years, so that you could cover an increasing number of operating domains that are level 3 or level 4.

And so right off the bat, you should have a lot of domains where you could be level 2, where the person still has to pay attention. But over time, starting with maybe Traffic Jam, you could have Traffic Jam autopilot where you can go take a nap if you like. Of course, there's already level 5, where you can be out of the car by summoning the car without the person inside.

So there are a lot of different operating domains where you could have-- where you have complete autonomy or even driverless. And that just keeps expanding over time. And the two pillars, basically, is you need to have the sensor architecture for diversity and redundancy, and then you need to have the [INAUDIBLE] of the computation so that software can get better over time.

JULIE HYMAN: Jensen, the auto business gets a lot of attention. You guys have talked about it a lot. Obviously, it's also easy for people to wrap their heads around. And they're excited about it, about taking a nap while they're in traffic, which is obviously an attractive proposition. It's still a relatively small part of the business. In the last fiscal year, it earned a little over half a billion dollars in revenue, compared with the couple of $10 billion businesses you have in data center and gaming chips, each individually.

You talked about, during this conference, an $11 billion pipeline of auto revenue, but that's over six years. So I'm curious what we can expect in terms of acceleration. When is that going to become a billion-dollar business line, for example?

JENSEN HUANG: Considering where it is at the moment-- it's very small, as you mentioned-- from where it is, in order to do $11 billion over the next six years, it's got to ramp pretty fast. And so this will surely-- automotive will surely be our next multibillion dollar business.

In addition to that, Julie, the thing that's really, really cool about our auto business is that you could think about autonomous vehicles, not just as the computer in the car, but remember, in order to develop AI, you have to have an autonomous computer in the data center. And so there are really four computers that you need. And the four computers kind of break down like this. You need a computer that is doing the mapping for the fleet. Ultimately, the memories of the fleet. The map is kind of like the memory of the fleet. And this mapping system is a computer. And it's reconstructing from all the routes of the car, of the fleet, a collective map. So the mapping-- map is done in the data center.

The second thing is, you need to train the AI--the training of the system. That's where NVIDIA really started in AI. That's what the DGX is for. That's what Hopper is for.

The third is, before you deploy the fleet onto the road, you would like to have a digital twin of that fleet. And the reason for that is you're constantly inventing new software, you're constantly inventing new algorithms. Before you put it on the fleet, you really want to try it on a virtual fleet. And that virtual fleet is called the digital twin. That's what Omniverse is built for. And Omniverse is also a data center. So now I have four data centers: the data center, if you will, in the car, plus three other bigger data centers--one for mapping, one for training the AI, and one for the digital twin.

In the $11 billion, I didn't even mention the other three pillars. And yet every single car company that wants to go into AVs will have to do all four pillars. And so the way that we architected our solution and our product offering, we make it possible for companies, whether they use our computer in the car or not, to benefit from NVIDIA's entire workflow--from HD mapping, to AI training, to digital twinning, all the way out to deploying it into the car.

And so the $11 billion is going to be quite a significant business for us just in the car. But if you look at the totality of AV, I think this is going to be one of the largest AI industries in the world.

JULIE HYMAN: Wow. And I know we talked a lot the last time we spoke with you about digital twins, which is such a cool idea.

I want to switch gears a little bit, Jensen, because I think we have to talk a bit about the Arm deal not happening. On February 8, you guys called it off, that $40 billion deal to buy Arm. There was a lot of regulatory resistance to that, obviously. So now, as you go forward, I'm wondering where you might see other gaps in the business. Where do you see other potentials for acquisitions, for bolt-ons, for areas where NVIDIA can add growth from outside of itself?

JENSEN HUANG: Organic growth is kind of NVIDIA's natural way. No company has ever been built like ours. This is the first computing company that has been built to this size based on one fundamental architecture.

We innovate from chips, to systems, to system software, to the libraries of science, all the way up to artificial intelligence applications like self-driving cars. And we serve markets that start from client computing, personal computing, all the way to workstations, all the way to supercomputers in the cloud. Very few companies have ever had this breadth of technology and depth of technology to be able to serve such large markets in computing. And so we built that all largely, largely, organically.

Whenever there are franchises, platforms, that you simply can't recreate--once it's created, you just can't recreate it. Like, for example, the reason why we bought Mellanox is because their networking platform is the best in the world. You're not going to recreate that. It's not about the fact that you have an Ethernet chip or Ethernet technology. We know how to build Ethernet. It's really about the fact that they built the foundational technology, they've integrated it into the world's IT industry, and all of the world's software stacks depend on the work that they do. And so that's kind of an example of a platform that has a rich ecosystem around it.

In the case of Arm, that's another perfect example. You're not going to recreate Arm. That's one of the reasons why it was such an attractive company for us to own.

Now, the fact of the matter is we're going to build Arm. We're going to build Arm CPUs. And we have a couple-decade-long license to build Arm. We have a great partnership with them.

All of the energy and the time that we got to spend together helped them get excited about the data center business and high-performance computing. And Grace is a great example of that. I think they realized the wonderful opportunity ahead with data centers. And I think all of that activity helped make Arm a multi-vector CPU company. Instead of just a cell phone company, a mobile device company, they now do a lot more. There's a lot more they're interested in. They're going to be very successful in data centers. And we're surely going to partner with them to do so. And I think all of that energy was well channeled.

And so I'm, of course, disappointed that the transaction didn't happen. But our two companies now are off racing into the areas that I had hoped that we would go into.

DANIEL HOWLEY: Jensen, I want to talk about that hack that happened, the Lapsus$ hack. It's not just NVIDIA that this group had targeted--they'd gone after Microsoft, Samsung. We were hearing reports that Okta was also involved. And it didn't sound like NVIDIA had given in to their demands to lift the restrictions on cards for crypto mining. So where does NVIDIA stand on that? And how does it deal with this going forward? Are there fears that they'll dump additional information or anything along those lines?

JENSEN HUANG: Well, I'm really, really quite disturbed that hacks like this could happen. And it was a wake-up call for us. And it exposed some areas of vulnerability that, quite frankly, every company has. And I'll talk about the path forward in just a second. But fortunately, we didn't lose any customer information or any sensitive information. They got access to source code, which, of course, we don't like, but nothing that is harmful to us. But the thing that it highlighted--and this is something that I know well and that we're building technology for--is that we need to finish building the technology, and the industry needs to adopt it, so that third-party cybersecurity technologies and solutions that we can buy come back to us, to make our company what is called a zero-trust architecture. The fact of the matter is the intrusion tends to be internal. It tends to be somebody wandering around your hallway, somebody who has access to a fair amount of privileges.

And so we all need to be what is called a zero-trust architecture company. And so we're accelerating our path to do that. In the meantime, there are all kinds of things that we could do. I think that multi-factor authentication is important, so long as nobody gets fatigued by it. You know, it takes a couple of authentications to get in, and people can get tired of that.
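The multi-factor authentication Huang mentions typically rests on one-time passwords. The HOTP algorithm (RFC 4226) behind many authenticator apps fits in a few lines of standard-library Python--shown here as an illustration of the mechanism, not NVIDIA's deployment:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password, per RFC 4226."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                               # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Each login consumes a fresh counter value, so a stolen code is useless a moment later--that is the extra factor, and also the source of the "authentication fatigue" Huang cautions about.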

Now that this has happened to us, the discipline around it, the rigor around it, has gone through the roof, which is fantastic. But long term, we have to make it possible for our data center to literally be completely wide open, completely exposed, and yet be completely secure. And so the path to a zero-trust data center starts with the technologies that we're building.

And so I've got to go build that technology faster--all the way from BlueField, the DPUs that do security, to the switching architectures that we have, the software stacks that we're creating, as well as this new AI framework we call Morpheus, which does real-time, exhaustive inspection of anomalies on the network in your data center. And so we really have to bring accelerated computing into enterprise traffic.

And we know how to do that. And I just got to go do it.

JULIE HYMAN: So I know we want to cover a lot of different topics here. So I want to switch over to supply chain also, because, obviously, that's another very hot topic right now. And if you can just kind of give us a status report on that front, right? It seemed like from you and from most of the other semiconductor industry folks, you were talking about it sort of alleviating over the course of this year.

There are a lot of moving parts in the global economy right now, not least of which is what's going on in Russia and Ukraine. So I'm just wondering where we stand, and how that progress is going?

JENSEN HUANG: Supply chain will be tight for us for some time. And the reason for that is because our demand is so great. If you just look at our company's growth, the world's total supply did not grow that fast. And yet NVIDIA is growing quite fast in multiple directions.

Of course, our gaming business is quite large. And the gaming dynamics are fantastic. There are more games, more gamers than ever. There are more ways to game. Gaming is not just games anymore. And we invented this brand new thing called RTX that can completely reset the installed base of computer graphics. And so in computer gaming, the dynamics are really fantastic. And that is growing incredibly well.

Our professional graphics business is growing incredibly well, because all of us need two offices now--one at work and one at home. And so all the designers and creators and all the people who are using NVIDIA technology at work now need to build a home studio, if you will, a home lab.

And then our data center business is growing so incredibly well. And the automotive business is growing well for us. And so we have multiple dimensions of growth. And that puts a lot of pressure on our supply chain, at exactly the time when the world supply chain is, if you will, difficult.

And so the answer for us, and the thing that we did last year that I'm super proud of, is that while keeping up with this incredible demand, we put in place a much more diverse, much more expanded supply base--from the number of process technology nodes that we support (we support a lot more process technology nodes now), to substrate suppliers, assembly suppliers, testing suppliers, all the way to our system integrators. We expanded the number of partners--not just because we're a lot larger now than in the past, so we had to do that anyway, but we doubled down on it so that we could have a lot more diversity.

I think diversity and an expanded supply base for us is going to be one of our strengths. And I'm really looking forward to growing into the year. But I expect that demand will still exceed supply.

DANIEL HOWLEY: And Jensen, just to take it back to GTC for a last question, I want to ask you about the concept of the digital twin that you mentioned, and this concept of Earth 2.0. Can you just give us a basic rundown of that, and kind of when you expect us to really see it? I know that it has to do with climate change. So how does that kind of function?

JENSEN HUANG: It's hard to make decisions about important things if you can't somehow simulate the outcome. It turns out that most of us have this ability to simulate the outcome of the different things that we do. That's how we work: we have a mental model of the world. And within that mental model of the world in our brain, we simulate the decisions we make. And we come up with a plan or an action, a plan that we can act on.

Well, it turns out that, in the case of climate science, the science itself is hard. And yet we all know that climate change is a social matter, a human matter, a global matter, of extraordinary proportions. And yet how do we decide what strategies and decisions to make--what adaptation strategies or mitigation strategies to pursue, what new digital biology technology we can put in place, or carbon capture technology--what is going to work, by when, and by how much?

If you can't predict that, it's really, really hard to make trade-offs on which strategy is better, whether some strategies should be taken in advance of other strategies, and where you ultimately distribute the investment funds for any particular country or, for that matter, company. And so we thought, and we believe this, that the answer is to have a simulation of the earth that's sufficiently high fidelity that you could test various mitigation and adaptation strategies at a regional level.

Somebody in Southeast Asia could decide whether we need to build up the necessary dams to protect the Mekong River, because we believe that somehow in 15 or 30 years, the level is going to be so severely depleted that the food source of 70% of Southeast Asia will be in harm's way. Or whether the decisions made in Venice are going to be sufficient to keep that city out of harm's way long term. And so those kinds of questions, and the technology, the investments that you want to make, need to have a simulation engine around them.

And so here comes the challenge. In order to be able to make those decisions and simulate those possibilities, we need a supercomputer that's about a billion times faster than the largest ones we have. Well, you know, unless we do something extraordinary and apply new technology, we're just simply not going to get there. We're not going to be able to have a simulator for the earth to test our theories against until it's too late.

And so we've got to find a way to bring that in. And so we're going to do this. We're going to invent three fundamental technologies. The first fundamental technology is the nature of the processors that we need to do a new type of algorithm. It's called physics machine learning. Using artificial intelligence, we're going to teach an artificial intelligence physics--not normal physics, but multi-physics.

You know, of course, the earth has a lot of physics working simultaneously--ocean physics, atmospheric physics, cloud physics, land physics--and they're all playing into this overall outcome of climate. And so we need to create an AI that can learn physics.

And once we can create that AI that can learn physics, that AI could make physical predictions based on the inputs--the human drivers we give it--for 10 years out, 50 years out, a hundred years out, 200 years out. It would give us a sense, give us its best prediction, of the outcome of the decisions that we make.

So the first thing we have to do is invent a new type of processor. The second is a new type of algorithm that we have to go invent. And the third, we have to build the largest digital twin computer the world has ever built. And doing something like this is within NVIDIA's scale.
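To see what "simulating the outcome" means computationally, here is about the smallest possible physical simulator: one explicit time step of 1-D heat diffusion. An Earth-scale digital twin runs kernels of this general character at vastly larger scale and with far richer multi-physics--this is a toy sketch, nothing like NVIDIA's actual Earth-2 stack.

```python
def diffuse_step(u, alpha=0.25):
    """One explicit finite-difference step of 1-D heat diffusion.

    u: list of cell temperatures; ends are insulated (no heat escapes);
    alpha <= 0.5 keeps the explicit scheme stable.
    """
    n = len(u)
    nxt = u[:]                         # read old state, write new state
    for i in range(1, n - 1):
        nxt[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    # insulated boundaries: exchange heat only with the one interior neighbor
    nxt[0] = u[0] + alpha * (u[1] - u[0])
    nxt[-1] = u[-1] + alpha * (u[-2] - u[-1])
    return nxt

u = [0.0] * 5 + [100.0] + [0.0] * 5    # a hot spot in the middle of a cold bar
for _ in range(100):
    u = diffuse_step(u)
# the hot spot spreads out, but total heat is conserved (insulated ends)
print(round(sum(u), 6))
```

The point of the example: each step is cheap, but a faithful simulation of the planet needs enormous grids, many coupled physics, and long time horizons--hence the billion-fold gap Huang describes, and the appeal of AI surrogates that learn the physics instead of stepping it directly.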

However, this would be the only supercomputer ever built that runs 24/7, because it's a digital twin of the earth. And then we'll put that computer in the hands of scientists, researchers, companies, and countries for them to run simulations against, so that they know the implications, the impact, of the decisions that they make. And so I think this is going to be one of the great challenges in computer science.

I think the journey, just as going to the moon was, will invent a whole lot of derivative science that we can benefit from in other places along the way. And then hopefully, in the next several years, we'll put something in the ground that could really make a big change and a big impact on the outcome of the planet. That's it.

JULIE HYMAN: Well, as usual, Jensen, you've blown our minds. This is now the second long conversation we've had with you, and I think you're 2 for 2 on that front. Jensen Huang is the CEO of NVIDIA. Thank you so much for spending some time with us today and walking us through what you guys are cooking up there at NVIDIA. Appreciate it.

JENSEN HUANG: Julie, Daniel, great to see you.