video_id | transcript_chunk |
---|---|
Pkj-BLHs6dE | know, everything you know, and I do my best work in that condition, and, you know, I like going home and telling my wife, I saved the company today. And maybe it wasn't true, but I like to think so. And so another question: we have a lot of business leaders and CEOs here, and I think they're going to be surprised to hear this: you have 40 direct reports. So at the company, with that many direct reports, most people, and I don't know if we have any consultants in the room, would say, you know, half a dozen, maybe 10, that should be the limit. What's your philosophy or theory here? Well, the people that report to the CEO should require the least amount of pampering. And so I don't think they need life advice. I don't think they need career guidance. They should be at the top of their game, incredibly good at their craft. And unless they need my personal help, you know, they should require very little management. And so I think that the more direct reports a CEO has, the fewer layers there are in the company, and so it allows us to keep information fluid, allows us to make sure that everyone is empowered by information, and our company, you know, just performs better, because everybody is aligned, everybody's informed of what's going on. I want to open up to questions in |
Pkj-BLHs6dE | the more direct reports a CEO has, the fewer layers there are in the company, and so it allows us to keep information fluid, allows us to make sure that everyone is empowered by information, and our company, you know, just performs better, because everybody is aligned, everybody's informed of what's going on. I want to open up to questions in just a moment. So please do raise your hand so I can find you. But I want to ask you this: you did a podcast recently, and there were a lot of headlines about it. And you said during the podcast that if you could do it all over again, like if you could start inventing again, you wouldn't. No. What did you mean? I mean, you've done this amazing thing. Yeah. You're worth forty billion dollars personally. That wasn't what I meant. First of all, you know, I think it would be disingenuous if I said that it wasn't, quote, worth it. I enjoy a lot of good things in life. I've got a great family. We built a great company. All of that is worth it. That wasn't what I meant. What I meant was, if people realized how hard something is, and if I were to realize how hard it was, how many times we were going to fail, how the original business plan had no hope of succeeding, that the early founders that we built the whole company with, we had to completely relearn just about everything. If I would have known everything |
Pkj-BLHs6dE | I meant was, if people realized how hard something is, and if I were to realize how hard it was, how many times we were going to fail, how the original business plan had no hope of succeeding, that the early founders that we built the whole company with, we had to completely relearn just about everything. If I would have known everything, all of the things that I had to know in order to be a CEO, everything that we had to solve in order to be where we are, that mountain of work, that mountain of, you know, challenges, the mountain of adversity and setback and some amount of humiliation and a lot of embarrassment, if you piled all of that, in 1993, on the table of a 29-year-old, I don't think I would have done it. I would have said, there's no way I would know all this, there's no way I could learn all this, there's no way we can overcome all this, there's no way, you know, this is a game plan that's not going to work. And so that's what I meant: I think the ignorance of entrepreneurs, this attitude, and I try to keep that today, which is, ask yourself, how hard could it be? You know, you approach life with this attitude of how hard could it be, they could do it, I could do it. That attitude is completely helpful, but it's also completely wrong. It's very |
Pkj-BLHs6dE | that's what I meant: I think the ignorance of entrepreneurs, this attitude, and I try to keep that today, which is, ask yourself, how hard could it be? You know, you approach life with this attitude of how hard could it be, they could do it, I could do it. That attitude is completely helpful, but it's also completely wrong. It's very helpful because it gives you courage, but it's wrong because it is way harder than you think. Yes, and the amount of skill that is necessary, the amount of knowledge that's necessary. You know, I think it's one of those teenager attitudes, and I try to keep that in the company, that teenage attitude of how hard can something be, you know, gives you courage, gives you confidence. Let's see if we can squeeze in one question or two if we could. I know Ron Conway had a question at a different moment, I don't know if he's still in the room, I felt like I should give him an opportunity, but I see Gary Lauder there. Hey, Gary. So there are a lot of startups, and some non-startups, doing AI chips optimized for LLMs, and they claim to be dramatically more energy efficient than your GPUs. Can you talk about what you're planning in that regard? Yeah. First of all, this is one of the great observations that we made: we realized that deep learning and AI was not a chip problem. It's a reinvention of computing, |
Pkj-BLHs6dE | doing AI chips optimized for LLMs, and they claim to be dramatically more energy efficient than your GPUs. Can you talk about what you're planning in that regard? Yeah. First of all, this is one of the great observations that we made: we realized that deep learning and AI was not a chip problem. It's a reinvention of computing, everything from how the computer works to how computer software works, the type of software that we were going to write, the way that we write it. The way we develop software today, using AI, creating AI, that method of software is fundamentally different than the way we did it before. So every aspect of computing has changed. And in fact, one of the things that people don't realize is that the vast majority of computing today is a retrieval model, meaning all you have to do is ask yourself what happens when you touch your phone: some electrons go to a data center somewhere, it retrieves the file and brings it back to you. In the future, the vast majority of computing is going to be retrieval plus generation. And so the way that computing is done has fundamentally changed. Now, we observed that and realized that about a decade and a half ago. I think a lot of people are still trying to sort that out. It is the reason why, you know, people say, oh, we're practically the only company doing it. It's probably because we're the only company that got it, and people are still trying to get it. You can't solve this new way of doing |
Pkj-BLHs6dE | we observed that and realized that about a decade and a half ago. I think a lot of people are still trying to sort that out. It is the reason why, you know, people say, oh, we're practically the only company doing it. It's probably because we're the only company that got it, and people are still trying to get it. You can't solve this new way of doing computing by just designing a chip. Every aspect of the computer has fundamentally changed, everything from the networking to the switching to the way the computers are designed to the chips themselves, all of the software that sits on top of it, and the methodology that pulls it all together. It's a big deal because it's a complete reinvention of the computer industry. And now we have a trillion dollars' worth of data centers in the world. All of that is going to get retooled. That's the amazing thing. We're in the beginning of a brand new generation of computing. It hasn't been reinvented in 60 years. This is why it's such a big deal, and it's hard for people to wrap their head around. But that's the great observation that we made: it includes a chip, but it's not about the chip. Jensen Huang, everybody. Thank you very much. Thanks, everybody |
8Pm2xEViNIo | it's my pleasure and privilege to be sitting in front of all of you here today to moderate a pioneer, not just in the technology space but in the artificial intelligence space as well: Jensen, who is leading probably the company that's at the center of the eye of the storm when it comes to artificial intelligence, the hype, the possibilities, and what this technology will mean. Jensen, it's a pleasure being with you on stage here. Thank you, it's great to be here, what an amazing conference. I just want to say that we really appreciate you taking the time, especially since you have GTC in six weeks. In six weeks I'm going to tell everybody about a whole bunch of new things we've been working on, the next generation of AI. Every single year they just push the envelope when it comes to artificial intelligence at GTC, so we're hoping to get a few snippets out of this. Okay, so I'd like to start with a question that was going on in my mind: how many GPUs can we buy for 7 trillion? Well, apparently all the GPUs. I think this is one thing I'm waiting to ask Sam about, because it's a really big number. Talk about ambition. We have a lot of ambition here in the UAE, we don't lack ambition, but is there a view you can give the government leaders today with regards to compute capabilities and artificial intelligence? How can they plan? Where do you think the deployment is going to make sense, and what advice do you have? Well, first of all, these are amazing times. These are amazing times because we're at the beginning of a new industrial revolution: the production of energy through steam, the production of electricity, the IT and information revolution with the PC and the internet, and now artificial intelligence. We are experiencing two simultaneous transitions, and this has never happened before. The first transition is the end of general purpose computing and the beginning of accelerated computing, specialized computing. Using CPUs for computation as the foundation of everything we do is no longer possible, and the reason for that is because it's been 60 years. We invented central processing units in 1964 with the announcement of the IBM System 360, and we've been riding that wave for literally 60 years now. This is now the beginning of accelerated computing. If you want sustainable computing, energy efficient computing, high performance computing, cost-effective computing, you can no longer do it with general purpose computing. You need specialized, domain-specific acceleration, and that's what's driving, at the foundation, our growth: accelerated computing. It's the most sustainable way of doing computing going forward, it's the most energy efficient, it is so energy efficient, it |
8Pm2xEViNIo | foundation of everything we do is no longer possible, and the reason for that is because it's been 60 years. We invented central processing units in 1964 with the announcement of the IBM System 360, and we've been riding that wave for literally 60 years now. This is now the beginning of accelerated computing. If you want sustainable computing, energy efficient computing, high performance computing, cost-effective computing, you can no longer do it with general purpose computing. You need specialized, domain-specific acceleration, and that's what's driving, at the foundation, our growth: accelerated computing. It's the most sustainable way of doing computing going forward, it's the most energy efficient, it is so energy efficient, it's so cost-effective, it's so performant, that it enabled a new type of application called AI. The question is, what's the cart and what's the horse? You know, first was accelerated computing, and it enabled a new application; there's a whole bunch of applications that are accelerated today. And so now we're in the beginning of this new era. And what's going to happen is, there's about a trillion dollars' worth of installed base of data centers around the world, and over the course of the next four or five years we'll have $2 trillion worth of data centers that will be powering software around the world, and all of it is going to be accelerated. And this architecture for accelerated computing is ideal for this next generation of software called generative AI. And so that's really at the core of what is happening. While we're replacing the installed base of general purpose computing, remember that the performance of the architecture is going to be improving at the same time. So you can't assume just that you will buy more computers; you have to also assume that the computers are going to become faster, and therefore the total amount that you need is not going to be as much. Otherwise, the mathematics, if you just assume, you know, that computers never get any faster, you might come to the conclusion that we need 14 different planets and three different galaxies and four more suns to fuel all this. But obviously computer architecture continues to advance. In the last 10 years, one of the greatest contributions, and I really appreciate you mentioning that, the rate of innovation, one of the greatest contributions we made was advancing computing and advancing AI by one million times in the last 10 years. And so whatever demand you think is going to power the world, you have to consider the fact that it is also going to be done one million times larger, faster, you know, more efficiently. Don't you think that creates a risk of |
8Pm2xEViNIo | might come to the conclusion that we need 14 different planets and three different galaxies and four more suns to fuel all this. But obviously computer architecture continues to advance. In the last 10 years, one of the greatest contributions, and I really appreciate you mentioning that, the rate of innovation, one of the greatest contributions we made was advancing computing and advancing AI by one million times in the last 10 years. And so whatever demand you think is going to power the world, you have to consider the fact that it is also going to be done one million times larger, faster, you know, more efficiently. Don't you think that creates a risk of having a world of haves and have-nots, since we need to constantly invest to ensure that we have the cutting edge, and to ensure that we are able to create the applications that are going to reshape the world and governments as we know them? Do you think that there's going to be an issue of countries that can afford these GPUs and countries that can't? And if not, because, you know, it'd be surprising if you said the answer is no, if not, what are going to be the drivers of equity? Excellent question. First of all, when something improves by a million times, and the cost or the space or the energy that it consumed did not grow by a million times, in fact you've democratized the technology. Researchers all over the world would tell you that Nvidia singlehandedly democratized high performance computing. We put it in the hands of every researcher. It is the reason why AI researchers, Geoff Hinton at the University of Toronto, Yann LeCun, I think Yann's going to be here, at New York University, Andrew Ng at Stanford, simultaneously discovered us. They didn't discover us because of supercomputers; they discovered us because of gaming GPUs that they used for deep learning. We put accelerated computing, or high performance computing, in the hands of every single researcher in the world. And so when we accelerate the rate of innovation, we're democratizing the technology. The cost of building or purchasing a supercomputer today is really negligible, and the reason for that is because we're making it faster and faster and faster. Whatever performance you need costs a lot less today than it used to. It is absolutely true we have to democratize this technology, and the reason why is very clear: there's an awakening in every single country, in probably the last six months, that artificial intelligence is a technology you can't be mystified by, you cannot be terrified by it. You have to find a way to activate yourself to take advantage of it. And the reason for that is because this is |
8Pm2xEViNIo | we accelerate the rate of innovation, we're democratizing the technology. The cost of building or purchasing a supercomputer today is really negligible, and the reason for that is because we're making it faster and faster and faster. Whatever performance you need costs a lot less today than it used to. It is absolutely true we have to democratize this technology, and the reason why is very clear: there's an awakening in every single country, in probably the last six months, that artificial intelligence is a technology you can't be mystified by, you cannot be terrified by it. You have to find a way to activate yourself to take advantage of it. And the reason for that is because this is the beginning of a new industrial revolution. This industrial revolution is about the production not of energy, not of food, but the production of intelligence. And every country needs to own the production of their own intelligence, which is the reason why there's this idea called sovereign AI. You own your own data. Nobody else owns it; your country owns the data. It codifies your culture, your society's intelligence, your common sense, your history. You own your own data. You therefore must take that data, refine that data, and own your own national intelligence. You cannot allow that to be done by other people. And that is a real realization. Now that we've democratized the computation of AI, the infrastructure of AI, the rest of it is really up to you to take initiative, activate your industry, build the infrastructure as fast as you can, so that the researchers, the companies, your governments can take advantage of this infrastructure to go and create your own AI. I think we completely subscribe to that vision. That's why the UAE is moving aggressively on creating large language models, mobilizing compute, and maybe working with other partners on this. Let's try to flip the paradigm a little bit. Let's today assume that Jensen Huang is the president of a developing nation that has a relatively small GDP, and you can focus on one AI application. What would it be? Let's call it a hypothetical nation and say that, you know, you have so many problems that you need to deal with. What is the first thing that you're going to approach if you're going to mobilize artificial intelligence? In that scenario, the first thing you have to do is you have to build infrastructure. If you want to mobilize the production of food, you have to build farms. If you want to mobilize the production of energy, you have to build AC generators. If you want to operationalize information, if you want to digitalize your economy, you have to build the internet. If you want to automate the |
8Pm2xEViNIo | what would it be? Let's call it a hypothetical nation and say that, you know, you have so many problems that you need to deal with. What is the first thing that you're going to approach if you're going to mobilize artificial intelligence? In that scenario, the first thing you have to do is you have to build infrastructure. If you want to mobilize the production of food, you have to build farms. If you want to mobilize the production of energy, you have to build AC generators. If you want to operationalize information, if you want to digitalize your economy, you have to build the internet. If you want to automate the creation of artificial intelligence, you have to build the infrastructure. It is not that costly. It is also not that hard. Companies all around the world of course want to mystify, terrify, glorify, you know, all of those ideas, but the fact of the matter is they're computers. You can buy them off the shelf, you can install them, and every country already has the expertise to do this. You surely need to have the imperative to go activate that. The first thing that I would do, of course, is I would codify the language, the data of your culture, into your own large language model, and you're doing that here. Core42, Saudi Aramco, they're really doing important work to codify the Arabic language and create your own large language models. But simultaneously, remember that AI is not just about language. We're seeing several AI revolutions happening at the same time: AI for language; AI for biology, learning the language of proteins and chemicals; AI for physical sciences, learning the AI of climate, materials, energy discovery; AI for IoT, the language of keeping places safe, computer vision and such; AI for robotics and autonomous systems, manufacturing and such. There are AI revolutions happening, great AI breakthroughs happening, in all of these different domains, and if you build the infrastructure, you will activate the researchers in every one of these domains. Without the internet, how can you be digital? Without farms, how can you produce food? Without an AI infrastructure, how can you activate all of the researchers that are in your region to go and create the AI models? You touched upon the issue of, I would say, the fear mongering, AI taking over the world, and I think there is a requirement for us to clarify where the hype is real and where artificial intelligence really has the power to create a lot of disruption and to harm us |
8Pm2xEViNIo | AI revolutions happening, great AI breakthroughs happening, in all of these different domains, and if you build the infrastructure, you will activate the researchers in every one of these domains. Without the internet, how can you be digital? Without farms, how can you produce food? Without an AI infrastructure, how can you activate all of the researchers that are in your region to go and create the AI models? You touched upon the issue of, I would say, the fear mongering, AI taking over the world, and I think there is a requirement for us to clarify where the hype is real, where artificial intelligence really has the power to create a lot of disruption and to harm us, and where AI is going to be good. What do you think is the biggest issue when it comes to artificial intelligence right now? Because I think the problem of regulating AI is like trying to say we want to regulate a field of computer science, or regulate electricity. You don't regulate electricity as an invention or as a discovery; you regulate a specific use case. What is one use case that you think we need to regulate against, and that governments should mobilize towards? Excellent question. First of all, whatever new incredible technology is being created, and you can go back to the earliest of times, it is absolutely true that we have to develop the technology safely, we have to apply the technology safely, and we have to help people use the technology safely. And so whether it's the plane that I came in, cars, manufacturing systems, medicine, all of these different industries are heavily regulated today. Those regulations have to be extended and augmented to consider artificial intelligence. Artificial intelligence will come to us through products and services. It is the automation of intelligence, and it will be augmented on top of all of these various industries. Now, it is the case that there are some interests in scaring people about this new technology, mystifying this technology, encouraging other people to not do anything about that technology and to rely on them to do it, and I think that's a mistake. We want to democratize this technology. Let's face it, the single most important thing that happened last year, if you were to ask me the one single most important event last year and how it has activated AI researchers here in this region, it's actually Llama 2. It's an open-source model. Or Falcon. Or Falcon, another excellent model, very true. Mistral, excellent model. I just saw another one, Smaug. There are so many open source models, innovations on safety, alignment, guardrailing, reinforcement learning, so many different innovations in reasoning that are happening, on top of transparency |
8Pm2xEViNIo | 's a mistake. We want to democratize this technology. Let's face it, the single most important thing that happened last year, if you were to ask me the one single most important event last year and how it has activated AI researchers here in this region, it's actually Llama 2. It's an open-source model. Or Falcon. Or Falcon, another excellent model, very true. Mistral, excellent model. I just saw another one, Smaug. There are so many open source models, innovations on safety, alignment, guardrailing, reinforcement learning, so many different innovations around transparency and explainability, all of this technology that has to be built, all of it was possible because of some of these open source language models. And so I think that democratizing, activating every region, activating every country to join the AI advance is probably one of the most important things, rather than convincing everybody it's too complicated, it's too dangerous, it's too mystical, and only two or three people in the world should be able to do it. That, I think, is a huge mistake. The focus, I think, that we have taken in the UAE is to focus on open source systems, because we do believe that anything that we develop here should be given as an opportunity for others that can't develop it. Most of this is developed using GPUs, the graphics processing units that you guys are supplying the world. What do you think the next era is going to depend on? Is it going to continue to be built on GPUs? Is there something else, a breakthrough, that we're going to see in the future, you think? Actually, you know, in just about all of the large companies in the world there are internal developments: at Google there's TPUs, at AWS there's Trainium, at Microsoft there's Maia, chips that they're building; in China just about every single CSP has chips that they're building. The reason why you mention Nvidia GPUs is that the Nvidia GPU is the only platform that's available to everybody on any platform. That's actually the observation. It's not that we're the only platform that's being used; we're simply the only platform that democratizes AI for everybody's platform. We're in every single cloud, we're in every single data center, we're available in the cloud, in your private data centers, all the way out to the edge, all the way out to autonomous systems, robotics and self-driving cars. One single architecture spans all of that. That's what makes Nvidia unique, that we can, in the |
8Pm2xEViNIo | has chips that they're building. The reason why you mention Nvidia GPUs is that the Nvidia GPU is the only platform that's available to everybody on any platform. That's actually the observation. It's not that we're the only platform that's being used; we're simply the only platform that democratizes AI for everybody's platform. We're in every single cloud, we're in every single data center, we're available in the cloud, in your private data centers, all the way out to the edge, all the way out to autonomous systems, robotics and self-driving cars. One single architecture spans all of that. That's what makes Nvidia unique. In the beginning, when CNNs were popular, we were the right architecture because we were programmable. CUDA, our architecture, has the ability to adapt to any architecture that comes along. So when CNNs came along, RNNs came along, LSTMs came along, and then eventually Transformers came along, and now vision Transformers, bird's-eye-view Transformers, all kinds of different Transformers are being created, and next-generation state space models, which are probably the next generation of Transformers, all of these different architectures can live and breathe and be created on Nvidia's flexible architecture. And because it's available literally everywhere, any researcher can get access to Nvidia GPUs and invent the next generation. So for those of you who are non-technical and heard, you know, a foreign language there, with CNNs and some of the other acronyms that are being used: the thing about artificial intelligence is that it's going through a lot of evolutions over a very short period of time, so whatever infrastructure was used probably five years ago is very different from the infrastructure that's being used today. But Jensen's point, and I think it's a very important point, is that Nvidia has always been relevant. Historically we see companies that are relevant at one phase of development, and then as the infrastructure changes they become irrelevant, but you guys were able to innovate and push through. Let's move to a non-AI related topic for a second. I want to talk about education. So today, knowing what you know, seeing what you see, and being at the cutting edge of the technology, what should people focus on when it comes to education? What should they learn? How should they educate their kids and their societies? Wow, excellent question. I'm going to say something, and it's going to sound completely opposite of what people feel. You probably recall, over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you it is vital that your children learn |
8Pm2xEViNIo | innovate and push through. Let's move to a non-AI related topic for a second. I want to talk about education. So today, knowing what you know, seeing what you see, and being at the cutting edge of the technology, what should people focus on when it comes to education? What should they learn? How should they educate their kids and their societies? Wow, excellent question. I'm going to say something, and it's going to sound completely opposite of what people feel. You probably recall, over the course of the last 10 years, 15 years, almost everybody who sits on a stage like this would tell you it is vital that your children learn computer science, everybody should learn how to program. And in fact it's almost exactly the opposite. It is our job to create computing technology such that nobody has to program, and that the programming language is human. Everybody in the world is now a programmer. This is the miracle of artificial intelligence. For the very first time, we have closed the gap. The technology divide has been completely closed, and that is the reason why so many people can engage artificial intelligence. It is the reason why every single government, every single industrial conference, every single company is talking about artificial intelligence today, because for the very first time you can imagine everybody in your company being a technologist. And so this is a tremendous time for all of you to realize that the technology divide has been closed, or, another way to say it, the technology leadership of other countries has now been reset. The countries, the people, that understand how to solve a domain problem, in digital biology, or in education of young people, or in manufacturing, or in farming, those people who understand domain expertise can now utilize technology that is readily available to you. You now have a computer that will do what you tell it to do, to help automate your work, to amplify your productivity, to make you more efficient. And so I think that this is just a tremendous time. The impact of course is great, and your imperative to activate and take advantage of the technology is absolutely immediate. And also to realize that to engage AI is a lot easier now than at any time in the history of computing. It is vital that we upskill everyone, and the upskilling process, I believe, will be delightful, surprising, to realize that this computer can perform all these things that you're instructing it to do, and doing it so easily. So if I was going to choose a major at university, a degree that I'm going to pursue, what would you give me as advice for something to pursue? If I were starting all over again, I would |
8Pm2xEViNIo | is great, and your imperative to activate and take advantage of the technology is absolutely immediate. And also to realize that to engage AI is a lot easier now than at any time in the history of computing. It is vital that we upskill everyone, and the upskilling process, I believe, will be delightful, surprising, to realize that this computer can perform all these things that you're instructing it to do, and doing it so easily. So if I was going to choose a major at university, a degree that I'm going to pursue, what would you give me as advice for something to pursue? If I were starting all over again, I would realize one thing, that one of the most complex fields of science is the understanding of biology, human biology. Not only is it complicated because it's so diverse, so complicated, so hard to understand, living and breathing, it is also incredibly impactful. Complicated technology, complicated science, incredibly impactful. For the very first time, and remember we call this field life sciences, and we call drug discovery, discovery, as if you wander around the universe and all of a sudden, hey, look what I discovered. Nobody in computer science, nobody in computers, and nobody in the traditional industries that are very large today, nobody says car discovery. We don't say computer discovery, we don't say software discovery, we don't go home and say, hey honey, look what I found today, this piece of software. We call it engineering. And every single year our science, our computer science, our software becomes better and better than the year before. Every single year our chips get better, every single year our infrastructure gets better. However, life sciences is sporadic. If I were to do it over again right now, I would realize that the technology to turn life science into life engineering is upon us, and that digital biology will be a field of engineering, not a field of science. It will continue to have science, of course, but not just a field of science in the future. And so I hope that this is going to start a whole generation of people who enjoy working with proteins and chemicals and enzymes and materials, and they're engineering these amazing things that are more energy efficient, that are lighter weight, that are stronger, that are more sustainable. All of these inventions in the future are going to be part of engineering, not scientific discovery. So I think we can end on a very positive note. Hopefully we're going to enter an era of discovery, an era of proliferating a lot of the things that unfortunately today are challenges to us, whether it's disease, whether it's limitations in resources. Thank you so much, Jensen, for taking the time and being with us, and I know that we could |
8Pm2xEViNIo | to start a whole generation of people who enjoy working with proteins and chemicals and enzymes and materials, and they're engineering these amazing things that are more energy efficient, that are lighter weight, that are stronger, that are more sustainable. All of these inventions in the future are going to be part of engineering, not scientific discovery. So I think we can end on a very positive note. Hopefully we're going to enter an era of discovery, an era of proliferating a lot of the things that unfortunately today are challenges to us, whether it's disease, whether it's limitations in resources. Thank you so much, Jensen, for taking the time and being with us, and I know that we could have continued for another hour, but thank you for taking the stage and thank you for your insight. Thank you, thank you everyone |
MwiM_nPyx5Y | I am the William v McLain professor of
business in the decision and operations division here at the
Columbia Business School. I want to thank you all for joining
us this evening for tonight's program, which features Jensen Huang, co-founder
and CEO of Nvidia Corporation, as well as our own Dean
of Columbia Business School, the David and Lyn Silfen
Professor of Business. Our two speakers tonight
have much in common. In fact, they both graduated from Stanford in
electrical engineering at nearly the same time, possibly even
overlapping. Jensen is the co-founder, president, and chief executive officer of
Nvidia. He is a businessman, entrepreneur, and electrical engineer. And over the last 30 years,
through his work at Nvidia, he has revolutionized first the graphics
processing unit industry and now, more recently, the artificial
intelligence industry. He's been named the world's best CEO by Harvard Business Review and Brand
Finance, as well as Fortune Magazine's Businessperson of the Year,
and one of Time Magazine's 100 most influential people. Our fireside chat today has been made
possible through both the David and Lyn Silfen Leadership Series, as well as
the digital finance, sorry, excuse me, the Digital Future
Initiative. Additionally, I serve on the leadership of the
Digital Future Initiative, the DFI, here at CBS. The Digital Future Initiative
is CBS's new think tank, focusing on preparing students to
lead for the next century of digital transformation, as well as helping
organizations, governments, and
communities better understand, leverage, and prosper from
future waves of digital transformation. Now I would like to hand it over. Thank you very much. Thank you all for coming. So this is an exciting topic and a
topic that is near and dear, certainly to my heart. And it's a topic where the school, everything that we do at the school is
changing so fast, trying to keep up, trying to change curricula, trying to create opportunities
for our students to actually learn about technologies and how they're
changing the world and, to be honest, prepare for the future. And there
is no better person to have here to talk about AI than Jensen Huang. Jensen, thank you so much for making the
time and coming here. Welcome. Thank you. I just loved hearing that. I think the expectations are |
MwiM_nPyx5Y |
topic that is near and dear, certainly to my heart. And it's a topic where the school, everything that we do at the school is
changing so fast, trying to keep up, trying to change curricula, trying to create opportunities
for our students to actually learn about technologies and how they're
changing the world and, to be honest, prepare for the future. And there
is no better person to have here to talk about AI than Jensen Huang. Jensen, thank you so much for making the
time and coming here. Welcome. Thank you. I just loved hearing that. I think the expectations are
going to be pretty high, but say something smart. Well, good luck with that. So I want to start by having you walk us through a
little bit of the history of Nvidia, and then I'll ask a little bit about that
leadership thing you just mentioned, but you launched that
company 30 years ago and you have led it through a transformation, different applications,
different types of products. Walk us through that journey a little bit. Yeah, one of my proudest moments,
I'll start with a proud moment, happened recently. The CEO of Denny's, which
was my first company, learned that Nvidia was founded at a Denny's:
not only was I a dishwasher and busboy who worked my way up the
corporate ladder and became a waiter at Denny's, they were my first company, and I still know how to take an order.
I still know the menu well. By the way, does anybody
know what a Super Bird is? What kind of college students were you? Denny's is America's diner. And Nvidia was founded at a Denny's just outside our home in San Jose. And so they contacted me recently,
and the booth that I sat at is now the Nvidia booth, with my name on it. This is where a trillion
dollar company was founded. And so Nvidia was founded during a time when the PC revolution and the microprocessor were capturing just about the entire industry, and the world pretty much believed
that the CPU, the microprocessor revolution,
really reshaped the IT industry: the companies that were successful before
the microprocessor revolution were displaced, and new companies became successful. We started our company during
that time, and our perspective was that general purpose computing, as incredible as it is, can't
sensibly be the solution for everything. We wanted to believe that there was
a way of doing computing, we call it accelerated computing, where you would add |
MwiM_nPyx5Y | . This is where a trillion
dollar company was founded. And so Nvidia was founded during a time when the PC revolution and the microprocessor were capturing just about the entire industry, and the world pretty much believed
that the CPU, the microprocessor revolution,
really reshaped the IT industry: the companies that were successful before
the microprocessor revolution were displaced, and new companies became successful. We started our company during
that time, and our perspective was that general purpose computing, as incredible as it is, can't
sensibly be the solution for everything. We wanted to believe that there was
a way of doing computing, we call it accelerated computing, where you would add a specialist
next to the generalist. The CPU is a generalist. It
could do anything, it could do everything. However, obviously, if you can do
everything and anything, then you can't
do anything very well. And so there were some problems
we felt that were not solvable, or had no good solutions, or were not the right
problems to be solved by what we call general purpose computing. And so we started this accelerated
computing company. The problem is, if you want to create a
computing platform company, you want to create a computing
platform, and one hasn't been created since the 1960s. A year after I was born, the IBM System 360 beautifully described what
the computer is. In 1964, IBM described that the System
360 had a central processing unit, an IO subsystem, direct memory
access, virtual memory, binary compatibility across
a scalable architecture. It described everything about how we describe
computers to this day, 60 years later. And we felt that there was a new form
of computing that could solve some problems. At the time it wasn't completely
clear what problems we could solve, but we felt that
accelerated computing could. So nonetheless, we went out to start this
company, and we made a great first decision that frankly is unbelievable to this day. If somebody were to come
up to you and say: one, we are going to invent a new technology
that the world doesn't have. Everybody wants to go build a
computer company around the CPU. We want to build the computer company
around something else connected to the CPU, number one. And the killer app, the killer app, is a
3D video game, in 1993, and that application doesn't exist.
And the companies that we would build this company with don't exist,
and the technology that
we're trying to build doesn't exist. And so now you have |
MwiM_nPyx5Y | great first decision that frankly is unbelievable to this day. If somebody were to come
up to you and say: one, we are going to invent a new technology
that the world doesn't have. Everybody wants to go build a
computer company around the CPU. We want to build the computer company
around something else connected to the CPU, number one. And the killer app, the killer app, is a
3D video game, in 1993, and that application doesn't exist.
And the companies that we would build this company with don't exist,
and the technology that
we're trying to build doesn't exist. And so now you have a company that has a technology challenge and
a market challenge and an ecosystem challenge. And so the odds of that company
succeeding are approximately 0%. But nonetheless, we were fortunate, because
two very important people, frankly, that I had worked with, and Chris and
Curtis, the three of us had worked with incredibly important people in the
technology industry at the time, who called up the most important
venture capital firm in the world at the time and
told Don Valentine: give this kid money, and
then figure out along the way whether it's going to work. And fortunately they did. But that business plan, I wouldn't fund myself
today; it just has too many dependencies, and each one of them
has some probability of success, and when you compound all of these
together, multiply all these together... And so nonetheless, we imagined that there would be this
market called video games, and this market would be the largest entertainment
industry in the world. At the time it was zero, and 3D graphics, which we
were fascinated with, would be used for telling the stories
of almost any game, any sport. And so in a virtual world, you
could have any game, any sport, and as a result everybody would be a
gamer. And so Don Valentine asked me, so how big is this market
going to be? And I said, well, every human will be a gamer someday.
Every human would be a gamer someday. Also the wrong answer, quite
frankly, for starting a company. So these are horrible habits,
these are horrible skills. I'm not advocating them
for you, but nonetheless it turned out to have been true: video
games turned out to be the largest entertainment industry,
in 3D graphics. And we found our first |
MwiM_nPyx5Y | , any sport, and as a result everybody would be a
gamer. And so Don Valentine asked me, so how big is this market
going to be? And I said, well, every human will be a gamer someday.
Every human would be a gamer someday. Also the wrong answer, quite
frankly, for starting a company. So these are horrible habits,
these are horrible skills. I'm not advocating them
for you, but nonetheless it turned out to have been true: video
games turned out to be the largest entertainment industry,
in 3D graphics. And we found our first
killer app for accelerated computing, which bought us the time
to use accelerated computing
to solve a whole bunch of other problems, which eventually led to... This is fantastic. Sorry.
So before we go to AI, I would like to ask a little
bit about the crypto period. So gaming was obviously a huge journey for Nvidia. And then at some point in time the
killer app became crypto and mining. What was that chapter? Accelerated computing can solve
problems that normal computers can't, and all of our GPUs, whether
you use them for designing cars, designing buildings,
for molecular dynamics, or for playing video games, have this programming model
called CUDA that we invented. And CUDA is the only computing
model of its kind that exists today, and it's used
all over the world, by Boeing and many others. And so anyways, one of the things
that CUDA can do is parallel processing, incredibly fast. And obviously one of the algorithms
that we would do very nicely on is cryptography. And so when
Bitcoin first came out, there were no Bitcoin ASICs, and the obvious thing is to go find
the fastest supercomputer in the world. And the fastest supercomputer that also
has the highest volume is Nvidia GPUs; they're available in hundreds of
millions of gamers' homes. And so by downloading an application, you could do some mining
at your house. Well, the fact that you could buy one
of our GPUs, one of our computers, and you plug it into the wall
and money starts squirting out, that was a day that my mom figured
out what I did for a living. And so she called me one
day and she said, son, I thought you were doing
something about video games. And I finally figured out what |
MwiM_nPyx5Y | fastest supercomputer that also
has the highest volume is Nvidia GPUs; they're available in hundreds of
millions of gamers' homes. And so by downloading an application, you could do some mining
at your house. Well, the fact that you could buy one
of our GPUs, one of our computers, and you plug it into the wall
and money starts squirting out, that was a day that my mom figured
out what I did for a living. And so she called me one
day and she said, son, I thought you were doing
something about video games. And I finally figured out what
you do. You buy Nvidia's products, you plug it in, and money squirts out.
And I said, that's exactly what I do. And that's the reason why
so many people bought it; Bitcoin worked and led to Ethereum. But the idea that you
would use a supercomputer, a super processing
system like Nvidia GPUs, to encode or compress or
do something to refine data and transform
it into a valuable token... you guys know what that sounds
like: generating valuable tokens, ChatGPT. And so today, really,
one of the things that's happening, if you extend the sensibility
of Ethereum and crypto mining, it's kind of sensible in the sense that
all of a sudden we created this new type of industry where raw data comes in, you apply energy to this computer, and
literally money comes squirting out. And the currency is of course tokens, and those tokens are intelligence tokens. This is one of the major
industries of the future. Now I'll describe something else, and
it makes perfect sense to us today, but back then it looked strange. You take
water and you move it into a building, you apply fire to it, and what comes out is something
incredibly valuable and invisible called electricity. And so today we're going to move data
into a data center that's going to refine it, it's going to work on it, it's
going to harness the capability of it and produce a whole bunch of digital
tokens that are going to be valuable in digital biology, valuable
in physics, valuable in IT, all kinds of computing areas, and
social media and all kinds of things, computer games and all kinds of
things. And it comes out in tokens. So the future is going to be about
AI |
MwiM_nPyx5Y | comes out is something
incredibly valuable and invisible called electricity. And so today we're going to move data
into a data center that's going to refine it and it's going to work on it and it's
going to harness the capability of it and produce a whole bunch of digital
tokens that are going to be valuable in digital biology, valuable
in physics, valuable in IT, all kinds of computing areas, and
social media and all kinds of things. Computer games and all kinds of
things. And it comes out in tokens. So the future is going to be about
AI factories, and Nvidia gear will be powering these AI factories. So we have jumped into
neural networks, and I want to... we talked about accelerated
computing, how we render graphics, let's say on a monitor, how we play games, how we solve cryptographic
problems for Bitcoin. Talk to us a little bit about how
the GPU is useful in training neural networks. But then what I wanted us
to do for this audience: tell us a little bit about what it
takes to train a model like ChatGPT, what it takes in terms of hardware,
what it takes in terms of data, in terms of the size of
the cluster that you're using, the amount of money that you need to
spend. Because these are huge problems. And I think giving us a
glimpse of the scale would be fun. Well, everybody wants you to think that it's
a huge problem. It's super expensive. It's not. It's not. And
let me tell you why. It costs our company about five, six billion dollars of engineering
costs to design a chip. And then, two years, three years later, I hit enter and I send
an email to TSMC and I FTP, basically, a large
file to TSMC, and they fab it. And that process costs our company
something along the lines of half a billion dollars. So for five and a
half billion dollars, I get a chip. And that chip of course is valuable
to us, but it's no big deal. I do it all the time. And so if
somebody were to say, hey Jensen, you need to build a billion dollar
data center, and once you plug it in, money will start squirting out the
other side. |
MwiM_nPyx5Y | I sent
an email to TSMC and I FTP, basically, a large
file to TSMC, and they fab it. And that process costs our company
something along the lines of half a billion dollars. So for five and a
half billion dollars, I get a chip. And that chip of course is valuable
to us, but it's no big deal. I do it all the time. And so if
somebody were to say, hey Jensen, you need to build a billion dollar
data center, and once you plug it in, money will start squirting out the
other side, I'll do it in a heartbeat. And apparently a lot of people do. And the reason for that is because who
doesn't want to build a factory for generating intelligence now? So a
billion dollars is not that much money. Frankly, the world spends about 250 billion
a year on computing infrastructure, and none of it's generating
money. It's just storing our files, passing our email around. And
that's already 250 billion. And so one of the reasons why we're growing so fast is because, after 60 years, general purpose computing is in
decline, because it is not sensible to invest another 250 billion to build
another general purpose computing data center. It's too brute-force in energy, it's too slow in computation. And so
now accelerated computing is here, and that 250 billion goes to build
accelerated computing data centers. And we're very, very happy to support
customers to do that. And in addition to that, with accelerated computing you now have an infrastructure that can do
AI, for all of the things that we were just talking about. Basically the way it works is, you take
a whole lot of data and you compress it. Deep learning is like a compression
algorithm: you're trying to learn the mathematical
representation, the patterns
and relationships of the
data that you're studying, and you compress it into a neural network. So what goes in is, say, trillions of bytes, trillions of tokens,
let's say a few trillion bytes, and what comes out of it
is a hundred gigabytes. And so you've taken all of that data and
you've compressed it into this little tiny funnel. A hundred
gigabytes is like two DVDs. Two DVDs you could download on your
phone and you |
MwiM_nPyx5Y | gorithm: you're trying to learn the mathematical
representation, the patterns
and relationships of the
data that you're studying, and you compress it into a neural network. So what goes in is, say, trillions of bytes, trillions of tokens,
let's say a few trillion bytes, and what comes out of it
is a hundred gigabytes. And so you've taken all of that data and
you've compressed it into this little tiny funnel. A hundred
gigabytes is like two DVDs. Two DVDs you could download on your
phone and you can watch it. So you could download this giant neural
network on your phone. And now that all of this data
has been compressed into it, the data that's compressed, your
network model is a semantic model, meaning you can interact with it, you could ask questions and it would go
back into its memory and understand what you meant and generate text,
read, have a conversation. So at the core is kind of
like that. It sounds magical, but for all the computer scientists
in a room, it's very sensible. And don't let anybody convince
you it costs a lot of money. I'll give you a good break.
Everybody go Bill aids. Go bill, as. The scale. If I press you
a little bit on that scale, do you need a computer that is
essentially a data center to estimate these models? 16,000 GPUs is what it
took to build a g, PT four, which is the largest one that
anybody's using today. It's a billion, and that's a check. It's
not even a very big check. Don't be afraid. Don't let anybody
talk you how to building a company, build your. Dreams. Let me ask you a question about
the billion dollar check and the growth that you've been experiencing. I think you were named
the best CEO by HBR. That's entertainment. That's entertainment. I'll keep repeating it and then eventually
I appreciate that and then eventually we'll end with that
line. But in some sense, you are leading a company right now
through a period of extreme growth, hypergrowth, something that most companies have
not experienced in their life. And I want you to perhaps tell us a little bit about what
does it look like? I mean, doubling in |
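A hypothetical back-of-envelope for the 16,000-GPU, "billion-dollar check" figures mentioned above. The per-GPU price and the overhead multiplier are invented assumptions, not numbers from the talk; they only show how the scale is plausible.

```python
# Hypothetical back-of-envelope (all unit prices are assumptions, not quoted
# in the talk): what a 16,000-GPU training cluster might cost in round numbers.
gpus = 16_000
assumed_price_per_gpu = 30_000        # assumed accelerator price, USD
assumed_system_overhead = 1.5         # assumed multiplier for networking, power, facility

hardware = gpus * assumed_price_per_gpu
total = hardware * assumed_system_overhead
print(f"GPUs alone:    ${hardware / 1e9:.2f}B")   # ~$0.48B
print(f"With overhead: ${total / 1e9:.2f}B")      # ~$0.72B, roughly 'billion-dollar check' scale
```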
MwiM_nPyx5Y | me ask you a question about
the billion dollar check and the growth that you've been experiencing. I think you were named
the best CEO by HBR. That's entertainment. That's entertainment. I'll keep repeating it and then eventually
I appreciate that and then eventually we'll end with that
line. But in some sense, you are leading a company right now
through a period of extreme growth, hypergrowth, something that most companies have
not experienced in their life. And I want you to perhaps tell us a little bit about what
does it look like? I mean, doubling in size in under a year or managing supply chains,
managing customers, managing
growth, managing money. How do you actually do all of that? I love the managing money part of it. Just counting is fun. You just wake up in the morning and just roll around in all the cash. Isn't that what you guys are all here to do? My understanding is that's the end goal. That's the end goal, yeah. Let's see. Building companies is hard, there's nothing easy about it. There's a lot of pain and suffering, a lot of hard work. If it was
easy, everybody would do it. And the truth about all
companies, big or small, ours or others in technology,
you're always dying. And the reason for that is because
somebody's always trying to leapfrog you. So you're always on the verge of going out of business. And if you don't internalize that sensibility, don't internalize that belief,
You will go out of business. So I started at Denny's, as you guys know, and Nvidia was built out
of very unlikely odds. And it took us a long time to be here.
I mean, we're a 30 year old company, and when NVIDIA was first founded in 1993, Windows 95, the first really usable PC, hadn't come out yet. We didn't have email. And so there were no laptops or smartphones, none of that stuff existed. And so you could just imagine the world that we were started in. We didn't have CDs, everything was CRTs. And so the world was very, very different. CD-ROMs didn't exist. Just to put it in perspective, all this stuff, that was the era we were founded in
and |
MwiM_nPyx5Y | a 30 year old company, and when NVIDIA was first founded in 1993, Windows 95, the first really usable PC, hadn't come out yet. We didn't have email. And so there were no laptops or smartphones, none of that stuff existed. And so you could just imagine the world that we were started in. We didn't have CDs, everything was CRTs. And so the world was very, very different. CD-ROMs didn't exist. Just to put it in perspective, all this stuff, that was the era we were founded in, and it took this long for our company to be recognized as having reinvented computing for the first time in 60 years. Growing fast. Growing fast is all about people. Obviously, companies are all about people. If you have the right systems in place, if you get that right, if you're surrounded by amazing people like I am and the company has the craft and skills, it doesn't really matter whether you
ship a hundred billion dollars or 200 billion. Now the truth is that the
supply chain is not easy. People think, does anybody know what a GeForce graphics card looks like? Just show me a hand, anybody who knows what an NVIDIA graphics card looks like. And so you have a feeling that the graphics card is like a cartridge that you put into a PC, a PCI Express slot in a PC. But our graphics chips these days, what is used in these deep learning systems, is 35,000 parts. It weighs 70 pounds. It takes robots to build 'em
because they're so heavy. It takes a supercomputer to test it
because it's a supercomputer itself and it costs $200,000. And for $200,000, you buy one of these computers, you replace several hundred
general purpose processors
that cost several million dollars. And so for every $200,000 you spend with NVIDIA, you save two and a half million dollars in computing. And that's the reason why I tell you, the more you buy, the more you save, and lately it's working out really well. People are really lining up. So that's it. That's what we do for a living. And the supply chain is complicated. We build the most complicated computers the world's ever seen, but how hard can it be, really? And it's really hard |
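A quick sketch of the cost comparison quoted above, using only the figures from the talk; reading "savings" as avoided CPU-server spend is an interpretation, not something stated outright.

```python
# One reading of the arithmetic above (dollar figures from the talk; the
# interpretation that "savings" means avoided CPU spend is an assumption):
gpu_system_cost = 200_000          # one accelerated system
claimed_savings = 2_500_000        # "save two and a half million dollars in computing"
implied_cpu_equivalent = gpu_system_cost + claimed_savings

print(f"Implied cost of the CPU-only equivalent: ${implied_cpu_equivalent:,}")          # $2,700,000
print(f"Implied price/performance advantage: {implied_cpu_equivalent / gpu_system_cost:.1f}x")  # 13.5x
```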
MwiM_nPyx5Y | cost several million dollars. And so for every $200,000 you spend with NVIDIA, you save two and a half million dollars in computing. And that's the reason why I tell you, the more you buy, the more you save, and lately it's working out really well. People are really lining up. So that's it. That's what we do for a living. And the supply chain is complicated. We build the most complicated computers the world's ever seen, but how hard can it be, really? And it's really hard. It's really hard. But at the core of it, if
you're surrounded by amazing people, the simple truth is that it's all about people. And I'm lucky to be surrounded by a great management team. You have. And then the CEO says things like, make it so. Number one, something like that. Yeah, make it work. Make it work, make it so. I want to go back to AI trends
and what you think about the future, but you mentioned
the word platform earlier on. You mentioned your software environment. So you have the hardware infrastructure, you have a software environment that is
actually pervasive in training neural networks. Right now you're building in data centers or
you're creating environments within data centers that are sort of
clusters of Nvidia hardware, software and communication between these resources. How important is it to be sort of a whole platform solution versus a hardware play? And how core is that to Nvidia? Look, first of all, before you could build something, you have to know what you're building and what is the reason, the first
principles for its existence. Accelerated computing is not a chip,
that's why it's not called an accelerator. Accelerated computing is
about understanding how can
you accelerate everything in life? If you can
accelerate everything in life, if you can accelerate every application,
that's called really fast computing. And so accelerated computing is first
understanding what are the domains, what are the applications
that matter to you? And to understand the algorithms and the
computing systems and the architecture necessary to accelerate that application. So it turns out that general
purpose computing is a sensible idea. Accelerating an
application is a sensible idea. So we'll give you an example.
There's, you have |
MwiM_nPyx5Y | called an accelerator. Accelerated computing is
about understanding how can
you accelerate everything in life? If you can
accelerate everything in life, if you can accelerate every application,
that's called really fast computing. And so accelerated computing is first
understanding what are the domains, what are the applications
that matter to you? And to understand the algorithms and the
computing systems and the architecture necessary to accelerate that application. So it turns out that general
purpose computing is a sensible idea. Accelerating an
application is a sensible idea. So we'll give you an example.
There's, you have DVD decoders to play DVDs, or H.264 decoders on your phone. It does one job and one job only, and it does it incredibly well.
Nobody knows how to do it better. Accelerated computing is
kind of this weird middle. There are many applications that
you can accelerate. So for example, we can accelerate all kinds of image
processing stuff, particle physics stuff. We can accelerate all kinds of things.
That includes linear algebra. We can accelerate many, many domains of applications. That's a hard problem. Accelerating one thing is easy. Generally running everything, like a CPU does, is easy. Accelerating enough domains is the hard part, because if you accelerate too many domains, if you accelerate every domain, then you're back to a general purpose processor, and what makes them so dumb that they can't just build a faster chip? So that's on the one hand. On the other hand, if you only accelerate one application, then the market size is not
big enough to fund your R&D. And so we had to find that
slippery middle. And that is the strategic journey of our company.
This is where strategy meets reality. And that's the part that Nvidia got right, that no other company in the history
of computing ever got, right? To find a way to have a sufficiently
large domain of applications that we can accelerate that is still a hundred
times, 500 times faster than the CPU and such that the economics, the flywheel, the flywheel of the number of domains
expanding the number of customers, expanding the number of
markets, expanding the sales, which creates larger R&D, which allows
us to create even more amazing things, which allows us to stay well ahead
of the CPU. Does that make sense? That fly |
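To make "accelerating a domain" concrete, here is a minimal sketch that times the same dense linear algebra on a CPU and on an NVIDIA GPU. It assumes NumPy and CuPy are installed and a CUDA-capable GPU is present; the actual speedup depends entirely on the hardware and problem size, and this is a generic illustration, not NVIDIA's own benchmark.

```python
# Illustrative sketch of accelerating one domain (dense linear algebra) on a GPU.
# Assumes NumPy and CuPy are installed and an NVIDIA GPU is available.
import time
import numpy as np
import cupy as cp

n = 4096
a_cpu = np.random.rand(n, n).astype(np.float32)
b_cpu = np.random.rand(n, n).astype(np.float32)

t0 = time.perf_counter()
c_cpu = a_cpu @ b_cpu                      # general-purpose CPU path
cpu_s = time.perf_counter() - t0

a_gpu, b_gpu = cp.asarray(a_cpu), cp.asarray(b_cpu)
cp.cuda.Stream.null.synchronize()          # make the timing fair
t0 = time.perf_counter()
c_gpu = a_gpu @ b_gpu                      # accelerated path, same math
cp.cuda.Stream.null.synchronize()
gpu_s = time.perf_counter() - t0

print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s  speedup: {cpu_s / gpu_s:.0f}x")
```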
MwiM_nPyx5Y | the part that Nvidia got right, that no other company in the history
of computing ever got, right? To find a way to have a sufficiently
large domain of applications that we can accelerate that is still a hundred
times, 500 times faster than the CPU and such that the economics, the flywheel, the flywheel of the number of domains
expanding the number of customers, expanding the number of
markets, expanding the sales, which creates larger R&D, which allows
us to create even more amazing things, which allows us to stay well ahead
of the CPU. Does that make sense? That flywheel is insanely hard
to create. Nobody's ever done it. It's only been done just one time. And so that is the capability.
And in order to do that, you have to understand the algorithms, you have to understand a lot about the
domains of applications. You have to select it, right? You have to create
the right architecture for it. And then the last thing that we did right, was that we realized that in order
for you to have a computing platform, the applications you develop for
Nvidia should run on all of Nvidia. You shouldn't have to think, does it run on this chip? Is it going to run on that chip? It should run on every chip. It should run on every computer with Nvidia in it. That's the reason why CUDA runs on every single GPU that's ever been created in our company. Even though we had no customers for CUDA a long time ago, we stayed committed to it. We were determined to create
this computing platform
since the very beginning. Customers were not. And that
was the pain and suffering. It cost the company decades and
billions of dollars getting here. And if not for all the video gamers in the room here, we wouldn't be here. You were our day job. And then at night we could go solve digital biology, help people with quantum chemistry, help people with artificial intelligence and robotics and such. And so we realized, number one, that accelerated computing is a software problem. The second thing is, AI is a data center, data center infrastructure problem. And it's very obvious, because you
can't train an AI model on a laptop, you can't train it on a cell phone.
It's not big enough of a computer. The amount of data is measured
in |
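To illustrate the "write once, run on any NVIDIA GPU" platform point described above, here is a minimal Python sketch using Numba's CUDA support. It assumes the numba package and a CUDA-capable NVIDIA GPU; the kernel is a toy, not anything NVIDIA ships.

```python
# Minimal sketch of the "write once, run on every CUDA GPU" idea, using Numba.
# Assumes the numba package and a CUDA-capable NVIDIA GPU are available.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)                 # global thread index
    if i < x.size:
        out[i] = a * x[i] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads = 256
blocks = (n + threads - 1) // threads
saxpy[blocks, threads](2.0, x, y, out)   # the same source runs on any CUDA-capable GPU
print(out[:4])
```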
MwiM_nPyx5Y | here, we wouldn't be here. You were our day job. And then at night we could go solve digital biology, help people with quantum chemistry, help people with artificial intelligence and robotics and such. And so we realized, number one, that accelerated computing is a software problem. The second thing is, AI is a data center, data center infrastructure problem. And it's very obvious, because you
can't train an AI model on a laptop, you can't train it on a cell phone.
It's not big enough of a computer. The amount of data is measured
in trillions of bytes, and you have to process that
trillions of bytes billions of times. And so obviously that's going to be a large scale computer distributing the problem across millions of processors. The reason why I say millions is that inside each of the 16,000 GPUs are thousands of processors. And so we're distributing the workload
across millions of processors. There are no applications in the world
today that can be distributed across millions of processors.
Excel works on one processor. And so that computer science
problem was a giant breakthrough, utterly giant breakthrough. And this reason why it enabled generative
AI enabled large language models. So we observed two things.
One, accelerated computing
is a software problem, algorithm problem, and AI
is a data center problem. And so we're the only company that
went out and built all of that stuff. And the last part that we did
was a business model choice. We could have been a data center company
ourselves and be completely vertically integrated. However, we recognized that no computer company, no matter how successful, will be the only
computer company in the world and it's better to be a platform computing
company because we love developers. It's better to be a platform computing company
that serves every computing company in the world than to be a computing
company all by ourselves. And so we took this data center,
which is the size of this room, whole bunch of wires and a whole bunch
of switches and networking and a bunch of software. We disaggregated all of that and we
integrated it into everybody else's data centers that are all completely different. So AWS and GCP and Azure
and Meta and so on and so forth, data centers all over the world,
that's an insane complexity problem. And we figured out a way to have enough
|
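The "distribute one problem across many processors" point above is what distinguishes AI workloads from something like a spreadsheet. Here is a toy Python analogy: real large-model training shards work across thousands of GPUs with specialized libraries, while this sketch only shows the basic "split the data, combine the results" shape.

```python
# Toy analogy for splitting one big job across many workers (data parallelism).
# This is not the real training stack, just the shape of the idea.
from multiprocessing import Pool

def partial_sum(shard):
    # Each worker handles its own shard independently.
    return sum(v * v for v in shard)

if __name__ == "__main__":
    data = list(range(1_000_000))
    workers = 8
    shards = [data[i::workers] for i in range(workers)]   # split the workload
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, shards))        # combine the partial results
    print(total)
```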
MwiM_nPyx5Y | computing company
that serves every computing company in the world than to be a computing
company all by ourselves. And so we took this data center,
which is the size of this room, whole bunch of wires and a whole bunch
of switches and networking and a bunch of software. We disaggregated all of that and we
integrated it into everybody else's data centers that are all completely different. So AWS and GCP and Azure
and Meta and so on and so forth, data centers all over the world,
that's an insane complexity problem. And we figured out a way to have enough
standardization where it was necessary enough flexibility so that we could
accommodate enough collaboration with all the world's computer
companies. As a result, NVIDIA's architecture has now grafted itself, if you will, into every single computer company in the world. And that has created a large footprint, a larger and larger install base, more developers, better applications, which makes customers happier; happier customers means providing them more chips, which increases the install base, which increases our R&D budget, and so on and so forth. The flywheel,
the positive feedback system. And so that's how it works. Nice and
easy. So one thing you haven't done, and I wanted you to explain to us why: you haven't invested in fabricating your own chips. Why is that? That's an excellent question. The reason for that is, as a matter of strategic choice,
the core values of our company, my own core values, the core values
of our company is about choosing. The most important thing in life
is choosing. How do you choose? How do you choose? Well, everything.
How do you choose what to do tonight? How do you choose? Well, our company decides to choose
projects for one fundamental goal. My goal is to create the
environment and environment by which amazing people in the
world will come and work. Amazing environment for the best people
in the world who want to pursue this field of computing and computer science
and artificial intelligence to create the conditions by which they will
come and do their lives work. Well, if I say that then now the question
is how do you achieve that? So lemme give you an example
of how not to achieve that. Nobody that I know wakes up in the
morning and say, you know what? My neighbor is doing that, |
MwiM_nPyx5Y |
projects for one fundamental goal. My goal is to create the
environment and environment by which amazing people in the
world will come and work. Amazing environment for the best people
in the world who want to pursue this field of computing and computer science
and artificial intelligence to create the conditions by which they will
come and do their lives work. Well, if I say that then now the question
is how do you achieve that? So lemme give you an example
of how not to achieve that. Nobody that I know wakes up in the
morning and say, you know what? My neighbor is doing that, and
you know what I want to do? I want to take it from
them. I can do it too. I want to take it from them.
I want to capture their share. I want to pummel them on
price. I want to kick 'em in. I want to take their share. It turns out, no great people do that. Everybody wakes up in
the morning and says, I want to do something that
has never been done before. That's incredibly hard to do that if
successful makes it great impact in the world. And that's what greatest core
values are. One, how do we choose? Do something that the world's never done before, something that is insanely hard to do. The reason why you choose something insanely hard to do, by the way, is so that you have lots of time to go learn it. If something is insanely easy to do, like tic-tac-toe, I wouldn't fuss over it. And the reason for that, obviously, is it's highly competitive. And so you got to choose something that's incredibly hard to do, and that thing that's hard to do discourages a whole bunch of people all by itself, because the person who's willing to suffer the longest wins. And so we choose
things that are incredibly hard to do, and you've heard me say pain and suffering a lot, and it's
actually a positive attribute. People who can suffer are ultimately
the ones that are the most successful, number one. Number two, you should choose something that's
somehow you're destined to do. Either a set of qualities
about your personality or
your expertise or the people you're surrounded by, your
scale, whatever your perspective, whatever you're somehow destined
to do. The number three, you better love working on that |
MwiM_nPyx5Y | person who's willing to suffer the longest wins. And so we choose
things that are incredibly hard to do, and you've heard me say pain and suffering a lot, and it's
actually a positive attribute. People who can suffer are ultimately
the ones that are the most successful, number one. Number two, you should choose something that somehow you're destined to do. Either a set of qualities about your personality, or your expertise, or the people you're surrounded by, your skill, whatever, your perspective, whatever; you're somehow destined to do it. And number three, you better love working on that
thing so much, because otherwise the pain and suffering is too great. Now, I just described to you, I just described to you Nvidia's core values. It's as simple as that. And if that's the case, what am I doing making a cell phone chip? How many companies in the world can make a cell phone already? Why am I making a CPU? How many more CPUs do we need? Does that make sense? We
don't need all those things. And so we naturally selected
ourselves out of commodity markets. We naturally selected ourselves
out of commodity markets. And because we selected amazing
markets, amazingly hard to do things, amazing people joined us. And because amazing people joined us
and because we had the patience and let them go succeed to go
and do something amazing, have the patience to let 'em do something
amazing, they do something amazing. The equation is that simple. The
equation is literally that simple. It turns out it's simple to say, it
takes incredible character to do. Does that make sense? That's why it's
the most important thing to learn. It turns out great success and
greatness is all about character. And no fabrication. The reason why we don't do fabrication
is because TSMC does it so well, and they're already doing it. For what reason do I go take their work? I like the people at TSMC, they're great friends of mine. C.C.'s a great friend of mine, Mark's a great friend of mine. Just because I've got business I could drive to it, so what,
they're doing a great job for me. Let's not squander my time to go
repeat what they've already done. Let's go squander |
MwiM_nPyx5Y | is all about character. And no fabrication. The reason why we don't do fabrication
is because TSMC does it so well, and they're already doing it. For what reason do I go take their work? I like the people at TSMC, they're great friends of mine. C.C.'s a great friend of mine, Mark's a great friend of mine. Just because I've got business I could drive to it, so what,
they're doing a great job for me. Let's not squander my time to go
repeat what they've already done. Let's go squander my time on
something that nobody has done. Does that make sense? Nobody has done,
that's how you build something special. Otherwise you're only
talking about market share. Thinking about the
future, what do you think when we're thinking about this decade? Are these the right answers? By the way, I don't have an MBA and I didn't get a finance degree. I read some books and I watched a lot of YouTubes. I got to tell you, nobody watches more business YouTubes than I do. And so you guys have nothing on me. Are these the right answers, professor? But yes, they're the right answers. And best CEO. Yeah, right? And what do you think about AI? What are you thinking about AI
applications and where we're going to see change in our lives, let's
say over the next 3, 5, 7 years? Where do you see that going
and in places where we will all potentially be affected
in our daily experience? Yeah, first of all, I'm
going to go to the punchline. AI is not going to take your job. The person who uses AI is going to take
your job. You guys agree with that? Okay? So use AI as fast as you can so then
you can stay gainfully employed. Let me ask you a second thing.
When productivity increases, when productivity increases,
meaning we embed AI all over Nvidia, Nvidia is going to become one giant AI. We already use AI to design our chips. We can't design our chips, we can't write our optimizing compilers without AI. So we use AI all over the place. When AI increases the productivity
of your company, what happens next? Layoffs Or you hire more people, |
MwiM_nPyx5Y | take
your job. You guys agree with that? Okay? So use AI as fast as you can so then
you can stay gainfully employed. Let me ask you a second thing.
When productivity increases, when productivity increases,
meaning we embed AI all over Nvidia, Nvidia is going to become one giant AI. We already use AI to design our chips. We can't design our chips, we can't write our optimizing compilers without AI. So we use AI all over the place. When AI increases the productivity
of your company, what happens next? Layoffs? Or you hire more people? You hire more people. And the reason for that is, give me an example of one company that had earnings growth because of productivity gains that said, guess what? My gross margins just went up, time for a layoff. So why is it that people think about losing jobs? If you think you have no new ideas,
then that's the logical thing. Does that make sense? If you don't have any more ideas to
invest your incremental earnings, then what are you going to do? When
the work is replaced? It's automated. You lay people off. And so join companies where they
have more ideas than they can afford to fund so that when
AI automates their work, it's going to shift. Of course, it's
going to change the style of working. AI's going to come after CEOs right away. Deans and CEOs, we're so toast. I think CEOs first, deans second, but you're close. So you join companies
where they have more ideas, more ideas than they have money
to invest. And so naturally, when earnings improve, you're
going to hire more people. Ai. So first of all, this is
the giant breakthrough. Somehow we've taught
computers how to learn to represent information
in numerical ways. Okay, so you guys, has anybody heard of
this thing called word2vec? It's one of the best things ever. Word2vec, word to vector. You take words and you learn from the words, studying every single word's relationship to every other word. And you learn, you read a whole lot of sentences and paragraphs, and you try to figure out what's the best number, what vector, what's the best number to associate with that word? So mother and father are close together numerically, oranges and apples are close |
MwiM_nPyx5Y |
computers how to learn to represent information
in numerical ways. Okay, so you guys, has anybody heard of
this thing called word2vec? It's one of the best things ever. Word2vec, word to vector. You take words and you learn from the words, studying every single word's relationship to every other word. And you learn, you read a whole lot of sentences and paragraphs, and you try to figure out what's the best number, what vector, what's the best number to associate with that word? So mother and father are close together numerically, oranges and apples are close together.
Numerically, they're far from mom and dad. Dogs and cats are far from mom and dad, but probably closer to mom and dad than they are to oranges and apples. Chairs and tables, hard to say exactly where they lie, but those two numbers are close to each other, far away from mom and dad. King and queen, close to mom and dad. Does it make sense? Imagine doing this for every single word, and every time you test it, you go, son of a gun, that's pretty good. And when you subtract something from something else, it makes sense. Okay? That's basically learning the
representation of information. Imagine doing this for English. Imagine
doing this for every single language. Imagine doing this for
anything with structure, meaning anything with predictability.
Images have structure. Because if there are no
structure, it'd be white noise. Physically it'd be white noise.
And so there must be structure. That's the reason why you see a cat, I
see a cat, you see a tree, I see a tree. You can identify where the tree is, you
can identify where the coastline is, where the mountains are where.
And so we could learn all of that. So obviously you could take that
image and turn it into a vector. You could take videos and turn
into vector three D into vectors, proteins into vectors, because there's
obviously structure and protein, chemicals into vectors. Genes
eventually into vectors. We can learn the vectors
of everything. Well, if you can learn everything
into numbers and its meaning, then obviously you can take the word, c-a-t, and translate it to the image of a cat. Obviously this is the same meaning. If you can go from |
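A toy illustration of the word-embedding idea described above. The three-number "vectors" below are hand-made for the example (real word2vec embeddings are learned and much longer), but they show how cosine similarity puts related words close together and unrelated ones far apart.

```python
# Toy illustration of the word2vec idea: hand-made vectors, not learned ones.
import numpy as np

emb = {
    "mother": np.array([3.0,  1.0, 0.0]),
    "father": np.array([3.0, -1.0, 0.0]),
    "apple":  np.array([0.0,  0.2, 3.0]),
    "orange": np.array([0.0, -0.2, 2.8]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(f"mother ~ father: {cosine(emb['mother'], emb['father']):.2f}")  # high: shared 'family' direction
print(f"apple  ~ orange: {cosine(emb['apple'],  emb['orange']):.2f}")  # high: shared 'fruit' direction
print(f"mother ~ apple : {cosine(emb['mother'], emb['apple']):.2f}")   # near zero: unrelated
```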
MwiM_nPyx5Y | mountains are where.
And so we could learn all of that. So obviously you could take that
image and turn it into a vector. You could take videos and turn
into vector three D into vectors, proteins into vectors, because there's
obviously structure and protein, chemicals into vectors. Genes
eventually into vectors. We can learn the vectors
of everything. Well, if you can learn everything
into numbers and its meaning, then obviously you can take the word, c-a-t, and translate it to the image of a cat. Obviously this is the same meaning. If you can go from words to images, that's called Midjourney, Stable Diffusion. If you can go from images to words, that's called captioning: video, YouTube videos, to the words underneath the videos. And if you go from, what do you call it, if you go from say, amino acids to proteins, that's called the Nobel Prize. And the reason for that is because that's AlphaFold. Incredible. Isn't that right? And so
this is the amazing time, the amazing time in computer science
where we can literally take information one kind and convert it, transfer it generated into information
of another kind. And so you can go text to text: a large body of text, a PDF, to a small body of text, a summarization of arXiv, which I really enjoy, right? And so instead of reading every single paper, I can ask it to summarize the paper. And it has to understand images, because on arXiv the papers have a lot of images and charts
and things like that. So you can take all of that to summarize it. And so you can now imagine all of the
productivity benefits and in fact the capabilities you can't possibly do
without it. So in the near future, you do something like this, you
say, hi, I would like you to design, give me some options of a whole bunch of cars. I work for Mercedes, I really care about the brand. This is the style of the brand. Lemme give you a couple of sketches and maybe a couple of photographs of the type of car I'd like to build. It's a four-wheel-drive SUV, let's say, so on and so forth. And all of a sudden it comes up with 10, 20, 200 completely, fully 3D designed cars |
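As a toy stand-in for the "summarize the paper for me" workflow described above, here is a minimal Python sketch. It assumes the Hugging Face transformers package and the facebook/bart-large-cnn checkpoint are available; a real arXiv workflow would also need PDF and figure handling, which is omitted, and the input text here is invented for illustration.

```python
# Toy stand-in for "ask it to summarize the paper". Assumes the Hugging Face
# `transformers` package and a downloadable summarization checkpoint.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
abstract = (
    "Accelerated computing pairs specialized processors with domain-specific "
    "software so that applications such as deep learning, molecular dynamics, "
    "and image processing run orders of magnitude faster than on CPUs alone."
)
summary = summarizer(abstract, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```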
MwiM_nPyx5Y | future, you do something like this, you
say, hi, I would like you to design, give me some options of a whole bunch of cars. I work for Mercedes, I really care about the brand. This is the style of the brand. Lemme give you a couple of sketches and maybe a couple of photographs of the type of car I'd like to build. It's a four-wheel-drive SUV, let's say, so on and so forth. And all of a sudden it comes up with 10, 20, 200 completely, fully 3D designed cars. Now the reason why you want that instead
of just finishing the car is because you might want to select
one of them and you say, iterate on this one another 10 times, and you might find select one and then
modify it yourself. And so the future of design is going to be very different. The future of everything
will be very different. Now, if you gave that capability to
designers, they would go insane, they would love you so much.
They would love you so much. And that's the reason why
we're doing this. Now, what's the long-term impact of this? One of my favorite areas is if you
could use language to describe a protein, and you could use language to figure out a way to synthesize a protein, then the future of protein engineering is near us. And protein engineering, as you know: creating enzymes to eat plastic, creating enzymes to capture carbon, creating enzymes of all kinds
to grow vegetables better, all kinds of different enzymes could
be created during your generation. And so the next 10 years is going to
be unbelievable. We were the computer, the chip engineering generation. You'll
be a protein engineering generation. Something that we couldn't imagine
doing just a few years ago. I think we're going to open it
up for Q&A to the audience. So questions, and maybe I'll point, and we have some mics that will be running around. Okay, over there. We'll start there. Thank you for coming tonight. Thank you. So are you worried that Moore's law... Business school students are so serious. I understand that the graduates of Columbia end up being investment bankers and stock traders.
I'm actually, look, computer science, is that right? Is that right? And
one computer science, you'll |
MwiM_nPyx5Y | that we couldn't imagine
doing just a few years ago. I think we're going to open it
up for Q&A to the audience. So questions, and maybe I'll point, and we have some mics that will be running around. Okay, over there. We'll start there. Thank you for coming tonight. Thank you. So are you worried that Moore's law... Business school students are so serious. I understand that the graduates of Columbia end up being investment bankers and stock traders. I'm actually, look, computer science, is that right? Is that right? And one computer science, you'll be, and so that's what I understand. So I'm here selling stock. In the future, if somebody asks you what stock to buy: NVIDIA. Go ahead. A question for you is, are you worried that Moore's law might actually catch up to the GPU industry as it did for companies like... and can you also explain the difference between Moore's law and Huang's law? I didn't phrase Huang's law and it wouldn't be like me to do so. The very simple thing is this, Moore's Law was twice the performance
every year and a half approximately. The easier math to do is
10 times every five years. So every 10 years is about a
hundred times, if that's the case. In general, purpose
computing microprocessors, the general purpose
computing was increasing in
performance at 10 times every five years, a hundred
times every 10 years. Why change the computing method
a hundred times every 10 years? Not fast enough. Are you kidding me? If cars would go a hundred times faster every five years, wouldn't life be good? And so the answer is it's in
fact, Moore's law is very good, and I benefited from it. The
whole industry benefited from it. The computer industry is here because of it. But eventually, for general purpose computing, Moore's law ended. It is not about the number
of transistors in computing, it's about the number of
transistors, how you use it for CPUs, how you translate it
ultimately to performance. That curve is no longer 10 times
every five years. That curve, if you're lucky, is two or
four times every 10 years. Well, the problem is if that curve
is two or four times every 10 years, the demand for computing and our
aspir |
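A quick check of the growth rates quoted above: doubling every 1.5 years compounds to roughly 10x over five years and about 100x over a decade, versus the far flatter 2-4x per decade he contrasts it with.

```python
# Compound-growth check for the rates quoted in the answer above.
def growth(doubling_period_years: float, years: float) -> float:
    return 2 ** (years / doubling_period_years)

print(f"Classic Moore's law over 5 years:  {growth(1.5, 5):.0f}x")    # ~10x
print(f"Classic Moore's law over 10 years: {growth(1.5, 10):.0f}x")   # ~100x
print(f"Doubling only once per decade:     {growth(10, 10):.0f}x")    # 2x
```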
MwiM_nPyx5Y | . The
whole industry benefited from it. The computer industry is here because of it. But eventually, for general purpose computing, Moore's law ended. It is not about the number
of transistors in computing, it's about the number of
transistors, how you use it for CPUs, how you translate it
ultimately to performance. That curve is no longer 10 times
every five years. That curve, if you're lucky, is two or
four times every 10 years. Well, the problem is if that curve
is two or four times every 10 years, the demand for computing and our
aspirations of using computers to solve problems, our aspirations, our imagination for using
computers to solve problem, it's greater than four times
every 10 years. Isn't that right? And so our imagination, our demand, the world's consumption of it all exceeds that. Well, you could solve that problem by just buying more CPUs. You could buy more. But the problem is these CPUs consume so much power because they're general purpose. It's like a generalist. A generalist is not as efficient. The craft is not as great. They're not as productive as a specialist. If I'm ever going to have an open chest wound, don't send me a generalist. You guys know what I'm saying? If you guys are around, just call a specialist. Alright? Yeah, he's a vet, he's a generalist. No, get me the right specialist. So a generalist is too brute force. And so today it costs the
world too much energy. It costs too much to just brute
force general purpose computing. Now, thankfully, we've been working on
accelerating computing for a long time, and accelerating computing,
as I mentioned, is not
just about the processor, it's really about understanding the
application domain and then creating the necessary software and algorithms
and architecture and chips. And somehow we figured out a way
to do it behind one architecture. That's the genius of the
work that we've done, that we somehow found this
architecture that is both incredibly fast. It has to accelerate the CPU a hundred times, 500 times, sometimes a thousand times. And yet it is not so specific
that it's only used for one singular activity.
Does that make sense? And so you need to be sufficiently
broad so that you have large markets |
MwiM_nPyx5Y | processor, it's really about understanding the
application domain and then creating the necessary software and algorithms
and architecture and chips. And somehow we figured out a way
to do it behind one architecture. That's the genius of the
work that we've done, that we somehow found this
architecture that is both incredibly fast. It has to accelerate the CPU a hundred times, 500 times, sometimes a thousand times. And yet it is not so specific
that it's only used for one singular activity.
Does that make sense? And so you need to be sufficiently
broad so that you have large markets, but you need to be sufficiently narrow
so you can accelerate the application. That fine line, that razor's edge is what
caused NVIDIA to be here. It's almost impossible. If I could have explained it 30 years ago, nobody would've believed it. And in fact, to be honest, it took a long time and we just stuck with it and stuck with it and stuck with it. And we started with seismic processing, molecular dynamics, image processing,
of course, computer graphics. And we just kept working on and working
on and working on weather simulation, fluid dynamics, particle
physics, quantum chemistry, and then all of a sudden
one day and deep learning and then transformers, and then the next will be some form of
reinforcement learning transformers, and then there'll be some multi-step
reasoning systems. And so all of these things are each just one application; somehow we figured out a way to create an architecture and solve them all. And so will this new law end? I don't think so. And the reason for that is this. It doesn't replace the CPU, it augments the CPU. And so the question is, what comes next to augment it? We'll just connect it next to it. We'll just connect it next to it. And so when the time comes, we'll know, we'll know that there's another tool
that we should be using to solve the problem because we are in service of
the problems we're trying to solve. We're not trying to build a
knife and make everybody use it. We're not trying to build a hammer and make everybody use it. We're in service of accelerated computing, in service of the problem. And so this is one of |
MwiM_nPyx5Y | the question is, what
comes next to augment it? We'll just connect it next to it. We'll just connect it next to it. And so when the time comes, we'll know, we'll know that there's another tool
that we should be using to solve the problem because we are in service of
the problems we're trying to solve. We're not trying to build a
knife and make everybody use it. We're not trying to build a
hammer and make everybody use it. We're in service of accelerated computing, in service of the problem. And so this is one of the
things that all of you learn. Make sure your mission is right. Make sure that your mission
is not build trains, but enable transportation.
Does that make sense? Our mission is not build GPUs. Our mission is to accelerate applications, solve problems that
normal computers cannot. If your mission is articulated right
and you're focused on the right thing, it'll last forever. Thank you. Okay. Up there. Someone? Yes, that guy right there is
Tony. Go ahead. Tony. Am I, Tony? What's Tony say? Where's Tony? Tony was that guy
in the middle, right? Yeah. See, I met him just now. I'm
just kidding. Straight. My memory. Take my chance. I wasn't, I wasn't trying
to give Tony the mic. I was just demonstrating my
incredible memory for Tony. Go ahead. Thanks again. Now there's a push for onshoring the supply chains for semiconductors. Then there are also restrictions on exports to certain countries. How do you think that would
affect Nvidia in the short term, but also how would that affect
us as consumers in the long term? Yeah, really excellent question.
You guys all heard the question. It all relates to geopolitics and geopolitical tension and such. The geopolitical tension, the geopolitical challenges, will affect every industry, will affect every human. We deeply, we the company deeply believe in national security. We are all here because our country is known for security. We believe in national security, but we also simultaneously
believe in economic security. The fact of the matter is most families
don't wake up in the morning and say, good gosh, I feel so vulnerable
because of the lack of military. They feel vulnerable because
of economic survivability. |
MwiM_nPyx5Y | guys all heard the question. It all relates to geopolitics and geopolitical tension and such. The geopolitical tension, the geopolitical challenges, will affect every industry, will affect every human. We deeply, we the company deeply believe in national security. We are all here because our country is known for security. We believe in national security, but we also simultaneously
believe in economic security. The fact of the matter is most families
don't wake up in the morning and say, good gosh, I feel so vulnerable
because of the lack of military. They feel vulnerable because
of economic survivability. And so we also believe
in human rights and the ability to be able to create a
prosperous life is part of human rights. And as you know, the United States believes in the human
rights of the people that live here as well as the people that don't live here.
And so the country believes in all of those things simultaneously.
And we do too. The challenge with the
geopolitical tensions, the immediate challenge is that if
we're too unilateral about deciding that we decide on the prosperity of
others, then there will be backlash. There'll be unintended
consequences. But I am optimistic. I want to be hopeful that the people who
are thinking through this are thinking through all the consequences
and unintended consequences. But one of the things this has done is that it has caused every country to deeply internalize its sovereign rights. Every country is talking about
their own sovereign rights. And that's another way of saying
everybody's thinking about themselves and as it applies to us. On the one hand, it might restrict the use of our
technology in China and the export control there. On the other hand, because of sovereignty and every country
wanting to build its own sovereign AI infrastructure, and not all of them
are enemies of the United States, and not all of 'em have a difficult
relationship with the United States, we would help 'em build AI infrastructure
everywhere. And so in a lot of ways, this weird thing about geopolitics, it limits the market
opportunities for us in some way. It opens the market opportunities
in other ways. But for people, for people, I am just really hopeful. I really hope not hopeful.
I really hope that we don't allow our tension with China to turn into tension with Chinese people. That we don |
MwiM_nPyx5Y | own sovereign AI infrastructure, and not all of them
are enemies of the United States, and not all of 'em have a difficult
relationship with the United States, we would help 'em build AI infrastructure
everywhere. And so in a lot of ways, this weird thing about geopolitics, it limits the market
opportunities for us in some way. It opens the market opportunities
in other ways. But for people, for people, I am just really hopeful. I really hope not hopeful.
I really hope that we don't allow our tension with China to turn into tension with Chinese people. That we don't allow our tension with
the Middle East turn into tension with Muslims. Does that make sense? We
are more sophisticated than that. We can't allow ourselves to
fall into that trap. And so a little bit about that. I worry
about that as a slippery slope. One of our greatest sources of intellectual property for our country is foreign students. I see many of 'em
here. I hope that you stay here. It is one of our country's
single greatest advantage. If we don't allow foreign students in
the brightest minds in the world to come to Columbia and keep you
here in New York City, we're not going to be able to retain
the great intellectual property of the world. And so this is our
fundamental core advantage. And I really do hope that we don't
ruin that. So as you can see, the geopolitical challenges are real
and national security concerns are real. So are all of the other concerns: economic matters, social matters, technology matters, technology leadership matters, market leadership matters. All that stuff matters. The world's just a complicated place. And so I don't have a simple answer
for that. We will all be affected. So we'll take one more question there. But in the meantime, stay focused
on your school. Do a good job, just study. Hi there. So I actually started off working as an
engineer at a semiconductor company in Houston, and now I'm here in entrepreneurship. As someone like yourself who is fundamentally a technologist, an engineer, who started a company very successfully and learned finance from YouTube videos, what do you think of MBAs? Oh, I think it's terrific.
You should be, first of all, you'll likely live until you're a
hundred. And so that's the problem. What are you |
MwiM_nPyx5Y | more question there. But in the meantime, stay focused
on your school. Do a good job, just study. Hi there. So I actually started off working as an
engineer at a semiconductor company in Houston, and now I'm here in entrepreneurship. As someone like yourself who is fundamentally a technologist, an engineer, who started a company very successfully and learned finance from YouTube videos, what do you think of MBAs? Oh, I think it's terrific.
You should be, first of all, you'll likely live until you're a
hundred. And so that's the problem. What are you going to do for
the last 70 years or 60 years? And this isn't something I'm telling you, it's something I tell everybody I care about. Look, to the best of your ability, get an education. When you come here, you're force-fed education. How good can that be? After you leave, like me, I've got to go scour the planet for knowledge and I've got to go through a lot of junk to get to some good stuff. You're in school, you've got all these amazing professors who are curating the knowledge for you and presenting it to you on a platter. My goodness, I would stay here and pig out on
knowledge for as long as I can. If I could do it again, I'd still be here. Dean and me sitting next to each
other. I'm the oldest student here. I'm just preparing for that big
step function when I graduate, just go instantaneously to success. I'm just kidding a little about that. You have to leave at some point and
your parents won't appreciate it. But don't be in a hurry, I
think. Learn as much as you can. There's no one right
answer to getting there. Obviously I have friends who never
graduated from college and they're insanely successful. And so there
are multiple ways to get there. But statistically, I still think
this is the best way to get there, statistically. And so if you believe
in math and statistics, stay in school. Yeah, go through the whole thing. And so I got a virtual MBA by working through
it, not because of choice. When I first graduated from school, I
thought I was going to be an engineer. Nobody says, Hey, Jensen, here's your
diploma. You're going to be |
MwiM_nPyx5Y | Obviously I have friends who never
graduated from college and they're insanely successful. And so there
are multiple ways to get there. But statistically, I still think
this is the best way to get there, statistically. And so if you believe
in math and statistics, stay in school. Yeah, go through the whole thing. And so I got a virtual MBA by working through
it, not because of choice. When I first graduated from school, I
thought I was going to be an engineer. Nobody says, Hey, Jensen, here's your
diploma. You're going to be a CEO. And so I didn't know that. So when I got there, I learned the MBA stuff. And there's a lot of different ways to learn. Business strategy matters, obviously. Business matters, very different thing. Finance matters, very different thing. And so you got to learn all
these different things in
order to build a company. But if you're surrounded by
amazing people like I am, they end up teaching you along the way. And so there's some things that
depending on what role you want to play, that's critical for yours, okay? And so for a CEO, there are some things that are critical; it's not only my job, but it's critical that I lead with it. And that's character. There's something about your character, about the choices that you make, how you deal with success, how you deal with failure. And as I said, how you make choices, those kinds of things matter a lot. Now, from a skill and craft perspective, the most important thing for a CEO is strategic thinking. There's just no alternative. The
company needs you to be strategic. And the reason for that is
because you see the most. You should be able to look around
corners better than anybody. You should be able to connect
dots better than anybody. And you should be able to mobilize.
Remember what a strategy is, action. It doesn't matter what the rhetoric
says, it matters what you do. And so nobody can mobilize the company
better than the CEO. And so therefore, the CEO is uniquely, uniquely in the right place to be the chief strategy officer, if you will. And so those two things, I
would say, from my perspective, two of the |
MwiM_nPyx5Y | that is
because you see the most. You should be able to look around
corners better than anybody. You should be able to connect
dots better than anybody. And you should be able to mobilize.
Remember what a strategy is, action. It doesn't matter what the rhetoric
says, it matters what you do. And so nobody can mobilize the company
better than the CEO. And so therefore, the CEO is uniquely, uniquely in the right place to be the chief strategy officer, if you will. And so those two things, I
would say, from my perspective, two of the most important things. The rest of it has a lot of
skills and things like that. And you'll learn the skills. And maybe
if I could just add one more thing. I do believe that a company is about some particular craft. You make some
unique contribution to society. You make something, and if you make something, you ought to be good at it. You should appreciate the craft.
You should love the craft. You should know something about
the craft, where it came from, where it is today, and where
it's going to go. Someday. You should try to embody
the passion for that craft. And I hope today I did a little bit of embodying the passion and the expertise of that craft. I know a lot about the space that I'm in, and so if it is possible, the CEO should know the craft. You don't have to have founded the craft, but it's good that you know the craft. There's a lot of craft that you can learn. And so you want to be
an expert in that field. But those are some of the things
you can learn that here. Ideally, you can learn on the job, you
can learn that from friends. You can learn that a lot of different
ways to do it. But stay in school. So before I thank the best CEO, I want to thank the Digital Future Initiative, the David Hilton Speaker Series, but mostly thank you, gentlemen, for coming. We all understand why you were voted the best CEO now. Thank you. |
Z2VBKerS63A | Ladies and gentlemen, please welcome NVIDIA founder and CEO Jensen Huang. 20 years... 20 years after we introduced to the world the first programmable shading GPU, we introduced RTX at SIGGRAPH 2018 and reinvented computer graphics. You didn't know it at the time, but we did, that it
was a 'bet the company' moment. The vision of RTX was to bring forward real time ray tracing, something that was of
course used in film rendering, offline. It required that we reinvent the GPU, added ray tracing accelerators, reinvented the software of rendering, reinvented all the
algorithms that we made for rasterization
and programmable shading. And that wasn't even enough. We had to bring
together computer graphics and artificial intelligence for the very first
time to make it possible. In 2018, five years ago, this was the showcase demo. The first RTX GPU was called Turing. As you can imagine, we did it on purpose. It was appropriately
named to unify computer graphics and artificial
intelligence for the very first time. This demo was
called Star Wars Reflections. It was created by the researchers at ILMxLAB, Epic, and NVIDIA. It had two and a
half million polygons or so, two rays per pixel, a
couple of bounces per ray. We did ambient occlusion, area lights, specular reflections. It was a hybrid
rasterization and ray traced demo. We rendered it at
720p, 30 frames a second, and we used DLSS super
resolution to scale it to 4K. The demonstration
was, frankly, at the time, incredibly beautiful. That was five years ago. Now five years later, Racer RTX, 250 million polygons, 100 times more geometry, 10 rays per pixel,
about 10 bounces per ray. We're using a unified
lighting system for every effect for the very first time. This entire scene
is completely path traced. No rasterization. We're rendering it at 1080p, 30 hertz, and using DLSS,
using artificial intelligence, to infer something like seven out of eight pixels, computing only one out of eight, and as a result,
we're able to render this at 4K, scale it up to 4K, 30 hertz. Hit it. Not bad for real time. I think it's safe
to say that it was worth it to bet the |
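The pixel arithmetic behind the claim above, as a short sketch: rendering at 1080p and presenting at 4K means shading one in four output pixels, and reading "one out of eight" as also generating every other frame is an interpretation on my part, not something stated outright in the talk.

```python
# Pixel arithmetic for the DLSS claim above (the frame-generation reading of
# "one out of eight" is an assumption, not an explicit statement in the talk).
rendered = 1920 * 1080            # pixels actually shaded per rendered frame (1080p)
presented = 3840 * 2160           # pixels shown per 4K output frame

spatial_fraction = rendered / presented
print(f"Spatially computed fraction: {spatial_fraction:.2f}")           # 0.25 -> 1 in 4
print(f"With every other frame generated: {spatial_fraction / 2:.3f}")  # 0.125 -> 1 in 8
```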
Z2VBKerS63A | We're using a unified
lighting system for every effect for the very first time. This entire scene
is completely path traced. No rasterization. We're rendering it at 1080p, 30 hertz, and using DLSS,
using artificial intelligence, to infer something like seven out of eight pixels, computing only one out of eight, and as a result,
we're able to render this at 4K, scale it up to 4K, 30 hertz. Hit it. Not bad for real time. I think it's safe
to say that it was worth it to bet the company. We realized that
rasterization was reaching its limits, and unless we took
such a giant risk again and introduced a brand new
way of doing computer graphics, combining CG and AI
for the very first time, what you just saw would not be possible. Modern computer
graphics has been reinvented. The bet has paid off. While we were
reinventing computer graphics with artificial intelligence, we were reinventing the GPU altogether for artificial intelligence. The GPU, when I came to
see you last time, five years ago, most people would say
that this is what a GPU looks like, and in fact, this is
the GPU that we announced. This is Turing, and as you guys might remember this, this is the Turing GPU,
but this is what a GPU is today. This GPU is, I guess, let's see, eight Hoppers, each one of them, all together, something like between the Hoppers, the eight Hoppers connected with NVLink, the InfiniBand
networking, the NV switches that are connecting them
together, the NVLink switches, all together, one trillion transistors. This GPU has 35,000 parts. It's manufactured by a
robot like an electric car. It weighs 70 pounds, consumes 6,000 watts, and this GPU
revolutionized computer science altogether. This is the third generation. This is Hopper GPU. This is the GPU that
everybody writes music about. There's a Billy Joel
song written about this GPU. And so this GPU has gone on to reinvent artificial intelligence, and 12 years
later, after 12 years working on artificial
intelligence, something gigantic happened. The generative AI era is upon us. The iPhone moment of AI, if you will, where all of the technologies of artificial
intelligence came together in such a way that it |
Z2VBKerS63A | an electric car. It weighs 70 pounds, consumes 6,000 watts, and this GPU
revolutionized computer science altogether. This is the third generation. This is Hopper GPU. This is the GPU that
everybody writes music about. There's a Billy Joel
song written about this GPU. And so this GPU has gone on to reinvent artificial intelligence, and 12 years
later, after 12 years working on artificial
intelligence, something gigantic happened. The generative AI era is upon us. The iPhone moment of AI, if you will, where all of the technologies of artificial
intelligence came together in such a way that it is now
possible for us to enjoy AI in so many different applications. The revolutionary
transformer model allows us to learn from a large amount of
data that's across large spans of space and time to
find patterns and relationships, to learn the representation of almost anything with structure. We learned a
representation, how to represent language in mathematics and
vectors in vector space, audio, animation, 3D,
video, DNA, proteins, chemicals. And with a generative
model and the learned language model, you can guide the
autoregressive diffusion models to generate almost anything you like. And so we could learn the representation of almost anything with structure. We can generate almost anything that we can learn from structure, and we can guide it
with our human natural language. The journey of
NVIDIA accelerated computing met the journey of
the deep learning researchers, and the big bang of modern AI happened. This is now 12 years
later, the 12-year journey of our work in artificial intelligence, and it is incredible what
is happening around the world. The generative AI
era has clearly started. The combination of large language models and generative models, these autoregressive generative models, has kicked off the generative AI era. Thousands of papers in
just the last several years have been written about
this area of large language models and generative AI. Billions of dollars are
being invested into companies, and just about every single domain and every single industry is pursuing ideas on generative AI. And the reason for that is very simple. The single most
valuable thing that we do as humanity is to generate intelligent information. And now, for the very first time, computers can help us augment our ability to generate information. And a number of startups
are just doing amazing things. Of course, they're
doing content creation, but |
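A minimal sketch of "guiding a generative model with human natural language," using the open-source diffusers library and a publicly available SDXL checkpoint as stand-ins; the talk does not prescribe this particular stack, and the model name and prompt are just examples.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Publicly available SDXL checkpoint used as a stand-in generative model.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# The plain-language prompt is the "program" that steers the generation.
image = pipe(prompt="a road in the desert at sunset, photorealistic").images[0]
image.save("desert_sunset.png")
```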
Z2VBKerS63A | AI era. Thousands of papers in
just the last several years have been written about
this area of large language models and generative AI. Billions of dollars are
being invested into companies, and just about every single domain and every single industry is pursuing ideas on generative AI. And the reason for that is very simple. The single most
valuable thing that we do as humanity is to generate intelligent information. And now, for the very first time, computers can help us augment our ability to generate information. And a number of startups
are just doing amazing things. Of course, they're
doing content creation, but they're also using generative AI to steer the steering
wheel of a self-driving car, or animate, articulate the robotic arm, generate proteins,
chemicals, discover new drugs, even learning the structure of physics so that we can generate
physics of mesoscale multiphysics, maybe accelerate the
understanding of climate change. Well, here are some
examples of some amazing things. This is the Adobe Firefly. Adobe Firefly does outpainting. Imagine the space around
the image that we never captured. MOVE Ai does mocap from just video. This is on the upper right. You decide which one's real. I'm going with the left. Vizcom does sketch-to-image
guided by language prompt. This one's really cool. There are a lot of
people who know how to sketch, and from the sketch and
some guidance from your language, you could generate
something photorealistic and rendered. The future of computer graphics
is clearly going to be revolutionized. And this is really cool. Wonder Dynamics, not only
is the name of the company cool, but they do pose and lighting detection and replace the
actor with a CG character. It just goes on and on and on. The number of generative
AI startups around the world, I think we're coming
up on something like 2000, and they're in just
about every single industry. The generative AI era has arrived. Well, what's really profound, though, is that when you take
a step back and ask yourself, "What is the meaning of
generative AI? Why is this such a big deal? Why is it changing everything?" Well, the reason for that is, first, human is the new programming language. We've democratized computer science. Everybody can be a programmer now, because human language, |
Z2VBKerS63A | and on. The number of generative
AI startups around the world, I think we're coming
up on something like 2000, and they're in just
about every single industry. The generative AI era has arrived. Well, what's really profound, though, is that when you take
a step back and ask yourself, "What is the meaning of
generative AI? Why is this such a big deal? Why is it changing everything?" Well, the reason for that is, first, human is the new programming language. We've democratized computer science. Everybody can be a programmer now, because human language, natural language, is the best programming language. And it's the reason why
ChatGPT has been so popular. Everybody can program that computer. Large language model
is a new computing platform, because now the
programming language is human, and the computer that you program understands large language models. And generative AI is the new killer app. These three insights have
gotten everybody just insanely excited. And because for the very
first time, after 15 years or so, a new computing platform has emerged, like the PC, like the
Internet, like mobile cloud computing, a new computing platform has emerged. And this new computing
platform is going to enable all kinds of new applications,
but very differently than the past. This new computing platform benefits
every single computing platform before it. Notice, the thing I'm looking
forward to most is generative AI for Office. There are so many different
things that I do in Office today. It would be great to plug in
generative AI to help me be more productive in that. Generative AI is going to be plugged into just about every
digital content creation tool, every single CAE
tool, every single CAD tool. For the very first
time, this new computing platform not only enables new
applications in this new era, but helps every
application in the old era. This is the reason why
the industry is moving so fast. Well, one of the
important things that's going to happen is this application space,
this new way of doing computing, is so profoundly different
that the computer will be reinvented. The computer itself, the computer itself, will, of course, process
information in a very different way. And we need a new processor. The computers in the world,
computing is done in so many different places. And sometimes they're used for
|
Z2VBKerS63A | not only enables new
applications in this new era, but helps every
application in the old era. This is the reason why
the industry is moving so fast. Well, one of the
important things that's going to happen is this application space,
this new way of doing computing, is so profoundly different
that the computer will be reinvented. The computer itself, the computer itself, will, of course, process
information in a very different way. And we need a new processor. The computers in the world,
computing is done in so many different places. And sometimes they're used for
training, sometimes they're used for inference, sometimes they're in the
cloud, sometimes it's for scale-up, sometimes it's for scale-out,
sometimes it's for enterprise, sometimes it's underneath
your desk in your workstation. There are so many different ways
that computing needs to be refactored. And NVIDIA's accelerated computing
will support every single one of those. But one particular
area is extremely important, which is the basic
scale-out of the cloud. The basic scale-out of the cloud
historically was based on off-the-shelf CPUs, x86 CPUs, while general-purpose
computing is a horrible way of doing generative AI. And you can see that in just a second. And so we created a brand new
processor for the era of generative AI. And this is it. This is the Grace Hopper. We announced Grace Hopper, in
fact, just only recently, several months ago. And today we're announcing
that we're going to give it a boost. We're going to
give this processor a boost with the world's fastest memory,
called HBM3e. The world's fastest memory now
connected to Grace Hopper, we're calling it GH200. The chips are in production, we'll
sample it at the end of the year or so, and be in production
by the end of second quarter. This processor is designed for
scale-out of the world's data centers. It has 72 cores. Grace CPU core is connected
through this incredibly high-speed link— cache-coherent, memory-coherent link—
between the CPU and the GPU. This is the CPU, and that's the GPU. The Hopper GPU is now connected to HBM3e. It has four petaflops of
trans |
Z2VBKerS63A | we're calling it GH200. The chips are in production, we'll
sample it at the end of the year or so, and be in production
by the end of second quarter. This processor is designed for
scale-out of the world's data centers. It has 72 cores. Grace CPU core is connected
through this incredibly high-speed link— cache-coherent, memory-coherent link—
between the CPU and the GPU. This is the CPU, and that's the GPU. The Hopper GPU is now connected to HBM3e. It has four petaflops of
transformer engine processing capability. And now it has five
terabytes per second of HBM3e performance. So this is the new GH200,
based on the architecture, Grace Hopper, and a processor
for this new computing era. There's a whole lot of ways that we
can connect Grace Hopper into a computer. This is one of my favorites. By connecting two of
them into one computing node, connecting it together with NVLink, and this NVLink between these two
processor modules is six terabytes per second. And it basically turns these
two processors, these two super chips, into a super-sized super chip. One giant GPU, one giant CPU. The CPU now has 144 cores. The GPU has 10 terabytes per
second of frame buffer bandwidth— 10 terabytes per
second of frame buffer bandwidth— and 282 gigabytes of HBM3e. Well, pretty much you could take
just about any large language model you like and put it into this,
and it will inference like crazy. The inference cost of large
language models will drop significantly, because look how small this computer is. And you could scale this
out in the world's data centers, because the servers are
really, really easy to scale out. You can connect this with Ethernet. You can connect it with the InfiniBand. And of course, there's all kinds of
different ways that you can scale it out. Let's take a look at what
it means if you were to take this and now scale it up into a giant system. This is two GPUs. But what if we would like to
scale this up into a much, much larger GPU? Run it, please. All right, this is
actual size, by the way. This is actual size, and
it probably even runs Crysis. The world's largest |
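To see why "just about any large language model" fits, here is a rough memory check against the 282 GB of HBM3e quoted above. It assumes 2 bytes per parameter (FP16) and ignores the KV cache, activations, and runtime overhead, so it is only an upper-bound sketch.

```python
# Rough check of model-weight fit in the 282 GB of HBM3e quoted in the talk.
hbm3e_gb = 282
for params_b in (7, 70, 175):
    weights_gb = params_b * 1e9 * 2 / 1e9     # GB of weights at 2 bytes/param (FP16)
    print(f"{params_b:>4}B params -> {weights_gb:>5.0f} GB weights, "
          f"fits: {weights_gb < hbm3e_gb}")
# 7B -> 14 GB and 70B -> 140 GB fit comfortably; 175B -> 350 GB would need
# quantization or more than one node.
```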
Z2VBKerS63A | to scale out. You can connect this with Ethernet. You can connect it with the InfiniBand. And of course, there's all kinds of
different ways that you can scale it out. Let's take a look at what
it means if you were to take this and now scale it up into a giant system. This is two GPUs. But what if we would like to
scale this up into a much, much larger GPU? Run it, please. All right, this is
actual size, by the way. This is actual size, and
it probably even runs Crysis. The world's largest single GPU. One exaFLOPS. Four petaflops per Grace Hopper, 256
connected by NVLink into one giant system. And so this is a modern GPU. So next time when you order a GPU on
Amazon, don't be surprised if this shows up. Okay, so that's how
you take Grace Hopper and scale it up into,
of course, a giant system. Future frontier models
will be built this way. The frontier models of the past,
like GPT-3 and GPT-4 and Llama, are the mainstream models of today. Only after a couple of
years, these frontier models, which were just gigantic to train on systems like this, in the future become mainstream. And once they become
mainstream, they could be scaled out into all
kinds of different applications. And how would we scale these out? And so let me show you this. This is how you would do it. And so now you would have a single
Grace Hopper in each one of these nodes. This is the way computing
was done in the past. For the last 60 years,
ever since the IBM System/360, the central processing units, or
general-purpose computing, was relatively mainstream. And for the last 60 years,
that's the way we've been doing computing. Well, now, general-purpose computing
is going to give way to accelerated computing and
AI computing. And let me illustrate to you why. The canonical use
case of the future is a large language model on the
front end of just about everything. Every single application,
every single database, whenever you interact
with a computer, you will likely be first
engaging a large language model. That large language model
will figure out what is your intention, |
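A quick consistency check on the numbers quoted above, using only the figures as stated in the talk: 256 Grace Hopper superchips at four petaflops each.

```python
# 256 Grace Hoppers x 4 petaflops each (transformer-engine math, as stated).
petaflops_each = 4
nodes = 256
total_exaflops = petaflops_each * nodes / 1000
print(total_exaflops)   # 1.024 exaflops, i.e. roughly "one exaflop"
```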
Z2VBKerS63A |
general-purpose computing, was relatively mainstream. And for the last 60 years,
that's the way we've been doing computing. Well, now, general-purpose computing
is going to give way to accelerated computing and
AI computing. And let me illustrate to you why. The canonical use
case of the future is a large language model on the
front end of just about everything. Every single application,
every single database, whenever you interact
with a computer, you will likely be first
engaging a large language model. That large language model
will figure out what is your intention,
what is your desire, what are you trying
to do, given the context, and present the information
to you in the best possible way. It will do the smart query,
maybe a smart search, augment that query in search with
your question, with your prompt, and generate whatever
information necessary. And so the canonical
example that I'm using here is a Llama 2 large language
model that is being inferenced. It then does a query into a semantic
database, a vector database of some kind, and the output of that is augmented
and becomes a guide for a generative model. And here, the generative model
I'm using is Stable Diffusion XL. And so these three models, Llama 2,
Vector Database, and Stable Diffusion, SDXL, are relatively well understood as
state-of-the-art and the type of models that you could imagine running just about everywhere. Well, if you were
to have an ISO budget way of processing that
workload, it would take, let me just choose a number, $100 million, and $100 million would be a
reasonably small data center these days. $100 million would
buy you about 8,800 x86 CPUs. It would take
about 5 megawatts to operate that, and I
normalized the performance into 1x. Using the exact
same budget with accelerated computing Grace Hopper, it
would consume only 3 megawatts, but your throughput
goes up by an order of magnitude. Basically, the energy
efficiency, the cost efficiency, of accelerated computing for
generative AI applications is about 20x. To get 20x from Moore's law and just the current way of scaling CPUs would take a very, very long time. And so this is a giant step |
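A schematic of the canonical pipeline described above: an LLM front end, a semantic/vector database lookup, and a guided image generator. The three helper functions below are placeholders standing in for Llama 2, a vector database, and Stable Diffusion XL; this is not a specific NVIDIA API, only a sketch of how the stages chain together.

```python
def llm_parse_intent(user_request: str) -> str:
    """Placeholder for a Llama 2 call that turns a request into a search query."""
    return f"search: {user_request}"

def vector_db_lookup(query: str) -> str:
    """Placeholder for a semantic search against a vector database."""
    return "retrieved context describing the requested scene"

def generate_image(prompt: str) -> str:
    """Placeholder for an SDXL-style guided generative model."""
    return f"<image generated from: {prompt}>"

def canonical_pipeline(user_request: str) -> str:
    query = llm_parse_intent(user_request)                 # 1. LLM figures out intent
    context = vector_db_lookup(query)                      # 2. augment with retrieval
    return generate_image(f"{user_request}. {context}")    # 3. guided generation

print(canonical_pipeline("a sports car on a desert road at sunset"))
```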
Z2VBKerS63A | buy you about 8,800 x86 CPUs. It would take
about 5 megawatts to operate that, and I
normalized the performance into 1x. Using the exact
same budget with accelerated computing Grace Hopper, it
would consume only 3 megawatts, but your throughput
goes up by an order of magnitude. Basically, the energy
efficiency, the cost efficiency, of accelerated computing for
generative AI applications is about 20x. To get 20x from Moore's law and just the current way of scaling CPUs would take a very, very long time. And so this is a giant step
up in efficiency and throughput. So this is ISO budget. Let's take a look at this now
again, and let's go through ISO workload. Suppose your intention was to provide a service, and that
service has so many number of users, and so your workload is
fairly well understood, plus or minus. And so with ISO workload, this 1x,
$100 million using general purpose computing, and using accelerated computing
Grace Hopper, it would only cost $8 million. $8 million and only 260 kilowatts, 0.26 megawatts, so 20 times
less power and 12 times less cost. This is the reason why accelerated
computing is going to be the path forward. And this is the reason
why the world's data centers are very quickly transitioning
to accelerated computing. And some people say, and
you guys might have heard, I don't know who said it, but the
more you buy, the more you save. And that's wisdom. If I could just ask
you to remember one thing from my talk today,
that would really be it. That the future is
accelerated computing, and the more you buy,
the more you save. Well, today I want to talk
about something really, really important. And so the backdrop: accelerated computing, generative AI, real-time ray tracing, the future of computer
graphics unified with AI. Let's talk about a couple of new things. Today I want to talk about Omniverse
and generative AI and how they come together. The first thing that
we already established is that graphics and artificial
intelligence are inseparable. That graphics needs
AI, and AI needs graphics. Graphics needs AI, and |
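A back-of-envelope check of the ISO-budget and ISO-workload figures quoted above, using only the numbers as stated in the talk; the "order of magnitude" throughput claim is taken as 10x for this check.

```python
# ISO budget: same $100M spend.
cpu_power_mw, cpu_throughput = 5.0, 1.0
gh_power_mw, gh_throughput = 3.0, 10.0      # "order of magnitude" more throughput
perf_per_watt_gain = (gh_throughput / gh_power_mw) / (cpu_throughput / cpu_power_mw)
print(f"ISO budget: ~{perf_per_watt_gain:.0f}x performance per watt")   # ~17x, i.e. "about 20x"

# ISO workload: same throughput.
cpu_cost_m, cpu_power_kw = 100, 5000
gh_cost_m, gh_power_kw = 8, 260
print(f"ISO workload: {cpu_power_kw / gh_power_kw:.0f}x less power, "
      f"{cpu_cost_m / gh_cost_m:.1f}x less cost")   # ~19x less power, ~12.5x less cost
```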
Z2VBKerS63A | buy,
the more you save. Well, today I want to talk
about something really, really important. And so the backdrop: accelerated computing, generative AI, real-time ray tracing, the future of computer
graphics unified with AI. Let's talk about a couple of new things. Today I want to talk about Omniverse
and generative AI and how they come together. The first thing that
we already established is that graphics and artificial
intelligence are inseparable. That graphics needs
AI, and AI needs graphics. Graphics needs AI, and AI needs graphics. And so the first thing that you could
imagine doing for the future of artificial intelligence is to
teach it common sense. All of us understand the
consequences of the physical actions we take. All of us understand that
gravity has effect, and all of us understand that even
though you don't see something, that object might still be there,
probably is still there, object presence. And so that common sense is known to
humans ever since you were babies. And yet for most artificial intelligence agents
that learned from large language models, it's unlikely they have that common sense. That object permanence, the effects
of gravity, the consequence of your actions, you have to learn it
in a physically grounded way. And so the thing
that we could do is we could create a virtual world that is physically
simulated, a physics simulator, that allows an artificial intelligence to
learn how to perceive the environment using
a vision transformer maybe, and to use reinforcement learning
to understand the impacts, the consequences of
its physical actions, and learn how to animate and learn how
to articulate to achieve a particular goal. And so one mission of a connected
artificial intelligence system and a virtual world system that we
call Omniverse is so that the future of AI
could be physically grounded. The number of applications is really
quite exciting because as we know, the largest industries in the
world are heavy industry, and those heavy industries
are physics-based, physically-based. And so first application is so
that AI can learn in a virtual world. The second application, the second
reason why AI and computer graphics are inseparable is that AI will
help also to create these virtual worlds. Let me give you a couple of examples. This is an AI that is a large
language model, |
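A generic sketch of the perceive-act-learn loop described above, using the open-source Gymnasium API as a stand-in for a physically simulated world. Omniverse and Isaac Sim expose their own interfaces; this only illustrates the loop of acting, observing the physical consequence, and receiving a reward.

```python
import gymnasium as gym

# Any physics-based control task works here; "Pendulum-v1" is just an example.
env = gym.make("Pendulum-v1")
obs, info = env.reset(seed=0)

for _ in range(1000):
    action = env.action_space.sample()   # a trained RL policy would go here
    obs, reward, terminated, truncated, info = env.step(action)  # physical consequence
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```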
Z2VBKerS63A | virtual world system that we
call Omniverse is so that the future of AI
could be physically grounded. The number of applications is really
quite exciting because as we know, the largest industries in the
world are heavy industry, and those heavy industries
are physics-based, physically-based. And so first application is so
that AI can learn in a virtual world. The second application, the second
reason why AI and computer graphics are inseparable is that AI will
help also to create these virtual worlds. Let me give you a couple of examples. This is an AI that is a large
language model, as I mentioned, that will be connected to almost
every single application. However, the future
user interface of almost every application
is a large language model. And so it's sensible to imagine that
this large language model could also be a query front end
to a 3D database. And so here I find a DENZA N7 SUV. Now once you find this SUV, you
might ask an AI agent to help you, to turn this car, to embed this car,
to integrate this car into a virtual environment. And instead of designing that
virtual environment, you ask the AI to help you. Give me a road in the desert at sunset. Now inside Omniverse,
we can then unify, aggregate, composite
this information together. And now the car is integrated,
rendered into, positioned into a virtual world. And so here's an AI that helps you
maybe create, find, and manage your data assets. You also have an AI that helps
you generate a virtual world around it. And Omniverse allows you
to integrate all this information. Well, let's take a look at what WPP,
the world's largest ad agency, and BYD, the world's largest
electric vehicle maker, are using, how they're using
Omniverse and generative AI in their work. Play it, please. WPP is building the next generation of
car configurators for automotive giant BYD's DENZA Luxury brand, powered
by Omniverse Cloud and Generative AI. OpenUSD and Omniverse Cloud allow
DENZA to connect high-fidelity data from industry-leading CAD tools to create a physically accurate,
real-time digital twin of its N7. WPP artists can work seamlessly on
this model, in the same Omniverse Cloud environment, with their preferred
tools from |
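A minimal sketch of the idea just described: a language front end finds an asset, and the asset plus an environment get composited into one USD stage that any USD-native tool can open. It uses the open-source USD (pxr) Python API; the search helper and the asset paths are hypothetical placeholders, not an Omniverse API.

```python
from pxr import Usd

def search_asset_db(query: str) -> str:
    """Hypothetical LLM-backed semantic search over a 3D asset database."""
    return "assets/denza_n7.usd"          # placeholder path

stage = Usd.Stage.CreateNew("shot.usda")
car = stage.DefinePrim("/World/Car")
car.GetReferences().AddReference(search_asset_db("DENZA N7 SUV"))
environment = stage.DefinePrim("/World/Environment")
environment.GetReferences().AddReference("assets/road_in_desert_at_sunset.usd")  # e.g. AI-generated
stage.GetRootLayer().Save()
```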
Z2VBKerS63A | vehicle maker, are using, how they're using
Omniverse and generative AI in their work. Play it, please. WPP is building the next generation of
car configurators for automotive giant BYD's DENZA Luxury brand, powered
by Omniverse Cloud and Generative AI. OpenUSD and Omniverse Cloud allow
DENZA to connect high-fidelity data from industry-leading CAD tools to create a physically accurate,
real-time digital twin of its N7. WPP artists can work seamlessly on
this model, in the same Omniverse Cloud environment, with their preferred
tools from Autodesk, Adobe, and SideFX, to deliver the next era of
automotive digitalization and immersive experiences. Today's configurators require hundreds
of thousands of images to be pre-rendered to represent all possible
options and variants. OpenUSD makes it possible for WPP to
create a super-digital twin of the car that includes all possible
variants in one single asset, deployed as a fully interactive
3D configurator on Omniverse Cloud GDN, a network that can stream high-fidelity, real-time 3D experiences
to devices in over 100 regions, and were used to generate
thousands of individual pieces of content that comprise a global
marketing campaign. The USD model is placed in a 3D
environment that can either be scanned from the real world using
Lidar and virtual production, or created in
seconds with generative AI tools from organizations
such as Adobe and Shutterstock. This innovative WPP solution for
BYD brings generative AI and cloud-rendered real-time 3D together for the first time, powering the next
generation of e-commerce. God, I love that. Everything was rendered in
real time. Nothing was pre-rendered. Every single scene that
you saw was rendered in real time. Every car, all of
the beautiful integration with the background, all the rendering,
everything is 100% real-time. The car is the original CAD
dataset of BYD. Nothing was changed. You literally take the
CAD, drag it into Omniverse, you tell an AI,
synthesize and generate an environment, and all of a sudden the car
appears wherever you like it to be. So this is one example
of how generative AI and human designs come together to
create these incredible applications. And so how do |
Z2VBKerS63A | scene that
you saw was rendered in real time. Every car, all of
the beautiful integration with the background, all the rendering,
everything is 100% real-time. The car is the original CAD
dataset of BYD. Nothing was changed. You literally take the
CAD, drag it into Omniverse, you tell an AI,
synthesize and generate an environment, and all of a sudden the car
appears wherever you like it to be. So this is one example
of how generative AI and human designs come together to
create these incredible applications. And so how do we do this? Applications, generative AI
models are making tremendous breakthroughs. And what you want to
do, we all want to do this. There are millions
of developers and artists and designers
around the world and companies, every single company would like to take advantage of, and
certainly everybody at NVIDIA, working hard to utilize large
language models and generative AI in our work. In fact, the Hopper GPU is
impossible to design by humans. We needed AIs and
generative models to help us find the way to design this thing
in such a high-performance way. And so it augments our
design engineers, it makes it possible for us to create
some of these amazing things at all, and of course the productivity of the
teams go up tremendously. Well, we would like to do this
in just about every single industry, so the first thing that
we have to do is we have to go find a model that works
for us so that we can fine-tune it. You can't just use the model as is.
You want to fine-tune it for your curated data. The second thing you
want to do is to augment your engineers, your artists,
your designers, your developers, with the capability
of these generative models. So augmenting it,
composing the information together. In this particular
case, I'm using media and entertainment, where
Virtual World is an example, and this is the reason
why Omniverse is central to that. We want to be able
to run this in the cloud, of course, and we'll
continue to run this in the cloud, but as you know,
computing is done literally everywhere. AI is not some widget
that has a particular capability. AI is the way software is
going |
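A minimal sketch of "find a model and fine-tune it on your curated data," using the open-source Hugging Face transformers, datasets, and peft libraries with LoRA. The base model name, the data file, and the hyperparameters are placeholders, and the talk does not prescribe this stack (AI Workbench and NeMo are NVIDIA's tooling for the same job).

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base = "meta-llama/Llama-2-7b-hf"          # placeholder base model
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"]))

# "curated_data.jsonl" is a placeholder for your own curated, proprietary text.
data = load_dataset("json", data_files="curated_data.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
)
trainer.train()
```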
Z2VBKerS63A | your artists,
your designers, your developers, with the capability
of these generative models. So augmenting it,
composing the information together. In this particular
case, I'm using media and entertainment, where
Virtual World is an example, and this is the reason
why Omniverse is central to that. We want to be able
to run this in the cloud, of course, and we'll
continue to run this in the cloud, but as you know,
computing is done literally everywhere. AI is not some widget
that has a particular capability. AI is the way software is
going to be done in the future. AI is the way
computing will be done in the future. It will be
literally in every application. It will be run in
every single data center. It will run in every single
computer at the edge in the cloud. And so we want to have the ability to
not just do AI, generative AI in the cloud, but to be able to do
it literally everywhere, in the cloud and data
center workstations, your PCs. And we want to do
this by making it possible for these really
complicated stacks to run. There's a reason why the world's
AI is done largely in the cloud today. We partner very closely with the
CSPs, the amount of acceleration libraries, and all the runtimes, and from data processing to training
to inference to deployment. The software stack is really complicated. The libraries, the
runtimes, just getting it to run on a particular device
and system is incredibly hard. And that's the
reason why it's stood up as a managed service that everybody can use. Well, we believe that in order
for us to democratize this capability, we have to make it
run literally everywhere. And so we have to have these unified optimized stacks be
able to run on almost any device and make it
possible for you to engage AI. Well, the first question
is, where are the world's models? Well, the world's models
are largely on Hugging Face today. It is the largest AI community in the world. Lots and lots of people use it. 50,000 companies,
2 million users, I think. 50,000 companies engage Hugging Face. There's some 275,000
models, 50,000 data sets. Just about everybody who
creates an AI model and wants to |
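A small sketch of browsing the Hub programmatically with the open-source huggingface_hub client; the filter and sort values are only illustrative and are not tied to anything announced in the talk.

```python
from huggingface_hub import HfApi

api = HfApi()
# List a handful of popular text-generation models from the community.
for model in api.list_models(filter="text-generation", sort="downloads", limit=5):
    print(model.id)
```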
Z2VBKerS63A | so we have to have these unified optimized stacks be
able to run on almost any device and make it
possible for you to engage AI. Well, the first question
is, where are the world's models? Well, the world's models
are largely on Hugging Face today. It is the largest AI community in the world. Lots and lots of people use it. 50,000 companies,
2 million users, I think. 50,000 companies engage Hugging Face. There's some 275,000
models, 50,000 data sets. Just about everybody who
creates an AI model and wants to share with the
community puts it up in Hugging Face. So today we're announcing that
Hugging Face is going to build a new service to enable their community to
train directly on NVIDIA DGX Cloud. NVIDIA DGX Cloud is
the best way to train models. And its footprint is being set up,
our DGX Cloud footprints are being set up, in Azure, OCI, Oracle Cloud, and GCP. So the footprint is
going to be largely everywhere. And you'll be able to
find from the Hugging Face portal, choose your model
that you would like to train, or you'd like to train a brand new model, and connect yourself
to DGX Cloud for training. So this is going
to be a brand new service to connect the
world's largest AI community with the world's best
AI training infrastructure. So that's number one. Where do you find the models? But you want to do this in the cloud, but you might also
want to do this everywhere else. And how do you build
that infrastructure for yourself? And so the second thing we're
announcing today is the NVIDIA AI Workbench. This Workbench is a collection of tools that make it
possible for you to assemble, to automatically assemble
the dependent runtimes and libraries, the libraries to help
you fine tune and guard rail, to optimize your large language model, as well as assembling all
of the acceleration libraries, which are so complicated, so that you could run it
very easily on your target device. You could target a PC,
you could target a workstation, you could target your own data center, or with one click, you
can migrate the entire project into any one of these different areas. Let's take a look at
NVIDIA AI Workbench in action. Generative AI is incredibly powerful, |
Z2VBKerS63A | for you to assemble, to automatically assemble
the dependent runtimes and libraries, the libraries to help
you fine tune and guard rail, to optimize your large language model, as well as assembling all
of the acceleration libraries, which are so complicated, so that you could run it
very easily on your target device. You could target a PC,
you could target a workstation, you could target your own data center, or with one click, you
can migrate the entire project into any one of these different areas. Let's take a look at
NVIDIA AI Workbench in action. Generative AI is incredibly powerful, but getting accurate
results customized with your secured, proprietary data is challenging. NVIDIA AI Workbench
streamlines selecting foundation models, building your project environment, and fine tuning these
models with domain-specific data. Here, AI Workbench is
installed on a GeForce RTX 4090 laptop, where we've been
experimenting with an SDXL project. As our project gets more complex, we need much more
memory and compute power, so we use AI Workbench
to easily scale to a workstation powered by four NVIDIA
RTX 6000 Ada-generation GPUs. AI Workbench automatically
creates your project's environment, building your container with
all dependencies, including Jupyter. Now, in the Jupyter Notebook, we prompt our model to
generate a picture of Toy Jensen in space. But because our model
has never seen Toy Jensen, it creates an irrelevant result. To fix this, we fine-tune the
model with eight images of Toy Jensen, then prompt again. The result is much more accurate. Then, with AI Workbench, we deploy the new model
in our enterprise application. This same simple process can
be applied when customizing LLMs, such as Llama 2 70B. To accommodate this much larger model, we use AI Workbench
to scale to the data center, accessing a server
with eight NVIDIA L40S GPUs. We tune with 10,000 USD code snippets and nearly 30,000 USD
functions built by NVIDIA, which teaches the model to
understand 3D USD-based scenes. We call our new model ChatUSD. ChatUSD is a USD developer's copilot, helping answer questions
and generate USD Python code. With NVIDIA AI Workbench, you can easily scale
your generative AI projects from laptop to workstation to
data center or cloud with a few clicks |
Z2VBKerS63A | To accommodate this much larger model, we use AI Workbench
to scale to the data center, accessing a server
with eight NVIDIA L40S GPUs. We tune with 10,000 USD code snippets and nearly 30,000 USD
functions built by NVIDIA, which teaches the model to
understand 3D USD-based scenes. We call our new model ChatUSD. ChatUSD is a USD developer's copilot, helping answer questions
and generate USD Python code. With NVIDIA AI Workbench, you can easily scale
your generative AI projects from laptop to workstation to
data center or cloud with a few clicks. Everybody can do this. You just have to come to our website, download NVIDIA AI Workbench. Anybody could do this. It turned out, my parents gave me a
Swedish name, as you know. It's Jensen. I'm pretty sure that
when they looked up Toy Jensen, that's why it turned out that way. It took a few more
examples to turn them into Toy Jensen. Everybody can do this. Come to the website,
early access, download AI Workbench. For the creator of the project, it helps you set up the
libraries and the runtimes that you need. You can fine tune the model. If you want to migrate this project so that all of your colleagues
could use it and fine tune other models, you could just tell it
where you want to migrate it to. In one click, it'll migrate
the entire dependency of the project, all the runtimes, all the
libraries, all the complexities. It runs on workstations. In the data center, it runs in the cloud. One single body of
code, one single project, allows you to run literally everywhere. Everybody can be a
generative AI practitioner. What makes it possible to do all this is this other piece of
code called NVIDIA AI Enterprise. This is essentially the operating
system of modern data science and modern AI. It starts with data
processing, data curation, and data processing
represents some 40, 50, 60 percent of the amount of
computation that is really done before you do the training of the model. Data processing, then
training, then inference and deployment, all of those libraries,
there are 4,500 different packages that are inside the NVIDIA AI
Enterprise with 10,000 dependencies. This represents literally
the NVIDIA 30-year body of work starting |
Z2VBKerS63A | do all this is this other piece of
code called NVIDIA AI Enterprise. This is essentially the operating
system of modern data science and modern AI. It starts with data
processing, data curation, and data processing
represents some 40, 50, 60 percent of the amount of
computation that is really done before you do the training of the model. Data processing, then
training, then inference and deployment, all of those libraries,
there are 4,500 different packages that are inside the NVIDIA AI
Enterprise with 10,000 dependencies. This represents literally
the NVIDIA 30-year body of work starting from CUDA, all
the CUDA acceleration libraries, and everything else is accelerated
for the GPUs, for the couple of hundred million GPUs that are all over the world, all CUDA compatible, and it makes all of them run. It has the ability to support
multi-GPU in a multi-node environment and every single version of our
GPU, 100 percent compatible with everything. This operating system of AI, if you will, has been integrated into the cloud, integrated with leading
operating systems like Linux and Windows, WSL2, the Windows Subsystem for Linux. The second version,
WSL2, has been optimized for CUDA and supports VMware. The body of work that
we've done with VMware is incredible. This took several years to
do, a couple, two and a half years for us to make VMware
be CUDA compatible, CUDA aware, multi-GPU aware, and still
have all the benefits of an enterprise one pane of glass,
resilient, virtualized data center. And so this entire
stack of NVIDIA AI Enterprise, this is really the giant body of
work that makes all of this possible. As a result, literally
everything that you would like to run will be supported by the
ecosystem we're talking about here. It's also integrated above
the stack into MLOps applications to help you with the
management and the coordination of doing data processing,
data-driven software in your company, and it will also be
integrated into AI models that will be provided
by ServiceNow and Snowflake. So NVIDIA AI Enterprise
is what makes NVIDIA AI Workbench even possible in the first place. Now we have these
incredible models that are in Hugging Face that are pre-trained and open-sourced. We can now train them and
fine-tune them on |
Z2VBKerS63A | by the
ecosystem we're talking about here. It's also integrated above
the stack into MLOps applications to help you with the
management and the coordination of doing data processing,
data-driven software in your company, and it will also be
integrated into AI models that will be provided
by ServiceNow and Snowflake. So NVIDIA AI Enterprise
is what makes NVIDIA AI Workbench even possible in the first place. Now we have these
incredible models that are in Hugging Face that are pre-trained and open-sourced. We can now train them and
fine-tune them on AI Workbench. You could run it
anywhere because of AI Enterprise. Now we just need some powerful machines. We have powerful
machines in the cloud, of course. DGX Cloud has many,
many footprints around the world. But wouldn't it be great if you
had a powerful machine under your desk? And so today we're
announcing our latest generation Ada GPU, Ada Lovelace GPU. The most powerful GPU we've
ever put in the workstation is now... Oh, gosh darn it. I just put my fingerprints on there. Can you guys see that? That's not me. Hey, can I have
this cleaned in the future? My bad. Yuck. Let me show you. That's the worst
product launch ever, you guys. The CEO pulls it out and goes, "Yuck." This is the
data-center version of Ada Lovelace. Sorry, everybody. My bad. They worked so hard. It was perfect. It's beautifully lacquered. I'm sad. I'm super sad. I'm super sad. Okay, anyways. Thank you. Thank you. And that's why you should rehearse. It goes into these amazing workstations. These amazing
workstations pack up to four of these GPUs. It packs up to four NVIDIA RTX 6000s, the most powerful GPUs ever created. And they run real-time
ray tracing for Omniverse, as well as train, fine-tune,
and inference large language models for generative AI. And it's available from Boxx
and Dell and HPE and Lambda and Lenovo. And it's available now. So we're in
production with these workstations. And with, as I
mentioned, Hugging Face to AI Workbench, NVIDIA AI Enterprise
running on Windows 11 |
Z2VBKerS63A | ations. These amazing
workstations pack up to four of these GPUs. It packs up to four NVIDIA RTX 6000s, the most powerful GPUs ever created. And they run real-time
ray tracing for Omniverse, as well as train, fine-tune,
and inference large language models for generative AI. And it's available from Boxx
and Dell and HPE and Lambda and Lenovo. And it's available now. So we're in
production with these workstations. And with, as I
mentioned, Hugging Face to AI Workbench, NVIDIA AI Enterprise
running on Windows 11 with WSL2, you have an amazing AI machine. You could fine-tune
GPT-3, a 40 billion parameter GPT-3, in about 15 hours
on nearly a billion tokens. And so you could take your
proprietary data, your curated data, you could maybe bring all of your PDFs, and you could fine-tune
this model before you ask it, prompt it, and ask it questions. SDXL could be trained, and after
it could be fine-tuned, as I mentioned, we showed you an example of
us fine-tuning it with Toy Jensen. You can now
generate 40 images per minute. 40 images per minute, this
workstation will pay for itself, and who knows, depending on how
much you use generative AI these days, it could pay for itself in months. Like I said, the more
you buy, the more you save. Incredibly fast,
incredibly powerful, and it's all yours. It produces
answers in seconds, not minutes like some of the
services that are out there. And so another
incredible machine are the servers. And these servers, as you know, getting GPUs in the cloud
these days is no easy feat. And now you can buy it. You can have your company buy it
for you and put it in the data center. And there's a
whole bunch of these servers, a whole bunch of
different configurations. I don't know if you guys could see this. This is a server that has up to
eight of the L40S Ada Lovelace GPUs. And of course, these are not
going to be used for frontier models. These are not designed to train
large frontier models like GPT-4 or GPT-5. These are really used
for mainstream |
Z2VBKerS63A | these days is no easy feat. And now you can buy it. You can have your company buy it
for you and put it in the data center. And there's a
whole bunch of these servers, a whole bunch of
different configurations. I don't know if you guys could see this. This is a server that has up to
eight of the L40S Ada Lovelace GPUs. And of course, these are not
going to be used for frontier models. These are not designed to train
large frontier models like GPT-4 or GPT-5. These are really used
for mainstream models today that you can download from Hugging Face, or NVIDIA could work
with your company to create, based on our language model called NeMo, we could create
models that are mainstream today that you could use in just about all
kinds of applications around your company. And you could fine
tune it with these GPUs. The fine tuning of a GPT-3 model, so this is GPT-3 40 billion parameters, takes about seven
hours for about a billion tokens. And so 15 hours in a
workstation with four GPUs, of course, takes less with eight GPUs. And just in fine tuning,
this is 1.5 times faster than our last generation A100. And so L40S is a really
terrific GPU for enterprise scale, fine tuning of mainstream
large language models. You can also use it for, of course,
synthesizing and generating images. So generative AI for everyone. Everybody could do it now. Hugging Face, NVIDIA AI
Workbench, NVIDIA AI Enterprise, these amazing new enterprise
systems that are in production today. All right, let's change gears and talk
about what's going on at SIGGRAPH this year. I'm pretty sure all of you
have already heard about OpenUSD. OpenUSD is a very big deal. SIGGRAPH 2023 is all about OpenUSD. OpenUSD is visionary and
it's going to be a game changer. OpenUSD is a framework, a
universal interchange for creating 3D worlds, for describing, for compositing, for
simulating, for collaborating on 3D projects. OpenUSD is going to bring together
the world onto one standard 3D interchange and has the
opportunity to do for the world and for computing
what HTML did for the 2D web. Finally, an industry standard, |
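A quick consistency check of the fine-tuning figures quoted above: roughly 15 hours on a 4-GPU workstation versus roughly 7 hours on an 8-GPU L40S server, both for about a billion tokens. Assuming near-linear scaling is an idealization made only for this check.

```python
tokens = 1e9
hours_4gpu, hours_8gpu = 15, 7
print(f"ideal 8-GPU time from the 4-GPU run: {hours_4gpu * 4 / 8:.1f} h")   # 7.5 h, close to "about seven"
print(f"throughput on 8 GPUs: {tokens / (hours_8gpu * 3600):,.0f} tokens/s")  # ~40,000 tokens/s
```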
Z2VBKerS63A | all of you
have already heard about OpenUSD. OpenUSD is a very big deal. SIGGRAPH 2023 is all about OpenUSD. OpenUSD is visionary and
it's going to be a game changer. OpenUSD is a framework, a
universal interchange for creating 3D worlds, for describing, for compositing, for
simulating, for collaborating on 3D projects. OpenUSD is going to bring together
the world onto one standard 3D interchange and has the
opportunity to do for the world and for computing
what HTML did for the 2D web. Finally, an industry standard,
powerful and extensible 3D interchange that brings the whole world together. A really big deal. Now, let's take a look at why it
is such a visionary thing that Pixar did. It was invented,
well, I forget exactly when they invented it, but
they open sourced it in 2015. And they've been, of course, using this framework for over a
decade, building amazing 3D content. Well, the 3D pipeline
is incredibly complicated. The 3D workflow is
specialized and complicated. You've got designers
and artists and engineers. They all specialize in
some part of the 3D workflow. It could be modeling and
texturing, materials, physics simulation, animation,
set design, scene composition. There are so many parts
and so many different tools. And because the tools are created by
different companies and largely incompatible, import and exporting data
conversion is just part of the workflow. And because they're incompatible and
because there's all this import and exporting, fundamentally the
workflow has to be serialized. It's impossible to parallelize that. And the converting, of course, all of
this data is cumbersome and is error prone. And so this workflow
is fundamentally complex. You could argue that
was just designed to be complex. And this is one of
the reasons why creating these incredible 3D
animation movies are so expensive and takes so much time. Well, one of the
visions, the first vision, of course, of OpenUSD is
to put the data at the center. Could you imagine if every single
tool was natively compatible with USD? Then as a result,
data gravitates to the center. Everybody can work in parallel. The interchange and conversion goes away. And instead of a serialized
model, you have a parallelized spoke and |
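A minimal illustration of "data at the center": any USD-native tool, or a few lines of script, can open the same stage and walk its contents, with no import/export conversion in between. This uses the open-source pxr Python API, and "scene.usd" is a placeholder for a file exported from whichever DCC or CAD tool you use.

```python
from pxr import Usd

stage = Usd.Stage.Open("scene.usd")      # placeholder path
for prim in stage.Traverse():
    print(prim.GetPath(), prim.GetTypeName())
```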
Z2VBKerS63A | fundamentally complex. You could argue that
was just designed to be complex. And this is one of
the reasons why creating these incredible 3D
animation movies are so expensive and takes so much time. Well, one of the
visions, the first vision, of course, of OpenUSD is
to put the data at the center. Could you imagine if every single
tool was natively compatible with USD? Then as a result,
data gravitates to the center. Everybody can work in parallel. The interchange and conversion goes away. And instead of a serialized
model, you have a parallelized spoke and hub model. And so this way of doing
work, of course, is incredibly appealing and is one of the reasons why
the vision of OpenUSD has taken off. Well, there's some
50 tools available now. The industry loves the vision. OpenUSD already has a rich ecosystem. Some 50 tools are now
compatible with OpenUSD natively. 170 contributors in the
USD forum from about 100 companies. So you've got a lot of
people really interested in this. And the momentum is growing. It's being adopted in film, architecture, engineering, and
construction, manufacturing, and so many different fields of robotics. We're engaged with companies in
so many different parts of the industry, all excited about USD because
everybody's workflow, like creating movies, is complicated. This is no different than
a company who's in manufacturing, a company who's trying to build a
new building. So many different specialists, so many different contractors, so
many different parties coming together to build something complex. For the very first time, we have
an interchange, a standard interchange, that can bring everybody together. So super, super exciting. Well, five years ago,
we started working with Pixar, and we adopted USD as
the foundation of Omniverse. Our vision was to create these
virtual worlds that make it possible for us to bring world design into
the applications that I mentioned, industrial digitalization at the
core of many things that we want to do, not just for, of course, creating
amazing movies and broadcasts and video games, but also to take 3D worlds, physically based, real time
into the world's industries. We felt that we could make a real impact. And so we selected USD. It was a brilliant move. The team made just a
visionary move to partner with Pixar and select USD |
Z2VBKerS63A | USD as
the foundation of Omniverse. Our vision was to create these
virtual worlds that make it possible for us to bring world design into
the applications that I mentioned, industrial digitalization at the
core of many things that we want to do, not just for, of course, creating
amazing movies and broadcasts and video games, but also to take 3D worlds, physically based, real time
into the world's industries. We felt that we could make a real impact. And so we selected USD. It was a brilliant move. The team made just a
visionary move to partner with Pixar and select USD as
the foundation of Omniverse. This is probably the
world's first major platform, incredible database and engine system that was built
completely from the ground up for USD. Every single line of code
was designed with USD in mind. The platform for USD
for describing, simulating, all the things that we just mentioned. And so Omniverse was designed to connect. It's not a tool
itself, it's a connector of tools. It's not intended to
be a final production tool. It's intended to be a
connector that make it possible for everybody to collaborate,
interchange, share live work. Okay, so Omniverse is a connector. Well, let's take a look at how
the vision of OpenUSD came together. And this is just a
fantastic illustration. Starting from the left here, I
think this is Adobe Stager, Houdini. This is a modeling system, Maya,
or animation system, modeling system. This is Omniverse,
Blender, RenderMan, Pixar's RenderMan, and Unreal Engine
from Epic, a game engine. Literally all OpenUSD. One dataset
ingested into everybody's tools, and it looks basically the same. Everybody's rendering
system is a little different, and so the quality of the rendering
is a little different from tool to tool, but one dataset
available and usable by every tool. This is the vision of OpenUSD. So incredibly powerful. Well, we've been investing
in USD now for over five years. This SIGGRAPH is the fifth, if you will. We've been working on USD now
for about five years, this SIGGRAPH. And we've been working
on extending USD to real-time and physics-based
systems for industrial applications. We brought RTX to it. We extended USD
with |
Z2VBKerS63A | rendering
system is a little different, and so the quality of the rendering
is a little different from tool to tool, but one dataset
available and usable by every tool. This is the vision of OpenUSD. So incredibly powerful. Well, we've been investing
in USD now for over five years. This SIGGRAPH is the fifth, if you will. We've been working on USD now
for about five years, this SIGGRAPH. And we've been working
on extending USD to real-time and physics-based
systems for industrial applications. We brought RTX to it. We extended USD
with a schema for physics, real-time physics and offline physics. We added CAD to USD,
connected USD to a whole new industry. We made it possible to
understand geospatial data, to recognize and understand, comprehend, consider the curvature of the Earth. We integrated it with an AI runtime, as well as a
framework to build generative AI. For example, the deep
search that we showed you, or ChatUSD that I'll show you in just a second. We extended USD
for assets that are physics-accurate, physics-aware. So we call it SIM-ready. It's particularly
interesting, particularly important for robotics applications, so that the joints
move accordingly and such. And we took USD and made it hyperscale so that we can expand it and grow it, make it support
datasets of a normal scale, and put it in the cloud,
connected it with OpenXR and RealityKit so that we can stream from the
cloud to spatial computing devices. Well, for the last five
years, we've been working on... Omniverse has been building and working, collaborating with the industry on USD. Let's take a look at this... Everything you're
about to see is a simulation. Everything is real-time. And so take a look at this. This is the latest of Omniverse. Doesn't that just make you happy? No art, all physics. Physics makes you happy, doesn't it? Physics makes you happy. Okay, well, we wanted
to put Omniverse everywhere. You could download Omniverse from our website and run it on
your PC and your workstation. For enthusiasts and
designers, that's perfect. You could also
license Omniverse for enterprise. And for enterprises that are using it across many different
organ |
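A small example of the physics extension mentioned above, using the UsdPhysics schema that ships with OpenUSD: mark a cube as a rigid body with collision so a physics-aware runtime can simulate it. This is only a minimal sketch, not the full SIM-ready asset convention described in the talk.

```python
from pxr import Usd, UsdGeom, UsdPhysics

stage = Usd.Stage.CreateNew("physics_scene.usda")
cube = UsdGeom.Cube.Define(stage, "/World/Cube")
UsdPhysics.RigidBodyAPI.Apply(cube.GetPrim())     # simulate as a rigid body
UsdPhysics.CollisionAPI.Apply(cube.GetPrim())     # give it a collider
stage.GetRootLayer().Save()
```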
Z2VBKerS63A | Everything you're
about to see is a simulation. Everything is real-time. And so take a look at this. This is the latest of Omniverse. Doesn't that just make you happy? No art, all physics. Physics makes you happy, doesn't it? Physics makes you happy. Okay, well, we wanted
to put Omniverse everywhere. You could download Omniverse from our website and run it on
your PC and your workstation. For enthusiasts and
designers, that's perfect. You could also
license Omniverse for enterprise. And for enterprises that are using it across many different
organizations, we even set up a managed service
for you inside your company. We're putting Omniverse now in the cloud so that we could
host and serve up APIs that can be connected to developers and applications and services
so that you can have the benefit of some of these amazing capabilities. And so we're setting up Omniverse Cloud. Now Omniverse, as I
mentioned before, is not a tool. It's a platform for tools. It's not a tool. It's a platform for tools. And we created a whole bunch of
interesting tools to help you get started. There are reference
applications, and many of them are open sourced. One application, for
example, that I really love is Isaac Sim. It's a gym for teaching robots,
for robots to learn how to be a robot. And because it's so physically
accurate, the sim to real gap is reduced. And so theoretically, you should be able to learn as a robot
how to be a robot inside Omniverse. And that neural network, that software, could then be put into a
local embedded device like a Jetson or one of
NVIDIA's Jetson computers, robotic computers,
and the robot can perform its task. Okay? And so we would like to put
Omniverse in as many places as possible. There's a whole bunch
of different applications. And this SIGGRAPH, we're
announcing a few new APIs, really cool APIs. Now we demonstrated a
really cool API just recently. It's called ACE, Avatar Cloud Engine. And so it understands speech. So you could do speech with it. It talks. It recognizes your voice. Speaks to you. Based on what it's
saying, based on the sound that it's making, it
animates the face accordingly, audio to face. |
Z2VBKerS63A | . Okay? And so we would like to put
Omniverse in as many places as possible. There's a whole bunch
of different applications. And this SIGGRAPH, we're
announcing a few new APIs, really cool APIs. Now we demonstrated a
really cool API just recently. It's called ACE, Avatar Cloud Engine. And so it understands speech. So you could do speech with it. It talks. It recognizes your voice. Speaks to you. Based on what it's
saying, based on the sound that it's making, it
animates the face accordingly, audio to face. And so we call that
the ACE engine in the cloud. We're going to show
you a couple of new APIs. And this one API, this is the RunUSD API. Of course, this should be the first API. What do you guys think? Pretty cute, huh? And so you send your USD to the cloud. And what comes out of the cloud, what streams from the
cloud onto your device in OpenXR or RealityKit to your spatial computing device will be this incredibly, beautifully rendered— and very importantly—interactive USD. So let's take a look at it. So for USD programmers,
USD developers, you will have hours of joy, just hours
and hours of joy. And you can just create your USD content, USD asset, load it up on
NVIDIA's Omniverse Cloud, and enjoy the device,
enjoy the asset on your device. Now this allows you, of course,
to test the compatibility of your USD. And so we now have a universal
compatibility tester up in the cloud. And so whenever
you have USD content big or small, you could
load it up on Omniverse Cloud and independent of
which version of USD that you're using, we'll
test it for compatibility. And so this is going
to be free for developers. There's another API that we're creating. And we showed you earlier how we used AI Workbench to train
this model, to fine tune this model. We started with Llama 2 and we
taught it, we fine tuned it for USD. And so let's take a look at the video. For USD developers, building, profiling, and optimizing
large 3D scenes can be a very complex process. ChatUSD is an LLM that's fine tuned
with USD functions and Python USD code snippets using NVIDIA AI Work |
Z2VBKerS63A | we'll
test it for compatibility. And so this is going
to be free for developers. There's another API that we're creating. And we showed you earlier how we used AI Workbench to train
this model, to fine tune this model. We started with Llama 2 and we
taught it, we fine tuned it for USD. And so let's take a look at the video. For USD developers, building, profiling, and optimizing
large 3D scenes can be a very complex process. ChatUSD is an LLM that's fine tuned
with USD functions and Python USD code snippets using NVIDIA AI Workbench
and the NeMo framework. This generative AI copilot is easily accessed as an
Omniverse Cloud API, simplifying your USD development
tasks directly in Omniverse. Use ChatUSD for
general knowledge, like to understand the geometry
properties of your USD schema, or complete previously tedious,
repetitive tasks, like generating code to find and replace
materials on specific objects or to instantly expose
all variants of a USD prim. ChatUSD can also help you build
complex scenes, such as scaling a scene and organizing it in a certain way in your USD stage. Build bigger, more complex virtual worlds faster than ever
with ChatUSD generative AI for USD workflows. (Applause) ChatUSD. Now, everybody can speak USD. And ChatUSD could be a USD teacher, it could be a USD copilot,
and help you create your virtual world, enhance
your productivity incredibly. And this is going to be also
available on the Omniverse Cloud. Well, I showed you some examples. Probably the largest
opportunity for the world of IT for software and
for artificial intelligence is to help revolutionize
the world's heavy industries. There's just
enormous amounts of waste, as we all know, $50 trillion
worth of industry. Over the next
several years before the end of the decade, there will be trillions of dollars of new EV factories, battery factories, and new chip
fabs that are going to be built all over the world. Not to mention the
enormous number of factories that are already in
operation, some 10 million factories that are in operation today. This industry would love to be digital. They would love
the benefits of all of our industries, but
unfortunately their industry has to be physically coherent. |
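To make the ChatUSD tasks mentioned above concrete (for example, "generating code to find and replace materials on specific objects"), here is a minimal sketch of the kind of Python USD snippet such a copilot might produce. This is illustrative only: it is not output from ChatUSD, it uses only the open-source pxr API, and the stage path, prim-name filter, and material path are made-up placeholders.

```python
# A hypothetical example of the kind of snippet a USD copilot could generate:
# find prims matching a naming pattern and rebind them to a new material.
from pxr import Usd, UsdShade

stage = Usd.Stage.Open("factory_scene.usd")                               # placeholder path
new_material = UsdShade.Material.Get(stage, "/World/Looks/SteelBrushed")  # placeholder material prim

for prim in stage.Traverse():
    # Only touch prims whose name suggests they are conveyor parts (illustrative filter).
    if "Conveyor" in prim.GetName():
        binding_api = UsdShade.MaterialBindingAPI(prim)
        bound_material = binding_api.ComputeBoundMaterial()[0]
        # Replace any existing binding that is not already the new material.
        if bound_material and bound_material.GetPath() != new_material.GetPath():
            binding_api.Bind(new_material)

stage.GetRootLayer().Save()
```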
Z2VBKerS63A | just
enormous amounts of waste, as we all know, $50 trillion
worth of industry. Over the next
several years before the end of the decade, there will be trillions of dollars of new EV factories, battery factories, and new chip
fabs that are going to be built all over the world. Not to mention the
enormous number of factories that are already in
operation, some 10 million factories that are in operation today. This industry would love to be digital. They would love
the benefits of all of our industries, but
unfortunately their industry has to be physically coherent. It's physically based. They build things. They build and operate physical things. But they would love to
do it digitally, just like us. And so how can we help them do that? Well, this is where Omniverse and artificial intelligence and generative AI come together for us to be able to help the heavy
industries of the world digitalize their workflow. Just as modeling
and texturing and lighting and animation and
set design and so on and so forth are all done by different groups in a very
complicated pipeline, this is very much the case of the
world's heavy industries. Every one of their
organizations, from design to styling to engineering
and simulation and testing to factory building and design, factory planning to build the
products, and even operating these software-defined robotics-assisted products in the
future, all of this is done completely mechanically today. That entire flow could be digitalized
and it could be integrated for the very first time using OpenUSD. This is the incredible vision of
OpenUSD, why we're so excited about it. If we can only augment it with real-time capability and physics
simulation capability and make it so that every single tool is connected to Omniverse, we
can digitalize the world's industries. Well, their
excitement is enormous because they all would love
to have the productivity, would love to reduce the energy consumed, would love to reduce the waste, would love to reduce the mistakes in digital long before they have to build it in physical. And so this is Mercedes. They're using Omniverse to
digitalize their manufacturing lines. This is Mercedes using
Omniverse to simulate autonomous vehicles. This is BMW using Omniverse to digitalize their global network of factories,
some 30 factories. They're now
building it in Omniverse without breaking ground and |
Z2VBKerS63A | we
can digitalize the world's industries. Well, their
excitement is enormous because they all would love
to have the productivity, would love to reduce the energy consumed, would love to reduce the waste, would love to reduce the mistakes in digital long before they have to build it in physical. And so this is Mercedes. They're using Omniverse to
digitalize their manufacturing lines. This is Mercedes using
Omniverse to simulate autonomous vehicles. This is BMW using Omniverse to digitalize their global network of factories,
some 30 factories. They're now
building it in Omniverse without breaking ground and
doing the entire integration a year before the
factory is actually even built. Using Omniverse to simulate
new electric vehicle production lines. Remember the
placement of the factory, the planning of the
factory, and the programming of all the robotic systems
are just incredibly complicated. In the future, entire
factories will be software-defined. The factory will be robotic, orchestrating a whole bunch of
robots that are building cars that themselves are robotic. So robots building robots,
orchestrating robots, building robots. So that's the
future, and everything is all software-driven,
everything is all AI-enabled. This is BMW using AI
to drive their operations. This is Wistron
using Omniverse to digitalize their production
line to build this machine. As I mentioned, 35,000 parts, incredibly complicated, one of the most expensive and most valuable instruments made anywhere. And so having that line be completely
robotic and automated is really important. This is Pegatron using
Omniverse to digitalize PCB manufacturing. Again, this is the PCB of this
incredibly complicated PCB motherboard. This is the most complex
motherboard the world's ever made. Techman is using
Omniverse to test and simulate cobots. It's not surprising to you,
but most cobots, most robots today are
not very autonomous, not very AI-driven. Programming the robots usually
costs way more than robots themselves. I heard a statistic that the robot for a manufacturing arm for the
automotive industry is something along the lines of $25,000. Not very much, but
programming it could cost a quarter million
dollars, which is not very sensible. We would like to have AI be
self-programming these |
Z2VBKerS63A | ever made. Techman is using
Omniverse to test and simulate cobots. It's not surprising to you,
but most cobots, most robots today are
not very autonomous, not very AI-driven. Programming the robots usually
costs way more than robots themselves. I heard a statistic that the robot for a manufacturing arm for the
automotive industry is something along the lines of $25,000. Not very much, but
programming it could cost a quarter million
dollars, which is not very sensible. We would like to have AI be
self-programming these autonomous limbs. So Techman is using
Omniverse to test and simulate cobots. They're also using Omniverse to build
applications to automate optical inspection. Of course, for PCB lines,
the cameras can be stationary. The product, the
manufacturing system is just rolling by you. But for many things
like cars and other very complicated
systems, the optical inspection has to follow the
curvature and, of course, the various contours
and shapes of the product. Hexagon is using Omniverse to connect— just as we've connected
tools all over the world— Hexagon is using Omniverse
to connect their own tools. This is one of the most powerful
things that they observed in Omniverse. Whereas they had different groups and different teams that
were in silos, and because of their incompatible tools, getting
them to connect together was hard. So this was a company
organization and a company management challenge. So very first time using Omniverse, they broke down the silos,
they connected all the different teams, and they became
one unified company for the very first time. Really, really cool. This is READY Robotics
using Omniverse to build applications to
simplify the robot programming process. In the future,
robot programming is probably going to be about
explaining in prompts what you would like the robot
to do, showing it a few examples. Just as we taught our language model, our generative model,
Toy Jensen, just as we taught the generative model USD, we're going to teach a generative
model of the future, a robotic generative model, a few examples, and it will be able to
generalize and do that task. Amazon is using Omniverse
to digitalize their warehouse. The warehouse is robotic. Giant systems help |
Z2VBKerS63A | simplify the robot programming process. In the future,
robot programming is probably going to be about
explaining in prompts what you would like the robot
to do, showing it a few examples. Just as we taught our language model, our generative model,
Toy Jensen, just as we taught the generative model USD, we're going to teach a generative
model of the future, a robotic generative model, a few examples, and it will be able to
generalize and do that task. Amazon is using Omniverse
to digitalize their warehouse. The warehouse is robotic. Giant systems help their workers inside. They don't have to walk as far. Amazon is using Omniverse
to simulate their fleet of AMRs. These are autonomous moving robots. Using Omniverse to
generate synthetic data to train the
perception models, the computer vision models of these robots. You could also use
Omniverse to create a digital twin. NVIDIA is creating a digital twin of
the earth, the climate system of the earth. Deutsche Bahn is
using Omniverse to create a digital twin of
their entire railway network so they could
operate it completely in digital. In order for that to happen,
Omniverse has to be real time. Let me show you one more example. This one example
is about human designers, architects working
side by side with generative AI models from different
applications and different companies. Together they will
automate and help do industrial
digitalization a lot more rapidly. Okay, roll it. Planning industrial spaces like
factories or warehouses is a long complex process. Let's see how you
can use NVIDIA Omniverse and generative AI
to connect your OpenUSD to fast-track planning concepts like
a storage extension to an existing factory. Use SyncTwin's Omniverse extension
to quickly convert a 2D CAD floor plan into
a 3D OpenUSD model, and populate it with SimReady OpenUSD assets using Omniverse's AI enabled DeepSearch. Then use prompts to generate physically accurate lighting options with Blender GPT, realistic floor materials with Adobe Firefly, and an HDRI Skydome with Blockade Labs. To see the new space in context, compose it on a Cesium
geospatial plane next to your existing factory digital twin. Then to share with stakeholders, use one click to publish the
proposal to Omniverse Cloud GD |
Z2VBKerS63A | SyncTwin's Omniverse extension
to quickly convert a 2D CAD floor plan into
a 3D OpenUSD model, and populate it with SimReady OpenUSD assets using Omniverse's AI enabled DeepSearch. Then use prompts to generate physically accurate lighting options with Blender GPT, realistic floor materials with Adobe Firefly, and an HDRI Skydome with Blockade Labs. To see the new space in context, compose it on a Cesium
geospatial plane next to your existing factory digital twin. Then to share with stakeholders, use one click to publish the
proposal to Omniverse Cloud GDN which serves a fully interactive review
experience to any device. Fast-track your factory planning
process with NVIDIA Omniverse and generative AI. How incredible is that? Remember it started
with a 2D PowerPoint slide and it ended with a
virtual factory in spatial computing. That is incredible. PowerPoint to a virtual factory. 2D to spatial 3D. So this is the future. This is the future and this
is how everything comes together. USD of course is foundational in that journey and Omniverse
foundational in that journey and generative AI. Well this is what Omniverse is. We're super excited about
the work that we're doing here. We're so happy that we chose
OpenUSD as the foundation of
Omniverse and all of the work that we've done to extend it into
real time, into physics-based applications. The number of partners that
we have is just growing incredibly and
it covers so many different industries, as I mentioned, from
manufacturing to robotics and others. We hope, and this is the beginning of a journey, that we will finally be able to digitalize, to bring software-driven, artificial
intelligence-powered workflows into the world's heavy industries,
the $50 trillion worth of industries that are wasting enormous amounts of energy and money and time
all the time because it was simply built on technology that wasn't available at the time. And so Omniverse for
industrial digitalization. Well all of this
momentum that we've already seen with OpenUSD is
about to get turbocharged. Alliance for OpenUSD
was announced with Pixar, Apple, Adobe, Autodesk,
NVIDIA as the founding members. The Alliance's mission is to foster development and
standardization of OpenUSD and accelerate its adoption. So whatever
|
Z2VBKerS63A | heavy industries,
the $50 trillion worth of industries that are wasting enormous amounts of energy and money and time
all the time because it was simply built on technology that wasn't available at the time. And so Omniverse for
industrial digitalization. Well all of this
momentum that we've already seen with OpenUSD is
about to get turbocharged. Alliance for OpenUSD
was announced with Pixar, Apple, Adobe, Autodesk,
NVIDIA as the founding members. The Alliance's mission is to foster development and
standardization of OpenUSD and accelerate its adoption. So whatever
momentum we've already enjoyed, the vision that
we've already enjoyed, it's about to get kicked into turbocharge. Well I want to thank
all of you for coming today. We talked about SIGGRAPH. SIGGRAPH 2023 for us is four things. It's about the
transition of a new computing model. The very first time in decades that the computing architecture
is going to be fundamentally redesigned from the processor to the data center to the
middleware, the AI algorithms and the applications that it enables. The processor we created for this era
of accelerated computing and generative AI is Grace Hopper and we call it GH200. We have NVIDIA AI Workbench
to make it possible for all of you to be
able to engage generative AI. NVIDIA Omniverse now has a major release with generative AI
and of course release and support for OpenUSD. And then finally
whether you want to compute in the cloud or do AI
in your company underneath your desk or in your data center,
we now have incredibly powerful systems to help you all do that. I want to thank all of you for coming
but before you go, I have a treat for you. This is an anniversary, I
understand, of computer graphics. SIGGRAPH has been a really special event and a very special place, and computer graphics,
as you know, is the driving force of our
company and very dear to us. We have dedicated
probably more time, more engineering, more R&D, over a quarter of a century, 30 years in fact, to advancing computer graphics. I don't know any
company who has invested so much. This industry is dear to us. The work that you do is dear to us and if not for the work
|
Z2VBKerS63A | , I
understand, of computer graphics. SIGGRAPH has been a really special event and a very special place, and computer graphics,
as you know, is the driving force of our
company and very dear to us. We have dedicated
probably more time, more engineering, more R&D, over a quarter of a century, 30 years in fact, to advancing computer graphics. I don't know any
company who has invested so much. This industry is dear to us. The work that you do is dear to us and if not for the work
that you do, all of you who come to SIGGRAPH
each year, how is it possible that AI would have
achieved what it is today? How would it be possible that Omniverse would be possible or
OpenUSD would be possible? So we made something special for you. Please enjoy. It all started over 50
years ago with a simple question. What if a computer could make pictures? How could we use them? And what would they
look like generations from now? - Hello, folks. I'm Mr. Computer Image. - Welcome in. I hope you're hungry. Thank you all for coming. Have a great SIGGRAPH 2023. And remember,
remember, accelerated computing and generative AI, the
more you buy, the more you save. |
__Ewkal7s3g | Ladies and gentlemen, esteemed faculty members, distinguished guests, proud parents, and above all, the 2023 graduating class of the National Taiwan University: today is a very special day for you and a dream come true for your parents. You should be moving out soon. It is surely a day of pride and joy. Your parents have made sacrifices to see you on this day; they're right here. Let's show all of our parents and our grandparents, many of them here, our appreciation. I came to NTU for the first time over a decade ago. Dr. Chang invited me to visit his computational physics lab. As I recall, his son, based in Silicon Valley, had learned of NVIDIA's CUDA invention and recommended his father utilize it for his quantum physics simulations. When I arrived, he opened the door to show me what he had made: NVIDIA GeForce gaming cards filled the room, plugged into open PC motherboards and sitting on metal shelves, and in the aisles were oscillating Tatung fans. Dr. Chang had built a homemade supercomputer the Taiwanese way, out of gaming graphics cards, and he started here, an early example of NVIDIA's journey. He was so proud, and he said to me, "Mr. Huang, because of your work, I can do my life's work in my lifetime." Those words touch me to this day and perfectly capture our company's purpose: to help the Einsteins and da Vincis of our time do their life's work. I am so happy to be back at NTU and to deliver your commencement address. The world was simpler when I graduated from Oregon State University. TVs were not flat yet, there was no cable television and MTV, and the words mobile and phone didn't go together. The year was 1984. The IBM PC and Apple Macintosh launched the PC revolution and started the chip and software industry that we know today. You enter a far more complex world, with geopolitical, social, and environmental changes and challenges. Surrounded by technology, we are now perpetually connected and immersed in a digital world that parallels our real world. Cars are starting to drive by themselves. 40 years after the computer industry created the home PC, we invented artificial intelligence, like software that automatically drives a car or studies x-ray images. AI software has opened the door for computers to automate tasks for the world's largest multi-trillion-dollar industries: healthcare, financial services, transportation, and manufacturing. AI has opened immense opportunities. Agile companies will take advantage of AI and boost their position; companies less so will perish. Entrepreneurs, many of them here today, will start new companies and, like in every computing era before, create new industries. AI will create new jobs that didn't exist before, like data engineering, prompt engineering, AI factory operations, and AI safety engineers. These are jobs that never
__Ewkal7s3g | drive by themselves. 40 years after the computer industry created the home PC, we invented artificial intelligence, like software that automatically drives a car or studies x-ray images. AI software has opened the door for computers to automate tasks for the world's largest multi-trillion-dollar industries: healthcare, financial services, transportation, and manufacturing. AI has opened immense opportunities. Agile companies will take advantage of AI and boost their position; companies less so will perish. Entrepreneurs, many of them here today, will start new companies and, like in every computing era before, create new industries. AI will create new jobs that didn't exist before, like data engineering, prompt engineering, AI factory operations, and AI safety engineers. These are jobs that never existed before. Automating tasks will obsolete some jobs, and for sure AI will change every job, supercharging the performance of programmers, designers, artists, marketers, and manufacturing planners. Just as every generation before you embraced technologies to succeed, every company, and you, must learn to take advantage of AI and do amazing things with an AI copilot by your side. While some worry that AI may take their jobs, someone who is expert with AI will. We are at the beginning of a major technology era, like PC, internet, mobile, and cloud, but AI is far more fundamental, because every computing layer has been reinvented, from how we write software to how it's processed. AI has reinvented computing from the ground up in every way. This is a rebirth of the computer industry and a golden opportunity for the companies of Taiwan. You are the foundation and bedrock of the computer industry. Within the next decade, our industry will replace over a trillion dollars of the world's traditional computers with new accelerated AI computers. My journey started 40 years before yours. 1984 was a perfect year to graduate, and I predict that 2023 will be as well. What can I tell you as you begin your journey? Today is the most successful day of your life so far. You're graduating from the National Taiwan University. I was also successful, until I started NVIDIA. At NVIDIA, I experienced failures, great big ones, all humiliating and embarrassing; many nearly doomed us. Let me tell you three NVIDIA stories that define us today. We founded NVIDIA to create accelerated computing. Our first application was 3D graphics for PC gaming. We invented an unconventional 3D approach called forward texture mapping and curves. Our approach was substantially lower cost. We won a contract with Sega to build their game console, which attracted games for our platform and funded our company. After one year of development, we realized our architecture was the wrong strategy: it was technically poor, and Microsoft was about to announce Windows 95 Direct3D, based on inverse texture mapping and triangles. Many companies were already working on 3D chips to support the standard. If we completed Sega's
__Ewkal7s3g | embarrassing; many nearly doomed us. Let me tell you three NVIDIA stories that define us today. We founded NVIDIA to create accelerated computing. Our first application was 3D graphics for PC gaming. We invented an unconventional 3D approach called forward texture mapping and curves. Our approach was substantially lower cost. We won a contract with Sega to build their game console, which attracted games for our platform and funded our company. After one year of development, we realized our architecture was the wrong strategy: it was technically poor, and Microsoft was about to announce Windows 95 Direct3D, based on inverse texture mapping and triangles. Many companies were already working on 3D chips to support the standard. If we completed Sega's game console, we would have built inferior technology, been incompatible with Windows, and been too far behind to catch up. But we would be out of money if we didn't finish the contract. Either way, we would be out of business. I contacted the CEO of Sega and explained that our invention was the wrong approach, that Sega should find another partner, and that we could not complete the contract and the console; we had to stop. But I needed Sega to pay us in whole, or NVIDIA would be out of business. I was embarrassed to ask. The CEO of Sega, to his credit and my amazement, agreed. His understanding and generosity gave us six months to live. With that we built RIVA 128. Just as we were running out of money, RIVA 128 shocked the young 3D market, put us on the map, and saved the company. The strong demand for our chip led me back to Taiwan, after leaving at the age of four, to meet Morris Chang at TSMC and start a partnership that has lasted 25 years. Confronting our mistake and, with humility, asking for help saved NVIDIA. These traits are the hardest for the brightest and most successful, like yourself. In 2007 we announced CUDA, GPU-accelerated computing. Our aspiration was for CUDA to become a programming model that boosts applications, from scientific computing and physics simulations to image processing. Creating a new computing model is incredibly hard and rarely done in history. The CPU computing model has been the standard for 60 years, since the IBM System/360. CUDA needed developers to write applications and demonstrate the benefits of the GPU; developers needed a large installed base; and a large CUDA installed base needed customers buying new applications. So to solve the chicken-or-the-egg problem, we used GeForce, our gaming GPU, which already had a large market of gamers, to build the installed base. But the added cost of CUDA was very high. NVIDIA's profits took a huge hit for many years. Our market cap hovered just above one billion dollars. We suffered many years of poor performance, and our shareholders were skeptical
__Ewkal7s3g | rarely done in history. The CPU computing model has been the standard for 60 years, since the IBM System/360. CUDA needed developers to write applications and demonstrate the benefits of the GPU; developers needed a large installed base; and a large CUDA installed base needed customers buying new applications. So to solve the chicken-or-the-egg problem, we used GeForce, our gaming GPU, which already had a large market of gamers, to build the installed base. But the added cost of CUDA was very high. NVIDIA's profits took a huge hit for many years. Our market cap hovered just above one billion dollars. We suffered many years of poor performance. Our shareholders were skeptical of CUDA and preferred we focused on improving profitability. But we persevered. We believed the time for accelerated computing would come. We created a conference called GTC and promoted CUDA tirelessly worldwide. Then the applications came: seismic processing, CT reconstruction, molecular dynamics, particle physics, fluid dynamics, and image processing. One science domain after another, they came. We worked with each developer to write their algorithms and achieved incredible speedups. Then, in 2012, AI researchers discovered CUDA. The famous AlexNet, trained on GeForce GTX 580, started the Big Bang of AI. Fortunately, we realized the potential of deep learning as a whole new software approach and turned every aspect of our company to advance this new field. We risked everything to pursue deep learning. A decade later, the AI revolution started, and NVIDIA is the engine of AI for developers worldwide. We invented CUDA and pioneered accelerated computing and AI, but the journey forged our corporate character to endure the pain and suffering that is always needed to realize a vision. One more story. In 2010, Google aimed to develop Android into a mobile computer with excellent graphics. The phone industry had chip companies with modem expertise; NVIDIA's computing and graphics expertise made us an ideal partner to help build Android. So we entered the mobile chip market. We were instantly successful, and our business and stock price surged. The competition quickly swarmed: modem chip makers were learning how to build computing chips, and we were learning how to build modems. The phone market is huge. We could fight for share. Instead, we made a hard decision and sacrificed the market. NVIDIA's mission is to build computers to solve problems that ordinary computers cannot. We should dedicate ourselves to realizing our vision and to making a unique contribution. Our strategic retreat paid off. By leaving the phone market, we opened our minds to invent a new one. We imagined creating a new type of computer, a robotics computer, with neural network processors and safety architectures that run algorithms. At the time, this was a zero-billion-dollar market: to retreat from a giant phone market to create a zero-billion-dollar robotics market. We now have billions
__Ewkal7s3g | computing chips, and we were learning how to build modems. The phone market is huge. We could fight for share. Instead, we made a hard decision and sacrificed the market. NVIDIA's mission is to build computers to solve problems that ordinary computers cannot. We should dedicate ourselves to realizing our vision and to making a unique contribution. Our strategic retreat paid off. By leaving the phone market, we opened our minds to invent a new one. We imagined creating a new type of computer, a robotics computer, with neural network processors and safety architectures that run algorithms. At the time, this was a zero-billion-dollar market: to retreat from a giant phone market to create a zero-billion-dollar robotics market. We now have billions of dollars of automotive and robotics business and started a new industry. Retreat does not come easily to the brightest and most successful people like yourself, yet strategic retreat, sacrifice, deciding what to give up, is at the core, the very core, of success. Class of 2023, you're about to go into a world witnessing great change, and just as I was with the PC and chip revolution, you're at the beginning, at the starting line, of AI. Every industry will be revolutionized, reborn, ready for new ideas, your ideas. In 40 years we created the PC, internet, mobile, cloud, and now the AI era. What will you create? Whatever it is, run after it like we did. Run, don't walk. Remember, either you're running for food, or you are running from being food.
VhSGmVyKykg | Thank you. How's that? Good morning. Come on, I think you could do better than that. Almost a full house, so that's not too bad. I was hoping it'll be better than this, but let's get started anyway, because what you just saw today here, I hope, sets the stage for what's going to happen today, and I hope you're ready to rock, because we have the biggest rock star of Silicon Valley coming up here on this stage in the next 30 seconds. The rock star that is on a mission to take us to a different realm, democratize AI, put it all over the place. A rock star who has taken the company he founded 30 years ago from 400 billion to 1.2 trillion in the last 18 months. A rock star who has taken the company's stock up 550 percent in five years, and half of that happened in the last 12 months. So of course the rock star is none other than the founder and CEO of NVIDIA, Jensen Huang. Ladies and gentlemen, please welcome Jensen. All right, folks, there you go. Jensen, yeah, that's you. Yeah, please. Am I? Yeah, yeah, okay. She doesn't have to do this. Not just that we have a vibrant Alumni Association; on behalf of the entire IIT community, thank you. Thank you for being here on a Saturday morning in spite of your crazy busy schedule, and I know that for sure; I've been working with your awesome team for the last five months to make this happen. Oh, it's happening. And thank you. Well, I think, folks, that speaks about who our guest is. So I don't want to waste any time, because there is a reason we got him here. There's a lot to unpack and unblock. But here's where it gets interesting. Like any rock star, Jensen is traveling today with his rock band, and there's something special about this rock band. It's not only the band members he has hand-picked; they're all from IIT. Yeah, there must be a reason Jensen did that. I'm not sure whether it's because he didn't go to IIT, but we'll figure that out. So without wasting any more time, let me call up the band. They're going to be in conversation with Jensen to unpack and unplug him. So let me first call up Raj Rajagopalan, vice president of go-to-market and operations. I've been working with Raj to construct this session. Raj, looking forward to a great session. Next up, Vivek Singh, vice president of Advanced Technology
VhSGmVyKykg | not only the band members he has hand-picked; they're all from IIT. Yeah, there must be a reason Jensen did that. I'm not sure whether it's because he didn't go to IIT, but we'll figure that out. So without wasting any more time, let me call up the band. They're going to be in conversation with Jensen to unpack and unplug him. So let me first call up Raj Rajagopalan, vice president of go-to-market and operations. I've been working with Raj to construct this session. Raj, looking forward to a great session. Next up, Vivek Singh, vice president of the Advanced Technology Group at NVIDIA. Vivek is from IIT Delhi, Raj is from IIT Bombay, and, last but not the least, Sumit, also from IIT Bombay. Okay, I'm going to get out of here. Raj, Jensen, Vivek, Sumit, the stage is yours. We are looking forward to it. Thank you. Thank you. Yeah, we need to work on that. Okay, let's get this started. Jensen, I'd like to start by saying congratulations. There has not been a person, and I'm not saying this just from my knowledge, I'm just saying there has not been a human being, who has started their small startup company in such humble surroundings, in a Denny's, and then for 30 years been CEO of that and led it through huge amounts of stress, lots of ups, lots of downs, and day in, day out, toiled at some things that you believed in. Today your company is worth a trillion dollars; only six companies are in that rarefied league. You did this by making a lot of strong, bold bets. Two of the most well-known bets are on accelerated computing and AI, and you bet on this a dozen years ago, when people could barely spell AI or accelerated computing. You saw the potential at that time and you steadfastly kept investing in this; today everybody sees the potential. With this kind of a track record, what the audience here wants most to hear from you is: where do you see the future going? Where is this technology going to evolve to? How is it going to impact our lives? How is it going to impact the GDP and the companies that make up the global economy? What is the next wave of innovation going to look like? How should we prepare ourselves to capitalize on it? Your thoughts, Jensen. Well, this sounds like it's going to be a one-question interview. I think we all know now that the last 60 years of computing has been extraordinary. We utilized a form of computing that was introduced
VhSGmVyKykg | of a track record, what the audience here wants most to hear from you is: where do you see the future going? Where is this technology going to evolve to? How is it going to impact our lives? How is it going to impact the GDP and the companies that make up the global economy? What is the next wave of innovation going to look like? How should we prepare ourselves to capitalize on it? Your thoughts, Jensen. Well, this sounds like it's going to be a one-question interview. I think we all know now that the last 60 years of computing has been extraordinary. We utilized a form of computing that was introduced in 1964, the year after I was born, by IBM, called the IBM System/360, and I don't know how many of you have read the manuals of the System/360; I have, and I hope nobody in the room actually wrote it. But it covered some really important concepts: central processing unit, I/O subsystem, DMA, virtual memory, multitasking, scalable architecture, backwards compatibility. What else is left? I think it described all of the computer industry of the last 60 years. And this general-purpose way of doing software recognized that the cost of computing is in software, not in the hardware, and that is sensible, because you use a computer for a very long time, and so the body of software that accumulates over that computer architecture grows over time, and that body of work can't be wasted. That recognition was profound, and it changed the computer industry. Almost everybody who made an impact in the computer industry really recognized the System/360's teachings, if you will, as the governing dynamics of the computer industry. Well, that way of scaling computing lasted 60 years. However, general-purpose computing, as the world continued to scale, ran out of steam, and the reason for that is when you start to use general-purpose anything, over time the energy efficiency, or the cost efficiency, or any efficiency, is squandered. And we got to a point where now the capacity of computing, the demands on computing, are so great that general-purpose computing ways are really quite expensive to use going forward, particularly for energy efficiency reasons. And so we've been working on a new class of computing, a new form of computer, called accelerated computing. In a lot of ways we're the only parallel computing architecture that ever survived, and the reason why we're the only parallel computing architecture that ever survived is because we obeyed Amdahl's law. Almost all of the other forms of parallel computing disobeyed Amdahl's law; I mean, almost all of them looked at Amdahl's law and looked
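For reference, Amdahl's law, which the answer above credits with shaping the accelerator-next-to-the-CPU approach, can be stated compactly. This is standard background, not something derived in the talk: if a fraction p of the work can be accelerated by a factor s, the overall speedup is bounded by

```latex
S_{\text{overall}} = \frac{1}{(1 - p) + \dfrac{p}{s}}
```

For example, with p = 0.9 and s = 20, the overall speedup is 1 / (0.1 + 0.045), roughly 6.9x, which is why the non-accelerated fraction, and the CPU that runs it, still matters.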
VhSGmVyKykg | a point where now the capacity of computing, the demands on computing, are so great that general-purpose computing ways are really quite expensive to use going forward, particularly for energy efficiency reasons. And so we've been working on a new class of computing, a new form of computer, called accelerated computing. In a lot of ways we're the only parallel computing architecture that ever survived, and the reason why we're the only parallel computing architecture that ever survived is because we obeyed Amdahl's law. Almost all of the other forms of parallel computing disobeyed Amdahl's law; I mean, almost all of them looked at Amdahl's law, looked at it in its face, and said, you know, we're going to overcome Amdahl's law, and at first principles that doesn't make any sense. Amdahl's law is a good law. We were sensible about it, and we added a parallel computing processor, an accelerator, next to the CPU, next to general-purpose computing, and we offloaded and accelerated the workloads that were particularly good for our domain. And we worked at it for a very long time. There were a lot of reasons why parallel computing shouldn't survive, and we can go into it, but at this point I think it's fairly clear that, in order for us to continue to expand computing into the type of workloads we're interested in, general-purpose computing will have to be augmented by accelerated computing. And we now have about a trillion dollars of computing in the world, maybe a trillion and a half dollars' worth of computing installed. Over the course of the next decade or so, probably less than that, because most S-curves, you know, once an S-curve gets going, it takes a little bit less than a decade or so, in the next 10 years I'm fairly certain that every computer in the world will be accelerated, every data center will be accelerated, all of the infrastructure software will be accelerated. Everything will continue to be software programmable and software defined, but accelerated computing will pretty much take over the vast majority of it. Now, the question you asked about where computing is going, and AI, has to do with several things. Maybe the way to answer that is to think about computing in three layers. How would computers be designed? I just mentioned that it would be accelerated, it would be an accelerator, for sure. The second is how would software be created, and what would software be able to do? We know now that we've defined a new type of software, which is almost, if there are any hardware designers
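A minimal sketch of the offload pattern described above: keep general-purpose, control-heavy work on the CPU and move only the parallel-friendly portion to an accelerator. CuPy, the function names, and the toy arithmetic below are illustrative assumptions, not anything specific from the talk; any CUDA-capable array library would serve the same purpose.

```python
# Illustrative offload pattern: CPU handles sequential logic, GPU handles
# data-parallel math. Requires a CUDA-capable GPU and the CuPy package.
import numpy as np
import cupy as cp

def simulate_step_cpu(state: np.ndarray) -> np.ndarray:
    # Sequential, control-heavy work stays on the CPU.
    return state * 0.99

def heavy_kernel_gpu(state: np.ndarray) -> np.ndarray:
    # Data-parallel work is offloaded to the GPU and copied back.
    d_state = cp.asarray(state)                              # host -> device
    d_result = cp.sin(d_state) ** 2 + cp.cos(d_state) ** 2   # parallel elementwise math
    return cp.asnumpy(d_result)                              # device -> host

state = np.random.rand(1_000_000).astype(np.float32)
state = simulate_step_cpu(state)
result = heavy_kernel_gpu(state)
```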
VhSGmVyKykg | will be accelerated. Everything will continue to be software programmable and software defined, but accelerated computing will pretty much take over the vast majority of it. Now, the question you asked about where computing is going, and AI, has to do with several things. Maybe the way to answer that is to think about computing in three layers. How would computers be designed? I just mentioned that it would be accelerated, it would be an accelerator, for sure. The second is how would software be created, and what would software be able to do? We know now that we've defined a new type of software, which is almost, if there are any hardware designers in the room, it's almost developing software the way that hardware is developed, the way chips are developed, in a structured sort of way. Deep learning, as you know, is using data to train a function, and this function is a universal function approximator, because it could be stacked up in layers, and each one of the layers could be trained individually, because of these activation functions that separate each one of the layers from another layer. You could build software as tall as you wanted to build it, and because you could build software as tall as you like and as wide as you like, the dimensionality of the function you would like to approximate could be as great as you like. And so that observation, about 13 years ago, led us to believe that we've discovered a universal function approximator. The universal function approximator then says: wherever you have data, you could create a function approximator that can predict the future. And what can you apply this for? So now you have this universal function approximator. You know that we've largely tackled computer vision, we've largely tackled speech recognition, we've largely tackled time-sequence approximation and prediction, we've largely tackled many of the things that you guys know we're able to solve today, to the point where we've largely tackled natural language understanding. And so the question is: what else can you learn with a universal function approximator like this?
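To ground the "stacked layers separated by activation functions" description above, here is a tiny sketch of such a function approximator being fit to data. PyTorch, the layer sizes, and the target function are arbitrary choices for illustration; the talk does not prescribe any particular framework.

```python
# A small multilayer perceptron fit to noisy samples of a target function,
# illustrating layers stacked "as tall and as wide as you like" between activations.
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.linspace(-3.0, 3.0, 512).unsqueeze(1)
y = torch.sin(x) + 0.05 * torch.randn_like(x)   # data sampled from the function to approximate

for step in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```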
VhSGmVyKykg | things that you guys know we're able to solve today, to the point where we've largely tackled natural language understanding. And so the question is: what else can you learn with a universal function approximator like this? Well, almost anything that has structure. That's the simple idea. If language has structure, and it obviously has structure, because I can somehow create language that you can also understand, and if that's the case, then we obviously both recognize the same structure. We both understand the governing laws of how that structure is put together: the vocabulary, the syntax, the grammar, the semantics. We understand the structure of language. Well, what else do we understand? Well, we understand the structure of the physical world. If we didn't, then we would all be biologically completely different, and we're not, we're similar, so there's got to be some structure in biology that we can learn. There's obviously structure in the physical world; if there wasn't structure in the physical world, we'd be white noise right now, right? And so somehow, atomically, we gather into this kind of matter, and there is structure in the things that we created. We created words, we created chairs and tables and stages, and because we created these things and we gave them words, the words and the structures are somehow related. We can learn that. And so we could learn text to 3D, we could learn text to image, we can learn text to almost anything, okay, almost anything that has structure. Well, we know physics has structure; otherwise, if physics had no structure, then tomorrow morning would be different than this morning, and it isn't, you know, I was able to stand up the same way, right? And so there are predictable structures. We could therefore use deep learning to learn multi-physics, we can use it to learn the language of proteins, we could use it to learn the language of genes, the language of chemistry, the language of physics, we can use it to predict weather, and obviously we can use it to predict climate, hopefully. And so the type of things that we can now use this universal function approximator to do, that's pretty profound, it's pretty exciting. And then the third layer after that, just, you know, a short answer to the question Raj asked you about how you're going to spend the rest of your life, this is the what's-the-meaning-of-life question: the last layer that I'm excited about, and I kind of hinted at it already, is
VhSGmVyKykg | learn the language of genes, the language of chemistry, the language of physics, we can use it to predict weather, and obviously we can use it to predict climate, hopefully. And so the type of things that we can now use this universal function approximator to do, that's pretty profound, it's pretty exciting. And then the third layer after that, just, you know, a short answer to the question Raj asked you about how you're going to spend the rest of your life, this is the what's-the-meaning-of-life question: the last layer that I'm excited about, and I kind of hinted at it already, is that there are two areas where all of us who are in the world of computing really haven't had a chance to deeply engage, and in these two areas we now have the language. If it wasn't because of our ability to represent transistors and logical gates and functions in language, if we didn't have a language structure for it, like Verilog for example, how would we actually design? We need to have a representation of the thing that we're trying to build that is, on the one hand, very high in abstraction, so that we can represent our ideas as efficiently as possible, but also synthesizable down into lower-level structures, isn't that right? The revolution of high-level design 40 years ago, and I was lucky to have been in the first generation of engineers that was able to engage high-level design and logic synthesis and use computer-aided design to revolutionize how we design electronics. If it wasn't because of that representation, how would we be able to achieve what we achieve? Well, we're about to have that same high-level representation of biology. We can now represent things all the way from as low-level as genes to proteins to cells pretty soon, and we'll be able to represent these high-level things, concepts. And if we could do that, then think of our ability to design proteins, or design enzymes that can go and eat plastic, so that the ocean isn't polluted by this incredible thing that's just taking over the ocean, or that could eat carbon, so that we can capture carbon after creation or before, but ideally even capturing it after creation would be fine. And so we can go and do that, of course including life-saving work on disease. So I think the days of protein design are coming, and that is surely within the next decade: computer-aided drug
VhSGmVyKykg | if we could do that, then think of our ability to design proteins, or design enzymes that can go and eat plastic, so that the ocean isn't polluted by this incredible thing that's just taking over the ocean, or that could eat carbon, so that we can capture carbon after creation or before, but ideally even capturing it after creation would be fine. And so we can go and do that, of course including life-saving work on disease. So I think the days of protein design are coming, and that is surely within the next decade: computer-aided drug discovery, computer-aided biology. I think that is one of the next frontiers. The other giant next frontier is finally using computer-aided systems and digital technology to revolutionize the world's largest industries. The reason why the electronics industry is moving so fast, and we design such amazing things today, is because we don't waste time, we don't waste ideas, we don't waste transistors, we don't waste power, we don't waste anything; we focus on saving all of those things. You can't be efficient in those ways without tools. If not for the work that you guys do, how would we possibly be able to drive energy efficiency to the levels that we do, isn't that right? Now just apply that same logic to construction. Our buildings are over-designed, or rather, buildings are either under-designed or they're over-designed, but there are no buildings that are efficiently designed, and the reason for that is we can't simulate anything. There are no factories that are properly designed, and there are no plants, chemical plants, that are efficiently designed. Everything is either over-designed or under-designed. You can tell when they're under-designed: whenever there's some, you know, extreme weather event, an entire city gets demolished, they're under-designed. When they're over-designed, we end up with concrete jungles. That's fine, they're safe, but unfortunately it consumes too much concrete, and you guys know that concrete consumes a lot of, generates a lot of, carbon. And so there's an entire industry, trillions and trillions of dollars of industry, the world's largest industries, what's called heavy industries, that are completely under-utilizing digital technology. And so the question is how do we solve that problem? Well, we have to represent physics, we have to represent the heavy industries, the language of heavy industries, with a universal function approximator that can learn the representation of
VhSGmVyKykg | 're over-designed, we end up with concrete jungles. That's fine, they're safe, but unfortunately it consumes too much concrete, and you guys know that concrete consumes a lot of, generates a lot of, carbon. And so there's an entire industry, trillions and trillions of dollars of industry, the world's largest industries, what's called heavy industries, that are completely under-utilizing digital technology. And so the question is how do we solve that problem? Well, we have to represent physics, we have to represent the heavy industries, the language of heavy industries, with a universal function approximator that can learn the representation of heavy industry. Once we can learn heavy industry's representation, we will describe it; there will be words, there will be ways that we represent heavy industry, which is physically based, and there will be ways we represent biology, which is obviously biology based. We'll have languages that represent those things, and we'll have tools that allow us to go and design it in simulation and optimize it in simulation. So I'm excited about the next 10 years, because we've discovered this breakthrough of learning the language of these physical things, and we can go represent them and engage them. This is the reason why NVIDIA is working in these areas: the work that we do with Clara has to do with preparing ourselves for this digital biology revolution, and all of the work that we're doing with Omniverse is to create that digital bridge between the physically based worlds and the digital based worlds. So that's a 10-minute answer, I'm afraid. For the next question, which I have the honor to ask, thank you, by the way. Raj runs NVIDIA's worldwide operations, field operations, when we take all of our ideas to market; the go-to-market for our company is quite unique, invented at NVIDIA, and the operations of it is a really complicated endeavor, and so Raj and his team orchestrate all that really amazing work. Yes, I'm going to ask the next question, but by the way, I'm glad this is being recorded, because I'm going to have to listen to the last answer a couple of times to make sure I understand it all, as is often the case with me. But my question, Jensen, is that you've taken on some really hard problems in computing, and you've set goals that can seem quite audacious, at least in the beginning. So how do you select these problems, and how do you steer the team through the inevitable technological challenges that these problems will
VhSGmVyKykg | a really complicated endeavor, and so Raj and his team orchestrate all that really amazing work. Yes, I'm going to ask the next question, but by the way, I'm glad this is being recorded, because I'm going to have to listen to the last answer a couple of times to make sure I understand it all, as is often the case with me. But my question, Jensen, is that you've taken on some really hard problems in computing, and you've set goals that can seem quite audacious, at least in the beginning. So how do you select these problems, and how do you steer the team through the inevitable technological challenges that these problems will pose? So, first of all, Vivek is working on computational lithography, and as you know, we are shrinking transistors at a relentless pace, but you also know you can shrink transistors but you can't shrink atoms, and so we're pushing against the physical limits, and lithography is the first step of the miracles of semiconductor physics. We're pushing well beyond the limits of light and the ability to pattern these amazing things, and Vivek is working on a revolutionary way of doing computational lithography, and, you know, the work that you're doing today sets the foundation for, hopefully, the next jump, where semiconductor manufacturing is connected together in a really integral way. We select problems like the one that you're working on based on several things. The first thing is, it's surprising to hear this, but the easiest ones to succeed on are the ones that are the hardest to achieve, and the reason for that is because if nobody can do it, you just got to make sure that nobody else can do it. If it's hard for you but everybody else can do it, then that's a problem, and then you got to keep looking. But you have to find a problem that is hard for everybody, universally, everybody. And so, first of all, even understanding what problem is universally hard for everyone requires a skill in itself; you have to be informed. So let me just assume, first, that we are highly informed professionals. And so you choose a problem that is incredibly hard to do, and the reason why that's the easiest is because it gives you time to go learn it. In a lot of ways, if we chose time travel, for example, we're all in exactly the same spot, okay? And so by the time that any of us ever figure it out, you know, and there are some theories of