video_id | transcript_chunk
---|---
VhSGmVyKykg | First of all, in order to even understand what a problem is that is universally hard for everyone — that requires a skill in itself; you have to be informed. So let me just assume first that we are highly informed professionals. You choose a problem that is incredibly hard to do, and the reason why that's the easiest is because it gives you time to go learn it. In a lot of ways, if we chose time travel, for example, we're all in exactly the same spot, and so it's about by the time any of us ever figure it out. There are some theories of how you might be able to travel at the speed of light; time travel is harder, but speed-of-light travel I could actually imagine. And so the question is: did you choose a problem that is insanely hard to solve? Then the second: is it something that somehow you're destined to solve? That has to do with your own set of lenses — your perspective on the world, all of your life's preparation, your particular field of interest. Because of the collection of people that I have around me — for example, you guys — the selection of problems that I tend to go for is highly informed by the team that I have, and the team that I have surrounding me is highly informed by my own personal interests and my own self-selection, and before you know it we are incredibly good at this particular field. So that's the second characteristic. And the third is: independent of the pain and suffering that is guaranteed to come from solving a hard problem, one that's going to take a long time, you have to choose a problem that you're going to continue to love. And so I think about the problems that we selected — and I'll give you some examples of it in just a second. Computational lithography, for example: you've been working on it now for a solid three years. Anything? Ouch. At every moment we believe we will, isn't that right? Yeah — at every moment we believe we will, but at this moment we haven't shipped a thing. And that is in fact exactly it: we're on the right track, but we're not successful. |
VhSGmVyKykg | And the reason why we're not successful is because it's hard. You also know how many people are working on cuLitho in our company — it's coming up on hundreds, isn't that right? So here we are: hundreds of people working on something, coming up on some four years, we haven't shipped a thing, and we are absolutely convinced we will. That is that perfect circumstance. In fact, you could argue that that's long-term pain and suffering, but it's the joy that we derive from it. Every single week, first thing on Monday, I look forward to seeing some results that you published, and it's just enough progress to keep it going, isn't that right? Just enough progress to keep you going. And so that is innovation: you have this long-term dream that you believe can make a huge difference; every single week you're making some progress; there's always some setback; but you still believe it. So I think these three conditions are how we choose. What are some of the problems that we chose? It turns out the single best strategic decision we made was in 1993. In 1993 we realized we wanted to do two things. We wanted to accelerate computing — and most people don't remember this anymore, but people talk about this new computing framework that we've created called CUDA, C-U-D-A; the C part of it we added about 20 years ago, and the UDA was there when you joined, yeah? It was 1993, 1994. We invented a new way of accelerating I/O — a virtualized, accelerated I/O subsystem was invented in 1993, 1994 — and we stuck with it this whole time. This form of computing will be essential because |
VhSGmVyKykg | either we could solve problems that general-purpose computing can't solve — for example, computer graphics — or eventually general-purpose computing will be too inefficient; it will run out of steam. We've believed that for 30 years: general-purpose anything runs out of steam. A general-purpose car, a general-purpose plane, a general-purpose boat — why would we have a general-purpose everything? Over time, when a market matures and the demand for something becomes ubiquitous, you're going to have specialization. We have specialized everything: specialized televisions, specialized bicycles. So why wouldn't we have specialized computing? We believed that general purpose is not the right answer long term; it's the perfect answer short term. And we've believed this for that entire time. The first thing that we chose — we said, what problem can we choose that is simultaneously insanely hard to solve, meaning that it's a sustainable problem: the more we solve it, the bigger the market; the bigger the market, the more R&D we can fund; the greater the R&D, the even larger the markets we can address. We came to the conclusion that 3D graphics was that: it would take nearly forever to solve, and it had a small, fractional-percentage chance that it might become a very large market. The large market potential that we saw, that we chose, was influenced by our own surroundings at the time, which was video games. I think history would suggest that choosing 3D graphics and video games as the foundational market opportunity, the driver of R&D, was quite frankly the company's single greatest business decision. Now, over time we had to go create that market. Unfortunately, the problem with that business plan — if you go back to 1993 and read the business plan — is that you have accelerated computing, which nobody wanted, because everybody wanted central processing units. |
VhSGmVyKykg | At the time, Intel was front and center; everything was about Intel, everything was about CPUs. So we invented a model of computing that nobody wanted, for a new technology, 3D graphics, that had no ecosystem, for a market that didn't exist. Video games at the time — Electronic Arts had 14 employees; Sega was — Nintendo was almost out of business, as you guys remember. These three factors don't make for a very good business plan. I wrote the business plan anyway; I didn't know how to do it, but I tried, and nobody believed in the business plan, come to think of it. If not for my previous employer, who basically told Sequoia Capital and Sutter Hill that Jensen's a good kid, give him some money — that was, I think, the only virtue we had going, Chris and Curtis and I. And I'd worked with Andy Bechtolsheim — not directly, but indirectly — and Andy, the founder of Sun, gave the three of us really great, quite long recommendations. The business plan, I think, on the merits of any business plan that's written today, shouldn't have been funded, because if you have zero percent times zero percent times zero percent, that's as close to an absolute zero as you can get. But I think the benefit of all of those choices is that they created a company that was smart about investing in long-term things and smart about making money along the way while pursuing a long-term vision. For example, all of the pieces of technology that we're using for cuLitho, as you know, we're making money from today. And because the building blocks of cuLitho are profitable today, we could support cuLitho for as long as we shall live. |
VhSGmVyKykg | In the beginning of deep learning, because our basic architecture was able to do the computing, even if deep learning was going to take 20 years I would have funded it forever, and the reason for that is that NVIDIA's GPU had a day job. Before that, we put CUDA on top of our programmable shaders, and it crushed our gross margins. Go back in history and look at our stock price in the generation that CUDA was announced: our stock took a hit, and the reason for that was that I added this giant cost into our company that no customer valued. I still remember calling customers and introducing CUDA to them, and not one of them wanted it; they just wanted the old chip, faster and cheaper. But nonetheless, even if the customer didn't want it, so long as we were willing to accept lower gross margins, we could carry that technology to the marketplace. So I think the rest of it is really about selecting hard problems that take a long time, that you're essentially good at — but you have to have the skills to make money along the way. If you go back and chase down almost every initiative that we're currently working on, we're making some money along the way. That's really about skill; the rest of it is just resilience. Thank you. Thanks — it's funny that you mention that you raised me, because next month I finish 21 years at NVIDIA, so I can legally drink at the bar in Voyager. But over those 21 years I've seen you make a lot of gutsy calls, and, touching on a little bit of what you said, the one that really stands out is the second half of the 2000s, when the economy |
VhSGmVyKykg | wasn't doing great, and you were under a lot of pressure from investors to just focus on the gaming business — why are you shipping millions of dollars of silicon to customers that just don't care for it? And yet you showed 100% conviction in continuing to invest, both in operating expenses and in R&D. I have to say I myself was conflicted during those years about whether it made sense, because we had a pretty competitive gaming business at that time — it wasn't like we were just rolling through with our GPUs — so on the one side we were under pressure to make sure our GPUs for gaming were efficient, but on the other side you wanted to continue to invest. So take us back to that time and help us understand what was going through your mind as you were trying to push back against this demand to just focus on shareholder value. We are at all times driven by aspiration and desperation. Let me take that particular example, and I'll give you one more example first. There are three things that we did that profoundly changed how computation is done. Let me ignore the first one that I already mentioned — the zeroth one, really, which is accelerated computing; that's 1993. Twenty-five years ago we invented the programmable shader, and the idea there was that every single pixel becomes a program. Prior to that, every single pixel was just a texture lookup, a color lookup. We would add a program to it, so the program could reference something else, and before you know it the pixel could be shiny, bumpy — all kinds of interesting programs could be written on top of it. |
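The shift described here — a pixel's color going from a fixed texture lookup to the output of a small per-pixel program — can be sketched in plain Python. This is an illustrative toy, not real shader or NVIDIA code; the function names and the simple diffuse-lighting term are assumptions chosen to make the contrast concrete.

```python
def texture_lookup(texture, u, v):
    """Fixed-function pipeline: a pixel's color is just a table lookup."""
    h, w = len(texture), len(texture[0])
    return texture[int(v * (h - 1))][int(u * (w - 1))]

def programmable_shader(texture, u, v, normal, light_dir):
    """Programmable pipeline: a small program runs per pixel, so the color
    can depend on lighting, bumps -- anything you can compute."""
    base = texture_lookup(texture, u, v)
    # Lambertian diffuse term: N . L, clamped so back-facing light is dark
    ndotl = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * ndotl for c in base)

texture = [[(1.0, 0.0, 0.0)]]  # a 1x1 "red" texture
flat = texture_lookup(texture, 0.5, 0.5)
lit = programmable_shader(texture, 0.5, 0.5, (0, 0, 1), (0, 0, 1))
dark = programmable_shader(texture, 0.5, 0.5, (0, 0, 1), (0, 0, -1))
```

The fixed-function path can only ever return the stored color; the programmable path turns the same texel into "shiny" or "bumpy" results by computing with extra inputs — that per-pixel programmability is the breakthrough being described.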
VhSGmVyKykg | That first inspiration was about the fact that computer graphics, if it was going to be long-term sustainable, couldn't all look like a military flight simulator. It has to be artistic; there has to be some way of telling a story, so that you have all these different styles of video games. That was a giant breakthrough: programmable shaders. The second was general-purpose computation, and the inspiration there was: if the world was only beautiful but it was static, how would that possibly make for an interesting world? If we didn't bring physics processing into the world — waves and smoke and fire, the ability to simulate physics — we would just have a whole bunch of static-looking images. So I'm giving you the desperation view: our computer graphics world would run out of steam. And then recently, as you know, we introduced ray tracing, a new type of computer graphics that is inspired by physically based laws — basically, computer graphics is now light simulation. The type of imagery we can generate is much more beautiful, much more subtle, and unfortunately the computation is insanely hard. We came to the conclusion we had to do it, because otherwise the number of artists that it would take to make beautiful graphics was going to keep on scaling exponentially, and our industry would collapse under its own weight. Each one of these decisions I've described in desperation mode: if we don't do this... You have to convince yourself of that; otherwise you won't innovate. Does that make sense? So I just gave you the backwards way of explaining how we motivate ourselves to do it: by convincing yourself you will die, by convincing yourself you will perish. |
VhSGmVyKykg | And it's not hard to do that if you've been in the industry long enough, because you just have to tell the stories of other industries, or draw on your own life experience in your own industry: because they didn't innovate, they perished. So we have to go create that condition: if we don't change the entire paradigm of how we do things, we literally put ourselves out of business. Notice that ray tracing disrupted programmable-shading rasterization, of which we are the inventors. We came to the conclusion that if we invented this, it would literally disrupt everything we've ever done — and on the other hand, if somebody else did it, it would disrupt everything we've ever done. So you've got to come at it a whole lot of different ways, and you come to the conclusion: you have to innovate. The other way you do it is to recognize that this way of computation is not just for computer graphics — computer graphics is a domain of physics simulation. When you take a step back and see computer graphics that way, I am certain that ray tracing will be the next step of computational lithography. Our current methods are broadly approximate; however, the ability for us to do ray tracing means you can deal with those deep — whatever those words are that you use — the layers of metallization and the narrowness of the trenches. The trenches are so incredibly high that without ray tracing it's very difficult for us to figure out exactly how it's going to pattern. Ray tracing, of course, is a form of physics simulation, and computer graphics is a form of physics simulation. The reason we can simulate radars and lidars today is because we do ray tracing, and there are many other forms of physics that we can do because of ray tracing. |
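The generalization being made — that graphics rendering, lidar/radar simulation, and lithography modeling all reduce to tracing rays against geometry — rests on one core primitive: intersecting a ray with a surface. A minimal Python sketch of that primitive (illustrative only, not NVIDIA's implementation; the unit-length-direction assumption is noted in a comment):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the first sphere hit, or None.
    Solves |origin + t*direction - center|^2 = radius^2 for the nearest t > 0."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is assumed unit length, so a = 1
    if disc < 0:
        return None         # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray fired down the z-axis toward a unit sphere centered 5 units away:
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)   # hits at t = 4
miss = ray_sphere_hit((0, 0, 0), (0, 1, 0), (0, 0, 5), 1.0)  # misses
```

Whether `t` then feeds a lighting calculation (graphics), a time-of-flight estimate (lidar), or a light-propagation model through mask layers (lithography) is what changes between domains; the intersection kernel is shared.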
VhSGmVyKykg | Our generalization of what a GPU is — a physics simulation engine — was a great breakthrough, a great realization: to take a step back and say, we really simulate physics. And Geoff Hinton once said that deep learning is really inverse computer graphics. Let me prove the point right now. I want you guys to listen to my words — don't do anything but listen to my words — and apply your natural language understanding. I want you all to imagine a red Ferrari. Now, I didn't paint a picture, did I? I didn't do any imaging. I asked you to imagine a red Ferrari, and in your brain you did computer graphics: every one of you took "red" and "Ferrari" and turned it into an image. So all of you did computer graphics, and deep learning is in a lot of ways the inverse of that. So I think that generalizing the problem space that you're in gives you this new aperture that opens up your aspiration: if we do this, maybe we can make a real contribution there; if we do that, we can make a real contribution there. And going back to what we were talking about earlier, how we choose problems: we're choosing problems to solve that have great impact on the world, that are incredibly hard to solve, and that we believe we singularly are equipped to solve — which goes back to the aspiration point I'm making. When you open up the aperture of what our true potential is — what can we really do, what problems can we help to conquer, what new frontier can we push further — and you teach your company how to endure pain and suffering for decades on end... |
VhSGmVyKykg | What problem can't you solve? And so that's NVIDIA today: a combination of those core values. Jensen, we have 10 minutes left and you've given us a lot of great nuggets already — would you like to leave the audience with some closing thoughts? Well, you guys are horrible conversationalists. I guess let me ask you: we've worked on a whole bunch of things together over the last 20 years. What are the characteristics of our company — the characteristics you most value — that allow you to do the groundbreaking work that you do? So you guys know Samir: Samir's organization is NVIDIA's VLSI foundation, and he's the person that gives us the courage to go build these incredibly large, incredibly high skyscrapers. We're not talking about 150 stories high; we're talking, "Hi, I'd like to build a skyscraper, and it's 2,000 stories high, 20,000 stories high." Without his confidence in doing it, we would never undertake those kinds of things. These complex GPUs — just to put it in perspective, probably ten thousand human-years go into every generation. All of us come together, we build this thing, and we hit tape-out. And when we hit tape-out, we assume it's perfect. We don't assume it's broken and hope it's perfect; we assume it's perfect. And the reason Samir knows that I assume it's perfect is because when I tape out the chip with Samir, I go to production. You don't unlaunch rockets; you don't un-tape-out chips. When we tape out, we tape out. So: what are the properties and characteristics of our company that |
VhSGmVyKykg | allow you to do that, to have that confidence? I think you've actually framed our company as a learning machine — it's interesting that we're building learning machines, but we ourselves as an organization are a learning machine. A very big part of it is the fact that you were able to go from founding a three-person startup to running a trillion-dollar company. Having a steady hand — it's underappreciated how important that is. I am certain that if it weren't for you, the founder, lasting through 2007, '08, '09 and continuing to invest in what has now become the foundational technology for everything we do today, it would not be possible. But like I said, even I myself was not sure whether it made sense to continue down that path, because high-performance computing was only bringing in maybe $200 to $250 million a quarter, which was not enough to sustain the R&D that it needed. But for every single mistake, no matter how small, we have this unrelenting focus to make sure that we understand why it happened and how we're going to make sure it doesn't happen again. When you're building half-a-trillion-transistor GPUs, it's not possible to build those if you have a 0.1 percent probability of failure, because that will guarantee the chip is going to come back and not work. Your probability of failure needs to be 0.00001 percent. Yeah — and that's really what we do: every generation, every tape-out, even if it is successful, if you went into our after-action reviews you would think the chip was a total failure, based on the discussions that go on there, because we take even near misses very seriously. |
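The yield arithmetic behind that claim can be sketched directly: with many independent opportunities to fail, even a tiny per-step failure probability compounds into near-certain chip failure. The step count below is an assumed illustration, not NVIDIA data.

```python
def chip_works(p_fail_per_step, n_steps):
    """Probability that all n independent steps succeed: (1 - p)^n."""
    return (1.0 - p_fail_per_step) ** n_steps

# With 1,000 independent failure opportunities (an assumed scale):
p_loose = chip_works(0.001, 1000)  # 0.1% per step  -> only ~37% of chips work
p_tight = chip_works(1e-7, 1000)   # 0.00001% per step -> ~99.99% of chips work
```

The design consequence: as the number of things that can go wrong grows with each generation, the tolerable per-step failure rate must shrink proportionally just to keep overall yield constant — which is why near misses get treated as failures in review.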
VhSGmVyKykg | And I do think that's not just VLSI; it's every single organization in the company, even go-to-market — Raj will do an after-action review: what went right, what went wrong. Probably after this event we'll do one. So I think that's the culture you established from the very beginning, of openness and intellectual honesty. Yeah — one of the things about learning, and this is a personal characteristic of mine and also a characteristic of our company: we always remember the learnings; we always forget the pain and suffering. And this is really important — I'm saying something really important: you have to let the pain and suffering go. You can't be resilient if all you remember is the pain and suffering and not the learnings. Learn what you learn, celebrate the learnings, and then move on. In a lot of ways I just don't remember the past. People ask me about the past and I don't remember it, and the reason is that I just remember all the learnings. So I think that's a really important skill. The other characteristic you reminded me of is a kind of joy in confronting pain and suffering. I don't know — it must be the feeling that marathoners go through. I used to use this phrase — I use it less now, but I used to say it almost every single day: the entertainment value of the pain and suffering can't be understated. And the fact is, you're the newest on this team, and you're working on a really extraordinary thing. One of the challenges you had is that you were a new employee and you were assigned a new |
VhSGmVyKykg | know I don't know it must be the the the the feeling that marathoners go through um but but uh we we really I think I think there's a I used to use this phrase I use it less now and you know I used to say this almost every single day is the entertainment value entertainment value of the pain and suffering can't be understated and and the fact you're you're uh the newest on on this on this team and and you're working on on a really extraordinary thing one of the one of the one of the challenges uh that you have is is you were you were a new employee and you were assigned a new problem our company is not in the semiconductor manufacturing business and and uh cool litho is is spot on right in the center of the semiconductor manufacturing um process and and uh how how was it on the one hand bringing this capability into our company working on this new Endeavor on the one hand and on the other hand tapping into the the tapestry of resources in the company that you know harness all of these expertise that come together to solve this problem in a way that nobody's ever solved before um I think it goes back to something said and something I also mentioned in my question which is this audacious goal I think recognizing which problem to solve and setting a goal that nobody believes is possible right I you know I've worked in this area before I know the other people who worked in this narrow Arcane field core competition lithography and doing just a 2X seem nearly impossible now be burdened by the general purpose computing but for you frankly as I think I've said to you 2x was hard for Jensen to say that we'll do an order of magnitude 10x was just we didn't know how to how to even come close but somehow believing that it is possible I think is is a very big deal I think believing that something can be done is very very important and that was sort of frankly all gents I've I've worked at other places and I know what the Alternatives can be like and to have a 
VhSGmVyKykg | leader who's technically that engaged. I mean, on Saturday morning I would get papers on inverse lithography from Jensen. I said, wow, that's just unheard of, right? Ask me later about all the technical details Jensen has injected. The other thing is this very flat organization that has been created, which allows you to draw on the expertise of many other people. At places where I worked before, there was a lot of talk of transparency and fluidity and no boundaries, but in practice, seeing it like I've seen it at Nvidia is rare. So, I think I said three things: a leader who can recognize the problems of the future, a leader who has the appetite for all the technical complexity of that problem, and then an organization which is truly boundaryless, rather than just paying lip service to that. You know, I think a company's organization is the machinery that allows it to do its work, and I find that a lot of companies organize in exactly the same way even though they do very different things. The way that you organize a fried chicken restaurant and the way you organize a chicken biryani restaurant probably ought to be different, and the reason for that is you're making different things. If you're building chips versus what Nvidia does, which is building a computing stack, it ought to be different. And so I think the architecture of our company is designed to do that. The other thing is, our company's architecture is designed to empower leaders, and therefore it makes some sense that at the highest level of the company, or the lowest level, however you want to think about it, the number of direct reports to me should be the highest, and the reason for that is you're
VhSGmVyKykg | the most senior and you don't need as much career advice; you don't need as much hand-holding. The tapering of organizations strikes me as wrong at the very top, or at the very bottom, however you want to think about the organization. And then the second thing I would say is that my greatest gift is actually being surrounded by amazing people like yourselves, and all of you are teaching me about your domains of expertise. As you know, I'm a generous learner, and so I try to learn as much from you all as possible in order to form the intuition about what's possible. I have some background, and all of you have your backgrounds, and together we figure out intuitively what the first principles are, what the governing dynamics are, and what, if you will, the speed of light of what's achievable is. And that, I think, is how we set goals in the company: instead of letting some external third party decide, we have to decide from first principles what we should achieve. Okay, well, I've exhausted my allotted time, and there's a line of people, so I'm delighted to take as many questions as I'm allowed to. I don't want to overstay my welcome. I was hoping that somebody would ask me about my last week in India; maybe I'll get that question. So let's see, we have five minutes, come on. Okay, so I was in Gurugram, by the way, which has grown incredibly since the last time I was there. I was in Delhi to see Modi ji, and I spent almost an hour and a half with him. He's incredible. The last time I saw him was five years ago, pre-pandemic, and I was there to address his cabinet on artificial intelligence
VhSGmVyKykg | and he was very, very inquisitive about that. I told him several things. May I do this? Do I have time to do this? Absolutely. The first thing I told him is that India has, what is it, 23 languages and 2,500 different dialects. You possess your own data. You have an indigenous market, a giant indigenous market. You should not export the data and then import the AI. What India should do is import the technology of AI, but build your own AI and export AI. And what he said was, "Jensen, that reminds me of something I told some farmers once: let's not export grain, let's export bread." And that makes a lot of sense: don't export the raw material. India is going to be a giant market for AI. The second thing we spoke about is the tractability of re-skilling the entire IT industry of India. This is the reason why I met with Nandan and why I met with Chandra; as you know, they revolutionized how IT is done, and re-skilling it is way harder than AI, and I explained that to them. And then the third part is to formulate partnerships with large companies in India so that we could together revolutionize the industry again, and build AI in India, for India. That was really the mission of the trip. Thank you for sharing that, that's awesome, Jensen. It's not easy to get you here, so since you're here, let's try to take some questions from our audience. We don't have a lot of time, folks, so be selective about your questions, and we might have to cut it off at some point. So let's go. Yeah, technology foundation
VhSGmVyKykg | secretary. I have been hovering around Walsh Avenue, and I am very grateful that you are taking the question. Coming to the point: you met Modi ji, you met Indian professionals in India, and of course all the Indians here. The point is, you have CUDA, Compute Unified Device Architecture; you do have a unified architecture, which you described. What is your intuition, and how do you look at the National Mission on Quantum Technologies and Applications, which we have in India, called NM-QTA? What's your intuition? Is this the future? People say quantum computing is going to solve all of computing. Quantum computing is quite specialized in the type of problems it can solve. As you know, quantum is very good at compute-intensive problems that are not very data-intensive, which is the reason why biology is an example, cryptography is an example, but data processing is not a good example for a quantum computer, and computer graphics is not a good example for quantum computing. So, how to arrange 500 guests at an Indian wedding: that would be a perfect problem for quantum computing. Now, that problem is either for quantum computers or the mother-in-law; it's one or the other. And so the future of computing will likely have a quantum accelerator for the few problems that quantum is going to be quite specialized in, but even then, it's going to take a couple of decades for us to get there. In the meantime we use CUDA. You probably know about cuQuantum, where we're working with the quantum industry to architect classical-quantum
VhSGmVyKykg | systems. And in order to design the fastest computer in the future, you have to have the fastest computer today, and CUDA with GPUs is the fastest way to do that. Thank you. Let's have one question, please. Thank you. Hello, my name is Sanjeev Bode, I work for Infosys, and thank you for meeting Nandan; he has always been talking about you. In fact, I think apart from Bollywood stars, you are the one who managed to wear a leather jacket in Indian humid conditions; I don't know how you manage that. In fact, I was in Italy just two days ago looking at Da Vinci, and Jim Cramer, my guru, talks about you as a Da Vinci, so it's a pleasure really to see you in flesh and blood. Can I have the question, please? You might be stuck here for a whole day. I'm glad to be here. The question: I'm a social media influencer, and ever since Nandan spoke about your interactions with him, there have been a lot of questions saying, will Nvidia and companies like yours, will AI, take my job? What is the message you would like to give, not only to India, or Bharat, but to the rest of the world? There's a whole section which is drinking the Kool-Aid that AI is going to destroy the future, which might not be true, but what is your take? I would like to hear that. AI will definitely reshape the future of work, there's no question about it. I'm going to use a lot of AI; Samir already uses a ton of AI, and Vivek uses AI. All of us are going to use AI, and none of us is afraid at all that we're going to lose our job to the AI. We're all afraid that we're going to lose our job to somebody who uses AI. And so that's really the answer. I think the first thing to realize is you're not going to lose your job to AI; you're going to lose your job to somebody who uses AI. The second thing is, we're all
VhSGmVyKykg | going to be augmented by AI. The future chips that Samir builds will be impossible without AI; in fact, the current chips we build are impossible without it. We just don't talk about it, but AI is all over our company and we can't do our jobs without it, and we're not going to be unique; everybody's going to be the same way. It's going to augment and turbocharge the work that we do. Now, ultimately the question is: is it good for the world for industries to be more productive? Now, I just asked a very sensible question, and people usually go all the way to the limit: if every industry is infinitely productive, meaning no humans are necessary, then we're all going to lose our jobs and, you know, we'll Star Trek around the universe. However, so long as we realize we're not going to be infinitely productive, but just more productive, then more productivity helps. Who says, "guess what, we just want less productivity; we're going way too fast and we're doing things way too well"? Now, I would say that there are places where productivity may not be helpful, for example raising a child, or nurturing a child; I could imagine that productivity is not helpful there. And enjoying your garden: a robot doing it is not what you're looking for. I can imagine a lot of things like that, but most industries are looking for more productivity, more safety. Two more questions, please. Thanks for being here; I have two questions, but I'll stick with one. So, you made the semiconductor industry, for the record, sexy again, and especially since the movement from predictive AI to generative AI, all the
VhSGmVyKykg | hyperscalers are now changing to GPUs. Do you think that this is a situation where the first movers in hardware and software will dominate, or is there a more sustainable story for the hardware? Yeah, I appreciate the question. The VCs tell me that we were singularly responsible for them investing in chip companies again, and I think that's an overstatement, but I would say that AI is a seriously groundbreaking thing, and for several reasons. One, the types of applications that we can now go and solve were unimaginable in the past; those are the things that we were talking about earlier. But at the foundational level, the simple logic is this: we used to program our computers with either programming languages, like C++ and Python, or domain-specific programming languages, like SQL; CUDA, for example, is a domain-specific programming language. And now we're programming the computer using human language, and that's another way of saying that everybody is going to be a programmer. This is one of the points I made with Modi ji: this is an incredibly important moment in time for India, and the reason is the people that were left behind by the last technology divide. You know that computer science has created a larger technology divide, not reduced it, and the reason for that is the number of people who know how to program C++ is really small compared to the population of the world, but the number of people who can chat is basically 100 percent. And as a result, everybody knows how to program a computer now. So what does the programming model look like in the future? In the future, you're probably starting
VhSGmVyKykg | to hear about these things called LangChain and LangFlow and things like that. Basically, what it is, is a chaining of AI models to create an application. Hey, what is the intelligence that I'm looking for here? What kind of problem do I want to solve, and what intelligence do I have, so that I can go get an AI model? Maybe I could chain it with another intelligence. What am I doing? I'm assembling a team. I'm literally assembling a team, a team of software. I connect them together graphically; I just take your output, connect it to that, and the team members know how to communicate with each other. If anybody can communicate with ChatGPT, then any AI model can communicate with another AI model. They'll figure it out; they'll negotiate among themselves and figure out the best way to communicate, and maybe they'll even create their own language. This way of assembling software was unimaginable before. Could you imagine that in the future you don't have to program software, you just assemble AI teams? That's the future. So the programming model of the world has changed, and the type of application has changed. And another way of thinking about computing, and why this matters, is this: for the longest time the hardware was general-purpose but the software was really brittle. You have this application called Excel, you have another application called Word, you have another; these are singular applications. Now, all of a sudden, the hardware is specialized, and because the hardware is specialized, it's insanely fast. In the last 10 years we accelerated deep learning by a million x, a million times; that is so much more than Moore's law (which at its historical pace of roughly a doubling every year and a half would give about 100x in a decade), it's unbelievable. And so now the hardware is incredibly fast, but the software on top of it can now be general. Does that make sense? So the programming abstraction has flipped on its head, and because of that, computer scientists are so incredibly
VhSGmVyKykg | excited about the future of computing; it's opened up a whole bunch of new possibilities. There's only one more question, please. Okay. So I was told that you have to make the trip work, because we provide you every single tool to make things work. Yeah, you're absolutely right. Okay, this is the last question, folks, sorry. You are such a visionary, of course, so if we were talking to you three years or five years down the road, or if you were at this conference then, what is an application you can imagine, that you can share with us, that would be the most sophisticated thing we could do? The two applications that I mentioned right away: one of them is protein engineering, engineering biology, and the other one is the digitalization of heavy industry, to bring AI to factories, to bring AI to buildings, to bring software to everything that we do. You design everything in software languages; I don't mean CAD, I mean software languages. Through software languages it generates a building, through software languages it generates a factory, through software languages it generates, you know, everything. And when you do that, the chassis of the car won't be over-engineered. When you do that, you generate, of course, the shape and the form; you might be human-inspired, but the rest of the generation, just like logic synthesis, if you will, the rest of the things that affect rebar and cement, will be auto-generated by AI. And that generative process would result in a structure that is lighter and uses less waste. So I've now described two problems that were impossible to make software-defined in the past, because we simply didn't have representations of them, but in the future we will be able to describe them in languages and we will be able to generate them. Okay, thank you, thank you, thank you.
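The "assembling a team of AI models" idea described above can be sketched in a few lines of plain Python. This is an illustrative sketch, not the actual LangChain API: `call_model` is a hypothetical stand-in for any hosted LLM call, and the model names and prompts are made up for the example.

```python
# Minimal sketch of chaining AI models into an "assembled team".
# `call_model` is a placeholder for a real model call (e.g. an HTTP
# request to an LLM endpoint); here it just echoes deterministically.

def call_model(name: str, prompt: str) -> str:
    """Stand-in for invoking a hosted model named `name` on `prompt`."""
    return f"[{name} output for: {prompt}]"

def chain(steps, user_request: str) -> str:
    """Pipe each model's output into the next, like connecting team members."""
    text = user_request
    for model_name, instruction in steps:
        text = call_model(model_name, f"{instruction}\n\nInput:\n{text}")
    return text

# A two-member "team": one model drafts a plan, another critiques it.
pipeline = [
    ("drafter", "Write a short plan for the request."),
    ("reviewer", "Critique and tighten the plan."),
]

result = chain(pipeline, "Organize seating for 500 wedding guests")
print(result)
```

The point of the sketch is only the shape of the composition: each stage's output becomes the next stage's input, so new "team members" can be spliced in without changing the surrounding code.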
Ckz8XA2hW84 | Ilya, unbelievable. Today is the day after GPT-4. It's great to have you here; I'm delighted to have you. I've known you a long time, and just going through my mental memory of the time that I've known you and the seminal work that you have done: starting at the University of Toronto, the co-invention of AlexNet with Alex and Geoff Hinton, which led to the Big Bang of modern artificial intelligence; your career that took you out here to the Bay Area; the founding of OpenAI; GPT-1, 2, 3; and then, of course, ChatGPT, the AI heard around the world. This is the incredible resume of a young computer scientist, you know, an entire community and industry in awe of your achievements. I just want to go back to the beginning and ask you about deep learning. What was your intuition around deep learning? Why did you know that it was going to work? Did you have any intuition that it was going to lead to this kind of success? Okay, well, first of all, thank you so much for all the kind words. A lot has changed thanks to the incredible power of deep learning. My personal starting point: I was interested in artificial intelligence for a whole variety of reasons, starting from an intuitive understanding and appreciation of its impact, and I also had a lot of curiosity about what is consciousness, what is the human experience, and it felt like progress in artificial intelligence would help with that. The next step was, well, back then I was starting out, in 2002, 2003, and it seemed like learning is the thing that humans can do, that people can do, that computers can't do at all. In 2003, 2002, computers could not learn anything, and it wasn't even clear that it was possible in theory. And so I thought that making progress in learning, in artificial learning, in machine learning, would lead to the greatest progress in AI. And then I started to look around for what was out there, and nothing seemed too promising. But to my great luck, Geoff Hinton was a
Ckz8XA2hW84 | professor at my university, and I was able to find him, and he was working on neural networks, and it immediately made sense, because neural networks had the property that, through learning, we are automatically programming parallel computers. Back then the parallel computers were small, but the promise was: if you could somehow figure out how learning in neural networks works, then you can program small parallel computers from data. And it was also similar enough to the brain, and the brain works, so it's like you had these several factors going for it. Now, it wasn't clear how to get it to work, but of all the things that existed, it seemed like it had by far the greatest long-term promise. At the time you first started working with deep learning and neural networks, what was the scale of the network, and what was the scale of computing at that moment in time? What was it like? An interesting thing to note is that the importance of scale wasn't realized back then. People would just train neural networks with, like, 50 neurons, 100 neurons; several hundred neurons would be a big neural network. A million parameters would be considered very large. We would run our models on unoptimized CPU code, because we were a bunch of researchers; we didn't know about BLAS. We used Matlab, and Matlab was optimized, and we would just experiment: what is even the right question to ask, you know? So you try to just find interesting phenomena, interesting observations. You can do this small thing, you can do that small thing. Geoff Hinton was really excited about training neural nets on small little digits, both for classification and also, as he was very interested in generating them, so the beginnings of generative models were right there. But the question is, okay, so you've got all this cool stuff floating around: what really gets traction?
Ckz8XA2hW84 | It wasn't obvious that this was the right question back then, but in hindsight, that turned out to be the right question. Now, the year of AlexNet was 2012. You and Alex were working on AlexNet for some time before then. At what point was it clear to you that you wanted to build a computer-vision-oriented neural network, that ImageNet was the right data set to go for, and to somehow go for the computer vision contest? Yeah, so I can talk about the context there. I think probably two years before that, it became clear to me that supervised learning is what was going to get us the traction, and I can explain precisely why. It wasn't just an intuition; it was, I would argue, an irrefutable argument, which went like this: if your neural network is deep and large, then it could be configured to solve a hard task. So that's the key phrase: deep and large. People weren't looking at large neural networks; people were maybe studying a little bit of depth in neural networks, but most of the machine learning field wasn't even looking at neural networks at all. They were looking at all kinds of Bayesian models and kernel methods, which are theoretically elegant methods that have the property that they actually can't represent a good solution no matter how you configure them, whereas the large and deep neural network can represent a good solution to the problem. To find the good solution you need a big data set, and a lot of compute to actually do the work. We had also done advance work; we had worked on optimization for a little bit. It was clear that optimization was a bottleneck, and there was a breakthrough by another grad student in Geoff Hinton's lab called James Martens, who came up with an optimization method, different from the ones we're using now, some second-order method. The point about it is that it proved that we could train those neural networks, because before, we didn't even know we could train them. So if you can train
Ckz8XA2hW84 | them, you make it big, you find the data, and you will succeed. So then the next question is, well, what data? And the ImageNet data set, back then, seemed like this unbelievably difficult data set, but it was clear that if we were to train a large convolutional neural network on this data set, it must succeed, if you just can have the compute. And right at that time, GPUs came along. You and I, our paths intersected, and somehow you had the observation that a GPU, and at that time we were a couple of generations into CUDA GPUs, I think it was the GTX 580 generation, you had the insight that the GPU could actually be useful for training your neural network models. How did that day start? You never told me about that moment; how did that day start? Yeah, so, you know, the GPUs appeared in our Toronto lab thanks to Geoff, and he said, we should try these GPUs. We started trying and experimenting with them, and it was a lot of fun, but it was unclear what to use them for exactly: where are you going to get the real traction? But then, with the existence of the ImageNet data set, it was also very clear that the convolutional neural network is such a great fit for the GPU, so it should be possible to make it go unbelievably fast, and therefore train something which would be completely unprecedented in terms of its size. And that's how it happened, and, you know, very fortunately, Alex Krizhevsky really loved programming the GPU, and he was able to do it; he was able to program really fast convolutional kernels, and then train the neural net on the ImageNet data set, and that led to the result. It shocked the world. This shocked the world. It broke the record of computer vision by such a wide margin that it was a clear discontinuity. Yeah, and I
would say it's not just like there is another bit of context there it's not so much like when you say break the record there is an important it's like I think there's a different way to phrase it it's that that data set was so obviously hard and so obviously outside of reach of anything people are making progress with some classical techniques and they were actually doing something but this thing was so much better on a data set which was so obviously hard it was |
not just some competition; it was a competition which, unlike an average benchmark back in the day, was so obviously difficult, so obviously out of reach, and so obviously had the property that if you did a good job, that would be amazing.

The Big Bang of AI. Fast forward to now: you came out to the Valley, you started OpenAI with some friends, and you're the chief scientist now. What was the first initial idea about what to work on at OpenAI? Because you guys worked on several things, and some of the trails of inventions and work, you could see, led up to the ChatGPT moment. What were the initial inspirations? How would you approach intelligence from that moment, such that it led to this?

Yeah, so obviously when we started, it wasn't 100% clear how to proceed. And the field was also very different compared to the way it is right now. Right now we are already used to these amazing artifacts, these amazing neural nets that are doing incredible things, and everyone is so excited. But back in 2015, early 2016, when we were starting out, the whole thing seemed pretty crazy. There were so many fewer researchers, maybe between a hundred and a thousand times fewer people in the field compared to now. Back then you had like 100 people, most of them working at Google slash DeepMind, and that was that. And then there were people picking up the skills, but it was very, very scarce, very rare still.

And we had two big initial ideas at the start of OpenAI that had a lot of staying power; they stayed with us to this day, and I'll describe them right now. The first big idea that we had, one which I was especially excited about very early on, is the idea of unsupervised learning through compression. Some context: today we take it for granted that unsupervised learning is this easy thing, and you just pre-train on everything and it all does exactly as you'd
expect. In 2016, unsupervised learning was an unsolved problem in machine learning that no one had any insight, any clue, as to what to do about. Yann LeCun would go around and give talks saying that you have this grand challenge in unsupervised learning. And I really believed that really good compression of the data would lead to unsupervised learning. Now, compression is not language that's commonly used to describe what is really being done, until recently, when suddenly it became apparent to many people that those GPTs actually compress the training data. You may recall the Ted Chiang New York Times article which also alluded to this. But there is a real mathematical sense in which training these autoregressive generative models compresses the data, and intuitively you can see why that should work: if you compress the data really well, you must extract all the hidden secrets which exist in it. Therefore, that is the key. So that was the first idea that we were really excited about, and that led to quite a few works in OpenAI, to the sentiment neuron, which I'll mention very briefly. This work might not be well known outside of the machine learning field, but it was very influential, especially in our thinking. The result there was that when you train a neural network, and back then it was not a Transformer, it was before the Transformer, a small recurrent neural network, an LSTM,

Sequence work you've done. I mean, some of the work that you've done yourself.

Yeah, so the same LSTM with a few twists, trained to predict the next token in Amazon reviews, the next character. We discovered that if you predict the next character well enough, there will be a neuron inside that LSTM that corresponds to its sentiment. So that was really cool, because it showed some traction for unsupervised learning, and it validated the idea that really good next-character prediction, next-something prediction,

Compression.

Yeah, has the property that it discovers the secrets in the data.
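The compression framing can be made concrete with a toy sketch (nothing here is the actual model from the sentiment-neuron work; a character-level bigram model stands in for the LSTM): the average negative log2-probability a predictor assigns to the next character is exactly the number of bits per character an ideal arithmetic coder would need, so better next-character prediction literally is better compression.

```python
import math
from collections import Counter, defaultdict

def bits_per_char(text, probs):
    """Average -log2 p(next char | prev char): the coded size per character."""
    total = 0.0
    for prev, nxt in zip(text, text[1:]):
        total += -math.log2(probs[prev][nxt])
    return total / (len(text) - 1)

def bigram_probs(text, alphabet, alpha=1.0):
    """Laplace-smoothed next-character distribution estimated from `text`."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return {
        p: {n: (counts[p][n] + alpha) / (sum(counts[p].values()) + alpha * len(alphabet))
            for n in alphabet}
        for p in alphabet
    }

# A tiny made-up "review" corpus, purely for illustration.
corpus = "the product works well. the product is great. " * 50
alphabet = sorted(set(corpus))

# A uniform "model" (no learning at all) vs. the learned bigram model.
uniform = {p: {n: 1 / len(alphabet) for n in alphabet} for p in alphabet}
learned = bigram_probs(corpus, alphabet)

print(bits_per_char(corpus, uniform))  # log2(|alphabet|): no compression
print(bits_per_char(corpus, learned))  # much lower: prediction = compression
```

The gap between the two numbers is exactly the compression won by learning to predict; driving that number down is the same objective the GPTs optimize, just at an enormously larger scale.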
That's what we see with these GPT models, right? You train, and people say it's just statistical correlation. I mean, at this point it should be so clear to anyone. That observation also, for me, intuitively opened up the whole world of where to get the data for unsupervised learning, because I do have a whole lot of data: if I could just make you predict the next character, and I know what the ground truth is, I know what the answer is, I could train a neural network model with that. So that observation, and masking, and other approaches, opened my mind about where the world would get all the data for unsupervised learning.

Well, I would phrase it a little differently. I would say that with unsupervised learning, the hard part has been less around where you get the data from, though that part is there as well, especially now. It was more about why you should do it in the first place, why you should bother. The hard part was to realize that training these neural nets to predict the next token is a worthwhile goal at all.

That they would learn a representation, that they would be able to understand.

That's right, that it would be useful.

Grammar, and...

Yeah, but to actually... it just wasn't obvious, right? So people weren't doing it. But the sentiment neuron work, and, you know, I want to call out Alec Radford as a person who really was responsible for many of the advances there; the sentiment neuron, this was before GPT-1, it was the precursor to GPT-1, and it influenced our thinking a lot. Then the Transformer came out, and we immediately went, oh my God, this is the thing, and we trained GPT-1.

Now along the way, you've always believed that scaling would improve the performance of these models.

Yes.

Larger networks, deeper networks, more training data would scale that. There was a very important paper that OpenAI wrote about the scaling laws: the relationship between the loss and the size of the model and the size of the data set.
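The scaling-laws work found that the loss falls off as a power law in model size, roughly L(N) = (Nc / N)^alpha. A minimal sketch of how such an exponent is read off a log-log plot; the data below is synthetic and noise-free, and the constants are merely illustrative, not a real fit:

```python
import math

# Synthetic loss values following L(N) = (Nc / N) ** alpha exactly,
# just to show how the exponent is recovered; real training curves are noisy.
Nc, alpha = 8.8e13, 0.076           # illustrative constants only
sizes = [1e6, 1e7, 1e8, 1e9, 1e10]  # model parameter counts N
losses = [(Nc / n) ** alpha for n in sizes]

# On a log-log plot a power law is a straight line, so fit its slope.
xs = [math.log(n) for n in sizes]
ys = [math.log(l) for l in losses]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(-slope)  # recovers alpha (about 0.076)
```

The practical value of such a fit is extrapolation: it lets you predict the loss of a model you have not trained yet from a handful of cheaper runs.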
When Transformers came out, it gave us the opportunity to train very large models in a very reasonable amount of time. On the intuition about the scaling laws, the size of models and data, and your journey of GPT-1, 2, 3: which came first? Did you see the evidence of GPT-1 through 3 first, or was it the intuition about the scaling law first?

The intuition. The way I'd phrase it is that I had a very strong belief that bigger is better, and that one of the goals that we had at OpenAI was to figure out how to use scale correctly. There was a lot of belief at OpenAI about scale from the very beginning. The question is what to use it for precisely. Because I'll mention, right now we're talking about the GPTs, but there's another very important line of work which I haven't mentioned, the second big idea, and I think now is a good time to make a detour, and that's reinforcement learning. That clearly seems important as well. What do you do with it?

So the first really big project that was done inside OpenAI was our effort at solving a real-time strategy game. And for context, a real-time strategy game is like a competitive sport: you need to be smart, you need to have a quick reaction time, there's teamwork, and you're competing against another team. It's pretty involved, and there is a whole competitive league for that game. The game is called Dota 2. So we trained a reinforcement learning agent to play against itself, with the goal of reaching a level so that it could compete against the best players in the world. And that was a major undertaking as well; it was a very different line of work, and it was reinforcement learning.
Yeah, I remember the day that you guys announced that work. And this, by the way, is what I was asking about earlier: there's a large body of work that has come out of OpenAI, and some of it seemed like detours, but in fact, as you're explaining now, they may have been seemingly detours that really led up to some of the important work that we're now talking about, ChatGPT.

Yeah, I mean, there has been real convergence, where the GPTs produce the foundation, and the reinforcement learning from Dota morphed into reinforcement learning from human feedback.

That's right, and that combination gave us ChatGPT. You know, there's a misunderstanding that ChatGPT is in itself just one giant large language model; there's a system around it that's fairly complicated. Could you explain briefly, for the audience, the fine-tuning, the reinforcement learning, the various surrounding systems that allow you to keep it on rails, give it knowledge, and so on and so forth?

Yeah, I can. The way to think about it is that when we train a large neural network to accurately predict the next word in lots of different texts from the internet, what we are doing is learning a world model. It may look, on the surface, like we are just learning statistical correlations in text, but it turns out that to just learn the statistical correlations in text, to compress them really well, what the neural network learns is some representation of the process that produced the text. This text is actually a projection of the world: there is a world out there, and it has a projection on this text. And so what the neural network is learning is more and more aspects of the world, of people, of the human condition:
their hopes, dreams, and motivations, their interactions, and the situations that we are in. The neural network learns a compressed, abstract, usable representation of that. This is what's being learned from accurately predicting the next word. And furthermore, the more accurate you are at predicting the next word, the higher the fidelity, the more resolution you get in this process. So that's what the pre-training stage does. But what this does not do is specify the desired behavior that we wish our neural network to exhibit. You see, a language model, what it really tries to do is answer the following question: if I had some random piece of text on the internet, which starts with some prefix, some prompt, what will it complete to, if you just randomly ended up on some text from the internet? But this is different from, well, I want to have an assistant which will be truthful, which will be helpful, which will follow certain rules and not violate them. That requires additional training. This is where the fine-tuning and the reinforcement learning from human teachers, and other forms of AI assistance, come in. It's not just reinforcement learning from human teachers; it's also reinforcement learning from human and AI collaboration. Our teachers are working together with an AI to teach our AI to behave. But here we are not teaching it new knowledge; that is not what's happening. We are teaching it, we are communicating with it, we are communicating to it what it is that we want it to be. And this process, the second stage, is also extremely important. The better we do the second stage, the more useful, the more reliable this neural network will be. So the second stage is extremely important too, in addition to the first stage of: learn everything, learn as much as you can about the world from the projection of the world, which is text.
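One published recipe for this second stage (the InstructGPT-style pipeline) starts by fitting a reward model to human preference comparisons; its heart is a simple pairwise loss, -log sigmoid(r_chosen - r_rejected). A self-contained toy sketch, with a linear reward model and made-up feature vectors standing in for responses:

```python
import math, random

random.seed(0)

def reward(w, x):
    """Linear reward model: the score this model assigns to a response x."""
    return sum(wi * xi for wi, xi in zip(w, x))

def pairwise_loss(w, chosen, rejected):
    """-log sigmoid(r_chosen - r_rejected): pushes chosen above rejected."""
    d = reward(w, chosen) - reward(w, rejected)
    return math.log(1 + math.exp(-d))

# Toy preference data: the labeler always prefers the response whose first
# feature (a stand-in for "helpfulness") is larger.
data = []
for _ in range(200):
    a = [random.random(), random.random()]
    b = [random.random(), random.random()]
    chosen, rejected = (a, b) if a[0] > b[0] else (b, a)
    data.append((chosen, rejected))

w = [0.0, 0.0]
lr = 0.5
for _ in range(100):                       # plain full-batch gradient descent
    grad = [0.0, 0.0]
    for chosen, rejected in data:
        d = reward(w, chosen) - reward(w, rejected)
        s = 1 / (1 + math.exp(d))          # sigmoid(-d) = -dLoss/dd
        for i in range(2):
            grad[i] += -s * (chosen[i] - rejected[i])
    for i in range(2):
        w[i] -= lr * grad[i] / len(data)

# The learned reward now ranks a clearly better response above a worse one.
print(reward(w, [0.9, 0.5]) > reward(w, [0.1, 0.5]))  # True
```

In the full pipeline this learned reward then steers the language model itself via reinforcement learning; the sketch stops at the reward model, which is where the human preferences enter.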
Now, you could fine-tune it, you could instruct it to perform certain things. Can you instruct it to not perform certain things, so that you can give it guardrails to avoid these types of behavior, give it some kind of a bounding box, so that it doesn't wander out of that bounding box and perform things that are unsafe or otherwise?

Yeah. So this second stage of training is indeed where we communicate to the neural network anything we want, which includes the bounding box. And the better we do this training, the higher the fidelity with which we communicate this bounding box. And so, with constant research and innovation on improving this fidelity, it becomes more and more reliable and precise in the way in which it follows the intended instructions.

ChatGPT came out just a few months ago, the fastest-growing application in the history of humanity. Lots of interpretations about why, but some of the things that are clear: it is the easiest application that anyone has ever created for anyone to use. It performs tasks, it does things that are beyond people's expectations. Anyone can use it; there are no instruction sets, there are no wrong ways to use it, you just use it. And if your instructions or prompts are ambiguous, the conversation refines the ambiguity until your intents are understood by the application, by the AI. The impact, of course, is clearly remarkable. Now, yesterday, this is the day after GPT-4. Just a few months later, the performance of GPT-4 in many areas is astounding: SAT scores, GRE scores, bar exams, the number of tests that it's able to perform at very capable human levels, astounding. What were the major differences between ChatGPT and GPT-4 that led to its improvements in these areas?

So GPT-4 is a pretty substantial improvement on top of ChatGPT, across very many dimensions. We trained GPT-4, I would say, more than six months ago, maybe eight months ago, I don't remember exactly.
The first big difference between ChatGPT and GPT-4, and perhaps the most important difference, is that the base on top of which GPT-4 is built predicts the next word with greater accuracy. This is really important, because the better a neural network can predict the next word in text, the more it understands it. This claim is now perhaps accepted by many at this point, but it might still not be completely intuitive as to why. So I'd like to take a small detour and give an analogy that will hopefully clarify why more accurate prediction of the next word leads to more understanding, real understanding. Let's consider an example. Say you read a detective novel: a complicated plot, a storyline, different characters, lots of events, mysteries, clues; it's unclear. Then, let's say that at the last page of the book, the detective has gathered all the clues, gathered all the people, and says, okay, I'm going to reveal the identity of whoever committed the crime, and that person's name is...

Predict that word.

Predict that word, exactly. My goodness, right?

Yeah, right.

Now, there are many different words, but by predicting those words better and better and better, the understanding of the text keeps on increasing. GPT-4 predicts the next word better.

People say that deep learning won't lead to reasoning, but in order to predict that next word, to figure out, from all of the agents that were there, and all of their strengths or weaknesses and their intentions, and the context, to be able to predict that word, who was the murderer, that requires some amount of reasoning, a fair amount of reasoning. And so how is it that it's able to learn reasoning? And if it learned reasoning, you know, one of the things that I was going to ask you is: of all the tests that were taken between ChatGPT and GPT-4,
there were some tests that GPT-3 or ChatGPT was already very good at, there were some tests that GPT-3 or ChatGPT was not as good at, that GPT-4 was much better at, and there were some tests that neither is good at yet. And some of it has to do with reasoning, it seems. You know, maybe in calculus it wasn't able to break the problem down into its reasonable steps and solve it. But yet in some areas it seems to demonstrate reasoning skills. So is that an area where, in predicting the next word, it's learning reasoning? And what are the limitations now of GPT-4 that would enhance its ability to reason even further?

You know, reasoning isn't this super well-defined concept, but we can try to define it anyway, which is: when you go further, where you're able to somehow think about it a little bit and get a better answer because of your reasoning. And I'd say that with our neural nets, you know, maybe there is some kind of limitation which could be addressed by, for example, asking the neural network to think out loud. This has proven to be extremely effective for reasoning. But I think it also remains to be seen just how far the basic neural network will go; I think we have yet to fully tap out its potential. But yeah, I mean, there is definitely some sense where reasoning is still not quite at that level, as some of the other capabilities of the neural network, though we would like the reasoning capabilities of the neural network to be higher. I think that it's fairly likely that business as usual will improve the reasoning capabilities of the neural network; I wouldn't necessarily confidently rule out this possibility.
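"Thinking out loud" is what is now commonly called chain-of-thought prompting: ask the model to produce its intermediate steps, then parse out only the final line. A small sketch of the prompt wrapper and the answer extraction; the completion shown is a hand-written stand-in, since no model is actually called here:

```python
import re

def cot_prompt(question: str) -> str:
    """Wrap a question so the model is asked to think out loud first."""
    return (
        f"Q: {question}\n"
        "Think step by step, then give the final answer on its own line "
        "in the form 'Answer: <value>'.\n"
    )

def extract_answer(completion: str):
    """Pull the final answer out of a think-out-loud completion."""
    m = re.search(r"^Answer:\s*(.+)$", completion, flags=re.MULTILINE)
    return m.group(1).strip() if m else None

# A hand-written stand-in for a model completion (no API call here).
completion = (
    "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples.\n"
    "Eating 5 leaves 12 - 5 = 7.\n"
    "Answer: 7"
)
print(extract_answer(completion))  # 7
```

The point of the wrapper is that the intermediate tokens give the model extra computation before it must commit to an answer, which is one plausible reading of why thinking out loud helps.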
Yeah, because one of the things that is really cool is: you can ask ChatGPT a question and say, before you answer the question, tell me first what you know, and then answer the question. You know, usually when somebody answers a question, if you give me the foundational knowledge that you have, or the foundational assumptions that you're making, before you answer the question, that really improves my believability of the answer. You're also demonstrating some level of reasoning. And so it seems to me that ChatGPT has this inherent capability embedded in it.

Yeah, to some degree. One way to think about what's happening now is that these neural networks have a lot of these capabilities; they're just not quite very reliable. In fact, you could say that reliability is currently the single biggest obstacle to these neural networks being useful, truly useful. If sometimes it is still the case that these neural networks hallucinate a little bit, or maybe make some mistakes which are unexpected, which you wouldn't expect a person to make, it is this kind of unreliability that makes them substantially less useful. But I think that perhaps with a little bit more research, with the current ideas that we have, and perhaps a few more of the ambitious research plans, we'll be able to achieve higher reliability as well. And that will be truly useful; that will allow us to have very accurate guardrails which are very precise.

That's right.

And it will make it ask for clarification where it's unsure, or maybe say that it doesn't know something when it doesn't know, and do so extremely reliably. So I'd say that these are some of the bottlenecks, really. So it's not about whether it exhibits some particular capability, but more how reliably, to what degree.

Exactly. You know, speaking of factualness and faithfulness, hallucination: I saw in one of the videos a demonstration that links to a Wikipedia page; it does retrieval. Has that capability been included in GPT-4?
Is it able to retrieve information from a factual place that could augment its response to you?

So the current GPT-4, as released, does not have a built-in retrieval capability. It is just a really, really good next-word predictor, which can also consume images, by the way; we haven't spoken about that yet, and it is really good with images. It is then fine-tuned with data and various reinforcement learning variants to behave in a particular way. It wouldn't surprise me if some of the people who have access could perhaps request GPT-4 to maybe make some queries and then populate the results inside the context, because the context length of GPT-4 is quite a bit longer now.

Yeah, that's right.

So in short, although GPT-4 does not support built-in retrieval, it is completely correct that it will get better with retrieval.

Multimodality. GPT-4 has the ability to learn from text and images, and respond to input from text and images. First of all, the foundation of multimodality learning: of course Transformers have made it possible for us to learn from multimodality, tokenized text and images. But at the foundational level, help us understand how multimodality enhances the understanding of the world beyond text by itself. My understanding is that when you do multimodality learning, even when it is just a text prompt, the text understanding could actually be enhanced. Tell us about multimodality at the foundation, why it's so important, what was the major breakthrough, and the characteristic differences as a result.

So there are two dimensions to multimodality, two reasons why it is interesting. The first reason is a little bit humble: the first reason is that multimodality is useful. It is useful for a neural network to see, vision in particular, because the world is very visual. Human beings are very visual animals. I believe that a third
of the human cortex, the visual cortex, is dedicated to vision. And so by not having vision, the usefulness of our neural networks, though still considerable, is not as big as it could be. So it is a very simple usefulness argument: it is simply useful to see, and GPT-4 can see quite well. There is a second reason for vision, which is that we learn more about the world by learning from images, in addition to learning from text. That is also a powerful argument, though it is not as clear-cut as it may seem. I'll give you an example, or rather, before giving an example, I'll make a general comment. For a human being, us human beings, we get to hear about a billion words in our entire life.

Only one billion words? That's amazing.

Yeah, that's not a lot.

That's not a lot. Does that include my own words in my own head?

Make it two billion, but you see what I mean. We can see that, because a billion seconds is 30 years, so you can kind of see: we don't get to see more than a few words a second, and we're asleep half the time, so a couple billion words is the total we get in our entire life. So it becomes really important for us to get as many sources of information as we can, and we absolutely learn a lot more from vision. The same argument holds true for our neural networks as well, except for the fact that the neural network can learn from so many words. So things which are hard to learn about the world from text in a few billion words may become easier from trillions of words. And I'll give you an example: consider colors. Surely one needs to see to understand colors, and yet the text-only neural networks, which have never seen a single photon in their entire life, if you ask them which colors are more similar to each other, will know that red is more similar to orange than to blue; they will know that blue is more similar to purple than to yellow.
Ckz8XA2hW84 | of information as we can and we absolutely learn a lot more from vision the same argument holds true for our neural networks as well except except for the fact that the neural network can learn from so many words so things which are hard to learn about the world from text in a few billion words may become easier from trillions of words and I'll give you an example consider colors surely one needs to see to understand calories and yet the text only neural networks who've never seen a single Photon in their entire life if you ask them which colors are more similar to each other it will know that red is more similar to Orange than to Blue it will know that blue is more similar to purple than to Yellow how does that happen and one answer is that information about the world even the visual information slowly leaksane through text slowly not as quickly but then you have a lot of text you can still learn a lot of course once you also add vision and learning about the world from Vision you will learn additional things which are not captured in text but it is no I would not say that it is a binary there are things which are impossible to learn from the from text only I think this is more of an exchange rate and in particular as you want to learn if we are if you if you are if you are like a human being and you want to learn from a billion words or a hundred million words then of course the other sources of information become far more important yeah and so so the the uh you learn from images is there is there a sensibility that that would suggest that if we wanted to understand um also the construction of the world as in you know the arm is connected to my shoulder that my elbow is connected that somehow these things move the the the the the animation of the world the physics of the world if I wanted to learn that as well can I just watch videos and learn that yes you know and if I wanted to augment all of that would sound like for example if somebody said um 
the meaning of great great could be great or great could be great you know so one is sarcastic one is enthusiastic uh there are many many words like that you know uh that's sick or you know I'm sick or I'm sick depending on how people say it uh would audio also make a contribution to the learning of the model and could we put that to good use soon yes yeah I think it's definitely the case that well you know what can we say about audio it's useful it's an additional source of information probably not as much as images or video but |
Ckz8XA2hW84 | with sound like for example if somebody said um the meaning of great great could be great or great could be great you know so one is sarcastic one is enthusiastic uh there are many many words like that you know uh that's sick or you know I'm sick or I'm sick depending on how people say it uh would audio also make a contribution to the learning of the model and could we put that to good use soon yes yeah I think it's definitely the case that well you know what can we say about audio it's useful it's an additional source of information probably not as much as images or video but there is a case to be made for the usefulness of audio as well both on the recognition side and on the production side um in the context of the scores that I saw the thing that was really interesting was the data that you guys published which of the tests performed well with GPT-3 and which of the tests performed substantially better with GPT-4 how did multi-modality contribute to those tests you think oh I mean in a pretty straightforward way anytime there was a test where to understand the problem you need to look at a diagram like for example in some math competitions like there is a math competition for high school students called AMC and there presumably many of the problems have a diagram so GPT-3.5 does quite badly on that test GPT-4 with text only does I think I don't remember but it's like maybe from a two percent to a 20 percent success rate but then when you add vision it jumps to a 40 percent success rate so the vision is really doing a lot of work the vision is extremely good and I think being able to reason visually as well and communicate visually will also be very powerful and very nice things which go beyond just learning about the world you have several things you got to learn you can learn
about the world you can then reason about the world visually and you can communicate visually now in the future perhaps in some future version if you ask your neural net hey explain this to me rather than just producing four paragraphs it will produce hey here's a little diagram which clearly conveys to you exactly what you need to know and so that's incredible you know one of the things that you said earlier about an AI generating uh tests to train another AI um you know there was a paper that |
Ckz8XA2hW84 | nice things which go beyond just learning about the world you have several things you got to learn you can learn about the world you can then reason about the world visually and you can communicate visually now in the future perhaps in some future version if you ask your neural net hey explain this to me rather than just producing four paragraphs it will produce hey here's a little diagram which clearly conveys to you exactly what you need to know and so that's incredible you know one of the things that you said earlier about an AI generating uh tests to train another AI um you know there was a paper that was written and I don't completely know whether it's factual or not but there's a total amount of somewhere between 4 trillion to something like 20 trillion useful language tokens that the world will be able to train on over some period of time and that we're going to run out of tokens to train on and um well first of all I wonder if you feel the same way and then secondarily whether the AI generating its own data could be used to train the AI itself which you could argue is a little circular but um we train our brain with generated data all the time by self-reflection working through a problem in our brain uh or you know I guess neuroscientists suggest sleeping uh we do a fair amount of you know developing our neurons um how do you see this area of synthetic data generation is that going to be an important part of the future of training AI and the AI teaching itself well I wouldn't underestimate the data that exists out there I think there is probably more data than people realize and as to your second question certainly a possibility remains to be seen yeah yeah it really does seem that um one of these
days our AIs are um you know when we're not using it maybe generating either adversarial content for itself to learn from or imagine solving problems that it can go off and then improve itself tell us uh whatever you can about where we are now and what do you think will be in the not too distant future but you know pick your horizon a year or two uh where do you think this whole language model area would be and some of the areas that you're most excited about you know predictions are hard and um it's |
Ckz8XA2hW84 | yeah yeah it really does seem that um one of these days our AIs are um you know when we're not using it maybe generating either adversarial content for itself to learn from or imagine solving problems that it can go off and then improve itself tell us uh whatever you can about where we are now and what do you think will be in the not too distant future but you know pick your horizon a year or two uh where do you think this whole language model area would be and some of the areas that you're most excited about you know predictions are hard and um although it's a little difficult to say things which are too specific I think it's safe to assume that progress will continue and that we will keep on seeing systems which astound us in the things they can do and the current frontiers will be centered around reliability around whether the system can be trusted really get to a point where you can trust what it produces really get to a point where if it doesn't understand something it asks for clarification says that it doesn't know something says that it needs more information I think those are perhaps the areas where improvements will lead to the biggest impact on the usefulness of those systems because right now that's really what stands in the way you ask a neural net to maybe summarize some long document and you get a summary are you sure that some important detail wasn't omitted it's still a useful summary but it's a different story when you know that all the important points have been covered and in particular it's okay if there is ambiguity it's fine but if a point is clearly important such that anyone else who saw that point would say this is really important when the neural network will also recognize that reliably that's when you know same for the guardrails same for its ability to clearly follow
the intent of the user of its operator so I think we'll see a lot of that in the next two years yeah that's terrific because the progress in those two areas will make this technology trusted by people to use and be able to apply for so many things I was thinking that was going to be the last question but I did have another one sorry so ChatGPT to GPT-4 when you first started using it uh what are some of the skills that it demonstrated that surprised even you well there were lots of really cool things that it demonstrated which |
Ckz8XA2hW84 | same for its ability to clearly follow the intent of the user of its operator so I think we'll see a lot of that in the next two years yeah that's terrific because the progress in those two areas will make this technology trusted by people to use and be able to apply for so many things I was thinking that was going to be the last question but I did have another one sorry so ChatGPT to GPT-4 when you first started using it uh what are some of the skills that it demonstrated that surprised even you well there were lots of really cool things that it demonstrated which were quite cool and surprising it was quite good so I'll mention two so let's see I'm just trying to think about the best way to go about it the short answer is that the level of its reliability was surprising where the previous neural networks if you ask them a question sometimes they might misunderstand something in a kind of a silly way whereas with GPT-4 that stopped happening its ability to solve math problems became far greater you could really say it would really do the derivation like a long complicated derivation it could convert the units and so on and that was really cool you know like many people noticed you ask it for a proof and it works through a proof it's pretty amazing not all proofs naturally but quite a few or another example would be like many people noticed that it has the ability to produce poems with you know every word starting with the same letter it follows instructions really clearly not perfectly still but much better than before yeah really good and on the vision side I really love how it can explain jokes it can explain memes you show it a meme and ask it why it's funny and it will tell you and it will be correct the vision part I think was also very it's like really actually seeing it when you can ask questions
follow-up questions about some complicated image with a complicated diagram and get an explanation that's really cool but yeah overall I will say to take a step back you know I've been in this business for quite some time actually almost exactly 20 years and the thing which I find most surprising is that it actually works yeah it turned out to be the same little thing all along which is no longer little and it's a lot more serious and much more intense but it's the same neural network just larger trained on maybe larger datasets in different ways with |
Ckz8XA2hW84 | it's like really actually seeing it when you can ask questions follow-up questions about some complicated image with a complicated diagram and get an explanation that's really cool but yeah overall I will say to take a step back you know I've been in this business for quite some time actually almost exactly 20 years and the thing which I find most surprising is that it actually works yeah it turned out to be the same little thing all along which is no longer little and it's a lot more serious and much more intense but it's the same neural network just larger trained on maybe larger datasets in different ways with the same fundamental training algorithm yeah so it's like wow I would say this is what I find the most surprising yeah whenever I take a step back I go how is it possible those ideas those conceptual ideas about well the brain has neurons so maybe artificial neurons are just as good and so maybe we just need to train them somehow with some learning algorithm that those arguments turned out to be so incredibly correct that would be the biggest surprise I'd say in the 10 years that we've known each other uh the models that you've trained and the amount of data you've trained on uh from what you did on AlexNet to now is about a million times larger and no one in the world of computer science would have believed that the amount of computation that was done in that 10 years' time would be a million times larger and that you dedicated your career to go do that um you've done many more uh your body of work is incredible but two seminal works the invention the co-invention of AlexNet with that early work and now with GPT at OpenAI uh it is truly remarkable what you've accomplished it's great to catch up with you again Ilya my good friend and um it is quite an amazing moment and today's talk
the way you uh break down the problem and describe it uh this is one of the best PhD beyond PhD descriptions of the state of the art of large language models I really appreciate that it's great to see you congratulations thank you so much yeah thank you |