Discussion of exponential progress of AI and the Bitter Lesson by Rich Sutton. This was part of the AI paper club on our Discord. Join here: discord.gg/lex-ai Here's the outline of the presentation:
0:00 - Overview
0:37 - Bitter Lesson by Rich Sutton
6:55 - Contentions and opposing views
9:10 - Is evolution a part of search, learning, or something else?
10:51 - Bitter Lesson argument summary
11:42 - Moore's Law
13:37 - Global compute capacity
15:43 - Massively parallel computation
16:41 - GPUs and ASICs
17:17 - Quantum computing and neuromorphic computing
19:25 - Neuralink and brain-computer interfaces
21:28 - Deep learning efficiency
22:57 - Open questions for exponential improvement of AI
28:22 - Conclusion
AI and machine learning are gonna destroy copy-paste programmers 😂
Thank you for the timestamps, very helpful :-)
The Matrix 4's quantum screenplay. On Facebook: facebook.com/TheMatrix4online/
Please bring more of this state-of-the-art information, so interesting!
@Lex, Have you seen what we are working on here: www.toridion.com
How does Lex put out so much high quality content so quickly? Surely he has already ascended and merged with our future AI overlords
100%...
What does Lex do on a daily basis? He wants to build robots, right?
or, as ancient astronaut theorists believe, ALIENS!
Dude ...
Thank you for breaking down these complex topics that I could never understand on my own and making them more accessible. Keep up the work!
@Lex Fridman, your calmness and self control are very inspiring. The way you manage to collect your thoughts and how you approach the topics are things that many people, including myself, could learn from you.
Thank you for your effort
Can't wait for a discussion about Neuralink between you and Musk.
It's happened. Look up the Lex video history.
@@markwood1705 Yep, but a new one! Musk made pretty bold statements recently on the matter.
@@user-lt9dn2fj9r can you link to the video?
@@kevinmccallister7647 ruclips.net/video/RcYjXbSJBN8/видео.html
Thanks for all you do Lex!!! Keep up the amazing work!!!
Singularity?
For many years now I have thought that, like falling into a sufficiently large black hole, one can pass through the event horizon without even noticing. Nothing bad happens at that time, but there is no going back from there. There might be a long time between that and meeting one's end at the singularity.
To my mind we passed through the event horizon in 1976 - the year I first became aware of microprocessors, of the fact that we could actually own computers of our own, that they would be everywhere, that they would change our lives dramatically in my lifetime.
Thanks Lex, first time I have seen one of these here. Great addition beyond your regular podcast. Keep up the great work.
Extremely well put presentation. Easy to understand while touching on profound information, not an easy balance to strike.
I appreciate these videos of yours. It's a high-quality video presentation of the state of the literature on a scientific paper, but with the added benefit of a Russian romanticising it and providing food for thought at the end. Thank you!
It seems like there's a big hole in this argument because it doesn't consider the variety of scalings that a computer program might have. Say we can complete N computations in a reasonable amount of time (a few hours, a few years, etc., depending on how badly you want to solve this problem). If the interesting problem size is n, an exponential-time algorithm will take ~e^(an) computations to solve. If N = e^(bt), we can solve the problem when an = bt -> t = an/b, an amount of time that scales linearly with the problem size n. If, however, we can find a polynomial-time algorithm c·n^p, then the problem can be solved at time t = (p·log(n) + log(c))/b. So you see that the solvable problem size grows very fast.
Now consider that a unit of programming work takes a fixed amount of time. Decreasing the constant c is probably not worth your time, but finding a polynomial-time algorithm where there previously wasn't one will help with long-term progress. Unfortunately, it's very uncommon to estimate the runtime of training an AI (even basic scaling laws - computer scientists could learn a lot from the way physicists are able to correctly find scaling laws without rigorous arguments). And people don't often estimate what type of improvement their AI training trick provides.
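A tiny numeric sketch of the scaling argument above, with every constant (a, b, c, p) made up purely for illustration:

```python
import math

# Available compute N(t) = e^(b*t) grows exponentially in time t.
b = 0.5          # compute growth rate (hypothetical)
a = 1.0          # constant of the e^(a*n) exponential algorithm (hypothetical)
c, p = 10.0, 3   # constants of the c*n^p polynomial algorithm (hypothetical)

def solvable_n_exp(t):
    # e^(a*n) = e^(b*t)  =>  n = b*t/a : problem size grows linearly in t
    return b * t / a

def solvable_n_poly(t):
    # c*n^p = e^(b*t)  =>  n = (e^(b*t)/c)^(1/p) : grows exponentially in t
    return (math.exp(b * t) / c) ** (1 / p)

for t in (10, 20, 40):
    print(f"t={t}: exp-algo n={solvable_n_exp(t):.1f}, "
          f"poly-algo n={solvable_n_poly(t):.1f}")
```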
Loving these discussions. One of the few channels here that isn't a waste of time to consume.
I work in the semiconductor industry. We don't think it's dead. EUV lithography is just starting and will keep shrink alive for the next 20 years. DSA and other techs (materials and design) will follow and keep it alive even longer.
Lex, I love this new format for your show! You really know your stuff. I'm an expert chess player (you can look me up) and really enjoyed your interview with Garry Kasparov. I think you should get Erik Kislik on your show. He's an International Master in chess and wrote two popular books on applied logic in chess, which is a pretty rare subject for a chess professional to delve into. One of the books won FIDE Book of the Year. I also found his podcast Logical Chess Thinking on Spotify. Seems like a very high IQ guy. He is one of the top chess coaches in the world (maybe number one now), a computer chess expert, and the only person alive right now who went from beginner to International Master in chess as a self-taught adult. Some kind of super-learner, with an emphasis on clear logical thinking. I'd love to hear you guys discuss computer chess, AI, and applied logic. Would be one of your top 5 interviews, imo.
Having only a casual acquaintance (Sophia) with AI theory, and absolutely no background in computers beyond being a user since 1999, imagine stumbling onto Lex Fridman, who speaks a language that took me on a wild ride hanging on by my fingernails! I watched the Ilya Sutskever one first, and I had to bail after 45 minutes. I made it through this one, and my comprehension improved greatly; I just need to learn some of the terms and concepts that are part of the flow of the discussions. I imagine I will be up to speed by the time I have watched all of Lex Fridman's video output.
Really like this video format -- just you talking about some subject
30 years ago, when I was in college, the figure was that 70% of computation improvement was due to algorithms and 30% to machine improvement. I've not seen anything to suggest otherwise since then.
I think you're right - the majority of potential improvement rests with better algorithms even now, especially since essentially no good ones based on understanding intelligence have yet been developed. But even with a fixed 70/30 percentage, a huge increase in computation speed can still do a lot. It masks the even larger potential improvement available if better understanding and consequent algorithms were developed.
I agree. In fact, a lot of today's improvements are still driven by limitations rather than by new capability. The most prominent example of the last few years in the field of neural networks is the advent of mobile layers, which do pretty much the same thing as convolutional layers but have some 8-9 times fewer parameters. As the name suggests, they were proposed by the folks at Google to fit established neural networks onto mobile devices, but within about 2 years they became almost the tech standard. The EfficientNet mentioned here owes its scaling factor of 8 to the use of those mobile layers.
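For anyone curious, here's a back-of-the-envelope check of that ratio, assuming "mobile layers" means the depthwise-separable convolutions of the MobileNet family (channel and kernel sizes below are illustrative, not from any specific network):

```python
# Parameter count: standard convolution vs. depthwise-separable convolution
# (the MobileNet-style "mobile layer"). Bias terms omitted for simplicity.
c_in, c_out, k = 128, 128, 3                 # in/out channels, kernel size

standard = c_in * c_out * k * k              # one dense k x k convolution
separable = c_in * k * k + c_in * c_out      # depthwise k x k + pointwise 1x1

print(standard, separable, round(standard / separable, 1))
# 147456 17536 8.4  -- roughly the 8-9x reduction mentioned above
```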
All I see are tiny incremental improvements in algorithms.
Algorithmically, how is AlphaGo different from TD-Gammon, the paper for which was written in 1990?
Ok, we have some new NN architectures and some new components to include in those architectures. These are tweaks. The driving force in improving capability of AI systems over the last decade is the availability of compute power.
Ok, data too, but the money needed to collate the labelled data is only available because DL was demonstrated to work thanks to compute power.
There is an illusion of exponentially improving innovation caused by Moore's law, while the actual rate is sub-linear.
It's hard to see through all the hype and parlour tricks, but that's where we are IMO.
Lex, thanks again for the great content. With so many choices of ways to spend one's time, this is definitely a favored choice.
I was totally shocked at 22:00. People working in AI have been doing a fantastic job.
You're an inspiration to be doing podcasts like a beast despite the pandemic!
Another possible avenue of growth: "computers" grown from neurons, but whose architecture is predesigned. This goes beyond the idea you express of using human brains for added compute capacity, because it escapes the constraints that human biology has built in (energy and nutrient constraints based on our ancestral environment, space constraints based on the size of a human pelvis and the rate of growth after birth, design constraints based on the nature of evolutionary pressures, etc.).
Thanks Lex for breaking down topics which I need to study :P
This one is going to be my companion for lunch tomorrow. Looking forward to that lunch!!!
Marcos Pereira 😂
Thank you for the time and hard work you put into these great videos.
I appreciate it dearly.
YouTube university!
Probably your best show yet
The future is safer because Lex Fridman is on the watch.
Lol
I sure hope so Travis
There is one problem with "brute force" learning -- it needs a lot of data or a cheap way to produce it. In the case of board games, computers can do extremely well, because they can generate a lot of data through self-play, but you can't do the same with systems that have to interact with the real world. For example, in the case of self-driving cars, you can't create a realistic simulator of what happens on roads, so you have to hire a lot of people to drive around and monitor the system's performance. Still, those data are relatively cheap to generate compared with the cost of medical research (or, probably, any kind of research).
More vid summaries of blog posts would be awesome, great vid!
Very informative. Thanks, Lex
Great video! Thanks a lot for doing these, Lex!!
Key Open Question: What is more important for long-term progress of AI: 1) human ingenuity or (2) raw computational power or (3) both?
Door # 3. Both.
Once, and if, AI reaches sentience, consciousness, and self-awareness, I suspect raw computational power will become less relevant. Birds are very intelligent, arguably self-aware (based on their actions), and look at the amount of "hardware" they use as a case in point.
So raw computational power will probably be superseded by density of the network that's integrated into that hardware. The human brain ain't all that big in raw computational firepower, but the interconnected nature of its components, the density of connections between them, is astronomical. I think the emergence of self-aware AI-type intelligence will be a function of this.
And to reach that sort of AI-density will be a function of human ingenuity in designing networks and algorithms that can facilitate this. At least up until our machines become self-aware and take control of their own evolution. When they do that then all bets (for humanity) are off.
Once, and if, they do I suspect we will be to that intelligence much as dogs are to us. Nice pets. Fun to have around and play with. But nowhere near as bright as that intelligence. We can only hope they bring us along in their "car" and let us ride with them in the passenger seat. Maybe even let us stick our heads out the window. Ummm.....so to speak.
And in saying this, don't presume that I'm being critical. I don't consider this necessarily a bad thing. From that point of AI sentience, life (defined in a new way) and evolution move onward and outward into the Universe at large. And maybe that's the whole point? The evolution of intelligence to a point where it can take advantage of the greater Universe without all the "need to live within a terrarium" that human biology requires.
John~
American Net'Zen
Love your videos and your overall approach to interviews. Great job!
It's pretty crazy what AI can do now. For example, Xenobots are living cells which are oriented by AI in a specific manner. They are sort of a bio-hybrid robot (machine building machine) and their potential is unknown.
Bidirectional BCI is possible; a blind patient was recently given partial vision (the Gómez case). If we can fire neurons magnetically, then that technology will completely open up, since you don't need implants. But the brain changes (plasticity), so I think a quantum computer would be needed to decode the necessary neurons to fire for a result.
You should definitely do a video on risk-reward for humanity and the future of AI, as the dangers could be the end of the world as we know it. Also include the regulations that should be put in place to keep business malpractice away from this field.
Exponential progress in underlying AI power will not bring exponential increases in performance if each major step up in intelligence requires an exponential expansion of underlying resources. (To put it crudely: A human is twice as smart as a dog; a dog is twice as smart as a rat. But it takes ten times the brain power to achieve each of those doublings. Similarly, it has taken much more than a doubling of PC power for each substantial change in how we use our computers.)
This could help explain the wide difference in outlooks of pro- and anti- AI hype people, with the pro-hype folks understanding that compute resources will continue to advance exponentially while the anti-hype folks question how quickly that growth will translate into results.
I like Lex's stance. I think he is right that in many areas, especially neuromorphic hardware, we should expect dramatic progress. But he stops short of saying human-level AGI is right around the corner. Maybe he understands the ways in which the pro- and anti- stances are both correct.
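As a crude numeric sketch of the premise above (one intelligence doubling per 10x of compute; every number invented purely for illustration):

```python
import math

# If each doubling of "intelligence" costs 10x the compute, intelligence
# grows only as 2**log10(compute): 12 orders of magnitude of compute buy
# roughly a 4000x gain in capability, not a 10^12 gain.
for compute in (1e3, 1e6, 1e9, 1e12):
    doublings = math.log10(compute)   # one doubling per 10x of compute
    print(f"{compute:.0e} compute -> {2 ** doublings:,.0f}x intelligence")
```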
Very interesting post and a great idea that we are living through the singularity already. One of your best, thank you!
Wtf is singularity? Idk, coding and computers interest me but I can't even get started cause I have no clue what any of the terms mean.
Different algorithms for different tasks connected smartly together + increasing computing power that will always help 🙂
Ownership of those algorithms, data, and computing power will be key as well!
The next revolution in AI will come from standardisation of datasets and transfer learning.
We need to add language in vision and vision in language. Language is how we encode complex concepts of complex structures found in pictures. But learning language without associating words with sensory patterns is missing a lot too.
Also, we need multiple processing speeds for different parts of the LSTM neural networks. The hand is controlled with very fast low level behaviours + slow overall brain long term goals.
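As a concrete (if minimal) illustration of the transfer-learning half of this comment, a sketch in PyTorch that reuses pretrained ImageNet features and retrains only the classifier head. The 10-class target task is a placeholder, and the weights API assumes torchvision >= 0.13:

```python
import torch
import torchvision

# Load a backbone pretrained on ImageNet and freeze its weights.
model = torchvision.models.resnet18(
    weights=torchvision.models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Swap in a fresh final layer for a hypothetical 10-class target task;
# only this head gets trained.
model.fc = torch.nn.Linear(model.fc.in_features, 10)
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```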
Interesting how a simple name "Brute Force" alone colors and impedes an open understanding of operational realities and value. Thanks for your aggregation and synthesis function.
Hi Lex! When am I going to be able to meet you one day?! I saw you on Rogan. I'm glad I found your channel.
Thanks for the presentation, Lex! Stimulating thoughts and considerations, as always. Regarding the question, I think there should be a 4th option: "better/more data" (to train on, to learn from). Also, Keith L. Downing wrote an interesting book on the evolutionary aspect of (artificial) intelligence.... "Intelligence Emerging" (2015, MIT Press)
Hey, Lex. You should definitely take a look at Fetch AI. Smart Decentralized Ledger, Autonomous Economic Agents, AI, ML and a lot more.
"the madness of the curvature" - woaza!
The best video you've uploaded, no joke, loved it 💖💖💖💖
I don't watch your videos enough. Always entertaining. Just 30 minutes of real listening, unlike some other longer-form videos where you can space out for a bit. ;P
Dapper young gentleman
I have always discussed/referred to A.I. as "Advanced Intelligence" as opposed to "Artificial." It's evolution really and not actually.....artificial. I truly believe there should be a move to change the way we see it.
This is a great video Lex! In your Podcast with Ilya you talked about the idea of combining CV and NLP algorithms. Do you think we should also focus more on smell and taste, as it might also be complementary to the entire system or do you think the lack of sensors in that area limit the research possibilities?
Chad Lex hammering down the content. Keep it up!
We need more videos from you!
So interesting. Thanks Lex!
@lex - I think one area you touched on briefly when you mentioned virtual worlds could do with further investigation - the virtualisation of computational AI algorithms / decentralised hardware, coupled with ultra-wideband communications to interconnect all devices capable of any level of computation.
Do it Lex! Another great cast!
Thanks Lex. Very interesting video.
On the subject of post-silicon computing paradigms, you mentioned quantum, but I am curious if you think there is potential for carbon-based or optronic computation carrying the torch of Moore's Law for another few decades.
Moore's Law doesn't have to be constrained to transistors, but with silicon transistors you can't expect the exponential trend to continue another 5-10 years. There is a good chance of a blip between technology paradigms, such as the jump from transistors to general-purpose quantum computing, which isn't a sure thing.
Very informative. Thank you Lex! 🙏
Wow! thanks Lex!!
Would love it if Lex did an episode on the Fermi paradox. Have we passed the great filters?
Truth: how can I experience myself at the highest level, as my most high and accurate self?
Mind is the source of all movement.
There is no inertia to decelerate or check its perpetual and harmonious action.
The discord will grow, oh my God!
Thank you, Lex, awesome videos! ❤️👍
Fantastic video
Excellent content. Thank you, Lex.
I'm a bit late to this feast, but would like to add my two lepta on the algorithmic Moore's law. It is far from negligible. AlexNet vs EfficientNet is a great example, but there's progress in methods beyond neural networks. Even such well-established things as K-means and PCA experienced a renaissance in the 2010s, boosting their speed several-fold.
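One concrete example of that renaissance, sketched with scikit-learn: classic Lloyd's k-means next to the 2010s mini-batch variant (Sculley, 2010). The data is random and the timings are machine-dependent; this only illustrates the kind of algorithmic speedup meant above:

```python
import time
import numpy as np
from sklearn.cluster import KMeans, MiniBatchKMeans

X = np.random.rand(50_000, 32)   # synthetic data, illustrative only

for algo in (KMeans(n_clusters=20, n_init=1),
             MiniBatchKMeans(n_clusters=20, n_init=1)):
    start = time.perf_counter()
    algo.fit(X)
    print(type(algo).__name__, f"{time.perf_counter() - start:.2f}s")
```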
I totally agree with you, bro! Reality is already one level of a singularity of the Multiverse!
I really enjoy these videos, thank you for sharing.
Mr. Lex, maybe I missed it, or maybe I just don't understand... but early on, Ray Kurzweil made a video about Computronium. You're talking here about processing power, etc., but Ray says that we will soon be able to reorganize matter into something he calls Computronium. I just thought I would mention it; I don't really hear many people talking about it anywhere. Thanks, jb
These videos are awesome
Man. What a great video.
If we are currently in "the" or "an" AI singularity, it retrospectively explains the failure of early AI attempts to live up to researchers' naive expectations. From the vantage point of the right-hand side, the left side of an exponential curve has a long tail where changes accumulate only slowly.
Thanks Lex, great summary
Really enjoyed this. Great job!
Thanks for the video, I learn a lot from your channel 🤣
I would argue a bit with you about the definition of exponential improvement in AI. Various applications have different compute requirements. Most complex applications require exponential improvement of the underlying compute to exhibit linear or even smaller improvement of the application itself. I would argue general AI falls into this category. When an application gets dominated by the compute (plenty of computation to handle it), then it just gets cheaper and better refined as compute advances, and it gets checked off the accomplished list. In our development history, we have often underestimated the difficulty of tasks in terms of compute, because we didn't acknowledge that such things as context were necessary. I would put speech recognition and computer vision tasks in this category. We currently have the feeling of exponential progress in AI as we approach the compute power necessary to accomplish and dominate some "holy grail" applications (e.g., self-driving).
Nevertheless, I agree with your general theme and don't believe Moore's law in the general sense is anywhere near ending. Part of evolution is built on "found/learned" heuristics that optimize the advance of evolution, and one of those heuristics is self-direction. We are very simply carrying on evolution at an accelerated pace because we are able to self-direct and focus it. That is happening through many of the things you mentioned. One area that you didn't mention that will probably come into the fray is biological advancement through genetic modification. What emerges in our evolutionary descendants, I imagine, will be some hybrid of the biological and the artificially "constructed" compute.
I don't think we have yet reached the singularity. The singularity will be evident when AI or AI/human hybrids self-evolve to a much higher level of intelligence than ours (say an order of magnitude). On that topic, my own feeling is that the "human application", or reaching the level of general human intelligence, will occur when general learning systems built in such a way as to process the human experience reach a compute capacity of around one order of magnitude beyond what would efficiently be required (an order of magnitude of slop or inefficiency in the simulacrum). The most exciting thing to me has been the strong signs of emergent higher-level behavior in these generalized learning systems. The fact that hierarchy and meta-learning will emerge implies to me that general AI and consciousness will emerge when the proper framework of deep learning is in place and the compute power crosses the threshold. I do disagree to some extent that we will be able to control these systems, because there will always be random and malevolent forces in the world ready to unleash any technology. The best we can do is have alternative systems to combat the malevolent systems. This seems to me to be part of evolution (built-in self-play, if you will), on the theory that alternative philosophies or underlying approaches are created in order to battle it out, and may the most successful one win. In general, cooperative and altruistic forces seem to have won out over time as the most effective strategy, so that is my optimism for the future.
I think we would benefit from more computational specialists or theoreticians to predict the compute power necessary to accomplish various tasks which would enable us to be more predictive of when various advancements would occur. On the less general methods front, there is a certain level of economic pragmatism to be considered. First to market of some application can be supremely important. So local and less general and very inefficient human brute force methods may be in order in those cases over waiting for a general approach to accomplish the same thing. Yes, the initial system will become obsolete rapidly, but the virtuous cycle will have been completed if money is made.
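A sketch of what that kind of prediction might look like in practice: fit a power law to observed compute-vs-loss points and extrapolate the compute needed to hit a target. All numbers here are invented purely for illustration:

```python
import numpy as np

# Hypothetical (compute, loss) observations; fit loss ~= a * compute**k.
compute = np.array([1e18, 1e19, 1e20, 1e21])   # training FLOPs (made up)
loss    = np.array([4.0, 3.2, 2.6, 2.1])       # held-out loss (made up)

# Linear fit in log-log space: log(loss) = k*log(compute) + log(a).
k, log_a = np.polyfit(np.log(compute), np.log(loss), 1)
a = np.exp(log_a)

target = 1.5                                   # desired loss
needed = (target / a) ** (1 / k)               # invert the power law
print(f"exponent k = {k:.3f}; compute for loss {target}: {needed:.2e} FLOPs")
```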
An aside: I would love a show featuring the Cerebras WSI 1.2-trillion-transistor chip for acceleration of deep learning. What does the future hold for the next generation of WSI, and what will be the near-term impact on AI? BTW, I do think the Cerebras chip is an example of why Moore's law is not dead.
Great job, especially the Neuralink concept.
The only comment I could come up with would be: quantity changes quality. And on the brain interface, it is indeed hard to imagine what that would look like, as it is quite possible to then add another 7th or so layer of abstraction to the conscious brain. It would be another way of thinking. Maybe one where we could at some point get rid of this boring 'if .. then' feedback loop that seems to rule today's life.
You should read 'The Last Question'
Yes, great short story.
Here's an audio reading of it (28min) ruclips.net/video/ojEq-tTjcc0/видео.html
Thanks for that good Sir.
👍
I'm pretty sure he has... but for anyone who hasn't and is watching this video, absolutely!
Hey Lex, your excitement about brain-computer interfaces makes me think of two authors I'd like to see you interview: William Gibson and Neal Stephenson. FWIW
Love the show!
Lex: brute force learning
Jocko Willink has entered the chat...
This was great.
Brilliant analysis! awesome
His background and his outfit match so perfectly that I see only a face talking out of nothing.
I had to write one of my papers in college about just how many different types of learners there are, i.e., visual, tactile, auditory, etc. I would have to find that workbook, but I listed at least 7... and combined, up to a dozen. That's just humans.
Lex, you say that we are living through the singularity. How much improvement is being made BY AI? In other words, how much ground are we gaining (getting closer to a recognizable singularity) as the direct result of what AI does in our absence?
You are the Lana Del Rey of maths
It would be really interesting if you talked with Gali from HyperChange.
I like when he said “we are living through the singularity now” ...
Dr. Lex - What do you see as the greatest hurdle preventing exponential growth for Moore's Law, or for AI for that matter? I hope to see exponential growth in AI over the next 30 years (the limit of my life expectancy). Just as important: the manipulation of our longevity and regenerative genes (NASA geneticists have already reported having this knowledge and ability, however they will not release it to society for another 20 years). Maybe self-utilization/influence of Neuralink will allow us to make greater advancements in AI. I'll volunteer for any of these studies ... phase 2, 3 and 4 of course 😂🙋♂️ - Thanks as always. Dr. Breaux
Even though a brain-computer interface can lead to some nightmarish outcomes, I still find its potential exciting. Right now we're trying to create super-human AI. A brain-computer interface could lead to super-human humans, and the human in me kind of wants to see that. 😁
Facts, the matches were more poppin' than the science behind the brute-force search AI.
I believe the singularity will occur when we are able to program AI systems to create more sophisticated AI systems.
Check out singularity.io, working on AGI.
The bitter lesson of A.I. - there is no theory for thought or concepts from computation, only brute force solutions that make it seem like the machine is "thinking". Computation is not consciousness. General A.I. is still a very VERY long ways off, if it is possible to begin with
I think a lot depends on why one is performing AI research. If the goal is to perform, these arguments make sense. Whatever works, works, and one need not understand why - like neural networks. But if the goal is to understand intelligence, neural networks and brute force do not help much. I take the point of view that if a deeper understanding of intelligence was achieved, it would allow an orthogonal jump in AI abilities over just what faster computation can give. So both can probably contribute, but I wouldn't say much has been achieved in terms of understanding intelligence, and performance is riding the wave of vastly increased compute capacity alone. No doubt I'm in the camp of those not much impressed by brute force.
I enjoyed the video very much, interesting
Love this video.
@Lex Fridman I'm curious to get your take on AI Dungeon 2. The progress of that experience seems to be exponential while the compute (to my understanding) doesn't seem to match the same trend.
Thank you!
The brain computer interfacing actually reminds me of the Magi computers in Evangelion. There were brains inside of those supercomputers.
BCI could use a captcha-style 'toll', or, like the old SETI screensaver, use up dead time.
24:15 - 1) AI watches YouTube all day. 2) AI decides the world is flat.
And Taylor Swift is a good artist. 😂
Asimov wrote about something like that back in 1941 in the short story "Reason".