Here are the timestamps. Please check out our sponsors to support this podcast.
0:00 - Introduction & sponsor mentions:
- Public Goods: publicgoods.com/lex and use code LEX to get $15 off
- Indeed: indeed.com/lex to get $75 credit
- ROKA: roka.com/ and use code LEX to get 20% off your first order
- NetSuite: netsuite.com/lex to get free product tour
- Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
0:36 - Self-supervised learning
10:55 - Vision vs language
16:46 - Statistics
22:33 - Three challenges of machine learning
28:22 - Chess
36:25 - Animals and intelligence
46:09 - Data augmentation
1:07:29 - Multimodal learning
1:19:18 - Consciousness
1:24:03 - Intrinsic vs learned ideas
1:28:15 - Fear of death
1:36:07 - Artificial Intelligence
1:49:56 - Facebook AI Research
2:06:34 - NeurIPS
2:22:46 - Complexity
2:31:11 - Music
2:36:06 - Advice for young people
Interesting topics
That WhatsApp Bot does a funny lil trick. The pic changes. Seen it happen in another chat space too.
@@BeckyYork but if you just suppose for a moment that you are already a copy of a previous reality ... wouldn't the notion of you being a clone act as a "safe keep" for the best parts of yourself?
@@BeckyYork the mind is capable of so much more if one would allow it the freedom to do so. I do like this reality too.
I think Professor George Karniadakis might have some interesting insight regarding NN and physics applications.
That gentleman must have created for himself one of the most fantastic jobs ever: to meet brilliant minds and to LEARN every time. Bravo!
More importantly, spread all this learning to everybody else through video interviews!!
It's truly beautiful I hope one day I'm brilliant enough to be considered, even though Lex ignores me on Twitter haha, so I guess I'm here to bring awareness to him, I make funny jokes about bucky balls like how lex has to handle even my best jokes. I know this doesn't make sense to anyone else, so Nostrovia to family.
@@TimeLordRaps drop Twitter account. I want to follow you.
So has Lex, much respect to both.
wish he'd still do AI podcasting :(
This just came up on my YouTube feed two years later. Wow, what an extraordinarily prescient discussion.
I think this was my favorite Lex podcast. No other *(super popular) podcaster has the technical proficiency to go so deep into a discussion of computer vision. This is why I'm subbed.
Check out Machine Learning Street Talk; they go deeper, and Yann was also on there.
@@kwillo4 Thanks for the suggestion!!
The beauty of this channel. Finally, someone who can talk to so many people about so many advanced things.
I get really scared when "Chief AI Scientists" are that bad at predicting AI capabilities.
LeCun 57:55:
You take an object, place it on a table, and then push the table. It's completely obvious to you that the object will be pushed along with the table, because it's sitting on it. I believe there is no text in the world that explicitly explains this. So, if you train a machine, as powerful as it could be - let's say your GPT-5000 or whatever - it's never going to learn about this phenomenon.
ChatGPT (GPT 4):
If you push the table gently, the object might stay in place due to friction, although it may slide or wobble slightly. If you push the table with a greater force, the object might slide or fall over, especially if the object is top-heavy or not very stable.
I asked it to compare a ball and a box in that scenario, and to describe the order of events if the acceleration of the table increased over time... it blew me away.
Yet the same LeCun: "ChatGPT is 'not particularly innovative'"
Yann is the epitome of a research con artist. And it's CRAZY how many "intelligent" people like Lex can't see this. But Lex is an overrated midwit himself so I guess that really shouldn't be too surprising in his case. Don't get me wrong, as a PERSON - I like Lex. But as far as rating his intelligence? It's absolutely batsh*t to me people think he's some kinda super smart guy. He's A TEACHER. TEACHERS ARE NEVER EXPERTS >> FULL STOP. THAT'S WHY THEY TEACH INSTEAD OF ACTUALLY MAKE SOMETHING.
Many people including me are indebted to the perseverance of people like Yann LeCun. Luckily for me, I got to meet him and thank him. What an inspiring person.
How so? Does he also research medicine or something?
@@harryseaton7444 I work in CV/ML/AI.
@@SallyErfanian so his work has made a difference in your work life then? Or just his ideas being educational
HE IS THE GOAT OF ARTIFICIAL INTELLIGENCE
@@harryseaton7444 No, he is a pioneer in the field of Artificial Intelligence, a true legend in the field.
It's interesting coming back to this now. I put Yann's example of the smart phone on the table through GPT4 and of course it got the right answer
"If the smartphone was on the table and you pushed the table 5 feet to the left, the smartphone would also move 5 feet to the left, assuming it stayed on the table during the push. So, relative to where it started, the smartphone is now 5 feet to the left."
It's just interesting that people at the bleeding edge of this technology didn't realize how competent these systems could get using only text.
In my opinion, one of the best of your podcasts. On a side note, I've watched them all.
58:27
LeCun: "GPT-5000 would never learn that a phone sitting on a table will move with the table when you push it"
GPT-4: *in depth physics explanation about the conditions in which the phone would move with the table and when it would slide off*
This guy has become a massive AI Safety skeptic. Not great to hear him making confidently wrong predictions like this
It's really something to think about.
Thanks!
I see Yann, and I like immediately. Geoff may be the grandfather of the field, but Yann still has ideas that are super-interesting going forward.
Very interesting talk, I like when Lex and his guests put the bigger questions inside the balance when talking about current and next technology. I wonder when this was recorded though? 1:54:31
This is worth multiple watch-throughs. For understanding learning, note what you find different on each watch to begin to learn your own instincts.
As a data scientist, who works on various areas in data science, this podcast was amazing to hear. Loved his response at 17:50 about intelligence and statistics.
When Lex talked about death and how we try to ignore or hide from it. And everything we do centered around that... I got goose bumps.
01:46:52 "I think the Chinese Room Argument is a ridiculous one..." As someone who winced at, and was underwhelmed by, LeCun's critique of nativism and innate ideas, this was music to my ears!
Out of curiosity, why is it a ridiculous argument? Unfortunately LeCun doesn’t really say why here, he just kind of handwaves it away, as others like Hassabis and Dennett have also done in the past.
Hassabis basically said “it doesn’t matter if something only appears intelligent, it’s enough for what we’re aiming to achieve” … which is fair and valid, especially to avoid getting bogged down by semantics - but it doesn’t address the underlying criticism that Searle first raised. LeCun seems to suggest here that the sum of all human experience can be reduced to a mechanistic “solution” - just not in the foreseeable future, but in a blind-faith eventuality which itself is an unsatisfying non-answer.
@meatskunk Thanks for your comments -- and I hope it was clear that I was partly joking. Of course, I don't really believe "ridiculous" is a fair characterization of Searle's position, however much I may have misgivings about it (more on that later). After all, anyone who convinced Putnam to walk back his commitment to computation/functionalism deserves eminent respect. And if I've missed something in Searle, I'm happy to be corrected.
It seems to me the greatest liability or limitation in CRA is that it entirely inverts the relationship of processing and output to consciousness, or the personal and the subpersonal. Recall the premise: the man in the room is fed instructions, which he enacts. In short, he understands, has some conscious understanding of, the instructions -- the processing. But the problem for computational studies allegedly arises when the man in the room doesn't understand, has no conscious understanding, of whatever "content" the instructions are meant to yield, i.e. the output. So he has a personal grasp of the instructions, but no more an understanding of his output than he'd be able to consciously introspect into sub-personal processes (say, cardiovascular activity or involuntary memory).
As should be clear from the above, whatever CRA is evoking, it's the diametrical opposite of whatever is being claimed in computational, or at least computational-leaning theories of language processing (Chomsky), perception (Marr) or thought (Fodor). In all of these and like other studies, the emphasis is that our computations are inaccessible to introspection (subpersonal). In short, in direct opposition to the man in the room, we are not personally aware of the operations whereby we process external stimuli. To wit, the man has the lived experience, conscious and phenomenological, of the blow-by-blow whereby he walks through certain instructions (e.g. "I am now matching x to 2 on this look-up table"). By fitting contrast, no sentient being, in real time, has personal access, or is required to consciously plan and think out, say, the nodes in a Chomsky tree diagram, when speaking a sentence in ordinary language (!).
To briefly spell this out: nobody, when speaking "John expects to hurt himself," has to consciously think, in order to speak the sentence in real time, "ok, in enacting the operations of TGG, I need to displace 'John' from 'hurt himself' and raise it; but, in doing so, I also need to leave a trace, or a PRO, from its displaced position, and decide, to top it off, which is it: trace or PRO?" Unlike the man in the room, we're not aware of the operations we are enacting -- we just do it, all day, and every day, all the time. (See The Minimalist Program).
So, it's not clear, as Catherine Elgin once noted, what Searle's little thought experiment is meant to show. That's not to say there aren't compelling arguments against computational or functional studies of the mind or brain -- Ned Block (in my view) perceptively adapts some ideas from Nelson Goodman; von Neumann, as early as 1958, was sounding the alarm. Heck, even Noam, way back in 1957, posed powerful thought experiments as to why mental language processing, pace machine models, WASN'T probabilistic, statistical, or a posteriori (based on what a speaker-hearer had heard or been fed before).
To sum up, there are good claims to be made against pushing studies of the mind/brain too far down the rabbit hole of machine processing. It's just that, Searle ain't it.
Saw his name and HAD to click the video!! I cited his work in my undergrad thesis, he is a walking legend 👏
I'm a CS undergrad and understood almost nothing of what was said.
Maybe you should rewind his answers frequently, because Yann thinks and talks fast like all great scientists.
That's how I understood everything.
I really liked this conversation. This guy's awesome.
As a kind of related aside, the auto-generated CC are amazing for someone with such a strong French accent.
It seems like an important concept is undervalued in ML right now: objective
Building a world model is good, but it's far better to have a world model that predicts whether or not X will happen (for some finite set of objectives X). Our objectives are what determine every action we take. All animal brains are capable of forming a *minimal* world model (not exhaustive!) that can effectively predict actions and observations that relate to a few important objectives:
- do not get hurt
- eat food & reproduce
- explore
In order to achieve these goals, brains must be capable of forming intermediate "objectives" (ideal perceptions) that can be created, reordered, remembered, reevaluated, ... Solving a prediction problem is easy with time and data, but creating the *right* prediction problem is the hard part we don't know how to do.
Can't get enough of him, hope this series (with LeCun) goes to round 20!
Agreed 👍
I've loved everything about the Lex Fridman podcast since day one except that it _marked_ the end of the artificial intelligence podcast. However, among the many things I learned from today's episode is the fortuitous fact that the AIP lives _inside_ the LFP.
it's a privilege to hear LeCun talk about ML
Feels good to have someone so deep in the field to be optimistic about the future of ai!
I will follow your videos for a long time. You seem to me to be a good guy, rational and aware. I wish you success, good sir.
Lex you gotta try and talk to Gabor Mate, I think you guys would have a very deep and quite frankly important conversation.
There are many complex instinctual behaviors seen in animals mere hours after birth, so I might disagree with Dr. LeCun at 1:00. I suspect that there is a far deeper encoding somewhere... one need only watch how quickly a newborn foal stands and walks. We have no idea how this behavior is passed down genetically.
LeCun is a real genius. Good to see them on our own time.
Is he though!? -- for all his "research" his AI ideas have actually had VERY FEW practical implementations. The mark of a good idea is one that actually provides VALUE to REAL THINGS that you can make. Yann is remarkably lacking in this department. Anyone who actually does real AI development knows how full of it Yann is... Lex is more or less the same. Teachers teach, mostly because they can't do. If Lex is such an "AI expert", name ONE THING that he's done to significantly advance AI capability besides just talk about it. These are not geniuses. Lex and Yann are the EPITOME of "midwit" -- smart people that are just BARELY smarter than an average idiot -- enough to convince the average idiot they're geniuses. But they're not. They're just barely above regular intelligence and contribute really nothing to the field besides chit-chat.
Blessed to finally have Yann on your podcast. One of the most deserving figures in the field of modern computing and AI.
He was on the show much earlier. This is his second time on.
Don’t forget François Chollet
And Joscha Bach on computing. These men have to be among the smartest because, like Max Tegmark said, we have to be proactive on this subject.
It is the most important revolution in human history.
Love the words of wisdom at the end of every podcast Lex! They really tie an elegant bow to the whole conversation. Generally just love your podcasts! Been following since you started and I am forever grateful for the amount of uploads as well as the wide variety of topics you bring up in them. Keep up the good work!
My favorite parts as well. Amazing formula
This is basically a whole semester of self-supervised ML; the knowledge is golden.
I think the major missing piece of AI is "abstraction": the human brain relies on highly abstracted concepts to think, express, and understand the world. Abstract concepts are the basic building blocks for achieving higher intelligence and will improve the efficiency of learning significantly.
For example: a person can easily learn knowledge by reading a book. Their brain doesn't learn, think about, or understand the content as the combination of characters in the book, but that's what current AI does (like sequence models).
Without higher levels of abstraction, AI will soon reach a bottleneck.
I think this is a typical gross overestimation of what humans do. Next thing you're gonna try and convince me that humans have free will.
How do you know AI doesn't use abstract concepts?
“The fear of death”
and the awareness thereof I call,
as I get older and older
“The reality of our mortality”
Thank you for these conversations. They keep my brain working.
Yes! About time to do a second round. Really looking forward to this
Well said about the "Printing Press" by Yann LeCun
The great thing about great people is that when you listen to them you can sense the experience they carry.
STOKED, 3 hour podcast with Tom Arnold! I fkn loved him in True Lies!
Whut? That's not Roy Orbison...
Lex is killing it! Appreciate the work brother
One feature of a cat is that it catches things that move... even a little bit of yarn, or a laser pen dot... movement is key.
Wow, one can extract multiple dystopian novels from this conversation and turn them into best sellers!
Love you both, thanks for keeping on pushing the envelope!
Lex, thanks for putting together high quality interviews with rock stars of the nerd-verse. I appreciate these videos a lot 😬, keep it up 👍
Great talk! I wish there was a written version of this conversation.
1:00:00 Lex quoting Drake
At the gym right now and this episode got me on edge
What clear and eloquent thinking. Always a joy to listen.
Amongst these testing times in our world an oasis of knowledge easing the start to my day. Thanks Yann and of course Lex as always
Really fun one here. Great guest!
Thank you, great conversation :)
Paused at 17:45 because if I am this prolific I gotta switch to a laptop and sleep so I'll see yall in the morning. Nice sharing ideas.
I'm only comprehending this action of previous me in the current sense, however causally these things only matter in past tense to anyone including myself. If you thought of that as significant why? If not why not?
@2:02:05 I still find a large aspect being overlooked. “different operating incentives” exactly Lex
I actually implemented Barlow Twins for FTU segmentation in tissue images. By the way, object localization is extremely useful in biomedical imaging applications.
Have you ever listened to a 2.5-hour-long podcast twice, back to back?
I just did.
I might listen to this once more.
Please invite Andrej Karpathy and Sam Altman.
The conversation is ... fantastic!
This is difficult. Cats spend much time alone. I loved my Cat, Bilbo. People need good company.
Would love to see you get Geoffrey Hinton.
"Do you think RUclips has enough data to learn how to be a cat?" - Great questions as always Lex 😄
What's a better source to learn how to be a cat than RUclips!
Grateful that we can watch it for "free"
1:23:20 “For risks and side effects, read the leaflet…”
This talk is quite good, you know.
Lex, this was a phenomenal conversation! This is why I keep coming back to your podcasts. Keep up the incredible work.
14:57 Run inference on the neural network in reverse. When given a concrete output, you will see a distribution of probable inputs.
Thank you so much for the interview.
I thought this would be a direction to try within a controlled safe zone: having a robot interact outside of the video learning. With as much information and knowledge as it has already enlisted, see if it can learn the way a teenager learns driving. Sooner or later "hands-on" will be applied. Lex is always on it. 1:21:43 Perhaps the confusion comes from a safety shutoff if a person is driving, so maybe the hangup is there, such as talking or texting while driving. (Position swap)
You're really doing everyone a favor by bringing him on, so awesome to hear from such an important figure of the machine learning community
One of the best if not the best episode from Lex Fridman podcast 👍
You are as impressive as always, Lex. Wow oh wow! Thank you so much for doing what you do!
Great podcast session, I learned a lot during this conversation. Thank you, Lex Fridman and Yann LeCun!
So Yann says that AI human companions will have emotions and consciousness, but that nevertheless we will own them as our "intellectual property", and we can back them up and erase their memories at will. Good for you Lex for pushing back against the blithe moral horror of this vision.
Here's a deep question: how is this possible? A guy in a suit, asking questions of the most intellectually blessed people in the world, one after the other. I just joined Lex about a month ago. Why isn't this mainstream, as it would have been about 20 years ago? Thanks to everyone who contributes! I've learned more in the last month than I have since high school.
Dude's dad is likely close to the big dogs in tech and Washington.
"Started at the bottom, now we here" lex too good 🤣
Thank you for a great discussion. I did check out the sponsors.
I rarely post information and hope the following do not contravene protocols for this system.
It is very important to use machines to discover what is known and not known and we should continue to do so.
Yann made it clear self-supervised learning is one of many types of AI tools. He also made it clear different tools are for different purposes. It was a casual conversation with lots of personal observations which could be neither proved nor disproved. Who cares; I do not. It was like a flaw in an otherwise good paper. You do not have to agree on those points; however, a big takeaway is the use of a model and, in my opinion, what self-supervised learning is good for and what it is not good for.
As an example, he did not say it, but due to the paucity of data and the time involved, it is frustrating and expensive for domain experts to train systems to do what experts already know how to do, particularly if it only involves text. This holds when the relevant information is easily and well represented by text alone, which, as he pointed out, is often not the case.
Starting with self-supervised learning would frequently slow down the creation of useful analytical tools for end users who do not have the same expertise as the expert doing the supervision. In effect, it is machine learning's version of the knowledge acquisition gap which constrained the expert systems of the past. Sometimes the tool is worth using, sometimes it is not, and over time that can change.
The real future benefit of machine learning is to help monitor and guide (assist) the work of experts in many knowledge domains simultaneously. They can do this by learning from each expert with a much more limited form of machine learning which is beyond the scope of this post.
The car door example the mod talks about at around 29:30 is the perceived state dinging back and forth between models.
I can show you a diagram of how it works... it's not that hard to understand...
Would the question be: why would we want to replicate this in machines?
Lex is the best at what he does, good luck bro
28:30 Every object has a state and a number of possible actions or motions. We dedicate attention to things with the most possible future actions. We predict a lot of this based on motion; that's why our eyes are so responsive to motion.
A 16-year-old does not learn to drive a car in 20 hours. It takes them 16 years.
Does anyone know the name of the paper referenced at 2:12:42?
Good conversation with Yann.
My humble opinion (I am not an expert): the core of the discussion, whether AGI will ever be accessible, is around emergent complexity (not necessarily measuring it, but modeling its different layers and expressing it algorithmically to digital computers). But unfortunately we are far from understanding emergent complexity yet.
This was an amazing interview and, most of all, it reminds me of the bigger concerns and areas that exist, which loom over the rather useless scraps of so-called 'news' that have nothing to do with changing the actual global world and global community.
Thank you very much, Lex, for your inspiring and probing podcasts.
What paper is referenced at 1:03:00? Is this out?
I would've loved to hear a discussion around the interpretability of Convolutions, Self-Attention, and MLPs.
Yann's opinion about the importance of language in human intelligence is still an underestimation. Yes, babies can learn basic skills without language (assuming no parental supervision, which would be communicated in language), yet adults cannot learn advanced concepts like physics or chemistry without language, which is a very efficient knowledge transfer mechanism for humans, saving us from a large number of trials (which could be very costly in some situations) to figure out the laws behind these concepts. In some extreme scenarios, language (whether English or math) is all we have to explain the world; just look at what Einstein did to communicate the theory of relativity when, at that time, there was no means to prove some of its findings.
All learning is conducted through the matrix of prior learning.
In the earliest moments, learning is written in the broadest strokes (which becomes the system through which later learning is understood).
Can be mapped almost like the development of a tree. Trunk/branch/twig/leaf then stabilise until death. Violence in my trunk to branch phase means I’m an anxious person in adult life. Perhaps 😊
This is pure bliss!
A podcast with Demis Hassabis would be great!
Great stuff, but I would really appreciate a Rumble channel also ;)
Does anyone know what papers he is referring to at 1:03:16?
Thanks for this great conversation, it's a real gift.
We watch the outside. So what if we watched inside phenomena as well (hands, eyes, etc.), to connect different purviews?
I think we now see how Mark is going to respond to tougher questions, when he does get on.
"..that's the problem with relationships, you can't delete the other human.." - Lex
I need this on a t-shirt
Nice! I was very excited when I saw the name.
Any hope for another Aubrey de Grey episode?
18:12 causality. I can't agree completely; it would be like scientists not being allowed to do experiments, just being allowed to watch!
That's why children want to try things themselves.
When you can try it, you can give specific input to the system, to improve your model exactly where there are holes.
May I suggest what may be the difference between empathy and sympathy.
42:30 I'm a Computer Scientist and not a Biologist, but I would assume our evolved musculoskeletal structure maximizes utility in mobility and flexibility when operated as a biped. Thus, babies will naturally tend towards standing upright as soon as they explore the benefits of doing so. Asking why we evolved into bipeds would be a different question.
They can clearly reach higher by standing up and move faster by walking than by crawling. I’m pretty sure that’s obvious to the little ones. And then there’s the imitation aspect on top of that.
52:45 nice job holding that burp in haha
Really, really good, thank you.