Here are the timestamps. Please check out our sponsors to support this podcast.
0:00 - Introduction & sponsor mentions:
- Public Goods: publicgoods.com/lex and use code LEX to get $15 off
- Indeed: indeed.com/lex to get $75 credit
- ROKA: roka.com/ and use code LEX to get 20% off your first order
- NetSuite: netsuite.com/lex to get free product tour
- Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
0:36 - Self-supervised learning
10:55 - Vision vs language
16:46 - Statistics
22:33 - Three challenges of machine learning
28:22 - Chess
36:25 - Animals and intelligence
46:09 - Data augmentation
1:07:29 - Multimodal learning
1:19:18 - Consciousness
1:24:03 - Intrinsic vs learned ideas
1:28:15 - Fear of death
1:36:07 - Artificial Intelligence
1:49:56 - Facebook AI Research
2:06:34 - NeurIPS
2:22:46 - Complexity
2:31:11 - Music
2:36:06 - Advice for young people
Interesting topics
That WhatsApp Bot does a funny lil trick. The pic changes. Seen it happen in another chat space too.
@@BeckyYork but if you just suppose for a moment that you are already a copy of a previous reality... wouldn't the notion of you being a clone act as a "safe keep" for the best parts of yourself?
@@BeckyYork the mind is capable of so much more if one would allow it the freedom to do so. I do like this reality too.
I think Professor George Karniadakis might have some interesting insight regarding NN and physics applications.
That gentleman must have created for himself one of the most fantastic jobs ever: to meet brilliant minds and to LEARN every time. Bravo!
More importantly, spread all this learning to everybody else through video interviews!!
It's truly beautiful. I hope one day I'm brilliant enough to be considered, even though Lex ignores me on Twitter, haha. So I guess I'm here to bring awareness to him. I make funny jokes about buckyballs, like how Lex has to handle even my best jokes. I know this doesn't make sense to anyone else, so Nostrovia to family.
@@TimeLordRaps drop your Twitter account. I want to follow you.
So has Lex, much respect to both.
wish he'd still do AI podcasting :(
This just came up on my YouTube feed two years later. Wow, what an extraordinarily prescient discussion.
I think this was my favorite Lex podcast. No other *(super popular) podcaster has the technical proficiency to go so deep into a discussion of computer vision. This is why I'm subbed.
Check out Machine Learning Street Talk; they go deeper, and Yann was also on there.
@@kwillo4 Thanks for the suggestion!!
Many people including me are indebted to the perseverance of people like Yann LeCun. Luckily for me, I got to meet him and thank him. What an inspiring person.
How so? Does he also research medicine or something?
@@harryseaton7444 I work in CV/ML/AI.
@@SallyErfanian so his work has made a difference in your work life then? Or just his ideas being educational
HE IS THE GOAT OF ARTIFICIAL INTELLIGENCE
@@harryseaton7444 no he is a pioneer in the field of Artificial Intelligence a True Legend in the Field
The beauty of this channel. Finally, someone who can talk to so many people about so many advanced things.
I've loved everything about the Lex Fridman podcast since day one except that it _marked_ the end of the artificial intelligence podcast. However, among the many things I learned from today's episode is the fortuitous fact that the AIP lives _inside_ the LFP.
I really liked this conversation. This guy's awesome.
As a kind of related aside, the auto-generated CC are amazing for someone with such a strong French accent.
In my opinion, one of the best of your podcasts. I've watched them all, by the way, as a side note.
Blessed to finally have Yann on your podcast. One of the most deserving figures in the field of modern computing and AI.
He was on much earlier in the Lex Fridman show. This is his second time on.
Don’t forget François Chollet
And Joscha Bach on computing. These men have to be among the smartest, because, like Max Tegmark said, we have to be proactive on this subject.
It is the most important revolution in human history.
Saw his name and HAD to click the video!! I cited his work in my undergrad thesis, he is a walking legend 👏
I'm a CS undergrad and understood almost nothing of what was said
Maybe you should frequently rewind his answers, because Yann thinks and talks fast, like all great scientists.
That's how I understood everything.
I see Yann, and I like immediately. Geoff may be the grandfather of the field, but Yann still has ideas that are super-interesting going forward.
It's interesting coming back to this now. I put Yann's example of the smart phone on the table through GPT4 and of course it got the right answer
"If the smartphone was on the table and you pushed the table 5 feet to the left, the smartphone would also move 5 feet to the left, assuming it stayed on the table during the push. So, relative to where it started, the smartphone is now 5 feet to the left."
It's just interesting that people at the bleeding edge of this technology didn't realize how competent these systems could get using only text.
When Lex talked about death, how we try to ignore or hide from it, and how everything we do is centered around that... I got goosebumps.
It seems like an important concept is undervalued in ML right now: objective
Building a world model is good, but it's far better to have a world model that predicts whether or not X will happen (for some finite set of objectives X). Our objectives are what determine every action we take. All animal brains are capable of forming a *minimal* world model (not exhaustive!) that can effectively predict actions and observations that relate to a few important objectives:
- do not get hurt
- eat food & reproduce
- explore
In order to achieve these goals, brains must be capable of forming intermediate "objectives" (ideal perceptions) that can be created, reordered, remembered, reevaluated, ... Solving a prediction problem is easy with time and data, but creating the *right* prediction problem is the hard part we don't know how to do.
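A minimal sketch of what such an objective-conditioned predictor could look like: instead of predicting the full next observation, it predicts only whether each of a few objectives will hold after an action. The objective names, dimensions, and architecture below are all hypothetical, purely for illustration.

```python
import torch
import torch.nn as nn

# Toy "objective-conditioned" world model. Rather than modeling the whole
# world, it predicts only whether each of a few objectives will hold.
OBJECTIVES = ["not_hurt", "food_found", "novel_state"]  # hypothetical labels
STATE_DIM, ACTION_DIM = 8, 2                            # toy sizes

model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 32),
    nn.ReLU(),
    nn.Linear(32, len(OBJECTIVES)),  # one logit per objective
)

state = torch.randn(1, STATE_DIM)
action = torch.randn(1, ACTION_DIM)
probs = torch.sigmoid(model(torch.cat([state, action], dim=-1)))
print(dict(zip(OBJECTIVES, probs.squeeze(0).tolist())))
```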
This is worth multiple watch-throughs. For understanding learning, note what you find different on each watch to begin to learn your own instincts.
58:27
LeCun: "GPT-5000 would never learn that a phone sitting on a table will move with the table when you push it"
GPT-4: *in depth physics explanation about the conditions in which the phone would move with the table and when it would slide off*
This guy has become a massive AI Safety skeptic. Not great to hear him making confidently wrong predictions like this
It’s really something to think about.
Lex, thanks for putting together high quality interviews with rock stars of the nerd-verse. I appreciate these videos a lot 😬, keep it up 👍
Lex you gotta try and talk to Gabor Mate, I think you guys would have a very deep and quite frankly important conversation.
Can't get enough of him, hope this series (with lecun) goes to round 20!
Agreed 👍
You're really doing everyone a favor by bringing him on, so awesome to hear from such an important figure of the machine learning community
Yes! About time to do a second round. Really looking forward to this
Very interesting talk, I like when Lex and his guests put the bigger questions inside the balance when talking about current and next technology. I wonder when this was recorded though? 1:54:31
I will follow your videos for a long time. You seem to me to be a good guy, rational and aware. I wish you success, good sir.
As a data scientist who works in various areas of data science, I found this podcast amazing to hear. Loved his response at 17:50 about intelligence and statistics.
This was an amazing interview and, most of all, it reminds me of the bigger concerns and areas that exist and loom over the rather useless scraps of so-called 'news' that have nothing to do with changing the actual global world and global community.
Thank you very much, Lex, for your inspiring and probing podcasts.
Feels good to have someone so deep in the field be optimistic about the future of AI!
Lex, this was a phenomenal conversation! This is why I keep coming back to your podcasts. Keep up the incredible work.
Love the words of wisdom at the end of every podcast Lex! They really tie an elegant bow to the whole conversation. Generally just love your podcasts! Been following since you started and I am forever grateful for the amount of uploads as well as the wide variety of topics you bring up in them. Keep up the good work!
My favorite parts as well. Amazing formula
Have you ever listened to a 2.5-hour-long podcast twice, back to back?
I just did.
I might listen to this once more.
LeCun is a real genius. Good to see such people in our own time.
Is he though!? -- for all his "research" his AI ideas have actually had VERY FEW practical implementations. The mark of a good idea is one that actually provides VALUE to REAL THINGS that you can make. Yann is remarkably lacking in this department. Anyone who actually does real AI development knows how full of it Yann is... Lex is more or less the same. Teachers teach, mostly because they can't do. If Lex is such an "AI expert" name ONE THING that he's done to significantly advance AI capability besides just talk about it. These are not geniuses. Lex and Yann are the EPITOME of "midwit" -- smart people that are just BARELY smarter than an average idiot -- enough to convince the average idiot they're geniuses. But they're not. They're just barely above regular intelligence and contribute really nothing to the field besides chit chat.
Wow one can extract multiple dystopian novels from this conversation and turn them into best sellers!
Love you both, thanks for keeping pushing the envelope!
This is basically a whole semester of self-supervised ML; the knowledge is golden.
“The fear of death”
and the awareness thereof I call,
as I get older and older
“The reality of our mortality”
I think the major missing piece of AI is "abstraction": the human brain relies on highly abstracted concepts to think, express, and understand the world. Abstract concepts are the basic building blocks for achieving higher intelligence and will improve the efficiency of learning significantly.
For example: a person can easily learn knowledge by reading a book. His brain doesn't learn, think about, or understand the content through the combination of characters in the book, but that's what current AI does (like sequence models).
Without higher levels of abstraction, AI will soon hit a bottleneck.
I think this is a typical gross overestimation of what humans do. Next thing you're gonna try and convince me that humans have free will.
How do you know AI doesn't use abstract concepts?
Thank you for these conversations. They keep my brain working.
I get really scared when "Chief AI Scientists" are that bad at predicting AI capabilities.
LeCun 57:55:
You take an object, place it on a table, and then push the table. It's completely obvious to you that the object will be pushed along with the table, because it's sitting on it. I believe there is no text in the world that explicitly explains this. So, if you train a machine, as powerful as it could be - let's say your GPT-5000 or whatever - it's never going to learn about this phenomenon.
ChatGPT (GPT 4):
If you push the table gently, the object might stay in place due to friction, although it may slide or wobble slightly. If you push the table with a greater force, the object might slide or fall over, especially if the object is top-heavy or not very stable.
I asked it to compare a ball and a box in that scenario, and to describe the order of events if the acceleration of the table increased over time... it blew me away
Yet the same Lecun: "ChatGPT is 'not particularly innovative'"
Yann is the epitome of a research con artist. And it's CRAZY how many "intelligent" people like Lex can't see this. But Lex is an overrated midwit himself so I guess that really shouldn't be too surprising in his case. Don't get me wrong, as a PERSON - I like Lex. But as far as rating his intelligence? It's absolutely batsh*t to me people think he's some kinda super smart guy. He's A TEACHER. TEACHERS ARE NEVER EXPERTS >> FULL STOP. THAT'S WHY THEY TEACH INSTEAD OF ACTUALLY MAKE SOMETHING.
The great thing about great people is that when you listen to them, you can sense the experience they carry.
Great talk! I wish there was a written version of this conversation.
it's a privilege to hear LeCun talk about ML
I actually implemented barlow twins for FTU segmentation in tissue images. By the way object localization is extremely useful in bio medical imaging applications.
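For readers curious what the Barlow Twins objective mentioned above actually computes, here is a minimal sketch of the loss: the cross-correlation matrix between embeddings of two augmented views of a batch is pushed toward the identity. Shapes and the lambda weighting follow the original paper's convention, but treat this as an illustration under those assumptions, not a drop-in implementation.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """z1, z2: (N, D) embeddings of two augmented views of the same batch."""
    # Standardize each embedding dimension across the batch.
    z1 = (z1 - z1.mean(0)) / z1.std(0)
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    n = z1.shape[0]
    c = z1.T @ z2 / n                      # (D, D) cross-correlation matrix
    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()      # invariance term: diagonal toward 1
    off_diag = (c - torch.diag(diag)).pow(2).sum()  # redundancy reduction toward 0
    return on_diag + lam * off_diag

loss = barlow_twins_loss(torch.randn(64, 128), torch.randn(64, 128))
```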
Thank you for a great discussion. I did check out the sponsors.
I rarely post information and hope the following do not contravene protocols for this system.
It is very important to use machines to discover what is known and not known and we should continue to do so.
Yann made it clear self-supervised learning is one of many types of AI tools. He also made it clear different tools are for different purposes. It was a casual conversation with lots of personal observations which could be neither proved nor disproved. Who cares; I do not. It was like a flaw in an otherwise good paper. You do not have to agree on those points; however, a big takeaway is using a model, and, in my opinion, what self-supervised learning is good for and what it is not good for.
As an example, he did not say it, but due to the paucity of data and the time involved, it is frustrating and expensive for domain experts to train systems to do what experts already know how to do, particularly if it only involves text. This is relevant if the relevant information is easily and well represented by text alone, which, as he pointed out, is often not the case.
Starting with self-supervised learning would frequently slow down the development of useful analytical tools for end users who do not have the same expertise as the expert doing the supervision. In effect, it is machine learning's version of the knowledge acquisition gap which constrained the expert systems of the past. Sometimes the tool is worth using, sometimes it is not, and over time that can change.
The real future benefit of machine learning is to help monitor and guide (assist) the work of experts in many knowledge domains simultaneously. They can do this by learning from each expert with a much more limited form of machine learning which is beyond the scope of this post.
01:46:52 "I think the Chinese Room Argument is a ridiculous one..." As someone who winced at, and was underwhelmed by, LeCun's critique of nativism and innate ideas, this was music to my ears!
Out of curiosity, why is it a ridiculous argument? Unfortunately LeCun doesn’t really say why here, he just kind of handwaves it away, as others like Hassabis and Dennett have also done in the past.
Hassabis basically said “it doesn’t matter if something only appears intelligent, it’s enough for what we’re aiming to achieve” … which is fair and valid, especially to avoid getting bogged down by semantics - but it doesn’t address the underlying criticism that Searle first raised. LeCun seems to suggest here that the sum of all human experience can be reduced to a mechanistic “solution” - just not in the foreseeable future, but in a blind-faith eventuality, which itself is an unsatisfying non-answer.
@meatskunk Thanks for your comments -- and I hope it was clear that I was partly joking. Of course, I don't really believe "ridiculous" is a fair characterization of Searle's position, however much I may have misgivings about it (more on that later). After all, anyone who convinced Putnam to walk back his commitment to computation/functionalism deserves eminent respect. And if I've missed something in Searle, I'm happy to be corrected.
It seems to me the greatest liability or limitation in CRA is that it entirely inverts the relationship of processing and output to consciousness, or the personal and the subpersonal. Recall the premise: the man in the room is fed instructions, which he enacts. In short, he understands, has some conscious understanding of, the instructions -- the processing. But the problem for computational studies allegedly arises when the man in the room doesn't understand, has no conscious understanding, of whatever "content" the instructions are meant to yield, i.e. the output. So he has a personal grasp of the instructions, but no more an understanding of his output than he'd be able to consciously introspect into sub-personal processes (say, cardiovascular activity or involuntary memory).
As should be clear from the above, whatever CRA is evoking, it's the diametrical opposite of whatever is being claimed in computational, or at least computational-leaning theories of language processing (Chomsky), perception (Marr) or thought (Fodor). In all of these and like other studies, the emphasis is that our computations are inaccessible to introspection (subpersonal). In short, in direct opposition to the man in the room, we are not personally aware of the operations whereby we process external stimuli. To wit, the man has the lived experience, conscious and phenomenological, of the blow-by-blow whereby he walks through certain instructions (e.g. "I am now matching x to 2 on this look-up table"). By fitting contrast, no sentient being, in real time, has personal access, or is required to consciously plan and think out, say, the nodes in a Chomsky tree diagram, when speaking a sentence in ordinary language (!).
To briefly spell this out: nobody, when speaking "John expects to hurt himself," has to consciously think, in order to speak the sentence in real time, "ok, in enacting the operations of TGG, I need to displace 'John' from 'hurt himself' and raise it; but, in doing so, I also need to leave a trace, or a PRO, from its displaced position, and decide, to top it off, which is it: trace or PRO?" Unlike the man in the room, we're not aware of the operations we are enacting -- we just do it, all day, and every day, all the time. (See The Minimalist Program).
So, it's not clear, as Catherine Elgin once noted, what Searle's little thought experiment is meant to show. That's not to say there aren't compelling arguments against computational or functional studies of the mind or brain -- Ned Block (in my view) perceptively adapts some ideas from Nelson Goodman; von Neumann, as early as 1958, was sounding the alarm. Heck, even Noam, way back in 1957, posed powerful thought experiments as to why mental language processing, pace machine models, WASN'T probabilistic, statistical, or a posteriori (based on what a speaker-hearer had heard or been fed before).
To sum up, there are good claims to be made against pushing studies of the mind/brain too far down the rabbit hole of machine processing. It's just that, Searle ain't it.
Amongst these testing times in our world, an oasis of knowledge easing the start of my day. Thanks, Yann, and of course Lex, as always
Paused at 17:45 because if I am this prolific I gotta switch to a laptop and sleep so I'll see yall in the morning. Nice sharing ideas.
I'm only comprehending this action of my previous self in the current sense; however, causally these things only matter in the past tense to anyone, including myself. If you thought of that as significant, why? If not, why not?
What clear and eloquent thinking. Always a joy to listen.
You are as impressive as always, Lex. Wow oh wow! Thank you so much for doing what you do!
Please invite Andrej Karpathy and Sam Altman.
So Yann says that AI human companions will have emotions and consciousness, but that nevertheless we will own them as our "intellectual property", and we can back them up and erase their memories at will. Good for you Lex for pushing back against the blithe moral horror of this vision.
STOKED, 3 hour podcast with Tom Arnold! I fkn loved him in True Lies!
Whut? That's not Roy Orbison...
Great podcast session; I learned a lot during this conversation. Thank you, Lex Fridman and Yann LeCun!
Well said about the "Printing Press" by Yann LeCun
Lex is killing it! Appreciate the work brother
I would've loved to hear a discussion around the interpretability of convolutions, self-attention, and MLPs.
One feature of a cat is that it catches things that move... even a little bit of yarn, or a laser pen dot... movement is key.
At the gym right now and this episode got me on edge
I love what Lex does. 🙏
I read this yesterday and it opened my eyes: *”You don’t get what you want in life, you get who you are!”*
Really think about it 😉
Here's a deep question: how is this possible? A guy in a suit, asking questions of the most intellectually blessed people in the world, one after the other. I just joined Lex about a month ago. Why isn't this mainstream, as it would have been about 20 years ago? Thanks to everyone who contributes! I've learned more in the last month than I have since high school.
Dude's dad is likely close to the big dogs in tech and Washington.
Grateful that we can watch it for "free"
1:00:00 Lex quoting Drake
All learning is conducted through the matrix of prior learning.
In the earliest moments, learning is written in the broadest strokes (which becomes the system through which later learning is understood).
Can be mapped almost like the development of a tree. Trunk/branch/twig/leaf, then stabilise until death. Violence in my trunk-to-branch phase means I’m an anxious person in adult life. Perhaps 😊
"Do you think RUclips has enough data to learn how to be a cat?" - Great questions as always Lex 😄
What's a better source to learn how to be a cat than YouTube!
Would love to see you get Geoffrey Hinton.
One of the best if not the best episode from Lex Fridman podcast 👍
Great stuff, but I would really appreciate a Rumble channel also ;)
Nice! I was very excited when I saw the name.
Any hope for another Aubrey de Grey episode?
Damn. I was always saying that intelligence/IQ is just a collection of experiences/statistics; no one ever really wanted to agree. Good to know I'm not alone.
Yeah, maybe some subset of intelligence will be trivialized.
I never understood why so many people feel the need to invoke mysticism to explain consciousness. People can't even tell what AI systems they designed themselves do once they are trained. Why would brains be different?
It's just a very complex "complex system" that has evolved through a _very_ long series of random events... _Of course_ it's hard to make sense of when you look at it.
There was a paper published last year that mathematically proved infinite-width neural networks are equivalent to kernel machines, which isn't much more than a collection of statistics, a lookup table based on all possible features. The primate brain also works the same way, correlating a lot of shapes/colors/sounds/other things we can test 1:1 with individual neurons; but when you ramp up the complexity, it eventually breaks down and (presumably) continues in a more heuristic/approximation mode. The question of where the brain exits its purely statistical work is known/debated as the "grandmother neuron" (due to the now-disproven belief that there should be a single neuron whose sole purpose is to identify your grandmother, based on the image you're seeing, and nothing else), sometimes also called the "Jennifer Aniston neuron" (for similar but funny reasons).
@@FourOneNineOneFourOne nice reply. Thanks.
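For anyone unfamiliar with the term in the comment above: a "kernel machine" makes predictions as weighted similarities to stored training points. A minimal kernel ridge regression sketch follows; the RBF kernel and toy data here are just familiar stand-ins (the paper's actual result concerns the neural tangent kernel), so read this as an illustration of the model family, not of that proof.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian similarity between every pair of rows in X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(50, 2))
y_train = np.sin(X_train[:, 0]) + X_train[:, 1] ** 2

# Fit: solve (K + reg*I) alpha = y. Predictions are then weighted
# combinations of similarities to the stored training set;
# in that sense, a "lookup table based on features".
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y_train)

X_test = rng.normal(size=(5, 2))
y_pred = rbf_kernel(X_test, X_train) @ alpha
```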
I have always called it probabilistic thinking. Every decision usually has a range of outcomes.
suggested guest: Eliezer Yudkowsky. I think you both would have an incredibly enjoyable and exciting conversation!
This talk is quite good, you know.
@2:02:05 I still find a large aspect being overlooked. "Different operating incentives": exactly, Lex.
Thank you, great conversation :)
Thank you so much for the interview.
Really fun one here. Great guest!
Lex is the best at what he does, good luck bro
This is difficult. Cats spend much time alone. I loved my Cat, Bilbo. People need good company.
A podcast with Demis Hassabis would be great!
My humble opinion (I am not an expert): the core of the discussion about whether AGI will ever be accessible is around emergent complexity (not necessarily measuring it, but modeling its different layers and expressing it algorithmically for digital computers). But unfortunately we are far from understanding emergent complexity yet.
Heh... I also went through an expressive-music-instrument phase of fighting against MIDI, doing OSC, ChucK/Csound; and hobby helicopters. The former sent me through an education on iOS music instruments and embedded hardware, in which I learned more than I did in school in some areas.
The conversation is ... fantastic!
Would the question be: why would we want to replicate it in machines?
Fascinating watching this after he released H-JEPA; the whole time you can feel him dancing around energy-based models, but he obviously didn't want to explicitly leak the core principles of his unreleased work.
I think we now see how Mark is going to respond to tougher questions, when he does get on.
Humans are physically optimized for upright walking; that's a big factor in children picking up that mode of movement. Same as using the hands for grabbing instead of the toes.
Yann's opinion about the importance of language in human intelligence is still an underestimation. Yes, babies can learn basic skills without language (assuming no parental supervision, which would be communicated in language); yet adults cannot learn advanced concepts like physics or chemistry without language, which is a very efficient knowledge-transfer mechanism for humans, saving us from a large number of trials (which could be very costly in some situations) to figure out the laws behind these concepts. In some extreme scenarios, language (whether English or math) is all we have to explain the world; just look at what Einstein did to communicate the theory of relativity when, at the time, there was no means to prove some of its findings.
You looked sleepy, Lex! Get some sleep, man! ;) Nice talk! Really enjoyed it. Thanks!
1:23:20 “For risks and side effects, read the leaflet…”
That said, it looks like the bottleneck of self-supervised learning is its dependency on a supervised formulation of training signals! For example, filling in the gaps is only one supervised way to formulate labels; there are other ways the human designer/researcher needs to explore and test, like finding synonyms, or solving a jigsaw puzzle in images, etc. Thus, it is still somehow dependent on careful supervision.
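As a concrete illustration of "filling the gaps" as one way to manufacture labels from unlabeled data, here is a toy masked-prediction setup. The token ids, mask ratio, and MASK_ID are made up; the -100 ignore-index follows a common convention. Note that the data provides the labels, but a human still chose this particular formulation of the task, which is exactly the comment's point.

```python
import numpy as np

rng = np.random.default_rng(0)
tokens = np.array([5, 12, 7, 3, 9, 1, 4, 8])  # toy token ids (unlabeled data)
MASK_ID = 0                                   # hypothetical [MASK] token id

# Pretext task: hide 25% of positions and ask the model to restore them.
# Labels come from the data itself; no human annotation is needed.
mask = rng.random(tokens.shape) < 0.25
inputs = np.where(mask, MASK_ID, tokens)
labels = np.where(mask, tokens, -100)         # -100 = "ignore this position"
```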
Good conversation with Yann.
The next podcast should be about Dark Energy of Deep Reinforcement Learning
I think the main problem is that we think intelligence is the result of a single type of learning method, while it might be a combination of many of them. One might work for language, while another works for images. For example, language has meaning embedded in it, while an image doesn't. An image is a representation of physicality... of interaction...
Sorry for repeating, Lex; I'm learning how to use the reply feature.
The knowledge we have is not only what our neurons are able to process and train "background models" on; we also have knowledge we are born with, in our DNA.
A living legend, Yann LeCun. I haven't heard much of his opinion about GPT-4; it would be interesting to know it. Some, like Lex, say we're facing an inflection point in human history; there are some recent papers talking about inherent limits of classical computers, no matter what algorithm they run. Many opinions; the truth is out there, but even though AGI has not been achieved, these transformer-based systems could be very good at emulating human abilities.
"Started at the bottom, now we here" lex too good 🤣
The thing I don't get is that Lex keeps pushing his own beliefs and ideas. Let Yann (and other guests) speak!
The only reason I am commenting is because I know you did a podcast with Ryan Gordon and John Danaher and the people need to see that shit!!!!!!!!!!!!
Sir, when will you have Ido Portal on your podcast? Thank you
Thanks for this great conversation, it's a real gift.
14:57 Run inference on the neural network in reverse. When given a concrete output, you will see a distribution of probable inputs.
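One common way to approximate "running a network in reverse" is gradient ascent on the input: freeze the weights and optimize the input to produce a chosen output. A minimal sketch, assuming an untrained toy classifier as a stand-in for a real network; repeating it from different random starts gives a rough sense of the distribution of inputs mapping to that output.

```python
import torch
import torch.nn as nn

# Untrained toy classifier, standing in for a real trained network.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
for p in net.parameters():
    p.requires_grad_(False)  # freeze weights; only the input is optimized

x = torch.randn(1, 10, requires_grad=True)   # start from random noise
opt = torch.optim.Adam([x], lr=0.1)
TARGET_CLASS = 2                             # the "concrete output" we pick

for _ in range(200):
    opt.zero_grad()
    loss = -net(x)[0, TARGET_CLASS]          # ascend the target logit
    loss.backward()
    opt.step()
# x is now an input the network maps strongly to TARGET_CLASS.
```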
"..that's the problem with relationships, you can't delete the other human.." - Lex
I need this on a t-shirt