Yann LeCun: Dark Matter of Intelligence and Self-Supervised Learning | Lex Fridman Podcast
- Published: Aug 1, 2024
- Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the seminal researchers in the history of machine learning. Please support this podcast by checking out our sponsors:
- Public Goods: publicgoods.com/lex and use code LEX to get $15 off
- Indeed: indeed.com/lex to get $75 credit
- ROKA: roka.com/ and use code LEX to get 20% off your first order
- NetSuite: netsuite.com/lex to get free product tour
- Magic Spoon: magicspoon.com/lex and use code LEX to get $5 off
EPISODE LINKS:
Yann's Twitter: / ylecun
Yann's Facebook: / yann.lecun
Yann's Website: yann.lecun.com/
Books and resources mentioned:
Self-supervised learning (article): bit.ly/3Aau1DQ
PODCAST INFO:
Podcast website: lexfridman.com/podcast
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
OUTLINE:
0:00 - Introduction
0:36 - Self-supervised learning
10:55 - Vision vs language
16:46 - Statistics
22:33 - Three challenges of machine learning
28:22 - Chess
36:25 - Animals and intelligence
46:09 - Data augmentation
1:07:29 - Multimodal learning
1:19:18 - Consciousness
1:24:03 - Intrinsic vs learned ideas
1:28:15 - Fear of death
1:36:07 - Artificial Intelligence
1:49:56 - Facebook AI Research
2:06:34 - NeurIPS
2:22:46 - Complexity
2:31:11 - Music
2:36:06 - Advice for young people
SOCIAL:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Reddit: / lexfridman
- Support on Patreon: / lexfridman
Interesting topics
That WhatsApp Bot does a funny lil trick. The pic changes. Seen it happen in another chat space too.
@@BeckyYork but if you just suppose for a moment that u are already a copy of a previous reality ... wouldn't the notion of you being a clone act as a "safe keep" for the best parts of yourself?
@@BeckyYork the mind is capable of so much more if one would allow it the freedom to do so. I do like this reality too.
I think Professor George Karniadakis might have some interesting insight regarding NN and physics applications.
That gentleman must have created for himself one of the most fantastic job ever : to meet brilliant minds and to LEARN every time . Bravo !
More importantly, spread all this learning to everybody else through video interviews!!
It's truly beautiful I hope one day I'm brilliant enough to be considered, even though Lex ignores me on Twitter haha, so I guess I'm here to bring awareness to him, I make funny jokes about bucky balls like how lex has to handle even my best jokes. I know this doesn't make sense to anyone else, so Nostrovia to family.
@@TimeLordRaps drop Twitter account. I want to follow you.
So has Lex, much respect to both.
wish he'd still do AI podcasting :(
This just came up on my YouTube feed two years later. Wow, what an extraordinarily prescient discussion.
I think this was my favorite Lex podcast. No other *(super popular) podcaster has the technical proficiency to go so deep into a discussion of computer vision. This is why I'm subbed.
Check out machine learning street talk, they go deeper and yann was also on there
@@kwillo4 Thanks for the suggestion!!
The beauty of this channel. Finally, someone who can talk to so many people about so many advanced things.
Thanks for keeping it real Lex. Can't thank you enough. You and the guests you choose have been opening my mind in the most magnificent ways.
Many people including me are indebted to the perseverance of people like Yann LeCun. Luckily for me, I got to meet him and thank him. What an inspiring person.
How so? Does he also research medicine or something?
@@harryseaton7444 I work in CV/ML/AI.
@@SallyErfanian so his work has made a difference in your work life then? Or just his ideas being educational
HE IS THE GOAT OF ARTIFICIAL INTELLIGENCE
@@harryseaton7444 no he is a pioneer in the field of Artificial Intelligence a True Legend in the Field
In my opinion one of the best of your podcasts. I watched them all by the way on a sidenote.
When Lex talked about death and how we try to ignore or hide from it. And everything we do centered around that... I got goose bumps.
It's interesting coming back to this now. I put Yann's example of the smartphone on the table through GPT-4, and of course it got the right answer:
"If the smartphone was on the table and you pushed the table 5 feet to the left, the smartphone would also move 5 feet to the left, assuming it stayed on the table during the push. So, relative to where it started, the smartphone is now 5 feet to the left."
It's just interesting that people at the bleeding edge of this technology didn't realize how competent these systems could get using only text.
This is worth multiple watch-throughs. To understand learning, note what you find different on each watch and begin to learn your own instincts.
I really liked this conversation. This guy's awesome.
As a kind of related aside, the auto-generated CC are amazing for someone with such a strong French accent.
Great podcast session, I learned a lot during this conversation. Thank you, Lex Fridman and Yann LeCun!
Thank you for these conversation. It keeps my brain working.
Yes! About time to do a second round. Really looking forward to this
Lex is killing it! Appreciate the work brother
Very interesting talk, I like when Lex and his guests put the bigger questions inside the balance when talking about current and next technology. I wonder when this was recorded though? 1:54:31
Love the words of wisdom at the end of every podcast Lex! They really tie an elegant bow to the whole conversation. Generally just love your podcasts! Been following since you started and I am forever grateful for the amount of uploads as well as the wide variety of topics you bring up in them. Keep up the good work!
My favorite parts as well. Amazing formula
Thank you so much for the interview.
Lex, thanks for putting together high quality interviews with rock stars of the nerd-verse. I appreciate these videos a lot 😬, keep it up 👍
I see Yann, and I like immediately. Geoff may be the grandfather of the field, but Yann still has ideas that are super-interesting going forward.
You are as impressive as always, Lex. Wow oh wow! Thank you so much for doing what you do!
Great conversation, thank you so much!
I've been waiting for this for so long! Thank you ♥
Great talk! I wish there was a written version of this conversation.
Blessed to have Yann to be in your podcast finally. Most deserving figure in the field of modern computer and AI.
He was on much earlier in the Lex Fridman Show. This is his second time on.
Don’t forget François Chollet
And Joscha Bach on computing. These men have to be among the smartest, because, like Max Tegmark said, we have to be proactive on this subject.
It is the most important revolution in human history.
How clear and eloquent thinking. Always a joy to listen.
I will follow your videos for a long time. You seem to me, to be a good guy, rational and aware. I wish you success good sir
Wow one can extract multiple dystopian novels from this conversation and turn them into best sellers!
Love you both, thanks for continuing to push the envelope!
I've loved everything about the Lex Fridman podcast since day one except that it _marked_ the end of the artificial intelligence podcast. However, among the many things I learned from today's episode is the fortuitous fact that the AIP lives _inside_ the LFP.
Amongst these testing times in our world an oasis of knowledge easing the start to my day. Thanks Yann and of course Lex as always
Feels good to have someone so deep in the field to be optimistic about the future of ai!
Saw his name and HAD to click the video!! I cited his work in my undergrad thesis, he is a walking legend 👏
im a cs undergrad and understood almost nothing of what was said
Maybe you should rewind his answers frequently; Yann thinks and talks fast, like all great scientists. That's how I understood everything.
As a data scientist, who works on various areas in data science, this podcast was amazing to hear. Loved his response at 17:50 about intelligence and statistics.
Great thing about great people is when you listen to them you can sense experience they carry
Thanks so much for these conversations Lex.
Thanks for this great conversation, it's a real gift.
At the gym right now and this episode got me on edge
Good conversation with Yann.
Lex, this was a phenomenal conversation! This is why I keep coming back to your podcasts. Keep up the incredible work.
You're really doing everyone a favor by bringing him on, so awesome to hear from such an important figure of the machine learning community
The conversation is ... fantastic!
I would've loved to hear a discussion around the interpretability of convolutions, self-attention, and MLPs.
I love what Lex does. 🙏
I read this yesterday and it opened my eyes: *”You don’t get what you want in life, you get who you are!”*
Really think about it 😉
I actually implemented Barlow Twins for FTU segmentation in tissue images. By the way, object localization is extremely useful in biomedical imaging applications.
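For anyone curious what the comment above refers to: the heart of Barlow Twins is a redundancy-reduction loss on the cross-correlation matrix of two augmented views of the same batch. A minimal NumPy sketch of that loss (the backbone network, the augmentations, and the hyperparameter value here are placeholders, not the paper's exact setup):

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lam=5e-3):
    """Barlow Twins-style redundancy-reduction loss.

    z_a, z_b: (N, D) embeddings of two augmented views of the same N samples.
    lam: weight on the off-diagonal (decorrelation) term -- an illustrative value.
    """
    n = z_a.shape[0]
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(axis=0)) / (z_a.std(axis=0) + 1e-8)
    z_b = (z_b - z_b.mean(axis=0)) / (z_b.std(axis=0) + 1e-8)
    # Cross-correlation matrix between the two views.
    c = (z_a.T @ z_b) / n
    diag = np.diag(c)
    on_diag = ((diag - 1.0) ** 2).sum()        # pull diagonal toward 1 (invariance)
    off_diag = (c ** 2).sum() - (diag ** 2).sum()  # push off-diagonal toward 0
    return on_diag + lam * off_diag
```

Two identical views give near-perfect correlation on the diagonal, so the loss is far lower than for two unrelated batches, which is exactly the invariance signal the method trains on.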
One of the best if not the best episode from Lex Fridman podcast 👍
Can't get enough of him, hope this series (with lecun) goes to round 20!
Agreed 👍
Thank you for a great discussion. I did checkout the sponsors.
I rarely post information and hope the following does not contravene protocols for this system.
It is very important to use machines to discover what is known and not known and we should continue to do so.
Yann made it clear that self-supervised learning is one of many types of AI tools, and that different tools are for different purposes. It was a casual conversation with lots of personal observations which could be neither proved nor disproved. Who cares; I do not. It was like a flaw in an otherwise good paper. You do not have to agree on those points; however, a big takeaway is how to use a model and, in my opinion, what self-supervised learning is good for and what it is not.
As an example, he did not say it, but due to the paucity of data and the time involved, it is frustrating and expensive for domain experts to train systems to do what experts already know how to do, particularly if it only involves text. This is relevant if the relevant information is easily and well represented by text alone, which, as he pointed out, is often not the case.
Starting with self-supervised learning would frequently slow down the development of useful analytical tools for end users who do not have the same expertise as the expert doing the supervision. In effect it is machine learning's version of the knowledge-acquisition gap that constrained the expert systems of the past. Sometimes the tool is worth using, sometimes it is not, and over time that can change.
The real future benefit of machine learning is to help monitor and guide (assist) the work of experts in many knowledge domains simultaneously. They can do this by learning from each expert with a much more limited form of machine learning which is beyond the scope of this post.
Have you ever listened to a 2.5-hour podcast twice, back to back?
I just did.
I might listen to it once more.
Thank you, great conversation :)
LeCun is a real genius. Good to see him in our own time.
Well said about the "Printing Press" by Yann LeCun
This was an amazing interview, and most of all, it reminds me of the bigger concerns and areas that exist, which loom over the rather useless scraps of so-called "news" that have nothing to do with changing the actual global world and global community.
Thank you very much, Lex, for your inspiring and probing podcasts.
Love the content Lex!
Very much enjoyed, thank you.
Love you Lex, you're awesome, buddy.
“The fear of death”
and the awareness thereof I call,
as I get older and older
“The reality of our mortality”
Lex is the best at what he does, good luck bro
Can't wait to hear you on a Dan Carlin podcast! That is true success.
This is pure bliss!
it's a privilege to hear LeCun talk about ML
Thank you!
Really great!! thanks!
It seems like an important concept is undervalued in ML right now: objective
Building a world model is good, but it's far better to have a world model that predicts whether or not X will happen (for some finite set of objectives X). Our objectives are what determine every action we take. All animal brains are capable of forming a *minimal* world model (not exhaustive!) that can effectively predict actions and observations that relate to a few important objectives:
- do not get hurt
- eat food & reproduce
- explore
In order to achieve these goals, brains must be capable of forming intermediate "objectives" (ideal perceptions) that can be created, reordered, remembered, reevaluated, ... Solving a prediction problem is easy with time and data, but creating the *right* prediction problem is the hard part we don't know how to do.
thx for upload :)
58:27
LeCun: "GPT-5000 would never learn that a phone sitting on a table will move with the table when you push it"
GPT-4: *in depth physics explanation about the conditions in which the phone would move with the table and when it would slide off*
This guy has become a massive AI Safety skeptic. Not great to hear him making confidently wrong predictions like this
It’s really something to think about.
This is basically a whole semester of self-supervised ML; the knowledge is golden.
Lex you gotta try and talk to Gabor Mate, I think you guys would have a very deep and quite frankly important conversation.
Very interesting talk, thanks
This talk is quite good, you know.
Nice! I was very excited when I saw the name.
Any hope for another Aubrey de Grey episode?
Grateful that we can watch it for "free"
thank you
Paused at 17:45 because if I am this prolific I gotta switch to a laptop and sleep so I'll see yall in the morning. Nice sharing ideas.
I'm only comprehending this action of previous me in the current sense, however causally these things only matter in past tense to anyone including myself. If you thought of that as significant why? If not why not?
Really, really good, thank you.
heh... I also went through an expressive-music-instrument phase of fighting against MIDI, doing OSC, ChucK/Csound; and hobby helicopters. The former sent me through an education on iOS music instruments and embedded hardware, in which I learned more than I did in school in some areas.
Thanks!
All learning is conducted through the matrix of prior learning.
In the earliest moments, learning is written in the broadest strokes (which becomes the system through which later learning is understood).
@2:02:05 I still find a large aspect being overlooked. “different operating incentives” exactly Lex
Thanks lex!
one feature of a cat, is that it catches things that move... even a little bit of yarn, or a laser pen dot... movement is key.
Yann, enfin !
WOW! I was just listening to Tom Brands interview
would love to see you get geoffrey hinton
STOKED, 3 hour podcast with Tom Arnold! I fkn loved him in True Lies!
Whut? That's not Roy Orbison...
You looked sleepy Lex! Get some sleep man! ;) Nice talk! Really enjoyed. Thanks!
"Do you think YouTube has enough data to learn how to be a cat?" - Great questions as always Lex 😄
What's a better source to learn how to be a cat than YouTube!
Sir , when will you have Ido portal on your podcast? Thank you
Wow, What a pleasant surprise
When you want to know the outcome of various association processes... it is the perceived benefit to them from the machine or device.
That varies by programming.
"Started at the bottom, now we here" lex too good 🤣
Hello from Wisconsin!
I like this guy
28:30 Every object has a state and a number of possible actions or motions. We dedicate attention to things with the most possible future actions. We predict a lot of this based on motion; that's why our eyes are so responsive to motion.
14:57 Run inference on the neural network in reverse. When given a concrete output, you will see a distribution of probable inputs.
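The inversion idea in the comment above can be made concrete with a toy example. This is a sketch under heavy assumptions: the "network" is just a made-up linear layer, and the inversion is plain gradient descent on the output error. Because the map is many-to-one, different starting points recover different inputs that all produce the same output, which is the "distribution of probable inputs" the comment mentions:

```python
import numpy as np

# Toy "network": a single linear layer mapping 5-dim inputs to 3-dim outputs.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))

def forward(x):
    return W @ x

def invert(y_target, x0, steps=2000, lr=0.05):
    """Gradient-descend an input until the network's output matches y_target."""
    x = x0.copy()
    for _ in range(steps):
        residual = forward(x) - y_target   # error at the output
        x -= lr * (W.T @ residual)         # backprop through the linear map
    return x

y = np.array([1.0, -2.0, 0.5])
# Two different random starting points converge to two *different* inputs
# that both reproduce the target output.
x1 = invert(y, rng.normal(size=5))
x2 = invert(y, rng.normal(size=5))
```

Real networks are nonlinear, so inverting them this way finds local solutions rather than the full input distribution, but the gradient trick is the same.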
52:45 nice job holding that burp in haha
anyone have a link to Karpathy's car door talk @ MIT? Also, would be very cool if Lex moderated a panel discussion on AGI: LeCun, Y. Bengio, Hassabis, Hinton, Koch, Marcus, Chalmers ...
very good
I think we now see how Mark is going to respond to tougher questions, when he does get on.
A living legend, Yann LeCun. I haven't heard much of his opinion about GPT-4; it would be interesting to know it. Some, like Lex, say we're facing an inflection point in human history, and there are some recent papers on the inherent limits of classical computers, no matter what algorithm they run. Many opinions; the truth is out there. But even though AGI has not been achieved, these transformer-based systems could be very good at emulating human abilities.