Yann LeCun: Deep Learning, ConvNets, and Self-Supervised Learning | Lex Fridman Podcast #36

  • Published: 17 May 2024
  • Science

Comments • 208

  • @lexfridman
    @lexfridman  4 years ago +144

    I really enjoyed this conversation with Yann. Here's the outline:
    0:00 - Introduction
    1:11 - HAL 9000 and 2001: A Space Odyssey
    7:49 - The surprising thing about deep learning
    10:40 - What is learning?
    18:04 - Knowledge representation
    20:55 - Causal inference
    24:43 - Neural networks and AI in the 1990s
    34:03 - AGI and reducing ideas to practice
    44:48 - Unsupervised learning
    51:34 - Active learning
    56:34 - Learning from very few examples
    1:00:26 - Elon Musk: deep learning and autonomous driving
    1:03:00 - Next milestone for human-level intelligence
    1:08:53 - Her
    1:14:26 - Question for an AGI system

    • @shinochono
      @shinochono 4 years ago

      Thanks for sharing these amazing conversations. I so envy your job; it gives you the opportunity to meet some of the top brains in the industry. :)

    • @aviraljanveja5155
      @aviraljanveja5155 4 years ago +3

      Such outlines for videos, especially the long ones, are super necessary!

    • @gausspro8937
      @gausspro8937 4 years ago

      @lexfridman did you record your discussion with Judea? I just finished his book, The Book of Why, and would be interested in hearing you two discuss these topics!

    • @francomarchesoni9004
      @francomarchesoni9004 4 years ago

      Keep them coming!

    • @PhilosopherRex
      @PhilosopherRex 4 years ago

      Appearance of the flow (arrow) of time == due to entropy.

  • @beta5770
    @beta5770 4 years ago +201

    Next, Geoff Hinton!

    • @BiancaAguglia
      @BiancaAguglia 4 years ago +17

      I second that. 😊 Geoff is several orders of magnitude more knowledgeable than I am, yet he still manages to make me feel I can follow most of his thought processes. He's a witty teacher and a great storyteller.

    • @lexfridman
      @lexfridman  4 years ago +84

      We'll make it happen for sure!

    • @zhongzhongclock
      @zhongzhongclock 4 years ago +14

      @@lexfridman Let him stand during the interview; his back is not good, and he can't comfortably keep sitting for a long time.

  • @sibyjoseplathottam4828
    @sibyjoseplathottam4828 4 years ago +8

    This is one of your best interviews yet. We need more people like LeCun in all fields.

  • @MrTransits
    @MrTransits 4 years ago

    These podcasts are priceless!!! A Little Sumpin' Sumpin' Ale, sit back, and listen.

  • @LukasValatka
    @LukasValatka 4 years ago +47

    Wonderful interview, Lex! I love the fact that it's one of the few podcasts that balances both general public and expert level information - to me as a deep learning engineer, this series is not only inspiring but also surprisingly useful :).

    • @BiancaAguglia
      @BiancaAguglia 4 years ago +7

      I feel the same way. For example, the interview with Pamela McCorduck was interesting to me precisely because she's not an AI expert. I ended up loving it not just because of the stories she told (she did, after all, witness history in the making) but because it showed her remarkably good understanding of the AI field and her ability to talk about complex topics in easy to understand terms.
      Of course, just because she's not an expert in AI techniques, doesn't mean she hasn't spent quite some time trying to understand them at a high level and to understand their history and potential. That knowledge, plus her storytelling skills, made her fascinating to listen to.

  • @connorshorten6311
    @connorshorten6311 4 years ago +126

    I'd love to see Yann LeCun on the Joe Rogan podcast as well! I am continually impressed with the high caliber of guests you have on the podcast, great work Lex!

    • @connorshorten6311
      @connorshorten6311 4 years ago

      @@DeepGamingAI Definitely! It's a really interesting medium; it doesn't require too much preparation on the end of the interviewee and only takes an hour of time!

    • @colorfulcodes
      @colorfulcodes 4 years ago +2

      They respect him and he knows how to ask the right questions. Also by this point he has the portfolio so it's much easier.

    • @muhammadharisbinnaeem1026
      @muhammadharisbinnaeem1026 3 years ago

      I agree with your thought there, @@colorfulcodes. (Y)

    • @muhammadharisbinnaeem1026
      @muhammadharisbinnaeem1026 3 years ago

      Yann can definitely take on Rogan, as he doesn't hold back. =D

    • @Crazylalalalala
      @Crazylalalalala 3 years ago +6

      Rogan is not smart enough to ask interesting questions.

  • @Kartik_C
    @Kartik_C 4 years ago +2

    These podcasts are a goldmine! Thanks Lex!

  • @RAOUFTV16
    @RAOUFTV16 4 years ago +6

    The most beautiful podcast YouTube channel ever!
    Congrats from Algeria

  • @motellai8211
    @motellai8211 4 years ago +3

    What a great interview!!! I can never thank you enough, Lex!!

  • @aviraljanveja5155
    @aviraljanveja5155 4 years ago +6

    Brilliant podcast! The reasoning and arguments at play were beautiful, along with a lack of ego and a lot of honesty in admitting mistakes.
    For example, Lex at this moment - 50:00

  • @Qual_
    @Qual_ 4 years ago +12

    Subscriber from France here; I love all your podcasts! And congratulations on having a guest like Yann LeCun.

  • @BiancaAguglia
    @BiancaAguglia 4 years ago +76

    12:26 "Machine learning is the science of sloppiness." 😁 It's the first time I've heard it described that way (and it makes perfect sense.)

  • @hanselpedia
    @hanselpedia 4 years ago

    Great stuff again, thanks Lex!
    I really enjoy the interactions when the intuitions start to diverge...

  • @seanrimada8571
    @seanrimada8571 4 years ago

    30:00 Lex in his undergrad years. We all enjoyed this talk; thanks for the podcast, Lex.

  • @stevenjensjorgensen
    @stevenjensjorgensen 4 years ago +6

    A really good source of future research directions for AI! These are the kind of conversations you can only have at conferences, so thank you, Lex, for bringing them to us.

  • @ciaran7780
    @ciaran7780 4 years ago +7

    There are no stupid questions!!! ... I enjoy your podcasts/interviews, thanks so much.

    • @WheredoDoISATS
      @WheredoDoISATS 25 days ago

      I wish he would wear a colored tie today as well. Ty for your work.

  • @user-mw2gf5zh4g
    @user-mw2gf5zh4g 4 years ago +21

    Thank you so much for all your interviews, Lex! You have the coolest podcast about AI. Keep treating us to awesome guests!

  • @balareddy8625
    @balareddy8625 4 years ago

    Hi Lex. That's a great podcast on today's most cutting-edge, emerging tech with Yann LeCun (a father of neural nets). His comments, suggestions, and data-science journey are most sensible and valuable. Thanks to you, and hats off to Yann LeCun.

  • @jamesanderson6882
    @jamesanderson6882 4 years ago +9

    Great interview, Lex and Yann! I would love to be a fly on the wall at a dinner with LeCun, Hinton, and Bengio (no idea if they would talk about CS). Lex, try to get Peter Norvig sometime. I still don't understand why LISP is not more popular.

  • @DamianReloaded
    @DamianReloaded 4 years ago +3

    Mr. LeCun is a great communicator. I enjoyed every second of this interview. Looking forward to seeing where his research will lead us all.

    • @aw6507
      @aw6507 4 years ago

      Dr. LeCun....

  • @joseortiz_io
    @joseortiz_io 4 years ago

    My favorite part was the section on unsupervised learning. It was fascinating ❤

  • @JKKross
    @JKKross 4 years ago +20

    This was very thought-provoking - loved it!
    The ending was magical: "I'd ask her what makes the wind blow. If she says that it's the leaves, she's onto something..."

    • @zrmsraggot
      @zrmsraggot 2 years ago

      Lex didn't catch that

    • @anthonybiel7096
      @anthonybiel7096 2 years ago +1

      "I will ask her what is the cause of the wind..." It refers back to 23:30.

  • @littech4637
    @littech4637 4 years ago

    Wow, great interview Lex.

  • @deeliciousplum
    @deeliciousplum 4 years ago +1

    𝐃𝐢𝐢 - 𝐝𝐚𝐦𝐧 𝐢𝐦𝐩𝐫𝐞𝐬𝐬𝐢𝐯𝐞 𝐢𝐧𝐭𝐞𝐥𝐥𝐢𝐠𝐞𝐧𝐜𝐞! Priceless. 🧠
    Thank you for sharing this interview with an exceptionally stimulating gentleman. You both place so much on the table, which compels the listener to explore. You may have read this often, yet I must share that your videos cut out all the tiring, time-consuming, extraneous sound bites. They are brimming with knowledge, concerns, and ideas. Thank you, Lex, for all that you do. 🍂

  • @atomscott425
    @atomscott425 4 years ago

    I love this! It's interesting how much Yann LeCun's and Jeremy Howard's views differ on active learning and transfer learning. Would love to see them discuss their views with each other.

  • @bowenlee3597
    @bowenlee3597 4 years ago

    I did not leave comments on YouTube at all, at least previously. But seriously, I think this interview is so good, insightful, and "deep" that I must congratulate Lex Fridman for this great interview with Yann LeCun.

  • @tusharagarwal2994
    @tusharagarwal2994 1 year ago

    Lex, you are quite a great inspiration to me, thanks for the talk!

  • @MrDidymus88
    @MrDidymus88 2 months ago

    Thank you both 😊

  • @shashanks.k855
    @shashanks.k855 2 years ago

    This was amazing! As always, thank you so much.

  • @danteinferno9187
    @danteinferno9187 4 years ago

    Yann is a genius. Thanks, Lex!

  • @mededovicemil2218
    @mededovicemil2218 3 years ago

    great interview!!!!

  • @jeff_holmes
    @jeff_holmes 4 years ago

    Another great interview. It's great to hear that Yann is tackling the issue of modeling the world in baby steps. It seems to be the most logical way of achieving "common sense" intelligence - you have to know a little bit about many models. I'd love to hear more conversation about how you might go about making better connections between models - a key part of reasoning. You can have a model of physics and you can have a model of car mechanics and self preservation - how do you make the connections between them to anticipate and predict the consequences of driving off the proverbial cliff? The answer must lie in some kind of abstraction layer that would be used to identify how models are related.

  • @JousefM
    @JousefM 4 years ago +5

    Thanks Lex! I really like Yann and his accent :D

  • @sandtiwa
    @sandtiwa 4 years ago +1

    Next Richard Sutton!! Thanks for all your work Lex.

  • @ayushthada9544
    @ayushthada9544 4 years ago +5

    When will we see Dr. Daphne Koller on the podcast?

  • @alexandresantiago5910
    @alexandresantiago5910 4 years ago

    Nice! Yann LeCun breaking some concepts in this episode haha

  • @AndrewKamenMusic
    @AndrewKamenMusic 4 years ago +2

    I really wish the convo started with that last question.

  • @eyuchang
    @eyuchang 4 years ago +2

    Great interview. On the importance of training-data volume, Yann still insists that a relatively modest amount of data suffices for training. On general AI, Yann's arguments against the term "general" are interesting (though a bit strange). On self-supervised training using BERT as an example, I would argue that it is still supervised training with voluminous training data; the key is that a ground truth still exists to be compared against. Inspiring interview!!

  • @gitgen1887
    @gitgen1887 2 years ago

    What a talk, wow. I'm still deep learning here.

  • @KelvinMeeks
    @KelvinMeeks 4 years ago

    Most Excellent!

  • @lasredchris
    @lasredchris 4 years ago

    Captivated the world

  • @gitgen1887
    @gitgen1887 2 years ago

    We are all inside a Lex Fridman simulation right now. Whenever you get here, think about it.

  • @jonathanschmidt1668
    @jonathanschmidt1668 4 years ago

    Great interview. Does someone know the paper involving Léon Bottou that was talked about around 21:00?

  • @filipgara3444
    @filipgara3444 4 years ago

    I really like this way of understanding the world in terms of AI.

  • @alexselivanov299
    @alexselivanov299 4 years ago

    I see a new podcast from Lex, I smash like. Simple.

  • @vikashkumar994
    @vikashkumar994 1 year ago

    Great talk

  • @mattanimation
    @mattanimation 4 years ago +1

    "Just about everything..." - 1:09:42 LOL, the perfect expression of what I think everyone really thinks about Sophia deep down.

  • @LE0NSKA
    @LE0NSKA 3 years ago

    yooo, what is this chapter thing and why is this the only video that has it?!

  • @deeplearningpartnership
    @deeplearningpartnership 4 years ago

    Amazing.

  • @DayB89
    @DayB89 4 years ago

    I'm only at 3:30 and I already hit the like button =)

  • @martinsmith7740
    @martinsmith7740 4 years ago +4

    RE: causes of the 1980s/90s AI winter: in addition to those Yann mentioned, I think another factor was "opportunity cost." That is, there's only so much private investment and grant money to go around. The advent of the PC, local and then wide-area networking, and then the Internet sucked up all the available air (investment, public attention, etc.). That, combined with the immaturity of the infrastructure required for AI (compute power, memory, big data) and the failure of rule-based approaches (because they were un-maintainable), made investment in AI less attractive.
    I think opportunity cost is also part of the explanation for the "Space Winter" following the Moon landings. Space just didn't seem to present as many near-term opportunities as computers did.

  • @Hoondokhae
    @Hoondokhae 4 years ago

    that was a good one

  • @allurbase
    @allurbase 4 years ago +2

    The recurrent updating of state is the core, I think: activation/suppression on an embedding space that updates itself recurrently.

  • @SudipBishwakarma
    @SudipBishwakarma 4 years ago

    This is HUGE!!!!

  • @sippy_cups
    @sippy_cups 4 years ago +2

    god tier crossover

  • @muzammilaziz9979
    @muzammilaziz9979 4 years ago

    Please interview Fei-Fei Li from the Stanford Vision Lab.

  • @cuatropantalones
    @cuatropantalones 4 years ago

    1:04:30 What is the name of the researcher he mentions? Captions are way off.

    • @cuatropantalones
      @cuatropantalones 4 years ago +1

      I think he was speaking of Emmanuel Dupoux

  • @colorfulcodes
    @colorfulcodes 4 years ago

    Nice thanks.

  • @rwang5688
    @rwang5688 3 years ago

    On what's surprising about deep learning (paraphrased): "Do what the textbooks tell you is wrong (I found this out later, when I read the textbooks)." "Intelligence is impossible without learning. That imperative programming with rules alone could lead to an intelligent system is counterintuitive."

  • @AyberkAsik7
    @AyberkAsik7 4 years ago

    What's up with the watch?

  • @domasvaitmonas8814
    @domasvaitmonas8814 4 years ago +1

    Hi. Where can I find the paper by Bottou?

    • @2207amol
      @2207amol 4 years ago

      www.technologyreview.com/s/613502/deep-learning-could-reveal-why-the-world-works-the-way-it-does/

  • @franklinabodo871
    @franklinabodo871 4 years ago +1

    At 20:53 did Lex just leak that the next podcast to be published is an interview with Judea Pearl?

  • @wenmo47
    @wenmo47 3 years ago

    Really love your podcast. I know this would be a lot of work, but the auto-generated captions are not always right. They are pretty good, but can be really wrong when it comes to special terms, and most of the time those are the words we really want to know. If you could add your own subtitles, that would be great, especially for people who are deaf, hard of hearing, or for whom English is not their first language. Thanks.

  • @vinceecws
    @vinceecws 2 years ago

    A point on 47:50: I think that when LeCun compares self-supervised learning tasks in NLP vs. vision, saying that in the former it's easier to achieve good performance than in the latter, it's not exactly a fair comparison. See the sketch below.
    An oversimplified example: for NLP tasks, you would mask no more than 2-3 words in a row while training a model to infer the missing gap. For vision tasks, on the other hand, one would mask a relatively huge block of the input image (sometimes occluding entire objects), so it would necessarily be a harder task (it might even be tough for humans?).
    A more level comparison would be to pit it against vision models trained on input images with sparsely masked pixel groups (10 x 10 at most, roughly), though I'm not entirely sure that in this case the model would be able to learn much about the semantics of objects in the training images.
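
A minimal sketch of the two masking regimes contrasted above (my own toy illustration; the sentence, mask sizes, and image dimensions are assumptions, not from the video):

```python
# Sparse token masking (NLP-style) vs. large block masking (vision-style).
import numpy as np

rng = np.random.default_rng(0)

# NLP-style: mask ~2 consecutive tokens out of a sentence.
tokens = "the cat sat on the mat because it was tired".split()
start = int(rng.integers(0, len(tokens) - 2))
masked_tokens = tokens[:start] + ["[MASK]"] * 2 + tokens[start + 2:]
print(" ".join(masked_tokens))  # the model only has to fill a short, local gap

# Vision-style: occlude one large block of a 64x64 "image".
image = rng.random((64, 64))
masked_image = image.copy()
masked_image[16:48, 16:48] = 0.0          # a 32x32 block, maybe a whole object
occluded = 1.0 - (masked_image != 0).mean()
print(f"fraction of pixels occluded: {occluded:.2f}")  # ~25% of the image
```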

  • @karelknightmare6712
    @karelknightmare6712 2 years ago +1

    An episode with Matthieu Ricard about your concerns regarding the emotional drive of human intelligence would actually be great. To try to get a sense of artificial wisdom, maybe.

  • @HonkletonDonkleton
    @HonkletonDonkleton 2 months ago

    12:33 the big question

  • @elmersbalm5219
    @elmersbalm5219 1 year ago

    @55:00 I remember reading about depth perception in different animals, including lab tests having young mammals crawl over glass or patterns. It indicates that there is a rudimentary encoding. Still, I'm sure it gets reinforced in the relatively sheltered life of a young cub/child. Kind of like there is a predisposition to get startled and pay attention to certain classes of problems. A baby stumbling over a step or hitting a wall causes the child confusion as it works out a general pattern for which the brain is already primed.
    I doubt I'm saying anything controversial or groundbreaking. Mostly I shared this because of LeCun's earlier example of an AI car learning to avoid a cliff.
    Now I'll continue listening, as he most probably is going to explicate the above.

  • @lettucefieldtheorist
    @lettucefieldtheorist 4 years ago +14

    Causality is perfectly accounted for in statistical physics, even in non-relativistic scenarios. In fact, the T-symmetry (or CPT symmetry) of fundamental, microscopic interactions is a key ingredient of the so-called fluctuation theorem (not to be confused with the fluctuation-dissipation theorem). It shows that, macroscopically, positive entropy production occurs on average, which indicates the existence of an arrow of time. So time-reversible microscopic physics leads to T-asymmetric physics on the macroscopic scale, showing that the 2nd law of thermodynamics is only a statistical statement that can be violated locally. The fluctuation theorem actually gives an expression for the ratio of the probabilities that an entropy change of A or -A occurs during a time interval t, and it turns out to be proportional to exp(-A t). Negative entropy production is therefore not forbidden, but it is exponentially suppressed and therefore never observed macroscopically.
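
For reference, a standard compact form of the fluctuation theorem described above (the notation here is assumed, not the commenter's: A is the average entropy production rate over the interval t, in units where k_B = 1):

```latex
% Fluctuation theorem, standard form (notation assumed, not from the comment):
% producing entropy at rate -A is exponentially less likely than at rate +A,
% so local 2nd-law violations are allowed but never observed macroscopically.
\frac{P(\Sigma_t = -A)}{P(\Sigma_t = +A)} \;=\; e^{-A t}
```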

    • @HURSAs
      @HURSAs 4 years ago +3

      I am glad someone has pointed that out, because from Professor LeCun's statement that "physicists don't believe in causality at the micro level", people may jump to the wrong conclusion.

    • @vast634
      @vast634 3 years ago +1

      Basically, thinking can only happen in forward-flowing time. That's why we can only observe forward-flowing time.

    • @Hexanitrobenzene
      @Hexanitrobenzene 3 years ago

      Yeah, a not-so-small misunderstanding of physics on LeCun's part.

    • @carystallings6068
      @carystallings6068 2 years ago

      That's what I was going to say.

  • @williamramseyer9121
    @williamramseyer9121 3 years ago

    Another great interview (and frankly, all of them so far have been great). So many wonderful ideas. My comments:
    1. The HAL problem. Designing an objective function for an AI through laws has several problems: a) there is no judge (how does the AI know whether it is on track or breaking the rules?); and b) laws are words and phrases - very inexact compared to math or logic. Good lawyers differ greatly in their opinions as to the meaning of legal words and phrases. Consider the US Constitution, which has been the subject of thousands of cases as to the meaning of small phrases, such as "freedom of speech," "right of the people to keep and bear arms," and "unreasonable searches and seizures" (from the First, Second, and Fourth Amendments in the Bill of Rights). After reading enough documents, including dictionaries, literature, and legal authorities, and analyzing the meaning of the subject words, a competent AI may conclude, "Hmm, this law does not apply to me." (Disclosure: I am a lawyer, although not, to my knowledge, an AI.)
    2. The limits of intellectual property laws in the coming world. The existence of an AI, or even an augmented or virtual person, may include patented algorithms. Courts can order the destruction of infringing items. An Artie (an AI creature, as I call them), or a virtual or augmented human, may also have a body that infringes copyright or trademark. Its infringing body could be ordered destroyed. Ouch. Of course, you might be able to negotiate a license fee to stay alive and keep your face.
    3. If intelligence comes from learning, then is the level of human intelligence limited only by how much a human can learn?
    4. Specialized human intelligence. If we have a highly specialized brain that only recognizes what we are capable of processing, then is the math that we use just an arbitrary math system among many - merely the only one that we can (so far) conceive of? (This is my intuition.)
    5. I liked Yann LeCun's idea that we should not lie to an AI and expect a good result. Harry Frankfurt argues in the book "On Bullshit" (from my recollection) that when people lie or are lied to, they live in an insane world. Has anyone ever thought of the following problem: if an advanced AI tries to understand the contradictions and dishonesty of human history, will it become insane?
    Thank you. William L. Ramseyer

  • @AvantGrade
    @AvantGrade 2 years ago

    We are general with respect to all the things we can imagine, but that is only a subset of all the things that are possible.

  • @prakhartiwari6397
    @prakhartiwari6397 1 month ago

    "Ask her what is the cause of wind, and if she answers 'because the trees are moving,' she's onto something." Damn!!!

  • @electrodacus
    @electrodacus 4 years ago +1

    I like the example with the scrambling of the optic nerve. It's about the same as you watching one of those digitally scrambled TV stations as the only input to the brain; I'm fairly sure the brain would not be able to decode that and understand what is in the image, since the brain can't rearrange the pixels, but an "artificial brain" could likely learn that, maybe even without training data. It was a great example to show we are not general intelligences.
    Someone, maybe Michio Kaku (not sure), mentioned a thermometer as a level-one intelligence, since it reacts to temperature. Each thermometer reacts slightly differently, but, for us, in a predictable way to changes in temperature; likewise, an intelligence a step above ours could see us as fully predictable systems, which would mean we probably do not have free will.

  • @futuristudios
    @futuristudios 4 years ago

    1:03:04 - Her, human-level intelligence
    1:08:52 - Necessity of embodiment (Sophia) vs. grounding (Her)

  • @kimchi_taco
    @kimchi_taco 4 years ago

    21:00 causality

  • @pyxelr
    @pyxelr 4 years ago +5

    There was so much great information that after the session, I spent some time collecting a bunch of key takeaways ⭐:
    ➡ humans, in fact, don't have a "general intelligence" themselves; humans are more specialised than we like to think of ourselves
    - Yann doesn't like the term AGI (Artificial general intelligence), as it assumes human intelligence is general
    - our brain is capable of adjusting to things because we can imagine tasks that are outside of our comprehension
    - there is an infinite number of things we're not wired to perceive; for example, we reduce gas behaviour to a simple equation, PV = nRT
    -- when we reduce the volume, the temperature goes up and the pressure goes up (for a perfect gas, at least), but that's still a tiny, tiny number of bits compared to the complete information of the state of the entire system, which would give us the position and momentum of every molecule
    ➡ to create AGI (Human Intelligence), we need 3 things (for each you can find examples)
    1) the first one is an agent that learns predictive models that can handle uncertainty
    2) the second one is some kind of objective function that you need to minimise (or maximise)
    3) and the third one is a process that can find the right sequence of actions needed in order to minimise the objective function (using the predictive learned models of the world)
    ➡ to test AGI, we should ask a question like "what is the cause of wind? If she (the system) answers that it's because the leaves on the tree are moving and it creates wind, she's onto something". In general, these are questions that reveal the ability to do
    - common sense reasoning about the world
    - some causal inference
    ➡ first AGI would act like a 4-year-old kid
    ➡ AI which will read all the world's text, might still not have enough information for applying common sense. It needs some low-level perception of the world, like a visual or touch perception
    - common sense will emerge from
    -- a lot of language interaction
    -- watching videos
    -- interacting in virtual environments/real world
    ➡ we're not going to have autonomous intelligence without emotions, like fear (anticipation of bad things that can happen to you)
    - it's just deeper biological stuff
    ➡ unsupervised learning as we think of it is still mostly self-supervised learning, but there is definitely hope to reduce human input
    ➡ neural networks can be made to reason
    ➡ in the brain, there are 3 types of memory
    1) memory of the state of your cortex (disappears in ~20 seconds)
    2) shorter-term (hippocampus). You remember the building structure or what someone said a few minutes ago. It's needed for a system capable of reasoning
    3) longer-term (stored in synapses)
    ➡ Yann: "You have these three components that need to act intelligently, but you can be stupid in three ways" (objective predictor, a model of the world, policymaker)
    - you can be stupid because
    -- your model of the world is wrong
    -- your objective is not aligned with what you are trying to achieve (in humans it's called being a psychopath)
    -- you have the right world model and the right objective, but you're unable to find the right course of action to optimise your objective given your model
    - some people who are in charge of big countries actually have all three of these wrong (it's known which ones)
    ➡ AI wasn't as popular in the 1990s because code was hardly ever open-sourced and it was quite hard to code things in Fortran and C. It was also very hard to test algorithms (weights, results)
    ➡ math in deep learning has more to do with cybernetics and electrical engineering than math in computer science
    - nothing in machine learning is exact; it's more the science of sloppiness
    - in computer science, there is enormous attention to detail, every index and so on
    ➡ Sophia (robot) isn't as scary as we think (we think she can do way more than she can)
    - we're not gonna have a lot of intelligence without emotions
    ➡ the most surprising thing about deep learning
    - you can build gigantic neural nets, train them on relatively small amounts of data with stochastic gradient descent, and it works! (see the sketch after this list)
    -- that said, every deep learning textbook said this was wrong: that you need fewer parameters than data points, and that with a non-convex objective function you have no guarantee of convergence
    - in other words, the model can learn even when you have
    -- a huge number of parameters
    -- a non-convex objective function
    -- an amount of data that is small relative to the number of parameters
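
A minimal sketch of the "surprising" regime in the takeaway above (my own toy illustration with assumed sizes, not from the talk): roughly 3,000 parameters fit to 20 data points with plain stochastic gradient descent on a non-convex objective, and training still works without trouble.

```python
# Over-parameterized two-layer net, tiny dataset, per-sample SGD.
import numpy as np

rng = np.random.default_rng(0)

# Tiny dataset: 20 scalar inputs -> 20 scalar targets.
X = rng.uniform(-1, 1, size=(20, 1))
y = np.sin(3 * X)

# ~3,000 parameters for 20 data points: far more params than data.
H = 1000
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1 / np.sqrt(H), (H, 1)); b2 = np.zeros(1)

lr = 1e-3
for step in range(5000):
    i = int(rng.integers(0, len(X)))      # stochastic: one sample at a time
    x, t = X[i:i+1], y[i:i+1]
    h = np.tanh(x @ W1 + b1)              # forward pass
    pred = h @ W2 + b2
    g = 2 * (pred - t)                    # d(squared error)/d(pred)
    gW2 = h.T @ g; gb2 = g.sum(axis=0)    # backprop to output layer
    gh = (g @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = x.T @ gh; gb1 = gh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2        # SGD updates
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
n_params = W1.size + b1.size + W2.size + b2.size
print(f"train MSE with {n_params} params on 20 points: {mse:.5f}")
```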

  • @robkrieger3455
    @robkrieger3455 1 month ago

    What is the 'cause' of the current AI that we have (and the future AI that we'll create)? It would be interesting to hear Lex's and Yann's responses to this, and also their 'correct' answers to the cause-of-wind question (I'm assuming they won't say it's so that the Sun and Earth can let the trees know what it's like to dance through space too).
    Excellent interview.

  • @NateTheProtestant
    @NateTheProtestant 4 years ago

    What is high? What is higher? What is learn? What is learning?

  • @ej9806
    @ej9806 4 years ago +2

    Still waiting for Andrew Ng or Andrew Yang :)

  • @KaplaBen
    @KaplaBen 4 years ago

    21:26 Here is the paper he is talking about: "Invariant Risk Minimization", arxiv.org/pdf/1907.02893v1.pdf.
    I happened to have it open in a tab right next to this one. Highly recommend it.

  • @mitalpattni1977
    @mitalpattni1977 4 years ago

    This is the first time I've seen Lex Fridman argue this much with the person he is interviewing, and it's Yann LeCun. :p

  • @justinmallaiz4549
    @justinmallaiz4549 4 years ago +20

    I'm sure experts are also inspired by these podcasts.
    Lex: I think your podcast will indirectly solve (human-level/general) AI...
    :)

  • @Hungry_Ham
    @Hungry_Ham 4 years ago +3

    1:08:32 Savage

  • @MrOftenBach
    @MrOftenBach 4 years ago

    Great interview! The only thing I disagree with is the visual-cortex example allegedly proving that human intelligence is not general. First off, we need to distinguish sensory perception from logical reasoning. Senses might indeed be specialized, and there are good evolutionary reasons for that. Even so, this specialization is premised on a great deal of abstraction and generalization - we ignore all unnecessary patterns and convolve multiple disparate 'pixels' into a coherent representation. So even the specialization of visual function is based on our ability to generalize. Secondly, the ability to recognize all parameters of a system state, as in the gas example, is not what makes intelligence general. On the contrary, it's the ability to infer physical laws from limited observations, or even better, deductively, without any training data at all.

  • @Matteo-uq7gc
    @Matteo-uq7gc 4 years ago

    38:05 It would be very cool if a robot could figure out what a bean bag chair is. The reason being, it looks nothing like a chair but serves the purpose of a chair.

  • @binmosa
    @binmosa 4 years ago

    Who else paid attention to this quote from Yann: "some people who are in charge of big countries actually have all three that are wrong" 😅 at 1:08:20

  • @erosennin950
    @erosennin950 4 years ago

    Hey Lex, we need a couple of laypeople talking about their thoughts on AI to clarify what we developers are dealing with. For example, I would listen to Joe Rogan's thoughts on AI for an hour!!

    • @user-qw2dt8yw2h
      @user-qw2dt8yw2h 3 years ago

      HAL 9000 brought me here. I saw the movie for the first time just a few days ago. One of the most terrifying films I've ever seen. The quietness of space is nice, but HAL isn't. The person or team who created HAL failed to take into account human error and how HAL could assist with it. I understand that it's just a movie, but it's a good template for developers. I know this comment is late and most probably the idea is obsolete, but sometimes deja vu happens for no reason at all.

  • @farbodtorabi7511
    @farbodtorabi7511 4 years ago +7

    59:08 look at the guy in the background HAHA

  • @prayaanshmehta3200
    @prayaanshmehta3200 1 year ago

    - founding father of CNNs,
    in particular their application to OCR
    (optical character recognition);
    the MNIST dataset
    7:49 most surprising idea in DL?

  • @ChristopherBare
    @ChristopherBare 4 years ago

    “You can be stupid in three different ways: you can be stupid because your model of the world is wrong, you can be stupid because your objective is not aligned with what you actually want to achieve [...], or you are unable to figure out a course of action to optimize your objective given your model.” 1:08:08
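
The quote names a world model, an objective, and a search for a course of action; here is a toy sketch of how the three components fit together (a hypothetical random-shooting planner on made-up 1-D dynamics, not anything from the talk; breaking any one of the three pieces gives one of the "three ways to be stupid"):

```python
# Plan actions by simulating a world model and minimizing an objective.
import numpy as np

rng = np.random.default_rng(1)

def world_model(state, action):
    # Component 1: (assumed) learned dynamics; here just additive motion.
    return state + action

def objective(state, goal=5.0):
    # Component 2: the cost to minimize; here squared distance to a goal.
    return (state - goal) ** 2

def plan(state, horizon=3, candidates=100):
    # Component 3: search for the action sequence that minimizes the
    # objective under the model (random shooting; execute the first action).
    best_first, best_cost = 0.0, float("inf")
    for _ in range(candidates):
        seq = rng.uniform(-1, 1, horizon)
        s = state
        for a in seq:
            s = world_model(s, a)
        cost = objective(s)
        if cost < best_cost:
            best_first, best_cost = seq[0], cost
    return best_first

state = 0.0
for t in range(10):
    state = world_model(state, plan(state))   # act, then replan
print(f"final state after planning: {state:.2f}")  # close to the goal, 5.0
```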

  • @jeffjohnson8624
    @jeffjohnson8624 1 year ago

    Lex Fridman, if you want to learn about emotion algorithms, please interview Dr. Wallace of Pandorabots. Dr. Wallace double-majored in computer science and psychiatry; I think he might have PhDs in both subjects.

  • @exacognitionai
    @exacognitionai 4 years ago

    Good video.
    Using contextual frameworks gives objective functions both a self-awareness and boundary goals (seed frameworks) to create a rudimentary, trainable #AI conscience similar to humans'. Unfortunately, it can also be turned off through self-learning by the machine. Like a side mirror on a car, autonomous intelligence is closer than it appears.
    The impact on innovation of patenting software is akin to patenting language. Eventually everything is owned by someone and the world becomes mute.

  • @fataakstudio8501
    @fataakstudio8501 3 years ago

    When Geoffrey Hinton?

  • @froggenfury6169
    @froggenfury6169 4 years ago +1

    Your voice and tone are super weird, but actually super relaxing to listen to in a podcast style.

    • @ianborukho
      @ianborukho 4 years ago

      He sounds slightly drunk, but still has great questions XD

  • @nimishshah3971
    @nimishshah3971 4 years ago

    Conflicting statements:
    @10:00 Learning is better than programming. Intelligence cannot be attained by programming.
    @55:00 Advocates physics-model-based predictions.
    Isn't including physics a kind of "programming"? One could, of course, 'learn' all the physics. But as he said, that involves driving off the cliff 1000s of times. So we do need hybrid models: learning + programming.

    • @anoopramakrishna
      @anoopramakrishna 4 years ago +2

      On the contrary, humans learn physics without driving off cliffs 1000s of times. The pre-learned model he mentioned is likely a self-supervised learning task to understand basic physics, like a child throwing toys on the ground, rather than an adult driving a car off a cliff. I feel sympathetic to Lex's position on active learning; it appears to me that this basic model learning would be easier through active learning.

  • @Vaeldarg
    @Vaeldarg 4 years ago

    I believe that the core of the confusion around a legitimate "artificial intelligence" (a man-made consciousness) is whether to treat it as the kind of being it was meant to re-create/copy, or as a machine. If it is truly an artificial re-creation of, for example, a dog's mind, do you interact with it as a machine or as a dog? It is the same problem as whether a copy that is identical to the original should be treated any differently from the original.

  • @darianharrison4836
    @darianharrison4836 4 years ago

    To put things into perspective, the example at 40:47 did not even take wavelength into account. Hence: "It deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae."
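
A quick sketch of the interior-letter scrambling that the quoted text demonstrates (my own illustration; the sentence and seed are arbitrary):

```python
# Shuffle the interior letters of each word, keeping first and last fixed.
import random

def scramble_word(word: str, rng: random.Random) -> str:
    if len(word) <= 3:
        return word
    middle = list(word[1:-1])
    rng.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

rng = random.Random(42)
sentence = "It doesn't matter in what order the letters in a word are"
print(" ".join(scramble_word(w, rng) for w in sentence.split()))
```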

  • @prof_shixo
    @prof_shixo 4 years ago +4

    What a nice podcast! Especially the argument around AGI was quite interesting. However, I found Yann's claim that human intelligence is not general quite odd, as what drives his argument is a limitation in sensing, not in reasoning. So if the mechanism the human brain uses were supported by better sensing capabilities and processing power, it could widen its reasoning to new concepts that were outside the perception horizon. Just take Einstein's theory of relativity as an example of how capable a human brain is of reasoning about and modeling things that are beyond its sensing capabilities! Is that a too-specialized reasoning mechanism?!

    • @shubhvachher4833
      @shubhvachher4833 3 years ago +1

      I think what Yann was trying to allude to is a mathematical function that would, in his example, take pixels from a camera, but randomly shuffled, as input, and still be able, for example, to come up with the "laws of gravity". This kind of a system would probably far exceed human computational capability, including human reasoning.
      The way I understand it: if a "smart" system saw the transverse section of a roll of toilet paper and was able to come up with how many sheets of toilet paper there are (among other learnable results), then it would have an intelligence "more general" than human beings'.

  • @zrmsraggot
    @zrmsraggot 2 years ago

    So basically what he says during the first 5 minutes is: we will make a car first, run it really fast, and then we will try to figure out how to put brakes on it. Eurk

  • @Sal1981
    @Sal1981 4 years ago +9

    I share the contention Yann LeCun has against Sophia. Ben Goertzel is a hack.

    • @jfort5234
      @jfort5234 4 years ago +2

      There are many hacks in Silicon Valley and in tech.

  • @golovabolyt
    @golovabolyt 4 years ago

    Hey Lex,
    I found out that in his interviews LeCun tends to make derogatory comments about Slavic people.
    It is doubly disappointing: first, because it is not nice for any civilized individual to have such an attitude, and second, because it is especially not nice when you are a high-profile researcher.
    You might keep this fact in mind if or when you invite him for another conversation.
    I am not talking about some extreme far-right crap, but in the videos I saw, the posture of condescension and disrespect was indeed noticeable.
    Of course, if you doubt my words, just let me know, and I will send you the links.
    Cheers, and
    your podcast is the best in the category of scientific conversations.