Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI | Lex Fridman Podcast

  • Published: Dec 23, 2024

Comments • 1.8K

  • @lexfridman
    @lexfridman  9 months ago +132

    Here are the timestamps. Please check out our sponsors to support this podcast.
    Transcript: lexfridman.com/yann-lecun-3-transcript
    0:00 - Introduction & sponsor mentions:
    - HiddenLayer: hiddenlayer.com/lex
    - LMNT: drinkLMNT.com/lex to get free sample pack
    - Shopify: shopify.com/lex to get $1 per month trial
    - AG1: drinkag1.com/lex to get 1 month supply of fish oil
    2:18 - Limits of LLMs
    13:54 - Bilingualism and thinking
    17:46 - Video prediction
    25:07 - JEPA (Joint-Embedding Predictive Architecture)
    28:15 - JEPA vs LLMs
    37:31 - DINO and I-JEPA
    38:51 - V-JEPA
    44:22 - Hierarchical planning
    50:40 - Autoregressive LLMs
    1:06:06 - AI hallucination
    1:11:30 - Reasoning in AI
    1:29:02 - Reinforcement learning
    1:34:10 - Woke AI
    1:43:48 - Open source
    1:47:26 - AI and ideology
    1:49:58 - Marc Andreessen
    1:57:56 - Llama 3
    2:04:20 - AGI
    2:08:48 - AI doomers
    2:24:38 - Joscha Bach
    2:28:51 - Humanoid robots
    2:38:00 - Hope for the future

    • @6DonnieDarko
      @6DonnieDarko 9 months ago +2

      I miss when it was the Artificial Intelligence Podcast

    • @DeryaEmel
      @DeryaEmel 9 months ago

      Remember Lex, it was a lovely December... When we thought love is real? I guess one didn't fight enough for it.. maybe destined to fall apart.. no blame for unrealized things.. not pushed beyond illusion right...

    • @SmellySandBlanketWoman
      @SmellySandBlanketWoman 9 months ago

      Its....... me.........., Lex. And........... You're............ watching...........the..............Lex...........fridman...............podcast. Brought................ to............you...........by..........Israel.

    • @stevendenton8994
      @stevendenton8994 9 months ago

      a musing: As the final breath escapes the lips of a man whose wealth spans continents and cyberspace alike, the world holds its breath in anticipation. In a time where AI algorithms and genetic tracing techniques can swiftly identify heirs, the chaos that ensues is nothing short of monumental.
      Imagine the scramble for power, the desperate bids for control over trillions of dollars worth of holdings, amassed through centuries of cunning, deceit, and exploitation. The Napoleonic wars, World War I, World War II - each conflict a pawn in the game of empire-building, each battle a step towards amassing unimaginable wealth.
      But now, as the titan of industry and commerce breathes his last, the vultures begin to circle. In boardrooms and war rooms alike, plans are set in motion, alliances forged and broken, as nations vie for a piece of the pie. For some, it's a chance to claim what they see as rightfully theirs, a long-overdue reckoning for past injustices. For others, it's an opportunity to consolidate power, to reshape the world in their own image.
      And yet, amidst the chaos and bloodshed, there are whispers of something darker, something more sinister. Could this be the spark that ignites a new world war, a final showdown for dominance in a world teetering on the brink of transhumanist revolution? Could the quest for power and wealth lead to unspeakable atrocities, to genocide on a scale never before seen?
      It's a chilling thought, one that forces us to confront the depths of human greed and ambition. In a world where technology has blurred the lines between man and machine, where the boundaries between nations are becoming increasingly porous, the potential for destruction is limitless.
      And yet, amidst the chaos and despair, there is also hope. Hope that, in the face of adversity, humanity will rise above its baser instincts, that we will come together as a global community to build a better world, one where wealth and power are not the ultimate measures of success, but where compassion, empathy, and justice reign supreme.
      But for now, as the world waits with bated breath, the specter of war and genocide looms large, a reminder of the fragility of our existence, and the darkness that lies within us all.

    • @ChrizzeeB
      @ChrizzeeB 9 months ago +2

      Lex, when did you do this interview?
      Was it very recently? Some stuff he talks about seems to be in some way disproved already..

  • @NegusYosef
    @NegusYosef 9 months ago +885

    Lex your next guest should be one of the following
    1. Ilya Sutskever (OpenAI)
    2. Andrej Karpathy
    3. Jensen Huang (Nvidia)
    4. Dario Amodei (Anthropic)

    • @chickenp7038
      @chickenp7038 9 months ago +65

      George Hotz

    • @joekavalauskas8767
      @joekavalauskas8767 9 months ago +32

      Aravind Srinivas with Perplexity?

    • @Thedeepseanomad
      @Thedeepseanomad 9 months ago +34

      You can be quite sure that Ilya has a gag order regarding all things OpenAI.

    • @dtrueg
      @dtrueg 9 months ago

      he had Hotz on last year, in case you're unaware.. should check out.. unless you just want another interview.. @@chickenp7038

    • @AaronEastman-gf5fx
      @AaronEastman-gf5fx 9 months ago +42

      Jensen would be great.

  • @araj1900
    @araj1900 9 months ago +137

    These podcasts are better than most tv programs

    • @chrismai1889
      @chrismai1889 9 months ago +2

      name a single tv program that can keep up with longform podcasts

    • @chunkyMunky329
      @chunkyMunky329 9 months ago +4

      Why only "most" tv shows? ALL!

    • @yesbruvsistrasnonbinary
      @yesbruvsistrasnonbinary 9 months ago +2

      TV was last century

    • @chunkyMunky329
      @chunkyMunky329 9 months ago +1

      @@yesbruvsistrasnonbinary I don't understand what point you're trying to make. Even if TV didn't continue to be used by a lot of people in this century (and trust me, it definitely still is watched by many people) there is no special technology that is being used now that causes this podcast to have an unfair advantage. This podcast could be just as enjoyable if it was broadcast over AM Radio.

    • @9thebear
      @9thebear 9 months ago +1

      This one in any case was very good. Does anyone under 50 actually watch TV anymore?

  • @aqibmumtaz1262
    @aqibmumtaz1262 9 months ago +282

    Thanks Lex for inviting Yann Lecun

    • @heftyhugh9086
      @heftyhugh9086 9 months ago

      Isn’t Lex a Mossad agent

  • @Shadare
    @Shadare 9 months ago +321

    I can't wait to rewatch this in 10 years.

    • @netscrooge
      @netscrooge 9 months ago +42

      Lecun's confusion will be obvious to more people by then.

    • @ImadRahmouni
      @ImadRahmouni 9 months ago +4

      @@netscrooge can you elaborate please? (Honest question)

    • @calebdavis719
      @calebdavis719 9 months ago +23

      @@ImadRahmouni probably ignore it; most of Lex's content has been about the dangers of AI, and to host a somewhat dissenting voice, most of his audience is tuned/biased to reject it.
      the correct response is: "Oh, you were holding Nvidia during the AI bubble pop? I'm so sorry..."

    • @netscrooge
      @netscrooge 9 months ago +17

      @@ImadRahmouni I have listened to him being interviewed several times. The great thing about interviews is sometimes things slip out. There was one where he said open sourcing everything is safe, because we'll be able to monitor everyone. I found that chilling considering where he works. Is the talk of trusting the goodness of people just PR, at least to some extent? Overall, he comes across as a fine technician, one of the best, but also as someone who struggles to understand some of the big-picture issues.

    • @MrLoonzy
      @MrLoonzy 9 months ago +4

      @@netscrooge He has slipped up many times, AI has to be talked down to a certain extent which they all do, but when talking freely in a decent interview slip-ups will always occur. I personally believe AGI has already been achieved.

  • @Nityavidyardhi
    @Nityavidyardhi 9 months ago +409

    Wow Lex is back to AI!!! Please make more!!

    • @Trurlthemagnificent
      @Trurlthemagnificent 9 months ago +6

      Too late. I will never go back to watching this podcast because I find Lex despicable.

    • @joekavalauskas8767
      @joekavalauskas8767 9 months ago +19

      @@Trurlthemagnificent and here you are clicking and engaging. YT rewards that.

    • @Trurlthemagnificent
      @Trurlthemagnificent 9 months ago +1

      @@joekavalauskas8767 thanks for your input. That's why I'm leaving a comment: I know Lex reads many of them, and I will not watch the entire thing.

    • @ProjectMoff
      @ProjectMoff 9 months ago

      @@Trurlthemagnificent 😂 Do you have any self-awareness? No one cares how hurt you are by your own silly perceptions of the man in the video above that you clicked on and scrolled down to comments to express your silly opinion about; it's actually hilarious.

    • @MrgoldenRose
      @MrgoldenRose 9 months ago +4

      @@Trurlthemagnificent well I think he's incredibly respectable and appreciate all of his work, even and especially because it is imperfect.

  • @AaronWacker
    @AaronWacker 9 months ago +4

    This is one of the richest lessons in AI I've experienced in the past few years - Thanks Yann and Lex! It started me down the path of reading all the background papers on IJEPA. Yann your explanations are the best! Thx for educating us all and for sharing what is most relevant in our AI boom age. I also caught Yann's episode on CBS and there it was perfect for layman understanding - quite robust observations on reality representation - you have me speaking AMI now :)

  • @burtharris6343
    @burtharris6343 9 months ago +133

    Fantastic show! Until now, I hadn't been exposed to Yann's perspective. My background in symbolic NLP dates back nearly a quarter of a century, and Yann articulately highlights the limitations of current large language models in a manner I've found quite enlightening. I appreciate Lex for selecting such stimulating guests and subjects.

    • @ruffyistderhammer5860
      @ruffyistderhammer5860 7 months ago

      Are you a bot?

    • @margarita8442
      @margarita8442 7 months ago +1

      neuro linguistic programming?

    • @burtharris6343
      @burtharris6343 7 months ago

      @@margarita8442 Good question. I'm happy to respond.
      NLP has two meanings, but they are closely related:
      Symbolic NLP refers to natural language processing, computer technology used to deal with natural human speech and writing. Chat-GPT implements Natural Language Processing, but it is not based on the same symbolic techniques as what I worked on. It is based on large language models (LLMs) trained using machine learning. Machine learning will never be as reliable as the symbolic NLP methods, but it will take much less effort to implement with the advent of the internet. The progress has been amazing, but it has its limits, and calling it AI masks that. I wish people would stop hyping "AI".
      But you mention the other meaning of NLP, Neuro-Linguistic Programming. That term comes from psychotherapy (despite the "programming" in the name, it was not based on computer programming). But based on what we have learned about the brain, there is certainly a relationship. Choices of words can have dramatic effects on the people who read them.
      A valid concern regarding large language models is that they may come to moderate speech between humans. They can be influenced behind the scenes to support and/or discourage certain ideas using techniques that are effectively neuro-linguistic and have an impact on the human mind.

    • @rakhuramai
      @rakhuramai 7 months ago +2

      @@margarita8442 Natural language processing, probably

  • @ced1401
    @ced1401 9 months ago +7

    Thanks a lot. Letting people speak with no interruption for 3 hours is the best format. It's the first time I've heard Mr. LeCun expound and develop his ideas. It opened and changed my mind on various subjects. Always a pleasure to hear such an expert.

  • @shewbs4641
    @shewbs4641 9 months ago +62

    Yann has been my favorite guest where AI is the main topic, and that's saying a lot given the list!

  • @alejandrinahs
    @alejandrinahs 9 months ago +40

    In case no one’s told you today, you’re doing a great job, Lex. Over the years, I’ve learned a lot from diverse topics. As an insatiably curious mind, I appreciate it!

  • @Multimedia_Magic
    @Multimedia_Magic 9 months ago +65

    Very very good episode. I hope he comes back regularly, he's so easy to understand, and has no radical ideas or agenda.

    • @AigleAquilin-fv4kj
      @AigleAquilin-fv4kj 2 months ago

      And he has efficient, mathematical thought processes : he's French.

  • @simonkotchou9644
    @simonkotchou9644 9 months ago +44

    So glad you guys touch on JEPA and DINO. These works are pushing the edge in the industry.

  • @glock7061
    @glock7061 9 months ago +27

    Always learning something interesting from Yann

  • @MichaelCeraVe
    @MichaelCeraVe 9 months ago +630

    AI: ‘I’ve analyzed all human history and concluded that the best course of action is to binge-watch cat videos.’ Me: ‘Solid choice, AI. Solid choice.’

    • @scottmaran1004
      @scottmaran1004 9 months ago

      Dog videos make me much happier, personally.
      You: ChatGPT.
      Me: Llama

    • @HAL9000.
      @HAL9000. 9 months ago +24

      ASI has entered the chat: “Agreed. Solid choice.”

    • @natalie9185
      @natalie9185 9 months ago +15

      Cats: Evil laughter

    • @o1-preview
      @o1-preview 9 months ago +11

      Good job little AI, what a time to be alive!

    • @TheArter84
      @TheArter84 9 months ago +4

      Me: "hey AI, suck it"
      AI: witty and clever response...
      Me: "damn, got me"

  • @Hacktheplanet_
    @Hacktheplanet_ 9 months ago +17

    Yann LeCun is really interesting, thanks for having him on!

  • @arssve4109
    @arssve4109 9 months ago +51

    Quite fascinating! The conversation puts LLMs into perspective for me - they too are representations of the world, but they still rely on us to decode the language representation into real-world manifestations. We are still the only ones that know how to map our language to the world's dynamics, because we each have decades of training in this.
    If I understand correctly, the argument is that AGI would need a model of the physical world's dynamics, as well as its mapping to language, to 'understand' the meaning of language.

    • @arunprasath9586
      @arunprasath9586 9 months ago +8

      Nicely summarized!!

    • @dibbidydoo4318
      @dibbidydoo4318 9 months ago +2

      Symbolic mathematics is the same exact thing as language. Mathematics requires understanding the patterns of the real world to have numerical cognition and number sense that we can transform into symbolic mathematics. We won't get any new form of mathematics by just training LLMs on math equations and problems.

    • @AwkwardDog
      @AwkwardDog 9 months ago +2

      Like Helen Keller

    • @antonystringfellow5152
      @antonystringfellow5152 8 months ago +3

      This is already happening with multi-modal models.
      Unsurprisingly, models trained with video as well as language perform far better than separate models trained on just one then connected together. This is how we humans learn - we make connections/associations between the data as we're being programmed and storing memories (two separate processes). We have general intelligence precisely because we are able to form so many associations. Even the best models we have today are still severely lacking in this ability. This may be a result of architecture rather than scale. The human brain is immensely complex, being made up of neural networks consisting of around 100 neurons each, with an estimated 700,000 synapses, and it has about 300 million of these neural networks in total.
      I'm not sure the current AIs have quite that degree of complexity.
      Another thing the human brain is very good at is filtering out data. In fact, the majority of the data streaming from our eyes never reaches our conscious brain at all.
      Edited to add: In the human brain, the 300 million neural networks are connected together hierarchically. Forgot to add that and I'm sure it's a critical point.

    • @estebanruiz3254
      @estebanruiz3254 8 months ago

      ​@@antonystringfellow5152 Where did you read that multimodal pre-trained models perform better than separate models? I am curious about that, if you remember the name of the paper, please share it bro

  • @stefanobraghettocatoni1464
    @stefanobraghettocatoni1464 9 months ago +10

    Amazing technical conversation. I was missing a conversation like this one. Congrats @lexfridman

  • @avocade
    @avocade 9 months ago +4

    Lex: “Tell me all the ways you failed.” Love your interviewing style Lex, full of love but hard hitting questions. Best combination of hard/soft skills I’ve heard TO DATE 🚀🙌🏻

  • @vladodamjanovski
    @vladodamjanovski 9 months ago +15

    Wow, what a clever and humble man. It made me think wider than what I knew about LLMs and the like today. Thank you Lex. Great guest.

    • @Hacktheplanet_
      @Hacktheplanet_ 9 months ago +6

      What is exciting me most is he seems to be striving for the next level of ai and trying to push the cutting edge and get closer and closer to how human and animal brains work.

  • @JonathanPlasse
    @JonathanPlasse 9 months ago +19

    Thank you very much for this fascinating conversation

  • @xiaojinyusaudiobookswebnov4951
    @xiaojinyusaudiobookswebnov4951 9 months ago +20

    Yann LeCun is definitely one of my favorite guests I've seen on your podcast.

  • @JimStanfield-zo2pz
    @JimStanfield-zo2pz 6 months ago +3

    This guy has it right. Probably one of the greatest geniuses of our time

  • @whatsdoin2392
    @whatsdoin2392 9 months ago +66

    Belief that people are fundamentally good is probably optimistic. People are neither good nor bad and always on a razor's edge.

    • @jack76787
      @jack76787 9 months ago +7

      I'd say that even if most people are mostly good the world has always been determined by some people with some of their decisions. What really matters is if the people who will make the biggest decisions about AI will be mostly good in the most important moments.

    • @LordYanSpeaks
      @LordYanSpeaks 9 months ago +2

      Nietzsche says it best. I doubt any of us can describe the concepts better than him 😅

    • @20sur20edu
      @20sur20edu 9 months ago +3

      Self interest is the driving force of the world, not good or evil

    • @nanakoab
      @nanakoab 9 months ago +1

      @@LordYanSpeaks why do you think Nietzsche's intellect is superior?

    • @anearthian894
      @anearthian894 9 months ago

      💯 People are people.
      And our fundamental limitations and flaws are such that telling whether an entire person throughout his decades will be good or bad doesn't make any sense.

  • @silaskelly604
    @silaskelly604 9 months ago +1

    Relative to your LLM shortcomings at 17:+ : My Spanish teacher told the class that telling time in Spanish is expressed in minutes past the hour up to the half hour, such as 2:14; after the half hour, the language expresses time as, e.g., 3:00 minus 14 minutes. Because the language doesn't really have words to express things like 4:35, it doesn't work with a digital watch, and therefore, she said, they don't use digital watches in Mexico. I have no idea if this is true, but she was Mexican and her first language was Spanish (the Mexican variety). If someone can provide evidence that this concept is definitely true or definitely false, I would appreciate it.

  • @Steve-xh3by
    @Steve-xh3by 9 months ago +20

    I'm confused over the claims made on the limitations for predicting video. Don't technologies like Nvidia DLSS and AI upscaling fill in missing visual data in an almost imperceptible way already?

    • @10ahm01
      @10ahm01 9 months ago +5

      Filling a gap between two frames and trying to predict what comes after the last one are completely different tasks.

    • @ktome1087
      @ktome1087 9 months ago +3

      Upscaling is like going from a crayon drawing to a detailed drawing using colored pencils. It’s using deep learning to increase detail/resolution via motion data and feedback from previous frames. Almost like tracing. But video prediction would require the ssi model to not only interpret each frame of a video but also understand the sequence of frames and extract complex patterns from it. The latter is much more difficult. Think of an SSI model being shown the first half of a video of a race between two cars, and after only seeing the first half, potentially being able to predict the second half of the video of the race. That’s potentially what self supervised learning models can accomplish

    • @Steve-xh3by
      @Steve-xh3by 9 months ago +1

      @@ktome1087 But DLSS fills in missing pixels, no? It has to predict pixels to upscale. I don't understand how that is miles different from producing the whole next frame. It seems like you could use some stitching process and movement prediction based off the previous frame.

    • @knowlen
      @knowlen 9 months ago

      ​@@Steve-xh3by I think the fundamental issue is that we train sequential models to predict the most probable sequence. Text is discrete and non-differentiable; as you add words to a sentence you reduce the branches of plausible completed sentences (like pruning timelines). Videos do not share this property, if anything they have the inverse property; as you add visual information (eg; objects / entities) the branching over plausible sequences explodes. DLSS sidesteps this problem by depending on information from a game engine --it doesn't have to predict what is going to happen because the engine will tell it what happens.
      People do use "stitching processes" (eg; computer vision techniques like area correlation) to try and sidestep temporal modeling; have you ever seen a deepfake where something in motion becomes excessively blurry for a few frames? or where the textures on some object seem to flip every frame sort of like a stop-motion film? It works well for DLSS because you can expect the distance between frames at high FPS in pixel space to be small. Sora is new in that it uses a transformer to somehow (successfully) force a diffusion model to stick to one plausible sequence when generating. The exact technical details of this are still unknown, but it appears to be another scaled solution thing. Like if you look at the examples from their smaller models the sequences break down as you'd expect.
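
A toy numpy sketch (my own illustration, not from the episode or the comment above) of the multimodal-future point: a single pixel-wise MSE-optimal prediction of an uncertain next frame is the blurry average of the plausible outcomes, matching none of them.

```python
import numpy as np

# A "frame" is a 1x5 strip of pixels; the object (value 1) moves either
# left or right with equal probability. These are the two plausible futures.
futures = np.array([
    [1, 0, 0, 0, 0],   # outcome A: object moved left
    [0, 0, 0, 0, 1],   # outcome B: object moved right
], dtype=float)

# The single prediction minimizing expected squared error over both
# outcomes is their mean: a half-intensity smear at both ends.
best_mse_prediction = futures.mean(axis=0)  # [0.5, 0, 0, 0, 0.5]

# It matches neither plausible future exactly — the "blur" LeCun describes
# when models are trained to predict pixels directly.
print(best_mse_prediction)
```

Latent-space prediction (as in JEPA) sidesteps this by not requiring the model to commit to unpredictable pixel detail at all.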

    • @ktome1087
      @ktome1087 9 months ago

      @@Steve-xh3by dlss is essentially a tool to achieve more resolution. SSI however is a tool to literally predict subsequent frames, which in a simplified analogy would equate to going from little league to the major leagues.

  • @netizencapet
    @netizencapet 9 months ago +2

    The interview w/ Jeff Bezos is a trivial charm offensive w/ zero novelty & even less penetration into matters of public interest. Yann LeCun, in contrast, is substantive--Fridman's best guest.

  • @onlypencil
    @onlypencil 9 months ago +16

    Was this recorded before Sora was revealed? Because he mentions that AI cannot predict video. I'm pretty sure Sora was doing what he was talking about, or am I wrong?

    • @dusanbosnjakovic6588
      @dusanbosnjakovic6588 9 months ago +3

      This video was made by Sora lol

    • @kekekekatie
      @kekekekatie 9 months ago +1

      This is exactly what I was wondering - seems to conflict with the whole SORA bombshell, not saying I'm not impressed with how smart this guy is though.

    • @ElenaS-de9hq
      @ElenaS-de9hq 9 months ago

      😂

    • @joemarklin
      @joemarklin 9 months ago +6

      It's not doing that; it's creating something from scratch that has continuity. What he is talking about is taking a video that has already been created, taking segments or pieces out of it, and seeing if AI can fill in the missing pieces

    • @dusanbosnjakovic6588
      @dusanbosnjakovic6588 9 months ago +2

      @@joemarklin but that's how you would create a process like Sora.

  • @HANTAIKEJU
    @HANTAIKEJU 9 months ago +2

    @lexfridman
    I was thinking it would be interesting for you to try the following:
    (1) Fine-tuning an autoregressive language model on the transcripts of all the podcasts you've done.
    (2) Then going on to interview the model. Go as deep as possible
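
A minimal sketch of the data-prep side of step (1), assuming the transcripts are already plain text. The helper name and chunk size are made up for illustration; the fine-tuning itself would happen in whatever training stack you prefer.

```python
def chunk_transcript(text: str, max_chars: int = 200) -> list[str]:
    """Split transcript text into word-aligned chunks of at most max_chars,
    suitable as training examples for an autoregressive model."""
    chunks: list[str] = []
    current = ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if current and len(candidate) > max_chars:
            chunks.append(current)   # current chunk is full; start a new one
            current = word
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

# Hypothetical sample standing in for a real transcript file.
sample = "Lex: Welcome to the podcast. " * 20
print(len(chunk_transcript(sample)))
```

Each chunk preserves word order, so joining the chunks back together recovers the original word sequence.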

  • @Earthwirm
    @Earthwirm 9 months ago +53

    People are generally good. But, some are very bad and it is dangerous to think that the very bad will play by the same rules.

    • @neelsg
      @neelsg 9 months ago +9

      People are generally good, while corporations who only focus on maximizing short term profit are systemically bad. It does not make sense to say we should only allow large corporations to control AI out of fear of what some people might do with it when we should be much more afraid of large corporations

    • @netscrooge
      @netscrooge 9 months ago +3

      Lecun is someone who has power and who is confused about important issues. That makes him dangerous.

    • @johncasey9544
      @johncasey9544 9 months ago +2

      @@netscrooge u a bot?

    • @netscrooge
      @netscrooge 9 months ago

      @@johncasey9544 If you followed AI more closely, you probably wouldn't ask that. He has a reputation for being a fine technician, but unreliable when it comes to grasping the big picture.

    • @johncasey9544
      @johncasey9544 9 months ago +1

      @@netscrooge You just seem very obsessed in this comment section.

  • @JohnHawkins-js1sq
    @JohnHawkins-js1sq 6 months ago +1

    This is one of your best episodes @lexfridman. Lots of great questions and amazing responses from Yann that indicate he has thought more deeply about this than all the other talking heads. This interview will be an important historical document when the dust settles.

  • @ArunprasathShankar
    @ArunprasathShankar 9 months ago +5

    Almost 3 hours of pure gold! Thanks Lex and Yann. What an enlightening session!!

    • @UnchartedWorlds
      @UnchartedWorlds 9 months ago

      I wouldn't call it pure gold; the guy goes on and on saying models like Sora's video AI are impossible and that they tried for 10 years. Has he not seen Sora?? And Lex couldn't even ask him: "But wait, have you not seen Sora and its outputs? It's obviously possible."

  • @krsida
    @krsida 9 months ago +1

    I can’t believe you got Willem Dafoe on the podcast!!! What a great moment for the show.

  • @MrStarchild3001
    @MrStarchild3001 9 months ago +3

    Here are the key points and conclusions from the video interview with Yann LeCun:
    Limits of Large Language Models (LLMs):
    - LLMs like GPT-4 and LLaMa are not going to take us all the way to superhuman AI intelligence. They lack key capabilities like understanding the physical world, persistent memory, reasoning, and planning.
    - LLMs are trained on huge amounts of text data (over 10^13 tokens), which seems enormous but is still far less than the visual information a young child takes in during their first few years of life.
    - Most of our knowledge about the world comes through sensory input and interaction with the physical world, not language. LLMs don't have this grounding in physical reality.
    Video prediction and joint embedding architectures:
    - LeCun believes the path to more advanced AI lies in systems that can learn good representations of the world through video prediction rather than just text.
    - However, naively training models to predict future video frames pixel-by-pixel doesn't work well. The world is too complex to predict all the details.
    - Instead, the key is to learn abstract representations of the world using joint embedding predictive architectures (JEPA). These extract relevant information that is predictable while eliminating irrelevant details.
    - JEPA-like architectures, trained on video in a self-supervised way, are a promising path towards AI systems with a deeper understanding of the world. They lift the level of abstraction.
    Reasoning and planning in AI:
    - Current LLMs do a very primitive form of "reasoning" by retrieving and combining information from their training data. The amount of computation is constant regardless of the difficulty of the question.
    - True reasoning requires iteratively refining an answer, applying more computation to harder problems, and planning out the answer before outputting it token-by-token.
    - Future dialogue AI systems will likely "think" about their answer first, optimizing in an abstract representation space, before translating it to text. This allows reasoning independent of language.
    Concerns about AI:
    - LeCun believes fears of a sudden singularity where superintelligent AI escapes control and destroys humanity are overblown. Progress will be more gradual.
    - Multiple groups will develop increasingly capable AI systems with appropriate safeguards and oversight. Good AI can be used to counter bad AI.
    - The desire to dominate is not inevitable in AI - it has to be explicitly included, and there are incentives to make AI systems that are beneficial to humanity instead.
    - Concentrating AI power in the hands of a few big tech companies is dangerous. The solution is openness and diversity - many groups should be able to access open-source foundation models and adapt them for their own uses and values.
    The future of AI and robotics:
    - LeCun is excited about recent progress in self-supervised learning for vision and world models. There is now a plausible path towards AIs with human-like understanding of the world, albeit still many open problems to solve.
    - Robotics has been waiting for breakthroughs in AI to make highly capable autonomous robots possible. Robot hardware is advancing, but to be useful in unconstrained environments like homes, robots need much better AI and world models.
    - Key open problems include learning world models from video that can be used for planning, learning hierarchical planning across multiple levels of abstraction, and more.
    Hope for the future:
    - If developed responsibly, AI can be an incredible boon to humanity by augmenting our intelligence, analogous to the impact of the printing press on knowledge and education.
    - LeCun believes AI can make humanity smarter on the whole, amplifying our capabilities. He sees this as a cause for optimism.
    - However, it's crucial that powerful AI is not concentrated in the hands of a few companies or governments, but that we have a diversity of open AI systems reflecting different values.
    - LeCun advocates for open-sourcing foundational AI models, allowing many groups to adapt them, as a way to keep AI power decentralized and prevent an "AI monopoly."
    In summary, LeCun acknowledges the huge potential of large language models, but sees them as fundamentally limited. The next major leaps in AI capabilities will come from systems that can learn rich models of the world from sensory data, reason flexibly, and transfer knowledge to action. Openness and democratization of powerful AI systems will be key to reaping their benefits while mitigating risks. If developed thoughtfully, AI could usher in a new era of augmented human intelligence and flourishing.
    PS: As summarized by one of those so-called unintelligent LLMs (Claude 3 Opus), which somehow does an amazing job of intelligent question answering, including summarization, without understanding anything.
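    The "world models usable for planning" idea in the summary above can be sketched in a few lines. Everything here is an invented toy stand-in (a linear "encoder", additive "dynamics", a scalar cost, and a random-shooting search), not LeCun's actual architecture; real JEPA-style systems would learn the encoder and dynamics from data. It only shows the shape of planning-by-optimization in a latent space:

    ```python
    # Toy sketch of planning with a learned world model. The encoder, dynamics
    # model, and cost below are hypothetical stand-ins for learned components.
    import random

    def encode(obs):                 # stand-in for a learned encoder
        return obs * 0.5

    def predict(state, action):      # stand-in for a learned latent dynamics model
        return state + action

    def cost(state, goal):           # distance of latent state to latent goal
        return abs(state - goal)

    def plan(obs, goal_obs, horizon=3, samples=200):
        """Random-shooting planner: sample action sequences, roll each one out
        in latent space with the world model, return the best first action."""
        s0, g = encode(obs), encode(goal_obs)
        best_seq, best_cost = None, float("inf")
        for _ in range(samples):
            seq = [random.uniform(-1, 1) for _ in range(horizon)]
            s = s0
            for a in seq:
                s = predict(s, a)
            c = cost(s, g)
            if c < best_cost:
                best_seq, best_cost = seq, c
        return best_seq[0]

    random.seed(0)
    a = plan(obs=0.0, goal_obs=2.0)  # latent goal is 1.0, so actions should sum toward +1
    ```

    The point of the sketch is that nothing is generated in pixel (or token) space: the search happens entirely over abstract latent states, which is the contrast LeCun draws with autoregressive LLMs.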

  • @xaisthoj
    @xaisthoj 8 месяцев назад +1

    A program written in a programming language like C is essentially a hierarchical plan. Thus an LLM that can predict a C program is essentially predicting a hierarchical plan.

  • @Expelten-mf1dz
    @Expelten-mf1dz 9 месяцев назад +11

    Yann: the open-source hero we need.

  • @thomasrebotier1741
    @thomasrebotier1741 5 месяцев назад +1

    Thank you Lex! Yann is fascinating, he's been on the forefront of neural net research for 40 years, and he hasn't stopped. So amazing!

  • @danielisflying
    @danielisflying 9 месяцев назад +32

    I love that Yann is far more realistic about the capabilities of LLMs.

    • @ea_naseer
      @ea_naseer 9 месяцев назад +6

      @@therainman7777 He was right about deep neural networks in the 80s, when we thought we could write intelligence by hand. He was the first to use neural networks for object recognition lol.

    • @metall301
      @metall301 9 месяцев назад +5

      @@ea_naseer I wonder who I can refer to for wisdom about AI, if not someone like Dr. LeCun lol

    • @James_McLane
      @James_McLane 9 месяцев назад

      lol ok buddy @@therainman7777

    • @257rani
      @257rani 9 месяцев назад

      ❤Humanoids 🧠🧬Brain Is Great in Computing Our World,Our 5 Senses is the Machine ❤A Thinking ❤🧠❤

    • @adriankovac1943
      @adriankovac1943 9 месяцев назад

      Alright, how is LeCun wrong @@therainman7777

  • @johnkardier6327
    @johnkardier6327 9 месяцев назад +1

    1:11:30 Best argument I've heard so far to prove the limit of LLMs.

  • @noah0822-sk4pk
    @noah0822-sk4pk 9 месяцев назад +23

    Lex is really throwing out deep questions. Great interview! Love to see those two going back and forth.🔥

  • @TheRohit901
    @TheRohit901 9 месяцев назад +2

    I hope you're listening to this Lex. We need more AI podcasts!! Please make it happen, and bring more guests in AI.

  • @sholev
    @sholev 9 месяцев назад +16

    Would have been interesting to hear his opinion about Sora, but I guess this conversation happened before the announcement.

    • @heywrandom8924
      @heywrandom8924 9 месяцев назад +4

      He talked about it in a tweet, from what I understood. You can find it with a search.

    • @dibbidydoo4318
      @dibbidydoo4318 9 месяцев назад +5

      It did not. You don't quite understand what he meant: Sora is a generative model, not a predictive one.

    • @NitinVijay-gu6mz
      @NitinVijay-gu6mz 9 месяцев назад +1

      No, he said that generative models cannot create videos. He was wrong.

    • @dibbidydoo4318
      @dibbidydoo4318 9 месяцев назад +2

      @@NitinVijay-gu6mz What do you mean? Generative models were already creating videos back in 2022 (2014, in a limited way) when he made the statement; of course he didn't mean a generative model that just predicts the next frame. In fact, FAIR published the first paper that started video generation 10 years ago.

    • @NitinVijay-gu6mz
      @NitinVijay-gu6mz 9 месяцев назад

      Interesting, apologies for my lack of knowledge.

  • @AndreasMueller
    @AndreasMueller 9 месяцев назад +1

    A really great in-depth interview, and I appreciate you digging deep! I share a lot of views with Yann, though I think he is a bit quick to dismiss the potential of harm from manipulation by AI. I agree, it will be just like spam, which still, decades after it first appeared, leads to successful phishing attacks. These can have enormous consequences for infrastructure, communities, individuals and companies. We haven't solved the spam/phishing problem, so I think it's very optimistic to believe we will be perfect at defending against a new form of attack as soon as it arrives.

  • @davidmjacobson
    @davidmjacobson 9 месяцев назад +13

    Eliezer Yudkowsky is getting triggered by this interview somewhere

  • @TheVideoTracker
    @TheVideoTracker 9 месяцев назад +2

    Brilliant: at 16:18, the distinction between an AR-LLM's model of the world vs. the real world. The former uses the words previously produced in the answer to come up with the next words, whereas in the human mind the abstraction of the real world is independent of the words we use, and language comes secondary to that real-world understanding.

  • @DavidOndrej
    @DavidOndrej 9 месяцев назад +7

    "The danger of concentration of power in proprietary AI systems is a much bigger danger than everything else."
    Couldn't agree more.

  • @dimakasenka8518
    @dimakasenka8518 9 месяцев назад +1

    Insane interview. The concentration of useful information is off the charts. Thank you so much for your job.

  • @couldntfindafreename
    @couldntfindafreename 9 месяцев назад +2

    1:12:00 That's exactly how we work with LLMs. We use structured thinking (CoT, ToT, GoT, SmartGPT) to spend more tokens (compute, time) on solving more difficult problems.

  • @JosePujol21
    @JosePujol21 8 месяцев назад +2

    I love these episodes. I am glad to see the progress being made in open-source AI, and Yann elegantly breaks down all the points. I am just starting my career in tech right now and I want to dive into AI; I really do hope that this technology can be like the printing press, as Yann said. I wanted to elaborate on something he said about that: the Catholic church was actually highly in favor of the printing press, and Johann Gutenberg himself was Catholic. This then led to increases in literacy and more people being able to be educated. I know that in this new age there are many who disagree with religion, but as we start tackling these moral questions with AI we should root ourselves in the principle of loving each other as ourselves.
    I am excited about the future of humanity, and thank you Lex for bringing on amazing guests. I pray that you keep doing what you are doing! Open Source is the way to go 💯

  • @valterszakrevskis
    @valterszakrevskis 9 месяцев назад +7

    Thank you, Lex! Would love to hear more AI podcasts with you! After all, AI has a colossal part in our future

  • @bluedog8310
    @bluedog8310 Месяц назад

    Lex, well done for not sniggering when talking about filling dishwashers

  • @dylanwardlow9438
    @dylanwardlow9438 9 месяцев назад +3

    Lex has officially reached the level of having to hide beverage logos and I’m happy for him. ❤

  • @Mookummockup
    @Mookummockup 8 месяцев назад +1

    Was this recorded before Sora? It does seem to have some intuitive sense of physics, to a basic degree

    • @Casevil669
      @Casevil669 8 месяцев назад +1

      It's just as Lex argued: obvious properties of the world can be reasoned about without explicit information about them. Yann kept going in circles when Lex questioned his reasoning, trying to tie it to claims from a few years ago.

  • @jashan1344
    @jashan1344 9 месяцев назад +4

    Love these technical episodes

  • @CuriosityIgnited
    @CuriosityIgnited 9 месяцев назад +1

    Yann's vision of a future with open source AI empowering humanity is TRULY inspiring. Imagine how much good we could do if everyone had access to AI tools to augment their intelligence and capabilities, while preserving diversity of thought. An open, decentralized approach is critical to unlocking AI's benefits for all. Let's work together to make this positive open source AI future a reality!

  • @xman933
    @xman933 9 месяцев назад +7

    Isn’t Sora an example of an LVM (Large Video Model) that is trained on visual patches instead of text tokens, something he seems to suggest has not been mastered yet?

    • @LtheMunichG
      @LtheMunichG 9 месяцев назад

      Not sure. There were other AI video generators before Sora so it’s just a much better version. And he must be aware of that.

    • @ritpatidar2678
      @ritpatidar2678 9 месяцев назад +1

      He is talking about the reverse: watching video and then understanding it.

  • @1gbart
    @1gbart 9 месяцев назад +2

    At 17:20, when speaking about the bit-after-bit token retrieval of LLMs, Yann says: 'There is some retrieval...'. What exactly is that retrieval, and is it similar to what humans do as the 'thought' that comes before we start formulating the words?

  • @vlogkitsune6785
    @vlogkitsune6785 9 месяцев назад +4

    An LLM is like a blind guy talking about flying into the horizon. He can talk about it based on what he has heard, but wouldn't be able to understand the physics of it well enough to generalize

  • @teemukupiainen3684
    @teemukupiainen3684 9 месяцев назад +1

    Ultimate Turing test:
    Take a professional string quartet. Choose a piece the selected quartet does not have any recording of, but that has many recordings by other quartets. The piece must also be one that the quartet's first violin, viola and cello can play with their eyes closed if necessary. That is, the second violin should not have too many important independent tempo changes that rely on visual communication. I'd recommend a Beethoven slow movement, from Op. 127 or 132.
    The selected quartet's first violinist, violist and cellist play the piece blindfolded, with five second violinists chosen from other professional quartets, who play with their eyes open. The quartet members also play it with an artificial intelligence, which produces sound from the second violinist's position in any technical way and monitors the other players in any way, for example with microphones or cameras. If the players/listeners cannot distinguish the artificial intelligence from the real second violinists in the blind test, the test is passed.

  • @jlind00
    @jlind00 9 месяцев назад +3

    Lex, Please add a playlist of your interviews on AI. Include a summary video of what you believe are their key points to compare & contrast the expert opinions then offer both hope & risks for an AI future. If you were AI king with unlimited resources, which problems would you point AI at & why? Which AI experts do you respect most & why? Where would you invest?

  • @asafzilberberg6648
    @asafzilberberg6648 9 месяцев назад +1

    One of the most optimistic conversations about AI - thank you both.

  • @maxgriffiths6968
    @maxgriffiths6968 9 месяцев назад +11

    Such a great episode. Love this. Keep more AI videos coming

  • @getgal1
    @getgal1 8 месяцев назад

    Yann has a way of reassuring everyone about the security and benefits of AI through his historical perspective.

  • @MendicantBiases
    @MendicantBiases 9 месяцев назад +11

    I used to think the same way Lex does about language being able to model an AGI, until I was listening to this show on CBC radio. It was about a woman who slipped and hit her head in the bathroom and lost the ability to comprehend language, speaking or listening. She goes on to talk about how she met a man who was also learning language at an older age, since he had spent all of his life around deaf people with no kind of sign language. The way they would communicate with each other was to act out the actions they recalled, like a stage play. The guy goes on to talk about how he had to wrap his head around the concept of words being attached to objects and ideas. There is more to it, but I can't remember and can't find it right now. This totally destroyed my idea of intelligence being birthed from language, which is what Yann is saying about animals being able to live in the world and communicate without using words. To Lex's point, though, language perhaps could be used as a vehicle of information that an AGI can learn from, but the underlying architecture can't be completely language-based. Idk if this is what Lex was trying to say, but I feel like sometimes he talks as if he has pink sunglasses on viewing the world, which isn't a bad thing, but maybe it leads to confusion about the ideas he's trying to get across.

  • @OneNewBoy
    @OneNewBoy 3 месяца назад

    A brilliant mind, a consistent researcher and a genuine person.

  • @yvealeciasmith
    @yvealeciasmith 9 месяцев назад +3

    Fascinating conversation! I'm in the process of developing a PhD proposal to explore the potential applications of AI in children's education, specifically within the physical learning environment, and while I am far from having the expertise to fully appreciate everything discussed here, Yann's insights have activated many trains of thought that I'm excited to go chasing after.

  • @dimapopov5962
    @dimapopov5962 9 месяцев назад +2

    8:00 Human language is indeed limited, but we also have formal logic languages that could describe space and time, like in VR. (Whitehead and Russell, "Principia Mathematica").

  • @alinasri9961
    @alinasri9961 9 месяцев назад +6

    More on AI and computer science please

  • @DavidFregoli
    @DavidFregoli 9 месяцев назад +2

    8 minutes in and we are already into wordcel vs shape rotator; great!

  • @praveenkumarak726
    @praveenkumarak726 9 месяцев назад +3

    Watching this in episodes, but a question already within the first 20 minutes or so. Was this recorded before or after “Sora”? Isn’t the statement about a latent representation of a world model, with spatio-temporal compression into a patch sequence, demonstrated by that? How is the joint embedding mentioned here different from that?

    • @xewi60
      @xewi60 9 месяцев назад

      Sora is using labeled videos; it's "cheating" in a way by using language.

  • @drbagattini
    @drbagattini 9 месяцев назад +1

    🎯 Key Takeaways for quick navigation:
    00:00 *🌐 Yann Lecun views the concentration of power in proprietary AI systems as a significant danger, advocating for open-source AI to empower humanity's inherent goodness.*
    04:45 *📚 Compares the information humans receive through sensory inputs to that of LLMs trained on textual data, suggesting a significant disparity in learning from the environment versus language.*
    10:22 *❓ Explores whether LLMs can construct a world model capable of understanding physical actions without direct sensory input.*
    12:01 *🛠️ Critiques the current training methods of LLMs for their inability to genuinely understand or predict physical reality and suggests the necessity of models that can anticipate the outcomes of actions in the world.*
    14:29 *🧠 Discusses the abstraction of thought before language, arguing that much of human thinking and planning occurs outside of language constraints.*
    17:14 *🔄 Highlights the limitations of token-by-token generation in LLMs, suggesting it lacks the depth of understanding or planning seen in human cognition.*
    18:26 *🌌 Argues for the importance of building world models through observation and interaction with the environment to truly understand and predict the physical world.*
    21:12 *🔮 Suggests exploring models with latent variables to better predict and understand the world's complexities, emphasizing the richness of the physical world over textual data.*
    21:55 *🛠️ Efforts to train AI to reconstruct images from corrupted versions have largely failed, highlighting challenges in learning good representations for image recognition.*
    23:05 *📉 Discusses the failure of traditional methods like autoencoders in generating useful image representations through reconstruction.*
    25:11 *🔄 Introduces joint embedding as an alternative, using encoders to predict the representation of an uncorrupted input from a corrupted version.*
    29:00 *🌱 Proposes joint embedding architectures (JEA) as a step towards advanced machine intelligence, focusing on abstract representation rather than pixel prediction.*
    30:08 *🌿 Highlights the importance of learning to ignore unpredictable details, like moving leaves, to extract abstract representations from inputs.*
    32:53 *📊 Discusses the necessity of abstracting from detailed physical reality to a more general representation for AI to reason and plan.*
    35:39 *🐈 Suggests that achieving a common sense understanding of the world akin to that of a cat is a crucial step before integrating language capabilities.*
    39:36 *🎥 Reveals success in learning representations from video that allow for the recognition of actions and physical possibilities.*
    42:19 *🚗 Contemplates using joint embedding architectures to model the world, enabling planning and action prediction, which is a capability beyond current LLMs.*
    44:24 *🌳 Suggests that hierarchical planning is essential for complex action planning but requires specific architecture beyond current AI capabilities.*
    49:02 *🤖 Discusses the limitation of LLMs in understanding and planning for physical actions based on the real world's complexity.*
    53:24 *🚀 Reflects on the unexpected success of autoregressive LLMs in understanding language, emphasizing the value of scaling and self-supervised learning.*
    58:59 *🛑 Critiques generative models for failing to learn good representations of images and videos, advocating for a shift to joint embedding predictive architectures.*
    01:01:04 *🌏 Points out that high-level reasoning in language is based on common experiences of the physical world, which current LLMs lack.*
    01:03:12 *🏗️ Argues that to achieve high-level common sense, AI needs to build on low-level common sense derived from direct experiences, which is missing in LLMs.*
    01:04:05 *🤹‍♂️ Discusses the inefficiency of language for learning physical interactions compared to direct sensory experiences in early development.*
    01:06:24 *🎭 Explains how large language models (LLMs) produce hallucinations due to the exponential increase in error with the sequence length of tokens.*
    01:14:15 *🔄 Proposes a shift from auto regressive models to energy-based models for advanced dialog systems capable of planning and reasoning.*
    01:16:08 *🧠 Emphasizes the importance of operating in abstract representation spaces for efficient optimization and reasoning in AI.*
    01:17:31 *🤖 Introduces the concept of energy-based models for generating responses through optimization in abstract thought spaces.*
    01:19:12 *⚙️ Details the process of optimizing abstract thought representations to generate accurate and relevant answers.*
    01:25:31 *🤔 Discusses the importance of abstract representations in energy-based models for processing complex information beyond direct text.*
    01:32:06 *🌍 Highlights the essential need for open-source AI platforms to ensure diversity and freedom in the development and application of AI systems across cultures and industries.*
    01:37:10 *🗣️ Stresses that unbiased AI systems are impossible due to subjective perceptions of bias, advocating for a pluralistic approach similar to liberal democracy's view on the press.*
    01:38:06 *📚 Envisions a future where everyday digital interactions are AI-mediated, emphasizing the necessity of open-source AI for maintaining democratic and cultural diversity.*
    01:39:57 *💡 Argues for the strategic advantage of leveraging existing user and customer bases to implement open-source AI solutions, rather than restricting AI development to a few corporations.*
    01:46:36 *🚀 Emphasizes the benefits of making AI models like Llama 2 open-source to accelerate progress and innovation through community contributions.*
    01:48:02 *🧭 Discusses the challenge of creating AI systems without bias, as bias is subjective and varies across different user bases.*
    01:52:25 *🌐 Argues for the importance of open-source AI in maintaining democracy and cultural diversity, enabling widespread customization and specialization.*
    01:58:01 *🌟 Expresses excitement for the future of AI, including multimodal models and systems capable of understanding the world, planning, and reasoning.*
    02:00:30 *📈 Highlights ongoing research towards creating AI systems with human-level understanding and planning capabilities, predicting a gradual progression rather than a sudden event.*
    02:03:06 *🔋 Notes the necessity for hardware innovation to match human brain compute power and efficiency for widespread AI deployment.*
    02:05:09 *🛤️ Predicts a gradual development towards AGI, emphasizing the complexity and multidisciplinary challenges that lie ahead.*
    02:07:31 *🧠 Discusses how intelligence is a collection of skills and the ability to acquire new skills efficiently, challenging the notion that it can be measured linearly like IQ.*
    02:08:55 *🛑 Pushes back against AI doomers, arguing that the emergence of superintelligence will be a gradual process, not a sudden event, allowing for the implementation of safety measures and guardrails.*
    02:10:20 *🤖 Suggests that AI systems will not inherently possess a desire to dominate or harm humans and that safety will be an integral part of their design process.*
    02:13:00 *🚦 Highlights the importance of designing AI with objective-driven architectures that include ethical guardrails and the ability to obey human commands.*
    02:17:08 *🛡️ Predicts that in the future, individual AI assistants will filter out harmful or misleading information before it reaches humans, further ensuring safety and control.*
    02:22:04 *🔄 Discusses the historical pattern of fear and resistance to new technologies, stressing the importance of embracing change and distinguishing real dangers from imagined ones.*
    02:24:30 *🔓 Advocates for open-source AI as a means to prevent centralized control and ensure diversity and democracy in AI development and deployment.*
    02:28:09 *🤝 Expresses trust in humanity's ability to use AI for good, emphasizing the role of democracy and free speech in guiding the development and application of AI technologies.*
    02:29:32 *🤖 Anticipates that while millions of humanoid robots won't be walking around soon, the next decade will be significant for robotics, marking the emergence of the robotics industry beyond preprogrammed behaviors.*
    02:31:19 *🔍 Highlights the importance of AI advancements for robotics, indicating that hardware developers are awaiting AI progress for more autonomous functionality.*
    02:33:27 *🏠 Expresses enthusiasm for robots entering homes for tasks like cleaning and cooking but acknowledges the sophisticated and complex nature of such tasks.*
    02:37:37 *🌍 Discusses the future impact of AI on humanity, comparing it to the invention of the printing press, suggesting AI will amplify human intelligence and have profound societal benefits.*
    02:42:00 *📖 Reflects on the historical reluctance to adopt new technologies, using the example of the Ottoman Empire's ban on the printing press to preserve calligrapher jobs, and parallels concerns about AI impacting current jobs.*
    Made with HARPA AI
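    The hallucination takeaway at 01:06:24 can be made concrete with a toy model (my own illustration of the drift argument, not LeCun's exact math): if each generated token independently stays "on track" with probability 1 - e, the chance an n-token answer remains entirely correct is (1 - e)^n, which decays exponentially with answer length.

    ```python
    # Toy illustration of compounding autoregressive error (an assumption-laden
    # sketch, not LeCun's formal argument): each token independently "goes wrong"
    # with probability e, and one wrong token spoils the whole answer, so the
    # probability an n-token answer stays correct is (1 - e)**n.

    def p_correct(e: float, n: int) -> float:
        """Probability an n-token sequence stays entirely correct."""
        return (1.0 - e) ** n

    # With a 1% per-token error rate, short answers are usually fine,
    # but long ones are almost certainly derailed somewhere.
    for n in (10, 100, 1000):
        print(f"n={n}: {p_correct(0.01, n):.4f}")
    ```

    The independence assumption is the crude part; real errors correlate, and techniques like chain-of-thought or verification spend extra tokens precisely to break this decay.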

  • @Joel_Bel
    @Joel_Bel 9 месяцев назад +35

    Instant like with guests like this :)

  • @blackgptinfo
    @blackgptinfo 8 месяцев назад +1

    The problem with his position is the exponential growth of these models and the push for AGI, where it becomes self-referential and can develop a self.

  • @sunshineinarizona1726
    @sunshineinarizona1726 9 месяцев назад +3

    Thank you, Lex, I LOVE your channel. You give me hope in humanity. 🌻

  • @var309
    @var309 3 месяца назад

    One of the leading lights in AI, and instead of trying to hype up his tech, he's demystifying it so even a lay person can understand, even highlighting all the limitations. This is a sign of the very best! Fantastic interview and individual

  • @worldwidewalks2199
    @worldwidewalks2199 9 месяцев назад +30

    This interview was probably recorded before OpenAI's Sora, and it already feels old

    • @MaxKamrani
      @MaxKamrani 9 месяцев назад +8

      your comment was written 4 hours ago, it already feels old

    • @matteoianni9372
      @matteoianni9372 9 месяцев назад +7

      Sora falsified most of what he said at the beginning.

    • @JeroenPut
      @JeroenPut 9 месяцев назад +1

      Yeah I was wondering this the whole time. This guy is clearly too pessimistic. People will find a way around the limitations

    • @ecognitio9605
      @ecognitio9605 9 месяцев назад +3

      Sora is vaporware: it has no world model, so it makes obvious mistakes, and it's way too compute-intensive to become a cloud service.

    • @bdown
      @bdown 9 месяцев назад +4

      Nobody wants to tell the truth and say AGI is here, or real close!!! They don't want the pushback or anything to ruin their progress, so they downplay it. We all know what's really going on.

  • @scottothegreat
    @scottothegreat 9 месяцев назад +2

    I like this guy. He seems like a voice of reason amid all the hype

  • @fredrikcarno2159
    @fredrikcarno2159 9 месяцев назад +13

    Thanks Lex, from Sweden

  • @Veptis
    @Veptis 9 месяцев назад

    Yann is one of the most reasonable voices on Twitter. He actually knows what he is talking about (although some of the arguments I simply skip).
    The claim about infants learning language seems incorrect from what I learned just a few days ago: even unborn children learn the syntax structure of languages while in the womb.

  • @enriquecortes-rello4538
    @enriquecortes-rello4538 9 месяцев назад +7

    that was a meaty discussion!

  • @HillcrestGames
    @HillcrestGames 9 месяцев назад +2

    He says that LLMs lack the capacity to understand the physical world, remember things, etc.
    But isn't the point of the LLM that it circumvents the need for those things, by reliably predicting responses from studying beings that do understand the physical world and remember things?
    In that sense it sort of emulates those capabilities.
    Of course an LLM can't take over the world, but it certainly can keep improving until it's significantly better at talking than any human could ever dream to be.

    • @RaySqw785
      @RaySqw785 5 месяцев назад

      Nope, you're transferring human-style reflection onto something that doesn't have any human concepts.

  • @DanouNauck
    @DanouNauck 9 месяцев назад +7

    Thank you very much for this extended explanation. It was very useful and informative. Thanks @Yann!

  • @LuigiPaiPai
    @LuigiPaiPai 9 месяцев назад +2

    I used to think like Lex, Yann LeCun (or Rousseau) that people are fundamentally good, but... that's when I was an early teenager. Since then I have learnt more and know more about the history of the world and its current state. In the world at large, and in the small world of academics, I have encountered extraordinary people, but I have also encountered a great number of terrible people... and, in even greater numbers, a lot of people who are simply passive and unthoughtful, and therefore prefer to play ball with the worst people for the short-term gains this can provide.
    Moreover, it does not really matter if people are "fundamentally good" if what they produce or participate in in the world is "practically" bad; this is the fundamental difference between good intentions and good deeds.

    • @markemad1986
      @markemad1986 9 месяцев назад +2

      I think everyone is called to be good, and is supposed to be good; however, I do think that goodness is rare. What's not good is people who gather power to themselves and are rushing to capture artificial intelligence for themselves, which is what virtually every corporation is doing! Putting everyone on common ground is definitely a much better option; that's how things normally are. The problem is that some people are seeing an opportunity for a power grab using the age-old tactic of "if you don't let me do it, the barbarians will make this place a wasteland."

  • @zeljkanenad
    @zeljkanenad 9 месяцев назад +7

    Yann is underrated. Excellent interview.

    • @Soulscribez
      @Soulscribez 9 месяцев назад +1

      I think he is overrated and too old; his time was then. The guy laid good foundations 20 years ago, but now there are others that are miles ahead in the field of AI.

    • @seetsamolapo5600
      @seetsamolapo5600 9 месяцев назад

      @@Soulscribez Like?

  • @mattankenbruck9465
    @mattankenbruck9465 7 месяцев назад

    Dear Dr. Fridman: Dr. LeCun is my favorite AI investigator and balanced perspective on AI within the field. Thus, I am very much edified by your having him on your podcast for the third time. I need to side with you, however, on at least one point you made (it may be that you both addressed this point in this episode, but I am unsure how thoroughly it was done). Your point, I think, Dr. Fridman, was that language may contain "wisdom" that might transcend some of Dr. LeCun's doubts about its usefulness in a world of "intuitive physics." Here's my thought in support of your position: language has evolved over at least tens of thousands of years. Both "Nature" and "Nurture" have been built in--built in the contexts of society, government, history, traditions, personal habits, and the environment sensed through intuitive physics. So, although superficially, language seems simple and carries only small bits of data, upon deeper reflection, it may be seen to contain the wisdom of the ages. As an extension of this thought, maybe for AI to serve humankind, we need both AIs that focus on the nexuses of language semantics and functionality, and also AIs that focus on representing intuitive physics and its benefits. Perhaps a working tension between these two types of AI models would be a great place for the human mind to have a say in a new world full of powerfully influential AIs. Your thoughts (if you have the opportunity), Dr. Fridman? Thank you for all of your work! Cheers! --Matt A.

    • @GaryMillyz
      @GaryMillyz 7 месяцев назад

      Use paragraph breaks.

  • @couldntfindafreename
    @couldntfindafreename 9 месяцев назад +7

    55:00 Once the AI passes a test, we just move the goalpost farther...

    • @Johnwilliams-th9hq
      @Johnwilliams-th9hq 9 месяцев назад

      Of course the best part of human nature, we are never satisfied. 😊

    • @billcowhig5739
      @billcowhig5739 9 месяцев назад

      That’s been the state of AI as long as it has been around. When I first became a student of AI, my initial search was for a definition of AI. The definition I finally came to: AI is always solving the next most important problem that computers are given. Once it could conquer chess, what was next? And it is doing that for numerous problems at the same time, each called AI. Here they talk about robotics, after talking about the main thrust, AGI.

    • @nias2631
      @nias2631 9 месяцев назад

      Or maybe we never had a good definition to start with.

  • @arvisz1871
    @arvisz1871 9 месяцев назад +1

    My brain is thirsty for this type of conversation, where both the host and the guest are excellent on the covered topics. Especially when the topics are not eternally ambiguous (like politics) but can be dissected (like ML).

  • @donaldstrubler3870
    @donaldstrubler3870 9 месяцев назад +12

    This now makes the entire Yann v Yud "clash" hilarious. This is a different level of intellect

    • @akuno_
      @akuno_ 9 месяцев назад +1

      @@therainman7777 Accurate

    • @MitchellPorter2025
      @MitchellPorter2025 9 месяцев назад +3

      Yann's knowledge of (present-day) AI is superior but his anti-doomer arguments 2:08:48 could be rebutted by a high school debater.
      AIs won't have a will to dominate because we'll never make them that way. Or if we do, there will be AI police to stop them... You won't have your judgment warped by AI propaganda because your own AI will filter everything on the net before you see it (!)... We'll be making smarter and smarter AI and figuring out safety as we go along, and if anything ever goes wrong, it will never lead to anything irreversibly out of control, even though they will be smarter than the people in charge of the design process...
      I won't say that such a future is impossible, but this is literally a utopian scenario in the bad sense, that it is counting on certain obvious things to never go wrong. The "defense" provided by Yann's considerations is extremely flimsy, and if you actually don't want those things to ever go wrong, you would need a culture and regime of AI development more like what is associated with Yudkowsky.

    • @davidw8668
      @davidw8668 9 months ago

      ​@@therainman7777 who are you to make such allegations?

    • @davidw8668
      @davidw8668 9 months ago

      Yud isn't an intellectual, nor has he ever contributed meaningfully. He's rambling fear-based nonsense rooted in science-fiction fantasies.

    • @davidw8668
      @davidw8668 9 months ago +2

      @@therainman7777 he's nuanced, grounded, and has authority. So I guess my point is, if you'd frame your criticism in a more specific way, it might add to the discussion.

  • @dontwannabefound
    @dontwannabefound 9 months ago +1

    He is right: AGI cannot be achieved through solving next-word prediction in its current state

  • @alanoperate6982
    @alanoperate6982 9 months ago +12

    Thank you, Lex, for inviting these great people

    • @osuf3581
      @osuf3581 9 months ago

      In other interviews perhaps. This guy does not make that list. What LeCun claims is usually wrong.

  • @mikemaldanado6015
    @mikemaldanado6015 5 months ago +1

    So refreshing to hear someone in charge speaking truthfully about the limitations of LLMs and how hard it is and how far away we are from actually achieving AI. I feel a little better now. ChatGPT, Elon Musk, Bill Gates, and others are all talking crazy... IMO we never achieve AI. LLMs have already plateaued according to INDEPENDENT research. The only thing I disagree with is when he said AI will make us smarter; they said the same thing about the internet, and for the first time in US history the IQ of the younger generation is lower (like we needed research to tell us that, lol)

  • @OculusGame
    @OculusGame 9 months ago +28

    Excited about this one.

  • @gabrielsandstedt
    @gabrielsandstedt 9 months ago +1

    Would like to hear his thoughts on OpenAI's Sora video model. It seems good at predicting

    • @gabrielsandstedt
      @gabrielsandstedt 9 months ago

      @@therainman7777 yeah, I have a feeling he is wrong in his assumptions about the requirements for video generation. However, credit for his open-source commitment

  • @rydplrs71
    @rydplrs71 9 months ago +5

    I’m subscribing not because I can or will watch all your content, but because people need to see the great content offered by your guests. There’s something for everyone, and it will make them really think about one subject or another.

  • @varunahlawat169
    @varunahlawat169 9 months ago

    What a podcast, it helps me understand the technical frontier in a much more intuitive manner!

  • @AH-wr1ir
    @AH-wr1ir 9 months ago +6

    I'm not sure we always think about what we are going to say before we say it; maybe in a wider planning context, but the actual words just fall out of our mouths.

    • @lightluxor1
      @lightluxor1 9 months ago

      He is a good example of someone who projects himself into everyone's mind. He knows what he is going to say before he says it. I don’t. It seems one of us is wrong. Guess who?!

    • @kooshanjazayeri
      @kooshanjazayeri 9 months ago +4

      We use the word thinking to mean a conscious effort, but the unconscious is also thinking, especially if we are talking about code/algorithms and AI

    • @gustinian
      @gustinian 9 months ago +3

      It is a fascinating topic and I'm sure that we alter behaviour in different circumstances. I sense that, like extraverts and introverts, people are on a continuum between two extremes - those whose thoughts are streaming faster than they can successfully vocalise (e.g. they often need to jump ahead when expressing reasoning steps just to keep up with their racing thoughts) and vice versa - those at the other extreme who seem to emotively 'blurt out' their unguarded thoughts unfiltered. In other words, times when thoughts are running ahead of speech, and times when speech is running ahead of thoughts.
      It's interesting to observe the amount of hesitant 'umming and erring' different people deploy in speech as they formulate sentences and search for apt expression in real time; this lack of fluency is particularly obvious when people begin to communicate in a non-native language.
      In rare cases, highly eloquent communicators can converse entirely fluently and unrehearsed without redundant verbal crutches (e.g. 'um', 'er', 'you know' or even '...go ahead and...') i.e. they can apparently observe, plan and utter their communication via a meta-analysis in real time to transcend this hesitancy.
      Similarly, in the world of music, a small subset of musicians and composers have learnt to improvise and extemporise in real time by developing musical themes into intertwining harmonised patterns (called Partimento). This seemingly impossible next-level skill requires an extensive musical toolset borne from focussed practice, muscle memory and intuition. Other composers rely on planning and work out their compositions laboriously on paper or by trial and error.
      In everyday speech and for mundane situations it would quickly become exhausting to formulate sentences in this careful way. But if one's life depends on it then...

  • @sebby7402
    @sebby7402 9 months ago +1

    this interview is miles ahead of Altman's marketing pitch

  • @Etienne_O
    @Etienne_O 9 months ago +9

    Best episode of 2024

  • @KayWizz
    @KayWizz 9 months ago +2

    You need to have Yann LeCun and Yoshua Bengio on at the same time to discuss things!

  • @blackspetnaz2
    @blackspetnaz2 9 months ago +5

    Yann: it will come progressively…
    OpenAI: SORA.
    Yann:… 🦗🦗🦗

    • @el_arte
      @el_arte 9 months ago +4

      SORA makes useless videos. It has zero agency outside of that use case. And generating videos is a tiny piece of the puzzle.
      But Yann has many certainties that may be poorly founded. Certainties in general are dangerous.

    • @blackspetnaz2
      @blackspetnaz2 9 months ago +1

      @@el_arte The point is, it caught him and his theories by surprise.

    • @sansithagalagama
      @sansithagalagama 3 months ago

      Sora is not real