Why LLMs hallucinate | Yann LeCun and Lex Fridman

  • Published: 9 Mar 2024
  • Lex Fridman Podcast full episode: • Yann Lecun: Meta AI, O...
    Please support this podcast by checking out our sponsors:
    - HiddenLayer: hiddenlayer.com/lex
    - LMNT: drinkLMNT.com/lex to get free sample pack
    - Shopify: shopify.com/lex to get $1 per month trial
    - AG1: drinkag1.com/lex to get 1 month supply of fish oil
    GUEST BIO:
    Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI.
    PODCAST INFO:
    Podcast website: lexfridman.com/podcast
    Apple Podcasts: apple.co/2lwqZIr
    Spotify: spoti.fi/2nEwCF8
    RSS: lexfridman.com/feed/podcast/
    Full episodes playlist: • Lex Fridman Podcast
    Clips playlist: • Lex Fridman Podcast Clips
    SOCIAL:
    - Twitter: / lexfridman
    - LinkedIn: / lexfridman
    - Facebook: / lexfridman
    - Instagram: / lexfridman
    - Medium: / lexfridman
    - Reddit: / lexfridman
    - Support on Patreon: / lexfridman
  • Science

Comments • 28

  • @LexClips
    @LexClips  4 months ago +4

    Full podcast episode: ruclips.net/video/5t1vTLU7s40/видео.html
    Lex Fridman podcast channel: ruclips.net/user/lexfridman
    Guest bio: Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI.

  • @natsudragnir4131
    @natsudragnir4131 4 months ago +1

    It would have been nicer if you had clipped it up until the part with the argument about the constant speed of computing no matter the difficulty of the question, to showcase the current level of intelligence of such systems. Very nice insight.

  • @hranko3143
    @hranko3143 4 months ago +3

    Word salad is fun but doesn't get us anywhere

  • @panthersoul
    @panthersoul 4 months ago +6

    They don’t hallucinate, they just guess wrong.

    • @abinashkarki
      @abinashkarki 4 months ago +3

      Guessing wrong is hallucinating, I guess.

    • @panthersoul
      @panthersoul 4 months ago

      @@abinashkarki yeah. I think this interview is pretty good. I've always thought the whole LLM thing was just a way of telling a better lie... a way of telling you what it thinks you want to hear based on data that is heavily tainted with human bias and dataset bias. It is only going to amplify our own insanity. It will not help us get out of it.

  • @meatskunk
    @meatskunk 4 months ago +2

    Kudos to whoever popularized the term “hallucination” for AI performance; it’s a convenient way to never admit to errors or to simply being wrong. Brilliant doublespeak.

  • @artukikemty
    @artukikemty 4 months ago

    Of course hallucinations occur, because the knowledge is modeled, not understood, by the giant DL model, as OpenAI just acknowledged.

  • @pebre79
    @pebre79 4 months ago +6

    You could see the cognitive dissonance on Lex's face as an expert explains to him that LLM worship is mistaken.

  • @gemini_537
    @gemini_537 4 months ago +16

    I don't agree with what he said, that if one predicted word is wrong the following words will be wrong and the error will accumulate exponentially. This is not how LLMs work. In fact, LLMs are very resilient to local errors due to the attention mechanism. You can even deliberately make some typos or mistakes, and LLMs can still understand the question and continue to produce the right answer.

    • @AutitsicDysexlia
      @AutitsicDysexlia 4 months ago +5

      That's what I have noticed as well. It can handle typos and words out of place in prompts quite well, but I think that's different from what it says in its response. Once it goes down a wrong path, it has a tendency to double down on the narrative rather than correct itself.
      However, I think it's fair to say that they are biased towards generating an answer rather than simply saying "I don't know," which in some cases would be a better response. In many cases when you're asking about facts in the real world, such as "Who is the wealthiest person in City XYZ?", even GPT-3.5T will produce erroneous results and spit out a name that isn't correct. Which is part of the "cannot be made factual." They're great conversationalists, but they do "lie" a lot with confidence... and clearly can be controlled, as Gemini proves with all its prescribed nonsense. 😂

    • @shawnmeade7251
      @shawnmeade7251 4 months ago +1

      Refer to the Google autocorrect skit, and then square it to the millionth power lol. Some of these A.I. models are being fed so much

    • @rubes8065
      @rubes8065 4 months ago +10

      Lol you really think you know more than him? 🤣

    • @abinashkarki
      @abinashkarki 4 months ago +3

      I think typos and hallucinating errors are a bit different.

    • @needaneym1932
      @needaneym1932 4 months ago +7

      He is talking about the model making a mistake, not about understanding user mistakes.
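
For reference, here is a minimal sketch of the exponential-divergence argument this thread is debating, i.e. LeCun's claim in the clip that per-token errors compound over a long completion. It assumes a fixed, independent per-token error rate, which is an illustrative simplification (and exactly the assumption the comments above push back on), not a measured property of any model:

```python
# Sketch of the exponential-divergence argument: if each autoregressively
# generated token has an independent probability e of taking the answer outside
# the set of acceptable continuations, the chance that an n-token answer stays
# on track is (1 - e) ** n, which decays exponentially with n.
# Both e and the independence assumption are hypothetical, for illustration only.

def p_stays_correct(per_token_error: float, n_tokens: int) -> float:
    """Probability that none of n_tokens tokens drifts off-track,
    under the independence assumption above."""
    return (1.0 - per_token_error) ** n_tokens

if __name__ == "__main__":
    e = 0.01  # hypothetical 1% chance per token of an unrecoverable error
    for n in (10, 100, 500, 1000):
        print(f"n={n:5d} tokens -> P(answer stays correct) ~ {p_stays_correct(e, n):.4f}")
```
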

  • @dusanbosnjakovic6588
    @dusanbosnjakovic6588 4 months ago +2

    Lex's comment about the gravitational pull, as he puts it, makes a lot more sense to me. If that were not right, then we would see more hallucinations deeper in the output of the LLM completion. I have not seen any evidence of that. Hallucinations do occur, obviously, but not in the way that Yann described.

  • @guillaumeleguludec8454
    @guillaumeleguludec8454 4 months ago +1

    Just because LeCun is a Turing Award recipient and Chief AI Scientist at Meta does not mean everyone has to agree with everything he says. He holds a lot of opinions that are controversial within the ML community and that pertain more to his own intuition than to established facts. His views are very interesting, but obviously some important figures in AI would disagree.

  • @DrJanpha
    @DrJanpha 4 months ago +3

    You must intentionally deceive it or engage in illogical behavior to truly cause ChatGPT to derail. But then again, the same can be said about normal humans...

    • @attheblank
      @attheblank 4 months ago +5

      Nope, that is false; it is really easy to make them "derail". In every conversation on every subject I find mistakes, even with the current best models, and if I want them to "derail", it is frustrating how easy it is. Most humans are saying garbage most of the time, though, that much is correct...

  • @gemini_537
    @gemini_537 4 months ago +2

    His logic is like "Java is not a new technology, it is just assembly under the hood, it doesn't even let you manage memory precisely, Java is doomed". You can replace Java with LLM here to understand his logic.

    • @attheblank
      @attheblank 4 months ago +2

      Ehhh, haha, not at all. He is not saying anything like that analogy, sorry... you just proved that you did not understand.