Why LLMs hallucinate | Yann LeCun and Lex Fridman
- Published: 9 Mar 2024
- Lex Fridman Podcast full episode: • Yann Lecun: Meta AI, O...
Please support this podcast by checking out our sponsors:
- HiddenLayer: hiddenlayer.com/lex
- LMNT: drinkLMNT.com/lex to get free sample pack
- Shopify: shopify.com/lex to get $1 per month trial
- AG1: drinkag1.com/lex to get 1 month supply of fish oil
GUEST BIO:
Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI.
PODCAST INFO:
Podcast website: lexfridman.com/podcast
Apple Podcasts: apple.co/2lwqZIr
Spotify: spoti.fi/2nEwCF8
RSS: lexfridman.com/feed/podcast/
Full episodes playlist: • Lex Fridman Podcast
Clips playlist: • Lex Fridman Podcast Clips
SOCIAL:
- Twitter: / lexfridman
- LinkedIn: / lexfridman
- Facebook: / lexfridman
- Instagram: / lexfridman
- Medium: / lexfridman
- Reddit: / lexfridman
- Support on Patreon: / lexfridman
It would have been nicer if you could have clipped it up to the part about the constant speed of computing no matter the difficulty of the question, to showcase the current level of intelligence of such systems. Very nice insight.
Word salad is fun but doesn't get us anywhere
They don’t hallucinate they just guess wrong
Guessing wrong is hallucinating, I guess
@@abinashkarki yeah. I think this interview is pretty good. I've always thought the whole LLM thing was just a way of telling a better lie... a way of telling you what it thinks you want to hear, based on data that is heavily tainted with human bias and dataset bias. It is only going to amplify our own insanity. It will not help us get out of it.
Kudos to whoever popularized the term “hallucination” in AI performance, it’s a convenient way to never admit errors or simply being wrong. Brilliant doublespeak.
Of course hallucinations occur: the knowledge is modeled, not understood, by the giant DL model, as OpenAI just acknowledged.
You could see the cognitive dissonance on Lex's face as an expert explained to him that LLM worship is mistaken.
I don't agree with his claim that if one predicted word is wrong, the following words will be wrong too and the error accumulates exponentially. That is not how LLMs work. In fact, LLMs are very resilient to local errors thanks to the attention mechanism. You can even deliberately make typos or mistakes, and the LLM will still understand the question and continue to produce the right answer.
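For what it's worth, the exponential-drift argument this comment disputes can be sketched in a few lines, under the strong (and debatable) assumption that per-token errors are independent — which is exactly the assumption the attention-based objection above attacks:

```python
# Minimal sketch of the compounding-error argument (my own illustration,
# not from the episode): assume each generated token is wrong independently
# with probability e; then the chance an n-token answer contains no error
# is (1 - e) ** n, which decays exponentially in n.

def p_all_correct(per_token_error: float, n_tokens: int) -> float:
    """Probability an n-token completion has zero errors,
    assuming independent per-token errors (a simplification)."""
    return (1.0 - per_token_error) ** n_tokens

print(round(p_all_correct(0.01, 10), 3))    # short answer  → 0.904
print(round(p_all_correct(0.01, 500), 3))   # long answer   → 0.007
```

The counterpoint in the thread is that the independence assumption is false: attention lets later tokens condition on the whole context and recover from a local slip, so real error rates don't compound this cleanly.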
That's what I have noticed as well. It can handle typos and words out of place on prompts quite well, but I think that's different from what it says in response. Once it goes down a wrong path, it has a tendency to double-down on the narrative rather than correct itself.
However, I think it's fair to say that they are biased towards generating an answer rather than simply saying "I don't know," which in some cases would be the better response. When you ask about real-world facts, such as "Who is the wealthiest person in City XYZ?", even GPT-3.5T will produce erroneous results and spit out a name that isn't correct. Which is part of the "cannot be made factual" point. They're great conversationalists, but they do "lie" a lot with confidence... and clearly can be controlled, as Gemini proves with all its prescribed nonsense. 😂
Refer to the google autocorrect skit, and then square it to the millionth power lol. Some of these A.I. models are being fed so much
Lol you really think you know more than him? 🤣
I think typos and hallucinating errors are a bit different.
he is talking about the model making a mistake, not understanding user mistakes
Lex's comment about the gravitational pull, as he puts it, makes a lot more sense to me. If that were not right, we would see more hallucinations deeper into the LLM's output, and I have not seen any evidence of that. Hallucinations do occur, obviously, but not in the way Yann described.
Just because LeCun is a Turing Award recipient and Chief AI Scientist at Meta does not mean everyone has to agree with everything he says. He holds a lot of opinions that are controversial within the ML community and rest more on his own intuition than on established facts. His views are very interesting, but some important figures in AI would obviously disagree.
You must intentionally deceive it or engage in illogical behavior to truly cause ChatGPT to derail. But then again, the same can be said about normal humans...
Nope, that is false; it is really easy to derail them. In every conversation on every subject I find mistakes, even with the current best models, and if I want them to derail, it's frustrating how easy it is. Most humans say garbage most of the time too, that part is correct though...
His logic is like "Java is not a new technology, it is just assembly under the hood, it cannot even allow you to manage memory precisely, Java is doomed". You can replace Java with LLM here to understand his logic.
Ehhh, haha, not at all. He is not saying anything like that analogy, sorry... you just proved you did not understand.