LLM vs NLP | Kevin Johnson

  • Published: Oct 4, 2024
  • Kevin Johnson (Head of AI at dscout) breaks down the difference between LLM, NLP, and other terms related to AI.

Comments • 27

  • @SudipBishwakarma
    @SudipBishwakarma 1 month ago +1

    Such a great presentation - concise and clear.

  • @mauricecinque5618
    @mauricecinque5618 6 months ago +2

    Good and concise presentation. The problem Kevin mentioned about the "I don't know" answer of LLMs is quite limiting as of today. I asked a simple question (about a specific object whose name was invented in a novel) to two different LLMs, and both gave wrong answers rather than saying "I don't know". After I repeated the same question, they answered again, but still with different wrong answers. I ended up providing the Wikipedia link (a quite short page) where the correct answer could be found, and the LLM answered "I am not allowed to read pages". Basic human intelligence would accept the proposal to read a source, at least a widely recognized one, to learn something. So the word "intelligence" seems usurped when applied to LLMs and to deep learning in general, which "freezes" knowledge in a constantly changing world. Inferring from an intrinsically limited, and thus biased, corpus will inevitably hit limits and at some point produce wrong answers or none at all.

  • @kishorekumar9930
    @kishorekumar9930 3 months ago +1

    Super impressed with the explanation. Now I know what NLP and LLM are and their significance. Damn, this shit is buzzing, no cap.

  • @tushartiwari7929
    @tushartiwari7929 3 months ago

    What a clear presentation man! 😎

  • @mlhuman5064
    @mlhuman5064 6 months ago +4

    "NLP predicts the next word"
    Very revealing - the speaker doesn't have a lot of NLP experience for sure.

    • @philippdowling
      @philippdowling 4 months ago

      100%
      It's misleading to even frame LLMs as a different thing from NLP - LLMs are just a family of models within the broader field of NLP. This talk is silly

  • @RohanVetale
    @RohanVetale 2 months ago

    Amazing explanation ❤

  • @rohgels
    @rohgels 7 months ago

    Great presentation

  • @muhannadobeidat
    @muhannadobeidat 7 months ago

    Nicely presented.

  • @mauricecinque5618
    @mauricecinque5618 6 months ago

    Definitely interesting insights.

  • @pamelaanang5001
    @pamelaanang5001 9 days ago

    Does that mean perplexity is dependent on the prompt given to the model?
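
    A minimal sketch of why perplexity does depend on the conditioning prompt: perplexity is the exponential of the average negative log-probability the model assigns to the observed tokens, and each of those probabilities is conditioned on everything that precedes the token, including the prompt. The per-token log-probabilities below are invented for illustration; no particular model is assumed.

```python
import math

def perplexity(token_log_probs):
    """exp of the average negative log-probability of the observed tokens."""
    return math.exp(-sum(token_log_probs) / len(token_log_probs))

# Hypothetical log-probabilities a model might assign to the same continuation
# "Paris is the capital" under two different prompts (values are made up).
after_relevant_prompt = [-0.2, -0.4, -0.1, -0.3]    # prompt: "Name a fact about France:"
after_unrelated_prompt = [-2.1, -1.8, -2.5, -1.9]   # prompt: "Here is my shopping list:"

print(perplexity(after_relevant_prompt))    # ~1.28: the tokens were expected
print(perplexity(after_unrelated_prompt))   # ~7.96: the tokens were surprising
```

    Scoring the same continuation after a relevant prompt gives a much lower perplexity than after an unrelated one, which is exactly the prompt dependence the question asks about.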

  • @MuralimohanM
    @MuralimohanM 8 months ago

    Nicely done!

  • @10sanat
    @10sanat 6 months ago

    explained so well

  • @ChẩuHiếuofficial
    @ChẩuHiếuofficial 9 months ago

    Great NLP, I like it

  • @m5pat
    @m5pat 17 days ago

    Large language models and natural language processing

  • @vigneshdgame
    @vigneshdgame 3 months ago

    Take a bow

  • @jellezuidema
    @jellezuidema 7 months ago

    We all need to simplify when giving talks to an audience of non-experts, but redefining "NLP" to mean "the technology from 10 years ago", without recognising that the *field* of NLP has co-produced LLMs, is a bit too silly.

  • @KomiVir
    @KomiVir 6 months ago

    GREAT GREAT GREAT

  • @kartikpodugu
    @kartikpodugu 9 months ago +1

    There is something called the emergent properties of LLMs. Though they are not trained for reasoning, math, or images containing text, many LLMs are able to do those tasks. In fact, there are many reasoning benchmarks on which LLMs are getting good, though not yet great, scores. Saying that it is just based on probability and not reasoning is no longer valid, I think. Of course, chatbots like Bard and ChatGPT do more than just inference with an LLM. Please share your thoughts on this.

    • @princezuko7073
      @princezuko7073 9 months ago

      Don't you think we need explainable AI for this phenomenon? Making each layer interpretable would help us understand why it surprisingly gives competent performance without any training on reasoning or task-specific information.

    • @kartikpodugu
      @kartikpodugu 9 months ago

      @princezuko7073 Definitely, we need explainable AI. I am just pointing out that LLMs have reasoning capabilities and that they are improving as we speak.

    • @princezuko7073
      @princezuko7073 9 months ago

      @kartikpodugu Interesting perspective. How do you see the reasoning capabilities of these LLMs evolving over time?

  • @ConnorMcCormick
    @ConnorMcCormick 7 months ago +1

    Humans: can predict the next token accurately.
    Kevin: See, now that's intelligent
    LLMs: can predict the next token accurately.
    Kevin: Haha, nice "intelligence"

  • @BR-hi6yt
    @BR-hi6yt 9 months ago +3

    It's doing more than rolling dice - emergent intelligence. But nice try.

    • @paulimbacana
      @paulimbacana 8 months ago +3

      That's the tell that you have no idea what you are talking about.