Stanford CS25: V4 I Overview of Transformers

  • Published: Apr 22, 2024
  • April 4, 2024
    Steven Feng, Stanford University [styfeng.github.io/]
    Div Garg, Stanford University [divyanshgarg.com/]
    Emily Bunnapradist, Stanford University [ / ebunnapradist ]
    Seonghee Lee, Stanford University [shljessie.github.io/]
    Brief intro and overview of the history of NLP, Transformers and how they work, and their impact. Discussion of recent trends, breakthroughs, applications, and remaining challenges/weaknesses, plus a discussion of AI agents. Slides here: docs.google.com/presentation/...
    More about the course can be found here: web.stanford.edu/class/cs25/
    View the entire CS25 Transformers United playlist: • Stanford CS25 - Transf...

Comments • 38

  • @3ilm_yanfa3
    @3ilm_yanfa3 1 month ago +12

    Can't believe it ... Just today we started the part about LSTMs and Transformers in my ML course, and here it comes.
    Thank you guys!

  • @fatemehmousavi402
    @fatemehmousavi402 1 month ago +7

    Awesome, thank you Stanford Online for sharing this amazing video series

  • @Drazcmd
    @Drazcmd 1 month ago +5

    Very cool! Thanks for posting this publicly, it's really awesome to be able to audit the course :)

  • @styfeng
    @styfeng 1 month ago +18

    it's finally released! hope y'all enjoy(ed) the lecture 😁

    • @laalbujhakkar
      @laalbujhakkar 1 month ago

      Don't hold the mic so close bro. The lecture was really good though :)

    • @gemini22581
      @gemini22581 1 month ago

      What is a good course to learn NLP?

    • @siiilversurfffeeer
      @siiilversurfffeeer 1 month ago

      hi feng! will there be more cs25 v4 lectures uploaded on this channel?

    • @styfeng
      @styfeng 1 month ago +1

      @@siiilversurfffeeer yes! should be a new video out every week, approx. 2-3 weeks after each lecture :)

  • @benjaminy.
    @benjaminy. 1 month ago +2

    Hello Everyone! Thank you very much for uploading these materials. Cheers

  • @mjavadrajabi7401
    @mjavadrajabi7401 1 month ago +5

    Great!! Finally it's time for CS25 V4 🔥

  • @marcinkrupinski
    @marcinkrupinski 1 month ago +3

    Amazing stuff! Thank you for publishing this valuable material!

  • @lebesguegilmar1
    @lebesguegilmar1 1 month ago +2

    Thanks for sharing this course and lecture, Stanford. Congratulations. Greetings from Brazil

  • @JJGhostHunters
    @JJGhostHunters 1 month ago +1

    I recently started to explore using transformers for time-series classification as opposed to NLP. Very excited about this content!

  • @liangqunlu1553
    @liangqunlu1553 1 month ago +2

    Very interesting summarization

  • @RishiKaura
    @RishiKaura 1 month ago +1

    Sincere and smart students

  • @egonkirchof
    @egonkirchof 1 day ago

    In summary, Transformers mean using tons of weight matrices, leading to way better results.

  • @GerardSans
    @GerardSans 1 month ago +28

    Be careful using anthropomorphic language when talking about LLMs. Eg: thoughts, ideas, reasoning. Transformers don’t “reason” or have “thoughts” or even “knowledge”. They extract existing patterns in the training data and use stochastic distributions to generate outputs.

    • @ehza
      @ehza 1 month ago +2

      That's a pretty important observation imo

    • @junyuzheng5282
      @junyuzheng5282 1 month ago +3

      Then what is “reason” “thoughts” “knowledge”?

    • @DrakenRS78
      @DrakenRS78 1 month ago +1

      Do individual neurons have thoughts, reason, or knowledge, or is it once again the collective that we should be assessing?

    • @TheNewton
      @TheNewton 1 month ago

      This mis-anthropomorphism problem will only grow because each end of the field/industry is being sloppy with it, so calls for sanity will just get derided as time goes on.
      On the starting side we have academics title-baiting, like they did with "attention", so papers get attention, instead of just coining a new word/phrase like 'correlation network', 'word window', 'hyper hyper-networks', etc., or just overloading existing terms like 'backtracking' and 'backpropagation'.
      And on the other end of the collective full-court press, corporations keep passing assistants (tools) off as human-like with names such as Cortana and Siri for the sake of branding and marketing.

    • @TheNewton
      @TheNewton 1 month ago

      @@junyuzheng5282 `Then what is "reason" "thoughts" "knowledge"?`
      Reason, thoughts, knowledge, etc. are more than what is hallucinated in your linear algebra formulas

  • @GeorgeMonsour
    @GeorgeMonsour 1 month ago

    I want to know more about 'filters.' Are they human or computer processes, or mathematical models? The filters are a reflection I'd like to understand more about. I hope they are not an inflection; that would be an unconscious pathway.
    This is a really sweet dip into the currency of knowledge, and these students are to be commended; however, in the common world there is a tendency developing towards a 'tower of Babel'.
    Greed may have an influence that we must be wary of. I heard some warnings in the presentation that consider this tendency.
    I'm impressed by these students. I hope they aren't influenced by the silo system of capitalism and that they remain at the front of the generalization and commonality needed to keep bad actors off the playing field.

  • @IamPotato_007
    @IamPotato_007 1 month ago

    Where are the professors?

  • @hussienalsafi1149
    @hussienalsafi1149 1 month ago +1

    ☺️☺️☺️🥰🥰🥰

  • @TV19933
    @TV19933 1 month ago

    future artificial intelligence
    i was into talk this
    probability challenge
    Gemini ai talking ability rapid talk i suppose so
    it's splendid

  • @Anbu_Sampath
    @Anbu_Sampath 1 month ago

    It would be great if CS25: V4 got its own playlist on YouTube.

  • @riju1956
    @riju1956 1 month ago +6

    so they stand for 1 hour

    • @rockokechukwu3343
      @rockokechukwu3343 1 month ago

      Is it okay to cheat in an exam if you have the opportunity to do so?

  • @ramsever5087
    @ramsever5087 1 month ago

    What is said at 13:47 is incorrect.
    Large language models like ChatGPT or other state-of-the-art language models do not only have a decoder in their architecture. They employ the standard transformer encoder-decoder architecture. The transformer architecture used in these large language models consists of two main components:
    The Encoder:
    This encodes the input sequence (prompt, instructions, etc.) into vector representations.
    It uses self-attention mechanisms to capture contextual information within the input sequence.
    The Decoder:
    This takes in the encoded representations from the encoder.
    It generates the output sequence (text) in an autoregressive manner, one token at a time.
    It uses self-attention over the already generated output, as well as cross-attention over the encoder's output, to predict the next token.
    So both the encoder and decoder are critical components. The encoder allows understanding and representing the input, while the decoder enables powerful sequence generation capabilities by predictively modeling one token at a time while attending to the encoder representations and past output.
    Having only a decoder without an encoder would mean the model can generate text but not condition on or understand any input instructions/prompts. This would severely limit its capabilities.
    The transformer's encoder-decoder design, with each component's self-attention and cross-attention, is what allows large language models to understand inputs flexibly and then generate relevant, coherent, and contextual outputs. Both components are indispensable for their impressive language abilities.

    • @gleelantern
      @gleelantern 16 days ago +1

      ChatGPT, Gemini, etc. are decoder-only models. Read their tech reports.
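
A minimal, self-contained sketch of the decoder-only setup described in the reply above: the prompt is not passed through a separate encoder; its tokens simply start the sequence, and causal self-attention lets every later position attend back to them. Toy vocabulary, random weights, and a single attention layer, purely illustrative rather than any production model.

```python
# Hypothetical toy decoder-only generation loop (numpy only, random weights).
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model = 50, 16
embed = rng.normal(size=(vocab_size, d_model))           # token embeddings
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
W_out = rng.normal(size=(d_model, vocab_size))            # output projection

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def next_token_distribution(token_ids):
    """One causal self-attention layer over prompt + generated tokens."""
    x = embed[token_ids]                                  # (seq, d_model)
    q, k, v = x @ W_q, x @ W_k, x @ W_v
    scores = q @ k.T / np.sqrt(d_model)                   # (seq, seq)
    causal_mask = np.triu(np.full(scores.shape, -np.inf), k=1)  # block future positions
    attn = softmax(scores + causal_mask, axis=-1)
    h = attn @ v
    return softmax(h[-1] @ W_out)                         # distribution over next token

prompt = [3, 17, 42]         # "conditioning" = the prompt just starts the sequence
tokens = list(prompt)
for _ in range(5):
    probs = next_token_distribution(np.array(tokens))
    tokens.append(int(probs.argmax()))                    # greedy; sampling also possible
print(tokens)
```

Encoder-decoder models (the original Transformer, T5, etc.) do use the separate encoder and cross-attention the parent comment describes; GPT-style chat models instead fold everything into this single causal stack.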

  • @laalbujhakkar
    @laalbujhakkar 1 month ago +2

    Stanford's struggles with microphones continue.

    • @jeesantony5308
      @jeesantony5308 1 month ago +1

      it is cool to see some negative comments in between lots of pos... ✌🏼✌🏼

    • @laalbujhakkar
      @laalbujhakkar 1 month ago

      @@jeesantony5308 I love the content, which makes me h8 the lack of thought and preparation that went into the delivery of all that knowledge even more. Just trying to reduce the loss as it were.