308 - An introduction to language models with focus on GPT

  • Published: 8 Nov 2024

Comments • 30

  • @rashadulislamsumon9815
    @rashadulislamsumon9815 1 year ago +2

    Best channel for computer vision

    • @AshutoshDhaka
      @AshutoshDhaka 10 months ago

      This video is about GPT. But I agree, he's a god at teaching.

  • @aggreym.muhebwa7077
    @aggreym.muhebwa7077 1 year ago +1

    I am super excited about your next series on language models. Thanks a lot (in advance).

  • @nyariimani7281
    @nyariimani7281 1 year ago +2

    THIS IS SO GREAT! This is incredibly timely. Thank you for this. Excited for the next one.

  • @RahulKumar-xb2js
    @RahulKumar-xb2js 1 year ago

    Eagerly waiting for the next part.

  • @Michaeljamieson10
    @Michaeljamieson10 1 year ago

    Really useful for understanding how to create prompts when using OpenAI, thanks.

  • @rashadulislamsumon9815
    @rashadulislamsumon9815 1 year ago

    I am eagerly waiting for your next series on language models.

  • @trapbushali542
    @trapbushali542 1 year ago +1

    GOAT! Let's GO!!!

  • @awaisahmad5908
    @awaisahmad5908 1 year ago +1

    Thank you so much. I always wanted to learn NLP concepts from you Sir.

  • @rv0_0
    @rv0_0 1 year ago +1

    Waiting for an NLP series from you.

  • @simonclark5936
    @simonclark5936 1 year ago

    Fantastic tutorial, just what I needed. Thank you!

  • @ajay0909
    @ajay0909 1 year ago +1

    Wow, I was waiting for this. I would love to see a roadmap covering the topics in NLP.

  • @edomedo9137
    @edomedo9137 2 months ago

    this guy is so wholesome

  • @yujanshrestha3841
    @yujanshrestha3841 1 year ago +1

    Excellent video Sreeni! I especially enjoyed the solar system analogy. I'll borrow this analogy for discussions I have with my clients.
    I have heard about people using transformers for image processing by "tokenizing" images into embeddings. A CT scan can be thought of as a string of anatomical regions, strung together sort of like a sentence. I would be very curious to hear you discuss any parallels to the image processing world.

    • @DigitalSreeni
      @DigitalSreeni  1 year ago +1

      Thanks Yujan, you are very generous.
      As for image processing using transformers, while they can be used for some specific tasks like image captioning, they are not typically used as the primary architecture for image processing. It would be like fitting a technology to a problem rather than finding the right technology that fits the challenge.

  • @dr.aravindacvnmamit3770
    @dr.aravindacvnmamit3770 1 year ago

    Experienced Excellent Explanation !!!!!!

  • @bindurao3463
    @bindurao3463 1 year ago +1

    Love this narrative/explanation, well done. Would love to do a project with you.

  • @thosedreams
    @thosedreams 1 year ago

    Hi Sreeni, thank you for sharing your knowledge! Can you please explain why GPT (left-to-right) is more suitable than BERT (bidirectional) for summarization? Your reasoning at 12:18 on why BERT is better at understanding the context of the content made sense; doesn't summarization also need that context, or are there things that GPT does better than BERT for this task?

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      Summarization is ultimately a text-generation task: the model has to produce a new, coherent summary word by word. GPT's left-to-right architecture is trained to do exactly that, predicting each next token from the tokens before it, so generation falls out naturally. BERT's bidirectional encoder is excellent at understanding context, but it outputs contextual representations rather than text, so it has no natural mechanism for generating a summary. GPT models are also commonly fine-tuned specifically for language generation tasks such as summarization.
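
      A minimal sketch of the structural difference (an editor's illustration, not from the video; all values are toy data): the only change between the GPT-style and BERT-style attention below is the mask applied to the pairwise scores.

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      n = 4                                                   # toy sequence length
      scores = np.random.default_rng(0).normal(size=(n, n))   # raw pairwise attention scores

      # GPT-style (causal): a token attends only to itself and earlier tokens,
      # which is what lets the model generate text left to right.
      future = np.triu(np.ones((n, n), dtype=bool), k=1)      # True above the diagonal
      gpt_weights = softmax(np.where(future, -np.inf, scores))

      # BERT-style (bidirectional): every token attends to the whole sentence,
      # which aids understanding but offers no built-in way to emit new text.
      bert_weights = softmax(scores)

      print(np.round(gpt_weights, 2))   # upper triangle is 0: no peeking at future words
      print(np.round(bert_weights, 2))  # all entries nonzero: full-sentence context
      ```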

  • @NikhilSharma0704
    @NikhilSharma0704 1 year ago

    Thanks

  • @awaisahmad5908
    @awaisahmad5908 1 year ago

    Although your primary focus is on computer vision, this topic also needed to be covered.

  • @khaikit1232
    @khaikit1232 1 year ago

    With self-attention, how would the model understand which contextual words are relevant or not relevant in relation to each word in a sentence?
    Great video btw!👍

    • @DigitalSreeni
      @DigitalSreeni  1 year ago +1

      Self-attention lets the model determine the relevance of each word by calculating attention scores between all pairs of words in the sentence: each word's query vector is compared (via a dot product) with every word's key vector, and a softmax turns those scores into weights, so contextually relevant words receive high weights while irrelevant ones receive weights near zero.
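
      A hedged NumPy sketch of that scoring step (an editor's illustration; the dimensions and weight matrices are arbitrary stand-ins, not values from the video):

      ```python
      import numpy as np

      def softmax(x):
          e = np.exp(x - x.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      rng = np.random.default_rng(0)
      n, d = 5, 8                        # 5 "words", 8-dim embeddings (toy sizes)
      X = rng.normal(size=(n, d))        # word embeddings for one sentence
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

      Q, K, V = X @ Wq, X @ Wk, X @ Wv   # learned projections to queries/keys/values
      scores = Q @ K.T / np.sqrt(d)      # one score per word pair: relevance of word j to word i
      weights = softmax(scores)          # each row sums to 1: a word's attention over the sentence
      contextual = weights @ V           # every word becomes a weighted mix of all words' values

      print(np.round(weights, 2))        # large entries mark the words the model treats as relevant
      ```

      In a trained model the projection matrices are learned, so the scores reflect which context words actually matter rather than random similarity.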

  • @aryansakhala3930
    @aryansakhala3930 1 year ago

    Please make more videos on transformers; explore T5 and the others too.

  • @pranabsarma18
    @pranabsarma18 1 year ago

    Hi Sreeni, what are the prerequisites for watching these tutorials on NLP?

    • @DigitalSreeni
      @DigitalSreeni  1 year ago

      Nothing. This video is just an explainer, so I don't see the need for any prerequisites.

  • @limon_halder
    @limon_halder 1 year ago

    How can I get an internship?

  • @linda772010
    @linda772010 1 year ago

    Thanks