A̶t̶t̶e̶n̶t̶i̶o̶n̶ Nitish Sir is all you need.
100%
110%
💯
I think the researchers were fans of Dua Lipa
@@Hbchnhdfjjjas5483 Or Charlie Puth. But why Dua Lipa?
Sir, you are very close to a breakout. ML/DL in India will take off soon; these days everything changes fast. Your vision of becoming the hub for this content, I can see it very clearly. I am forever indebted to you, as you have helped me save a lot of time and anxiety in these information-overloaded times, where quality is rare to find.
Kudos to Nitish Sir, who made self-attention a simple melody!!
For people like me, this man is the coach, brother, & guide. Thank you so much Nitish. Loads of prayers and love.
Thank you so much Nitish. Whatever deep learning I know is because of you, and I might be able to switch my career with this learning.
I am an NLP engineer, but I have never seen anyone explain in such a simple manner... God bless you Nitish
How did you get into this role, sir? I'm doing a Master's in AI at a tier-1 college. Please suggest a path for me.
For the past 3 months, I have only been giving "ATTENTION" to your videos. Mad respect!
🎯 Key points for quick navigation:
00:00 *🎥 Introduction to Self Attention*
- Introduction by Nitish, explaining the importance of understanding self-attention for mastering transformers,
- Announcement of the video being part of a series, and the structure of the series.
01:18 *📚 Importance of Self Attention in NLP*
- Explains how understanding self-attention helps in mastering transformers,
- Mentions that transformers are crucial for advanced AI applications like LLMs and generative AI.
02:14 *❓ Key Requirement in NLP Applications*
- Highlights the need to convert words into numbers for any NLP application,
- Discusses various NLP tasks like sentiment analysis and machine translation.
03:30 *🔢 Vectorization Techniques*
- Introduces vectorization as the process of converting words to numbers,
- Talks about basic techniques like one-hot encoding and its limitations,
- Describes the initial steps in NLP focusing on efficient word-to-number conversion.
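To ground the one-hot idea mentioned above, here is a minimal sketch (the three-word vocabulary is invented for illustration) showing why one-hot vectors carry no notion of similarity:

```python
# A minimal sketch of one-hot encoding over a tiny invented vocabulary.
# Each word becomes a vector with a single 1; the dot product of any
# two different words is 0, so one-hot vectors carry no similarity,
# which is the limitation mentioned above.
import numpy as np

vocab = ["cat", "dog", "mat"]  # toy vocabulary
one_hot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}

print(one_hot["cat"])                   # [1. 0. 0.]
print(one_hot["cat"] @ one_hot["dog"])  # 0.0 -> "cat" and "dog" look unrelated
```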
05:46 *🧳 Bag of Words*
- Explains the Bag of Words technique as an improvement over one-hot encoding,
- Describes how Bag of Words considers word frequency in sentences.
06:40 *🔍 TF-IDF and Word Embeddings*
- Discusses the limitations of Bag of Words and introduces TF-IDF,
- Introduction to word embeddings as an advanced technique,
- Emphasizes the importance of capturing semantic meaning with word embeddings.
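As a concrete, hedged illustration of the two techniques summarized above (assuming scikit-learn is installed; the two sentences are invented):

```python
# A minimal Bag of Words vs. TF-IDF sketch, assuming scikit-learn.
# The two sentences are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the cat sat on the mat", "the dog sat on the log"]

# Bag of Words: each sentence becomes a vector of raw word counts.
bow = CountVectorizer()
print(bow.fit_transform(corpus).toarray())
print(bow.get_feature_names_out())

# TF-IDF: counts are reweighted so words shared by every sentence
# ("the", "sat", "on") contribute less than distinctive words.
tfidf = TfidfVectorizer()
print(tfidf.fit_transform(corpus).toarray().round(2))
```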
08:14 *🌐 Creating Word Embeddings*
- Explanation of the process of creating word embeddings using large training datasets,
- Describes how neural networks are used to convert words into n-dimensional vectors,
- Examples of how similar words have similar vectors in multi-dimensional space.
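A toy sketch of the "similar words have similar vectors" point above; the 3-dimensional vectors here are invented (real embeddings are learned and have hundreds of dimensions), and cosine similarity is the usual closeness measure:

```python
# Toy illustration: similar words end up with similar vectors.
# These 3-d vectors are invented; real embeddings are learned by a
# neural network and typically have hundreds of dimensions.
import numpy as np

emb = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.75, 0.20]),
    "mat":   np.array([0.10, 0.20, 0.90]),
}

def cosine(a, b):
    # Cosine similarity: close to 1 means the vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["king"], emb["queen"]))  # high: related words
print(cosine(emb["king"], emb["mat"]))    # low: unrelated words
```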
10:18 *🔍 Dimensionality and Meaning*
- Discusses how each dimension in a word embedding can represent different aspects of word meaning,
- Explains how similar words have similar vectors due to shared contextual meanings,
- Mentions the inherent problem of neural networks not explicitly defining dimension meanings.
11:39 *⚠️ Limitations of Word Embeddings*
- Describes a potential issue with word embeddings when the training data focuses heavily on a specific word,
- Uses a hypothetical example to explain how over-representation of certain words can skew embeddings,
- Explains the importance of balanced training data to avoid biased embeddings.
13:30 *🍎 Word Embedding Averaging Issue*
- Describes the averaging issue with word embeddings,
- Example of how different contexts for the word "apple" affect its embedding,
- Emphasizes that embeddings capture average meaning, not specific contextual meaning.
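A short sketch of the averaging problem described above (the sense vectors are invented): training on both fruit and company contexts pulls the single "apple" vector toward the middle, so it represents neither sense well:

```python
# Invented sense vectors for the two meanings of "apple".
import numpy as np

fruit_sense   = np.array([0.9, 0.1])
company_sense = np.array([0.1, 0.9])

# With one static vector per word, mixed training contexts pull
# "apple" toward the average of its senses.
apple = (fruit_sense + company_sense) / 2
print(apple)  # [0.5 0.5] -- captures neither meaning well
```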
17:02 *🛠️ Static Nature of Word Embeddings*
- Explains the static nature of word embeddings and their limitations,
- Discusses how static embeddings can cause problems in applications like translation,
- Highlights the need for contextual embeddings that adjust based on sentence context.
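The static-lookup limitation above can be made explicit in a few lines: an embedding table is just a dictionary, so "apple" retrieves one fixed vector no matter which sentence it appears in (vector invented for illustration):

```python
# Static embeddings ignore context: the lookup is a plain dictionary.
import numpy as np

embedding_table = {"apple": np.array([0.5, 0.5])}  # one fixed, invented vector

v_company = embedding_table["apple"]  # "Apple launched a new phone"
v_fruit   = embedding_table["apple"]  # "I ate an apple"

print(np.array_equal(v_company, v_fruit))  # True: context made no difference
```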
19:05 *✨ Introduction to Self-Attention*
- Introduces self-attention as a solution to the limitations of static word embeddings,
- Describes self-attention's role in generating smart contextual embeddings,
- Explains how self-attention adjusts embedding values based on context dynamically.
21:01 *📚 Mechanism of Self-Attention*
- Describes how self-attention generates contextual embeddings,
- Each input word embedding produces a contextual output embedding,
- These embeddings understand the context of each word, making them more useful for NLP applications like transformers.
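A minimal numpy sketch of the mechanism summarized above, simplified on purpose (no query/key/value projections, which the next chapter notes are covered in the following video): each contextual embedding is a similarity-weighted average of all the static embeddings in the sentence:

```python
# Simplified self-attention sketch (no Q/K/V projections; those come
# in the next video). Each output row is a similarity-weighted average
# of every static embedding, so it reflects the sentence context.
import numpy as np

E = np.array([[1.0, 0.0],   # toy static embedding for word 1
              [0.9, 0.1],   # word 2
              [0.0, 1.0]])  # word 3

scores = E @ E.T                               # dot-product similarity of every pair
weights = np.exp(scores)                       # row-wise softmax...
weights /= weights.sum(axis=1, keepdims=True)  # ...turns scores into attention weights
contextual = weights @ E                       # one contextual vector per input word

print(contextual.round(2))  # same shape as E, but now context-aware
```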
22:09 *🧩 Self-Attention Summary and Next Steps*
- Summarizes self-attention as a mechanism for generating contextual embeddings from static embeddings,
- Highlights the improved utility of these embeddings for NLP applications,
- Introduces the next video’s focus on the calculations inside self-attention, including concepts like query, key, and value vectors.
Made with HARPA AI
Amazing explanation of the attention mechanism, Bag of Words, and embeddings. Let me say it: superb, you are a real guru.
Sir, I don't know how to thank you. I am a student from Kerala who is really interested in the field of Machine Learning and AI. I can easily say how lucky I am to have found your class. I was a complete beginner in this field, and finding your 100 Days of ML was my turning point: you built my base understanding very well. Now I can do almost the whole ML workflow by myself, though I am not good at it yet and need more practice. I tried almost all the projects on your channel, and they gave me a much better understanding of things, so once more, THANKS. As I said, I can do the whole ML workflow but I am not that good at any part of it, and I know it needs more practice and, mainly, some domain knowledge in each field. Sir, can you give some advice on how to increase my progress, like how I should practice now? Should I repeat the practice (for example, taking some competition dataset, doing all the work, then checking others' work and comparing it to my own)? And lastly and mainly, I would appreciate it if you did more ML and data analysis projects on your channel, so I can increase my domain knowledge and learn different feature engineering techniques. (Sorry about my broken English.)
I was eagerly waiting for the next video for the last week.
Thank you for providing this amazing content.
Also, I request you to please release the other transformer videos soon.
Such an amazing explanation.
You also have the talent of self-attention:
based on the students' context, you have explained things very well.
Thank you so much sir for such amazing concepts. Sir, please make a video on how it works when the input is images, not textual data.
Amazing video on what self-attention is; waiting for the upcoming videos. Thank you so much for all your hard work and effort 😊
Thank you sir for the wonderful overview of self-attention
You are a guru!
Nitish Sir is a one-stop solution for all AI-related problems
The Best video on Internet!!!
Killer explanation 👏🏻👏🏻👏🏻
As always, the explanation is detailed, in-depth, and intuitive; looking forward to further videos
God Bless You ..Master
Thank you so much for keeping such quality content free!
Please do not stop making these videos.
This is the ultimate... you have made embeddings easy and created attention, i.e., explained it very nicely
You are superb in making everyone learn and understand. Hats off to you
Thank you for this video @CampusX. I am learning all the concepts of deep learning from your playlist, so I was waiting for this video. Thank you for this great deep learning playlist; it is very helpful for me in understanding deep learning.
So easily explained. Thanks and heartfelt gratitude for such quality content. 🙏🙏🙏🙏
Thank you so much Nitish. I know this was a very tough nut to crack, breaking it into simple words with examples. Looking forward to the next set of videos, which should solve the "how" mystery and show how it flows into multi-headed attention. Once again, kudos for the excellent effort and for the great service to the community.
Great and very simple explanation
Never thought such a complicated topic could have a simple one-liner explanation.
Nice explanation Sir. I am very excited for next 2-3 videos...
Hi Nitish,
Thank you so much for these intuitive detailed explanations!
Please upload the other parts soon too!
Such an amazing explanation.
The way you explain is amazing!!!
Bro, you nailed it, but I need the next videos too; I have an interview tomorrow 🥲. Anyway, love from PAKISTAN
Sir, your videos are very long, but the knowledge we get from them is unbelievable. Thanks for dedicating your important time to us.
Your videos are late, but they are worth it ❤❤❤.
Great content...keep going... waiting for next parts 😊
Very well explained, thanks
Sir, please make one video on encoder models like BERT after you finish this series.
Love the way you explain topics ❤❤❤❤❤
Bhai great level of explanation
What can I say, sir????? Great, really great
NITISH YOU ARE THE BEST MENTOR EVER
And please grow, Nitish bro, with new unique content, like how to read and interpret any new DL paper... I want you to have a family of millions of subscribers
Thank you so much for creating such content. Waiting for the next videos related to transformers.
Sir, I really appreciate your kind efforts. I have a humble suggestion: please organize this series properly so that beginners can follow it accordingly.
very nice lecture sir
Need part 3 and part 4 of this transformer series
Thank you so much for your video! You are special brother!
Next level teaching...
Amazing explanation!
Thank You Sir. Great Explanation.
Amazing ❤
Waiting for next
You deserve a lot more, sir 🙏
Best Explanation!!!
Thank you very much for your efforts sir😍❤....
Nitish Sir, you are all we need ❤
great teacher
Always excited 😊
Thank you sir for this series ❤
beautifully explained
Again, thank you so much sir ❤
Been waiting for this 🔥... I know what I'm gonna do tonight.
Awesome !!!!!
Nitish Ji, where were you until now?
Sir, how does it work in the case of images? Please make a video on it.
Thank you so much sir 🙏🙏🙏
this is helpful 🖤🤗
great Sir
What about differentiating all English words based on parts of speech, like we humans normally do, and then training a model to understand how the parts of speech fit together to form a sentence?
Sir, kindly upload the remaining videos on the architecture of the transformer.
Thank you sir 🙏
Sir, if an embedding gives values based on average meaning, then how does the embedding know in which context a word is used in the data? A machine can only understand numbers, and by seeing numbers, how can it conclude the context?
Sir, make a video on ViT
amazing!
Who else thinks that Nitish Sir is THE BEST compared to Krish Naik....??
The two are different, bhai... Krish Sir is the first step to understanding a topic... and Nitish Sir explains it in complete depth!
Amazing
Thank you sir
Sir, kindly also continue the MLOps playlist 😢❤
Sir, great video as always
How can we contact you?
I wanted to discuss a project-related topic with you.
Waiting for new videos on this topic.
Love your explanation. How do we translate to a new language?
Yes sir, next video please
Thank you
You're a genius
Please complete this Deep Learning Playlist
[Just about anything] -> [numbers] -> neural network. This is literally the entirety of ML.
Sir 💖♥♥💚💚💛💛💜💜
Sir, please complete this playlist quickly... please, sir.
Love you sir
In this early era of teaching RNNs, DL, transformers, etc., how are you the best! May I know where you learned?
Sir can you please upload all the videos ASAP 😢😢
Sir, please complete this playlist; we need LLMs
When is the next batch of DSMP starting?
Quick, quick, upload the remaining videos too
💖💕
Understood++
❤
What if the sentence was "Apple launched an iPhone while I was eating an apple"? Will there be 2 different embeddings for "apple"?
Sir, please upload videos every day
Data *is*
What an amazing explanation sir!! The way you explain concepts from scratch is just phenomenal. 🫡🫡