I am currently a working AI engineer at a computer vision company. I have taken multiple deep learning courses throughout my journey, ranging from MIT and Stanford lectures to regular YouTube channels. And I am being completely honest here: I have found your way of teaching and explaining to be the most intuitive and easy to understand. Please keep up the good work. You are helping loads of students out there who, like me, find it difficult to grasp such complex concepts. Cheers!
Company name? Please?
Brother, which company? Where?
What do you actually work on? Is the pay good there?
Stanford ❌ CampusX ✅
Andrew Ng would be proud of you. Keep up the good work.
Brother, honestly his last course of the DL specialization (the sequence models one) is terrible.
@@Omunamantech Andrew's?
well said
@@LiveLifeWithLove yep
@@Omunamantech Did you buy the course?
Even though I didn't understand half of the words, because English is not my first language, this video was so good and intuitive on the subject that it outmatches so many other articles on the internet. Truly a gem. Great job, and thank you.
Sir, your videos are amazing, you boil down complex topics with intuitive understanding. Thanks a lot...
Awesome, man... the best DL lectures I have found on YouTube so far, and very easy to understand.
You care about us a lot...
Thanks, man... God bless you.
Sir, you are a blessing to us learners. Eagerly waiting for the video; thanks for uploading, and hats off to your commitment.
Nitish sir always explains what people generally consider obvious but in reality is not. He touches the nitty-gritty of every topic.
Thanks for all the hard work you're putting in to explain this.
Sir, you are a true gem. Thanks a lot for this state-of-the-art teaching 🙂
Sir, truly I love you so much from the depth of my heart. I respect you a lot; your videos just make me fall in love with AI subjects. Thank you so much for this, and thank you for doing such selfless work for us and making these videos. Truly, sir, so much respect for you.
Sir, you say "Today's video is really important" in every video; that's why I've watched all of them 😅
Thank you so much for all these videos.
A great way of teaching complicated concepts intuitively. Amazing work. Keep rocking.
Thanks, bro. Please be consistent with this playlist.
Best explanation on the whole of YouTube.
Sir, please complete this playlist as soon as possible.
Doing a great job, sir.
This is an outstanding explanation; I found it very easy to understand. Thanks again.
Awesome video
Sir, one video please on LSTM and GRU intuition, with hyperparameter tuning.
God-level content... mad respect, bro!
Best one... Thank you, Nitish sir.
You explained it so well! Thank you so much; I am in love with your channel!
Brilliant man! Awesome!
God bless you!
Really amazing content; appreciate all your efforts! So comprehensive, man. It's a joy to learn from you :)
Are you single?
Sir, at 10:20 you haven't added the +b in o1.
Sir, your channel is growing rapidly; glad to see that 😁
Amazing tutorial. CFBR!
Best explanation!
I am working on AI development for a real-world side project that requires an RNN, so I am in desperate need of these RNN videos.
This playlist is so amazing 👏👏
Sir, sorry, but at 17:39 I don't understand why we are branching out on O3 instead of Wo, because O3 comes after Wo and depends on Wh, O2, X3, and Wi. My only point is that branching from Wo makes more sense than branching from O3. Please, someone correct me if I am wrong here.
That is because we need to calculate the derivative of L w.r.t. Wi, so we are trying to find which terms depend on Wi. Wo does not depend on Wi, so we do not consider it. However, O3 does depend on Wi, so we branch it out to reach Wi. On branching, it can be seen that not only O3 depends on Wi, but O2 also depends on Wi, so we branch O2 further, and this process goes on until we find no element that depends on Wi.
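A worked version of that chain may help (a sketch, assuming the three-step unrolled RNN from the video, with O_t = f(X_t W_i + O_{t-1} W_h) and \hat{y} = \sigma(O_3 W_o); biases omitted):

\[
\frac{\partial L}{\partial W_i}
  = \frac{\partial L}{\partial \hat{y}}
    \cdot \frac{\partial \hat{y}}{\partial O_3}
    \cdot \left[
      \frac{\partial O_3}{\partial W_i}
      + \frac{\partial O_3}{\partial O_2}
        \left(
          \frac{\partial O_2}{\partial W_i}
          + \frac{\partial O_2}{\partial O_1}
            \cdot \frac{\partial O_1}{\partial W_i}
        \right)
    \right]
\]

W_o appears only in \hat{y} = \sigma(O_3 W_o) and inside no O_t, so no branch through W_o can ever reach W_i; the recursion stops at O_1, which has no earlier state.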
He is the Indian Andrew Ng.
Best explanation :)
Excellent!!
Nitish Sir,
When will you upload the next deep learning video? Sir, you are simply the best.
Best Video
Sir, please keep uploading videos on RNNs.
You are the best, sir.
I wish the series were complete. I am almost finished with the videos and am wondering how I can learn the remaining topics without them.
Last year you mentioned a time series playlist; I haven't found it yet.
So, do we have to store the activation values of each layer, o1, o2, ..., so that we can do the calculations?
Sir, we are all waiting for the next video. Could you upload it as soon as possible?
Y hat = sigma(oi·wi), but I think you have written sigma(o3·wo) by mistake, correct? (11:48)
Please upload one video daily in the deep learning playlist. Please.
Sir, please increase the frequency of videos; at this rate, I don't know how much time will be needed to complete the playlist.
Thank You Sir.
14:18 you haven't added the bias to the output equations.
Sir, model.predict(sequences) throws an error in the embedding part. I don't know why.
Nice
Please complete the deep learning playlist.
🙏 Sir, please make a complete intuitive video on LSTM.
Please tell us the time complexity of the backpropagation algorithm?
Sir, please complete this playlist, and please explain LSTM.
An RNN layer is like a group discussion: after the discussion, everyone passes their output forward... haha
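That analogy can be sketched directly (a minimal NumPy toy with made-up dimensions, not the video's code): each timestep "hears" the previous output, adds its own input, and passes its output forward.

import numpy as np

# Toy unrolled RNN forward pass: each step combines its own input with
# the previous step's output, then passes its output forward.
T, d_in, d_h = 3, 4, 5                 # timesteps, input dim, hidden dim
rng = np.random.default_rng(0)
X = rng.normal(size=(T, d_in))         # one input vector per timestep
W_i = rng.normal(size=(d_in, d_h))     # input weights
W_h = rng.normal(size=(d_h, d_h))      # recurrent weights
o = np.zeros(d_h)                      # initial state: nothing said yet
for t in range(T):
    o = np.tanh(X[t] @ W_i + o @ W_h)  # listen to the prior output, then speak
print(o)                               # final output, passed on to the next layer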
Thank you so much sir!!!
Please share the link to this notebook for future reference.
Please upload the next videos.
Sir, please upload the next videos.
You pronounced "del" (∂) so many times that, while watching the video, I fell asleep and dreamt of working at the DELL company.
Sir, why are you not considering the biases?
Sir, please make a video on LSTM 🙏🙏🙏🙏🙏🙏🙏
How do you manage your time, sir? 😅 Make a video on time management 😁
It's just the video on time management that I can't find the time to make.
@@campusx-official😂😂😂😂😂
@@campusx-official 🤣🤣🤣
@@campusx-official Dada rocks.
@@campusx-official😂
Which day will the next video be uploaded?
Sir, please make a video on how to convert ML code into federated learning code, as there are no resources on YouTube. Please, sir.
Wow Sir
Hi Nitish,
I have been following your channel and have covered almost everything in ML and deep learning so far.
I understand your time constraints.
Can you tell us which topics are left to cover after this in the data science journey, so we can learn them from other books and notes, since we have to complete the syllabus?
Since when have you been following this channel?
@@abhishektehlan7814 It's been two or three months.
He's amazing
Brother, will the deep learning playlist be complete by the end of December?
Sorry, it won't be finished by then. It will take more time. I will try to cover RNNs before December.
@@campusx-official Sir, by when will it be done? 🥺🥺 It's one of the best deep learning playlists and I am completely dependent on it 🥺🥺🥺 Please, sir, upload the videos as quickly as you can, and never stop uploading deep learning videos.
@@muskangupta4864 If you find more videos in the DL series, please let me know. I am in the learning phase. 😊🙏
best
Stanford who?
Sir, please upload the video.
Sir, hope you are feeling well.
Sir, please tell me how I can have a question-and-answer session with you.
From comment chat😂
Vocabulary Size: len(tokenizer.word_index) gives the number of unique tokens (words) found in your text. If your text contains 10 unique words, len(tokenizer.word_index) will be 10.
Padding Token: Keras uses a special token (usually represented as 0) for padding. This token is not part of your vocabulary, but it is essential for ensuring that all sequences have the same length, so we need to reserve an extra index for it. Therefore, instead of 17 it should be 17 + 1 = 18 inside the Embedding function, because there are now 18 unique word representations.
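A minimal sketch of the same point (the toy corpus and dimensions are my own; this assumes the standard Keras Tokenizer/Embedding API rather than the video's exact notebook):

from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding

docs = ["deep learning is fun", "learning deep networks is hard work"]
tokenizer = Tokenizer()
tokenizer.fit_on_texts(docs)

vocab_size = len(tokenizer.word_index)        # unique words; indices start at 1
padded = pad_sequences(tokenizer.texts_to_sequences(docs))  # pads with 0

# Indices now run from 0 (padding) to vocab_size, so input_dim needs the +1.
emb = Embedding(input_dim=vocab_size + 1, output_dim=8)
print(emb(padded).shape)                      # (num_docs, max_len, 8)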
Next, please!
Sir, you should now change your name to 'MahaTracher'.
Brother, 8%
Sir, no video today?
Yeah, the editing couldn't be finished. It will be uploaded tomorrow.
finished watching
🤯
test