finally I got an easy to understand explanation for BPTT. thank you so much
Welcome! Glad I could help!
thanks! this video deserves more exposure
Thank you so much. I appreciate you think so!
Thank you, Jay Patel. It is really interesting to watch your videos.
very helpful to me. Thank you!!
Glad it was helpful!
Hey your videos are helping me and my friends study for our exam. Thank you for creating these!
Happy to help!
It's just the concatenation game 😮. We're just concatenating two feedforward neural networks here.
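That "concatenation" view can be made literal: applying W_aa and W_ax separately is the same as applying one combined matrix to the concatenated vector [a_prev; x_t]. A minimal numpy sketch (the shapes and variable names are arbitrary choices for illustration, not from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
Waa = rng.normal(size=(3, 3))   # hidden-to-hidden weights
Wax = rng.normal(size=(3, 2))   # input-to-hidden weights
a_prev = rng.normal(size=3)     # previous hidden state
x_t = rng.normal(size=2)        # current input

# Two separate linear maps, then a shared nonlinearity:
a_split = np.tanh(Waa @ a_prev + Wax @ x_t)

# Equivalently, one weight matrix applied to the concatenated vector:
W = np.concatenate([Waa, Wax], axis=1)               # shape (3, 5)
a_concat = np.tanh(W @ np.concatenate([a_prev, x_t]))

print(np.allclose(a_split, a_concat))  # True
```

This is why many textbook write-ups show the RNN cell as a single matrix acting on [a_{t-1}; x_t]: it is the same computation, just packaged differently.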
OMG I JUST GOT THIS!!! Been looking everywhere for how adding and multiplying work together in BPTT... OMG THANKS!
After innumerable days of watching countless videos trying to understand this concept, I finally stumbled upon the right one.
Glad it helped!
THANK YOU!! Very, very helpful for understanding the summation while calculating W_aa.
Can you explain the attention model and why it uses the words 'query', 'key', and 'value'?
I will try to cover that topic if I can. Thanks for the suggestion.
thanks for your reply. I am waiting for the video to clarify my doubts about the attention model.
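For anyone arriving before such a video exists: the names 'query', 'key', and 'value' come from scaled dot-product attention, where each query is scored against every key, and the resulting weights form a weighted average of the values. A minimal numpy sketch (the sizes, seed, and function names here are arbitrary, not from any specific video):

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax along the given axis
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)   # how well each query matches each key
    weights = softmax(scores, axis=-1)  # rows sum to 1
    return weights @ V                # weighted sum of the values

rng = np.random.default_rng(1)
Q = rng.normal(size=(4, 8))  # 4 queries, dimension 8
K = rng.normal(size=(6, 8))  # 6 keys
V = rng.normal(size=(6, 8))  # 6 values, one per key
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (4, 8)
```

The dictionary-lookup analogy is the usual intuition: a query is matched against keys, and the answer is read out from the associated values, except here the match is soft rather than exact.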
You made it easy.
Glad I could help!
Very helpful, bro, for remembering RNNs and understanding what's going on inside the RNN. Thank you so much for creating the best deep learning videos.
Can you change the order of the videos in the playlist? The LSTM comes first and backpropagation next, but the LSTM video requires one to watch the backpropagation video first.
Yep, I was thinking the same...
So, in this example, there is no hidden layer and no non-linear activation in the previous time steps. Am I right?
How will you do backpropagation for a many-to-many RNN? My loss function changes with each RNN cell.
I think there is something wrong with the dL/dW_aa equation. Based on the explanation earlier in the video, I think it should be a sum over i of the chained products: SUM_i (dL/dO · dO/da_i · ... · da_i/dW_aa). Am I right?
If there is more than one output y, what is the equation for it?
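Several of the questions above (the sum in dL/dW_aa, what happens with more outputs) come down to the same point: W_aa is reused at every time step, so its gradient is a sum of per-step contributions pushed back through time. A minimal numpy sketch, using a made-up linear cell and a single output at the last step (this is an illustrative setup, not the video's exact notation), checked against a numerical gradient:

```python
import numpy as np

rng = np.random.default_rng(2)
H, X, T = 3, 2, 4                       # hidden size, input size, sequence length
Waa = rng.normal(size=(H, H)) * 0.5     # recurrent weights (shared across steps)
Wax = rng.normal(size=(H, X)) * 0.5     # input weights
Wya = rng.normal(size=(1, H))           # output weights
xs = rng.normal(size=(T, X))
y = np.array([1.0])                     # target for the single output

def forward(Waa):
    a = np.zeros(H)
    states = [a]
    for t in range(T):
        a = Waa @ a + Wax @ xs[t]       # linear cell, no activation, for clarity
        states.append(a)
    o = Wya @ a
    loss = 0.5 * np.sum((o - y) ** 2)
    return loss, states, o

loss, states, o = forward(Waa)

# BPTT: dL/dW_aa is a SUM over time steps, because Waa is reused at every
# step.  delta = dL/da_t is pushed backwards through Waa one step at a time.
delta = Wya.T @ (o - y)                 # dL/da_T
dWaa = np.zeros_like(Waa)
for t in range(T, 0, -1):
    dWaa += np.outer(delta, states[t - 1])  # contribution of step t
    delta = Waa.T @ delta                   # push gradient one step back

# Sanity check against a central-difference numerical gradient
num = np.zeros_like(Waa)
eps = 1e-6
for i in range(H):
    for j in range(H):
        Wp = Waa.copy(); Wp[i, j] += eps
        Wm = Waa.copy(); Wm[i, j] -= eps
        num[i, j] = (forward(Wp)[0] - forward(Wm)[0]) / (2 * eps)

print(np.allclose(dWaa, num, atol=1e-5))  # True
```

For a many-to-many RNN, the same loop works: you would add a dL/da_t term into delta at every step that has its own loss attached, instead of seeding it only at t = T.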
Bro, please make some videos on RNN projects (with Python code), applying all these concepts. You are doing a great job. Thanks a lot!
Sure, I will upload those videos soon!
it took me way too long to find this bptt video... THANK YOU!
Ahh… glad to help!
thanks bro
You're welcome
Thanks bro, you made it easier to understand than Krish Naik's video on this.
Glad I could help!
The maths is always the easy part, especially when writing equations. Conceptually, I feel the video leaves much to be desired. For instance, at what instant is the backpropagation done? Nothing about sequence length is mentioned. There is no attempt to run an actual example for a few iterations to show the working and map it to the concepts and then to the maths. Neural architectures require a conceptual grasp.
Hi Arjun, thanks for the feedback. I'll see if I can improve.
Please share the PDF.