No human alive has ever tutored this abstract topic as well or simplified it as much as you. Kudos! Were it not for you, I would have long given up on this topic. I keep watching the video over and over, again and again, and I don't get bored. It inspires me to believe that maths and science are deemed hard because of poorly written textbooks for beginners. Wish I could be taught by you for just a semester! I envy your students. You're just too good, marvellous!
Keep it up!
Thank you very much for the tutorials. Most tutorials on this topic try to bury me in formulas and poorly written math without examples. They succeeded, until I finished these three videos.
Thank you for the nice comment
Exactly what my problem was.
Thank you. My new site is scholastic.teachable.com
Man, you have really made the best tutorial video on all of RUclips.
You have really saved me :)
Thanks for your efforts
Ali Younes Thanks, Ali Younes. It is my pleasure. Currently putting up another site with simple images instead of videos: scholastic-images.webs.com
+homevideotutor Thank you very much, sir, the best explanation ever (y)
Thanks. I have a question: you have calculated for one particular input. What if we have more inputs, how does the algorithm work then? For each input, do we update the weights only once, or do we keep updating until the error falls below an acceptable threshold and then stop? Then do we feed the next input into the network? Another point is multiple outputs: it may be a multiclass problem, so how do we solve that?
I have the same question: how does it work for multiple inputs and outputs?
Thank you, it helps a lot. Can you add another lesson about MATLAB's nntool?
I had been looking for tutorials on the internet for a while, but almost everything I looked into was confusing and hard to understand. Then I found your tutorial (this video and also video 2; I haven't seen video 1 yet), and everything turned out to be much simpler (a lot of calculation and iterations, but very easy to understand the way you explained it). Well, thank you very much for providing this learning material, and you have my like ^^
Dear Shann, Many thanks for the great positive feedback!!! ruclips.net/video/h2w8LueoQi8/видео.html
Thank you for making this concept clear and understandable; the real examples help a lot, man!!
I finally understood how to calculate my hidden layers. Thank you very much for the inspiration :)
The best teacher, it is very clear! Thanks so much.
Finally, a person who can explain this topic... thank you, really nice and clear. I think the best part is when you actually work through the feedforward and backprop example. Thanks again.
This is the clearest video I've seen!
No one else does a worked example! Thank you so much! It's helped me heaps!
Doug Lee Many thanks Doug Lee. Pleasure
I guess this makes sense if you already know why you are using back-prop.
There's a mistake when you substitute the values for delta_o1. How do you get this value?
github.com/mauricioribeiro/pyNeural/tree/master/3.4 uses this video as example
Super explanation. Thank you so much!
Finally, a clear and straightforward explanation. Thank you! :)
Sir, what is the mobility factor?
The best video tutorial on Neural Networks.
Thanks, sir, for the very clear explanation in this tutorial. I am waiting for more videos about other Artificial Neural Network algorithms. Please continue the tutorial, and if you can make some video tutorials about Deep Learning, it would be very kind of you.
Thanks man
Thank you so much... Can you give me the slides of the lectures, for learning purposes? Thank you.
Naim Ali Slides can be downloaded from scholastic-videos.com/ in PDF format.
Dear Sir,
Wonderful video! But what are n and alpha (the training and mobility factors)? Who decides them, and what do they signify? If we don't know these values, what default values can we consider?
Hope you will reply to this query.
Thank you for the tutorials. I want to ask a question. I am working on an Artificial Neural Network (ANN) model in MATLAB. I have experimental data, and I want to use an ANN model to model it. I use the back-propagation learning algorithm in a feed-forward, single-hidden-layer neural network, with the logistic sigmoid (logsig) transfer function for both the hidden layer and the output layer. I completed my ANN model and obtained the weights. Now I want to compute the results manually using those weights, so I can create a formula. I tried but could not do it. Can you help me with this subject, and do you have any document, PDF, or video about it? Thanks for your interest.
Why not explain backpropagation with gradient descent without momentum? The momentum update rule is complicated and not easy to understand.
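For anyone comparing the two, here is a minimal Python sketch (not from the video; the numbers, function names, and default values are illustrative) of the plain gradient-descent update next to a momentum-style update:

```python
def update_plain(w, grad, eta=0.25):
    """Plain gradient descent: w(n+1) = w(n) - eta * dE/dw."""
    return w - eta * grad

def update_momentum(w, grad, prev_delta, eta=0.25, alpha=0.0001):
    """Momentum form: delta_w(n) = alpha * delta_w(n-1) - eta * dE/dw,
    then w(n+1) = w(n) + delta_w(n)."""
    delta = alpha * prev_delta - eta * grad
    return w + delta, delta

# With no momentum history (prev_delta = 0), both rules give the same step.
w_a = update_plain(0.5, 0.1)
w_b, _ = update_momentum(0.5, 0.1, prev_delta=0.0)
assert abs(w_a - w_b) < 1e-12
```

With alpha = 0 (or no history), the momentum form reduces exactly to plain gradient descent, which is why many texts introduce the simpler rule first.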
Thank you for the great comment.
In any case, very good video!!!
I am waiting for the matrix form of your example; it will be useful to take advantage of matrix computation (in the case of many layers and neurons).
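As a hedged sketch of what that matrix form might look like (the layer sizes and values here are made up, not taken from the video's example):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    """Logistic activation, applied element-wise."""
    return 1.0 / (1.0 + np.exp(-v))

x = np.array([0.1, 0.5])      # input vector (2 features)
W1 = rng.normal(size=(3, 2))  # hidden layer: 3 neurons x 2 inputs
W2 = rng.normal(size=(1, 3))  # output layer: 1 neuron x 3 hidden units

h = sigmoid(W1 @ x)           # hidden activations, shape (3,)
y = sigmoid(W2 @ h)           # network output, shape (1,)
```

Each layer's forward pass is just one matrix-vector product followed by the activation, so adding neurons or layers only changes the matrix shapes.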
This is a very clear and basic example of backpropagation. Thanks for this tutorial.
Very useful!!!!
Thank you for this incredible tutorial video. You have just saved me from big trouble. The explanations were quite understandable, bravo.
Very nice explanation.
Thanks, man, for saving my ass... really good explanation...
Aditya Rawat Thank you for the nice comment
No comments. Just Wow.
What does the n = 0.25 (eta?) stand for next to the learning rate (alpha) of 0.0001? I don't seem to get it at all. I know this is an old video and I don't expect an answer, but it's worth a shot.
Dominic Leclerc It is the learning rate of the ANN. neuralnetworksanddeeplearning.com/chap3.html
homevideotutor Then what is alpha!?
Please refer to FAQ section of scholastic.teachable.com/p/pattern-classification for more information on this. Many thanks for the interest.
thank you so much
Amazing tutorial, sir. Thank you very much. Keep making more!
Very simply explained... thanks a ton :)
Thank you. Please refer to our new website, scholastic.teachable.com, for any further additions. Best regards.
thanks so much
1000 Thanks
How do I use this example in nntool? What training and learning functions should I use?
Sorry, I'm still learning and I'm really stuck on how to get the (exp) value. Thanks!
Thanks for the interest. Please use a calculator and compute e^(-v). For example, if v = 0.4851 then phi(v) = 1/(1 + e^(-0.4851)) = 0.619. Hope that helps.
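If a calculator is awkward, the same logistic function can be checked with a few lines of Python (the value 0.4851 is the one from the reply above):

```python
import math

def sigmoid(v):
    """Logistic activation: phi(v) = 1 / (1 + e^(-v))."""
    return 1.0 / (1.0 + math.exp(-v))

print(round(sigmoid(0.4851), 3))  # 0.619
```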
Thank you so much... just passed the exam :)
excellent teaching sir... thank you so much sir
Nobody has ever explained this topic better than you, my friend. Huge respect!!
Thank you very much, it's the best I have found on YouTube.
Pleasure
Thank you, I finally understood this!
Thank you so much. You saved us, brother. Finally I feel like I understand it.
Titu Ti thanks bro
thank you so much !
Thank you so much :)
Thanks again!
@homevideotutor Can you explain the difference between the time step 'n' and the suffixes of the inputs? It's confusing.
The suffix in x1 means the first feature and in x2 the second feature. w11(n) means the value of the weight w11 at time step n, and w11(n+1) means its value at time step n+1. Hope that explains it.
homevideotutor How does 'time' fit into the context? Like, in the literal sense? I am totally new to NNs, sorry.
That is the beauty of back propagation. You go one pass forward and then one pass backward. Each backward pass is one time step. Each backward pass changes the weights so that the final error (desired minus output, or d - y) is reduced in the next forward pass.
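That forward/backward cycle can be sketched for a single sigmoid neuron (the numbers here are illustrative, not the video's example):

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

x, d = 1.0, 1.0          # input and desired output
w, eta = 0.2, 0.5        # weight at time step n, learning rate

y = sigmoid(w * x)                   # forward pass at time n
delta = (d - y) * y * (1 - y)        # local gradient for a sigmoid output
w = w + eta * delta * x              # backward pass -> weight at time n+1

y_next = sigmoid(w * x)              # forward pass at time n+1
assert abs(d - y_next) < abs(d - y)  # the error d - y has shrunk
```

Each pair of passes is one time step, and the assertion checks exactly the property described above: the next forward pass has a smaller error.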
homevideotutor Cool, I think I got it. Thank you very much for replying; really appreciate it. Also, do you know any good (good and basic, like yours) video tutorials/articles on RNNs? If yes, can you give me the link? Thanks in advance.
I did a search on YouTube. I do not know how good this is, but it looks simple: ruclips.net/video/kMLl-TKaEnc/видео.html If I create one in the future, I will let you know. It will be hosted at: scholastic.teachable.com