Man, I looked at many backpropagation explanations and most of them read like rocket science. This is the easiest and clearest one ever, thanks for sharing.
Awesome explanation! Thanks a lot!
Thank You
Do like share and subscribe
best explanation
Very clear! How about bias b? What is the formula in case we add a bias?
👍👍👍👍👍👍👍👍
Thank You
Do like share and subscribe
This makes the math very clear. I now know the math and have some intuition, so I hope to fully connect the two soon. Thanks for the great video!
Thank You
Do like share and subscribe
It looks like there is an error in the Δw_ji notation you followed; it seems to be just the opposite.
Sir, you deserve a big thanks. My teacher gave me an assignment and I was searching for 2 days on YouTube for weight calculation. But finally your video has done the work. It was really satisfying. Thank you, sir.
Welcome
Do like share and subscribe
I have an exam in 2 days and your videos just saved me from failing this module.
Thank you so much and much love from 🇩🇿🇩🇿.
Welcome
Do like share and subscribe
Guys! The threshold or bias is a tuning parameter. You can select something low like 0.01 or 0.02, or high like 0.2, and check whether the error gets lower. I hope this will help you.
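A minimal sketch of that idea, assuming a single sigmoid neuron; the input, target, weight, and candidate bias values below are made up for illustration, not taken from the video:

```python
import math

def forward(x, w, b):
    # Single sigmoid neuron: output = 1 / (1 + e^-(w*x + b))
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

x, target = 0.35, 0.5            # hypothetical input and target
w = 0.4                          # weight held fixed for the comparison
for b in (0.01, 0.02, 0.2):      # candidate bias/threshold values to try
    error = target - forward(x, w, b)
    print(f"bias={b}: error={error:.4f}")
```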
I believe for hidden units, the w_kj in the delta(j) formula should have been w_jk, i.e., the other way around.
And delta(w_ji) should have been delta(w_ij), again the other way around.
Yeah, I was about to comment the same thing.
Easy to understand. Please make the next one on CNN.
And where is the bias b? There should also be some constant term, right?
Damn sir, you are the best YouTube teacher in AI. Love you, sir.
Good job, but why not account for the bias term before applying the sigmoid function?
Thank you. I've been trying to implement a reinforcement learning algorithm from scratch. I understood everything except backpropagation, and every video on it that I've watched has been vague until I saw this video. Good stuff!
Welcome
Do like share and subscribe
Please use the Urdu language.
Excellent lecture... many thanks... keep it up.
How are you deriving the delta_j formula? You could include the derivation of the sigmoid function.
How did you update the weights of the connections between the input layer and the hidden layer?
I got a crystal clear understanding of this concept only because of you, sir. The flow of the video is excellent; appreciate your efforts!! Thank you and keep up the good work!!
Welcome
Do like share and subscribe
How do you update the weights if you have more input data? In this case he only has 1 input; how do you do it with 2 inputs? Do you do the same thing twice?
Everyone says "thank you", but only a few understand that this video is useless if there are more neurons in one layer. Those who say "thank you" do not even plan to build a neural network.
in one epoch how many times back propagation takes place?
Thanks for the video, sir. I have a doubt. How did you update weights without Gradient Descent (GD) or any other optimization technique, sir? Because I read in blogs that Networks don't get trained without GD and by only using backpropagation. In other words, my doubt is how does the calculation change if we also implemented GD in this? I'm a rookie; kindly guide me, sir.
Gradient descent has been used in this video while updating the weights; the change in weights is done through gradient descent. He just has not written out the derivative math here.
@@adityachalla7677 How did he get y_target = 0.5?
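To make that reply concrete, here is a minimal sketch of the standard gradient-descent update for a sigmoid output unit; the numbers and the learning rate of 0.9 are illustrative, not the video's values:

```python
def update_weight(w, x, o, target, lr=0.9):
    # delta = o(1 - o)(target - o): the sigmoid-derivative term the video leaves implicit
    delta = o * (1 - o) * (target - o)
    # Gradient-descent step: new weight = old weight + learning rate * delta * input
    return w + lr * delta * x

# Hypothetical numbers: weight 0.3, input 0.6, output 0.59, target 0.5
print(update_weight(0.3, 0.6, 0.59, 0.5))
```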
Simple lucid example illustrated. Please continue.
Thank You
Can you explain this: the input of a neuron is 0.377 and the output of the same neuron is 0.5932. How does this happen? How does 2.71^(-0.377) give the answer 0.6867?
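For anyone checking those numbers, a quick sketch of the sigmoid calculation, assuming the standard logistic activation (the 2.71 in the question is just e rounded down):

```python
import math

net = 0.377
print(2.71 ** -net)                   # ≈ 0.6867 when e is rounded to 2.71
print(math.exp(-net))                 # ≈ 0.6859 with the exact value of e
print(1.0 / (1.0 + math.exp(-net)))   # sigmoid(0.377) ≈ 0.5932, the neuron's output
```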
You taught it very well. Today is my exam, and your videos were really helpful. I hope I pass well without getting a backlog in this subject. 👍👍
Welcome
Do like share and subscribe
It's always the Indians 😂❤
Can you please do videos on CNN with the mathematical concepts? Your videos are very useful and understandable. Thank you.
Great work Dr. Mahesh. Thanks from Pakistan.
Welcome
Do like share and subscribe
thank you! understood the concept smoothly with your video!
Thank You
Do like share and subscribe
@@MaheshHuddar Did that. A question, though: for the forward pass, what about biases? They are values with their own weights, right? Were they just not included in this example?
ah nevermind, got it from your next video in this series lol
I have a doubt. In many places, I have seen that the error calculation is done using the formula E = 1/2 (y - y*)^2, but you have calculated by using subtraction. Which is correct?
same doubt .. what is the correct method?
@@keertichauhan6221 I think there are different methods to compute the error. The one mentioned above is the mean squared error. The one shown in the video is also correct, but MSE or RMSE is generally regarded as a better measure.
For multiple data points, you use the error function, and for a single output you use the loss function.
Loss function: error = actual - target.
Error function: 1/2 (actual - target)^2.
For multiple data points, you use the error function: E = 1/2 Σ (y_actual - y_target)^2.
For a single data point, you use the loss function: loss = y_actual - y_target.
The errors are made to be positive by squaring.
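A small sketch contrasting the two conventions discussed above; the numbers are illustrative only:

```python
# Plain difference (as in the video) vs. the squared-error form from the comments above.
def simple_loss(actual, target):
    return actual - target                       # single data point, can be negative

def squared_error(actuals, targets):
    # E = 1/2 * sum((actual - target)^2), always non-negative
    return 0.5 * sum((a - t) ** 2 for a, t in zip(actuals, targets))

print(simple_loss(0.59, 0.5))        # ≈ 0.09
print(squared_error([0.59], [0.5]))  # ≈ 0.00405
```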
Best video explanation on ANN backpropagation. Many thanks, sir.
Thank You
Do like share and subscribe
Very clear. Thanks for uploading this video.
Welcome
Do like share and subscribe
The formulas for delta W only work because of the nature of the activation function, right? If it is a hyperbolic tangent or ReLU, the formulas change, right?
yes
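For reference, a hedged sketch of how the derivative term changes with the activation; the o(1 - o) factor in the video's delta comes from the sigmoid, and these are the textbook derivatives rather than anything shown in the video:

```python
import math

def sigmoid_derivative(o):
    # o is the neuron's output, o = sigmoid(net); derivative = o * (1 - o)
    return o * (1 - o)

def tanh_derivative(o):
    # o = tanh(net); derivative = 1 - o^2
    return 1 - o ** 2

def relu_derivative(net):
    # ReLU's derivative is taken w.r.t. the pre-activation value, not the output
    return 1.0 if net > 0 else 0.0

print(sigmoid_derivative(0.5932), tanh_derivative(math.tanh(0.377)), relu_derivative(0.377))
```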
From which book have you taken this problem?
What about the bias factor??
He did not add it to keep things simple, I guess.
But you can add the bias by making it an extra input. Then the method doesn't change; you just have a new (constant) input for each layer, normally set to 1, and the associated weight acts as your bias, since 1 * biasWeight = biasWeight. I just appended a 1 to my input vector and generated an additional weight in the weights matrix. But I'm also just learning and not sure if I'm 100% correct...
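A minimal sketch of that trick, as the commenter describes it (illustrative values, not the video's network):

```python
import math

def forward(inputs, weights):
    # Append a constant 1 so the last weight plays the role of the bias.
    augmented = inputs + [1.0]
    net = sum(x * w for x, w in zip(augmented, weights))
    return 1.0 / (1.0 + math.exp(-net))   # sigmoid activation

# Two real inputs plus one extra weight acting as the bias.
print(forward([0.35, 0.9], [0.3, 0.5, 0.1]))
```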
Today is my exam. Very well explained, sir.
Nice video, easily understood the topic, thank you
Welcome
Do like share and subscribe
Mr. Huddar, thanks a lot for the perfect explanation. One thing, though: how do I calculate the change in the bias term for each neuron in my neural network?
Last night !!
Today is the exam. Well explained, boss.
I have a question: what if we have a bias term and some bias weights? Do we need to account for those, or would they be 0?
Yes, you have to consider them.
Follow this video: ruclips.net/video/n2L1J5JYgUk/видео.html
It's an awesome explanation, sir... no words to thank you, sir.
You are most welcome
Do like share and subscribe
this is a great lecture!
At what stage do we stop the forward and backward passes? When the error becomes zero?
This is specified as the number of epochs and must be given in a question. There is no single defined condition to stop; it really depends on your needs.
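A small sketch of the two common stopping conditions mentioned here, a fixed epoch limit or an error tolerance, using a made-up single-neuron example:

```python
import math

# Tiny single-neuron example: train w so that sigmoid(w * x) approaches the target.
x, target = 1.0, 0.8
w, lr = 0.0, 2.0
MAX_EPOCHS, TOLERANCE = 10000, 1e-3      # stop on whichever condition is met first

for epoch in range(MAX_EPOCHS):
    o = 1.0 / (1.0 + math.exp(-w * x))   # forward pass
    error = target - o
    if abs(error) < TOLERANCE:           # error-based stopping condition
        break
    w += lr * o * (1 - o) * error * x    # backward pass: gradient-descent weight update

print(epoch, round(w, 3), round(o, 3))
```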
Great video, easy to understand
Thank You
Do like share and subscribe
Thank you so much! You saved me. I subscribed. Thanks
Welcome
Please do like share
Sir, please upload the gradient descent video a little faster.
Videos on Gradient Descent:
ruclips.net/video/ktGm0WCoQOg/видео.html
ruclips.net/video/5hB4_8o34GU/видео.html
ruclips.net/video/ibKP0nIT7YU/видео.html
Awesome explanation.
Thanks
Welcome
Do like share and subscribe
Very Well Explained ...Keep up the Good Work
Thank You
Do like share and subscribe
I have a confusion: we use ReLU on the hidden layer and not sigmoid. Shouldn't we calculate the hidden layer's activation using ReLU instead of sigmoid?
Yes, you have to.
Do the calculation based on the activation function used.
@Mahesh Huddar I know. You have used the sigmoid function on the hidden layer. This will result in an error.
Sir, thank you for this great video! It was really helpful. I appreciate the clear explanation
Glad it was helpful!
Do like share and subscribe
GOATED explaining, awesome, awesome.
Sir, how do we update the bias in backpropagation?
Refer this video: ruclips.net/video/n2L1J5JYgUk/видео.html
Thank you sir, all of you machine learning videos have helped us students a lot
Welcome
Do like share and subscribe
@@MaheshHuddar Sir, can you solve problems on HMM and CNN, please?
YOU ARE THE BEST
The concept is clear; I got confidence in this concept, sir. Thank you 👍👍👍👍
Welcome
Do like share and subscribe
@@MaheshHuddar Sir, can you provide videos on Gaussian processes in machine learning?
Nice video, sir. Where is the bias here?
Great, sir; it is a very clear example of how to calculate an ANN. Thanks, keep being productive.
Thank You
Do like share and subscribe
very clear thank you for the content
Welcome
Do like share and subscribe
Clear explanation. Recommended...
Thank You
Do like share and subscribe
thank u sir
Welcome
Do like share and subscribe
thanks a lot for your explanation!
Welcome
Do like share and subscribe
THANK YOU SIR, BRILLIANT INDIAN MIND
Welcome
Do like share and subscribe
mahesh daale
Thank you, sir, for explaining each and every point.
Welcome
Do like share and Subscribe
#Mahesh Huddar, don't we need the bias? Just curious.
Follow this video
ruclips.net/video/n2L1J5JYgUk/видео.html
@@MaheshHuddar At 6:31, just curious, can you share the web page for these formulae?
Thank you so much, sir... These videos gave us a clear understanding of all the machine learning concepts...
Welcome
Do like share and subscribe
Finally one that makes sense
Thank You
Do like share and subscribe
Sir, also, each perceptron has a bias with it, right?
Yes
Follow this video for bias: ruclips.net/video/n2L1J5JYgUk/видео.html
Thank u sir a lot!! @@MaheshHuddar
Thanks a lot sir 🙏🙏🙏🙏🙏🙏🙏
Most welcome
Do like share and subscribe
Amazing stuff just to the point and clear.
Thank You
Do like share and subscribe
What about the bias terms ?
Not bad, my Indian friend!
Very useful, thanks!
Welcome
Do like share and subscribe
Thank you so much sir
Welcome
Do like share and subscribe
Thanks a lot Sir!!!
Welcome
Do like share and subscribe
you are a life saver, thank you soooo much.
Thank You
Do like share and subscribe
based
Based...?
Thanks for solution
Welcome
Do like share and subscribe
@@MaheshHuddar sure. Thanks 😊
you are a legend
Welcome
Do like share and subscribe
thank you sir
Most welcome
Do like share and subscribe
GODDD!!!!!!!!!!!!!!!!!!!
Thank You
Do like share and subscribe
thank you sir.
Most welcome
Do like share and subscribe
awesome.
Thanks!
Do like share and subscribe
Thank you
Welcome
Do like share and subscribe
well done sir
Thank You
Do like share and subscribe
what about bias
It is assumed to be zero here
You saved me you're a hero thank you
Welcome
Do like share and subscribe
good video
Thank You
Do like share and subscribe
Very helpful
Do like share and subscribe
Thank you!
Welcome
Do like share and subscribe
Thank you
Welcome
Do like share and subscribe
🙏🙏🙏🙏🙏🙏
Do like share and subscribe
Good stuff, thank you.
Welcome
Do like share and subscribe
Soooooo Good❤
Thank You
Do like share and subscribe
txs
Welcome
Do like share and subscribe
youre a legend
Thank You
Do like share and subscribe
You are a king, sir. Thank you for saving me before my exam tomorrow.
Welcome
Do like share and subscribe
All the very best for your exams
thank you man
Welcome
Do like share and subscribe
Thank you!
Welcome
Do like share and subscribe
thank you
Welcome
Do like share and subscribe