Important Correction: Activation Function for this video is defined as:
if y(in)> 0 : then f(y(in)) is 1
if y(in) = 0: then f(y(in)) is 0
if y(in) < 0: then f(y(in)) is -1
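The piecewise definition above can be sketched directly in Python (a minimal illustration; the function name `activation` is my own):

```python
def activation(y_in):
    """Bipolar step function from the correction above:
    +1 for positive net input, 0 at exactly zero, -1 otherwise."""
    if y_in > 0:
        return 1
    elif y_in == 0:
        return 0
    else:
        return -1
```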
@@rohitchitte9155 he is defining the new activation function; he is not using the sigmoid function that we have seen in the other tutorials.
Trashy explanation
@@wfkpk Why not use sigmoid? Any reason for that?
I think w_new = w_old + learning_rate * (expected value - predicted value) * feature,
but you used the expected value directly.
Right, it should be (true - predicted).
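The two update rules under discussion can be sketched side by side (a minimal sketch; the function names are my own, and the video's rule is applied only when the prediction is wrong):

```python
def delta_rule_update(w, b, x, t, y, lr=1.0):
    """General delta rule: adjust by lr * (target - predicted) * input."""
    w = [wi + lr * (t - y) * xi for wi, xi in zip(w, x)]
    b = b + lr * (t - y)
    return w, b

def video_style_update(w, b, x, t, lr=1.0):
    """Rule used in the video (applied only when y != t): lr * t * input."""
    w = [wi + lr * t * xi for wi, xi in zip(w, x)]
    b = b + lr * t
    return w, b
```

For bipolar targets, when the prediction comes out as the opposite sign, (t - y) equals 2t, so the two rules point in the same direction and differ only by a constant factor that can be absorbed into the learning rate.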
that was the best teaching of perceptron that I have ever seen. thank you so much.
In the 2nd iteration table you have written x1=1, x2=1, t=-1, but it should be x1=1, x2=1, t=1.
why?
Yes, you're correct. It's just a small mistake he made though and everything else should still be valid.
@@RazaHussain96 Because it's an AND function which means x1=1 AND x2=1 should give t=1 and not t=-1.
Clear, informative explanation... I like it👍 I was wondering though, does the ordering of the input layer affect the overall outcome? In terms of shifts and overall adjustments.
do you mean ordering of nodes within input layer?
In the 4th step of the 1st iteration my w1=2, w2=2 and b(new)=-2. Please check it once, and correct me if I am wrong. Thank you.
Yes correct.
The formula for Δw is α(target - output)·xi, but in this video your formula is α·t·xi?
Same question!!
awesome explanation
thanks✨
thank you brooo
the condition you used for f(y_in) in this problem is the McCulloch-Pitts one, bro
welcome🤗
Hi, great video. I have a question: why don't you use backpropagation algorithms? And the sigmoid function is 1/(1+e^-x); did you round this function?
Backpropagation algorithms are obviously useful, but this example works without them; for complex datasets we will definitely require backpropagation.
What is the formula of the linear activation function to use here?
Amazing explanation!!!!
Why did you not use a threshold?
Hi bro, I think in the first iteration, for the last 4 inputs, the values of Δw1, Δw2, Δb, w1, w2 and b are not correct. My values come out as 1, 1, -1, 2, 2 and -2 respectively. Please guide me.
You might have made a mistake in your calculations, because it is correct.
Hi sir, for the inputs 1,1 we should get 1, right? In the first iteration, if we update the weights earlier, then we will get y = 1.
nicely explained.
Thank you😃
In the second iteration, for input 1 the target is -1, but I guess it should be 1. Will our perceptron still get it right?
Try it that way; if the final values you get satisfy the AND condition, then it will be correct👍🏻
good explanation
Thank you sir😄
@7:25, you didn't explain the value of the sigmoid function in the perceptron learning algorithm, not even y = f(y_in). How can you assume that we know it already?
Because this video is part of the Machine Learning course and you skipped this tutorial of the course: ruclips.net/video/ysQun8VbUmM/видео.html
Sir, since you have used an activation function which classifies the output as either 0 or 1, how did you get -1 as the output (y)?
The output y will be passed to the activation function for classification.
Yep, that's what I said. After passing it to the activation function, the result should be either 0 or 1 according to the activation function you have taken, but you are getting -1 for some inputs, which can't be possible. If your activation function had classified the output as either 1 or -1, then it would be fine, but for this activation function that is not the case.
It is correct; I have rechecked the solution in the book.
@@ThinkXAcademy I think Shahid and I have the same question. Your solution might be correct, but the question here is *how* did you get to the solution, *how* did you end up with a -1? Thanks for the video btw.
Activation Function for this video is defined as:
if input > 0 : then output is 1
if input = 0: then output is 0
if input < 0: then output is -1
How do you write on screen? It's amazing, please tell.
Sir, how will the truth table look for logical NOR with bipolar inputs? Is it done the same way as binary, except that we consider 0 to be -1?
How do we compute iteration 2, and what are the conditions?
Compute it the same way as we did in iteration 1, but take the values of the weights and bias from the first iteration.
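Carrying the weights and bias over between iterations can be sketched like this (a minimal sketch assuming bipolar inputs and targets for AND; the exact intermediate values depend on the encoding and update order, so they may differ from the video's table):

```python
def activation(y_in):
    """Bipolar step: +1 if positive, 0 if zero, -1 if negative."""
    return 1 if y_in > 0 else (0 if y_in == 0 else -1)

def train_perceptron(samples, epochs=2, lr=1.0):
    """Run several epochs; weights and bias carry over between them."""
    w1 = w2 = b = 0
    for _ in range(epochs):
        for x1, x2, t in samples:
            y = activation(w1 * x1 + w2 * x2 + b)
            if y != t:  # update only on a misclassified sample
                w1 += lr * t * x1
                w2 += lr * t * x2
                b += lr * t
    return w1, w2, b

# Bipolar AND: output is 1 only when both inputs are 1.
AND_DATA = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, -1)]
```

With this encoding the second epoch makes no further updates, i.e. the weights from iteration 1 already classify every AND sample correctly.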
Is it compulsory to use the formula to update the weights and bias, or can we assume them on our own?
Yes, the formula is needed to update the weights and bias.
@@ThinkXAcademy But in some videos and sites I have seen that they solve it using the formula w(new) = w(old) + α·t·x.
@@ThinkXAcademy Is using this formula w(new) = w(old) + α·t·x compulsory or not?
It is important to use this formula because the weights need to be readjusted after each iteration.
Will the value of alpha be the same for both iterations?
yes
What is the terminology for alpha and t in this perceptron model?
How do you construct the line using the weights and bias?
using the formula y = w.x + b
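Concretely, the boundary is where w1*x1 + w2*x2 + b = 0; solving for x2 gives the line to plot (a small sketch, helper name is my own, assuming w2 != 0):

```python
def boundary_x2(x1, w1, w2, b):
    """Decision boundary satisfies w1*x1 + w2*x2 + b = 0,
    so solve for x2 given an x1 value (requires w2 != 0)."""
    return -(w1 * x1 + b) / w2
```

Evaluating this at two x1 values gives two points, which is enough to draw the separating line.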
@@ThinkXAcademy perfect, thanks!
Helped a lot, Thanks!
Share my videos to help my channel grow😊💯
Thank you! It helped
Keep Learning👨🏻🏫
how to interpret delta w
Bro, it's the change in weight that we apply to get the targeted output and reduce the error during training.
Why are you making it so complex?
😵💫
Thanks man 👍
Thanks😄 Share it with other students to help others too💫
Bro, try to speak a bit faster.
Try watching at 1.5x playback speed.
That's what I did, bro, but I'm telling you for the future.
Yeah, in my new videos I have increased the pace.
By the way, thanks for the feedback.
@@ThinkXAcademy Bro, I have also seen this update formula: w_new = w_old + alpha * t * xi, where t is either 1 or -1. Use 1 when the class should be above the line and -1 when the class should be below the line. Please reply bro if you see this, I have an exam today.