This is the first NPTEL lecture I found that is useful
😂😂😂
Can't thank enough. The concepts are explained in an absolutely amazing manner.
best explanation i found on the internet so far
If you’ve also studied psychology, these concepts are easy and relatable; Great explanation, by the way.
Very nice, sir, your way of teaching is awesome
Simple and informative .
Why don't we have such professors in NIT
I have read in several articles that excitatory inputs are assigned weight 1 and inhibitory inputs are assigned weight -1. Just mentioning for information of other students.
you are right.
Love you, teacher.... great explanation
Great explanation!
The concepts are given in a very simple and clear manner. Thank you.... Just need a clarification for my doubt.... For the function x1 AND !x2, the threshold value is given as 1. If x1=1 and x2=0, the sum is 1, the threshold is satisfied, it fires -- OK.... But if x1=0 and x2=1, then too the sum is 1, the threshold is satisfied, it fires... BUT NOT OK... how is this taken care of? The same for x1=1 and x2=1: sum = 2, threshold satisfied, fires... BUT NOT OK... Please clarify
Omg.....what a teacher he is.....🤌
a very nice explanation. Thank you
amazing lecture
thank you sir
nice explanation
Intro music just like the Doordarshan Shaktimaan TV show.
Fantastic lecture. I have a doubt. How is the threshold for x1 AND !x2 equal to 1? Since x2 is an inhibitory input, it should always be 0; now if x1 is assigned 1 or 0, the resultant Boolean operation would be (1 AND 0) or (0 AND 0), which would still be 0 (because the operation mentioned is AND). Could anyone explain, please?
When x2 is 0, the resulting Boolean operations become (1 AND 1), (0 AND 1). Note that it is !x2 (NOT x2). Hence the threshold is 1 (x1 has to be 1) for the output to be 1. Hope this is clear.
x2 is an inhibitory input, meaning if it is 1 then y=0, irrespective of the values of the other inputs. So it is sort of connected with an AND operation with the other inputs. We can see it as !x2 (NOT x2). So we have x1 connected by an AND operation with !x2 (which inverts the input: given 0, !x2 produces 1, and vice versa).
If x2=1 means !x2=0, so y=0 for any value of x1.
Now if x2=0 means !x2=1, so for x1=1, y=1.
So threshold is x1+x2=1+0=1.
@@MrMopuri thanks for the explanation.
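The rule described in this thread can be sanity-checked with a small script (a sketch of my own, not from the lecture, assuming the standard McCulloch-Pitts convention: any active inhibitory input forces the output to 0; otherwise the neuron fires when the excitatory sum reaches the threshold):

```python
def mp_neuron(excitatory, inhibitory, threshold):
    """McCulloch-Pitts unit: an active inhibitory input vetoes the output;
    otherwise fire when the excitatory sum reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

def and_not(x1, x2):
    """x1 AND (NOT x2): x1 excitatory, x2 inhibitory, threshold 1."""
    return mp_neuron([x1], [x2], threshold=1)

# Enumerate the truth table: only (1, 0) should fire.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", and_not(x1, x2))
```

Only the input (1, 0) fires; the inhibitory veto rules out (0, 1) and (1, 1) regardless of the threshold.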
threshold for ANDNOT and NOR? Can anyone explain?
@Gokul Gopakumar A threshold of zero means, I think, that it fires for every binary input.
@@Musical_Era3 While experimenting, I have found that the boundaries make more sense if we take the transformed values instead of the raw input values. For example, while plotting x_1 ^ !x_2, if we plot the graph of x_1 and x_2, the problem with the decision boundary arises as you mentioned in your point. However, if we plot x_1 vs !x_2, then this problem is solved. In the latter case the threshold also changes, which I think can be handled.
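The transformed-space idea above can be verified by enumeration (a sketch; the name z and the transformed threshold of 2 are my assumptions, not from the lecture): plotting against z = NOT x2 turns x1 AND !x2 into a plain AND of (x1, z), which is linearly separable with the usual AND threshold of 2.

```python
def and_not_via_transform(x1, x2):
    """Evaluate x1 AND (NOT x2) in the transformed space (x1, z),
    where z = 1 - x2. There the function is a plain AND, so the
    threshold becomes 2."""
    z = 1 - x2  # the transformed second input, z = NOT x2
    return 1 if x1 + z >= 2 else 0

# Compare against the direct Boolean definition on all four inputs.
for x1 in (0, 1):
    for x2 in (0, 1):
        assert and_not_via_transform(x1, x2) == (x1 & (1 - x2))
```

The assertion passing on all four inputs confirms the transformed boundary x1 + z = 2 separates the same points as the original function.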
Think of x2 as a power switch where x2=1 means power is OFF and x2=0 means power is ON.
When the power switch is OFF (x2=1), the output is ALWAYS 0, irrespective of what the other input and threshold value are. Thus, out of the 4 possible inputs (i.e. 00, 01, 10, 11), 01 and 11 are ruled out because x2 is 1 in both of these.
The applicable inputs are 00 and 10 (because x2 is 0 in both, i.e. power is ON). For input 00, the output is 0. For input 10, the output is 1. Therefore, for all applicable inputs the threshold is 1.
**At the bottom of the slide it's clearly written: if any inhibitory input is 1, the output will be 0.**
@@deepakkumarsisodia7092 that power switch analogy was great!!
I was really confused, as I didn't get what Inhibitory Input implied and missed the footnote...
finally, I got it... if any inhibitory input gets 1, the output becomes zero regardless of the neuron's other inputs and conditions...
so, that solves it...
I'm leaving a link to an article on towardsdatascience here:
towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1
Hello, Please tell me which book I should refer to.
Sir I want to know about if you use three input x1 ,x2 and x3 then the decision boundary is plane because it matches with the plane equation ie x1+x2+x3+d=0 but you told its hyperplane how sir please explain it.
For 3 inputs the boundary is a plane, which is just the 3-dimensional case of a hyperplane; for more than 3 inputs the decision boundary is a higher-dimensional hyperplane. "Hyperplane" is the general term for the (n-1)-dimensional boundary with n inputs.
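To make the n-input case concrete (a minimal sketch of my own, not from the lecture): the firing condition compares the input sum against the threshold, so the boundary x1 + ... + xn = threshold is a line for n = 2, a plane for n = 3, and an (n-1)-dimensional hyperplane in general.

```python
def fires(inputs, threshold):
    """MP neuron with n excitatory inputs: the decision boundary is the
    set where x1 + ... + xn equals the threshold -- a line for n = 2,
    a plane for n = 3, and in general an (n-1)-dimensional hyperplane."""
    return 1 if sum(inputs) >= threshold else 0

# 3-input AND: the boundary is the plane x1 + x2 + x3 = 3.
assert fires([1, 1, 1], 3) == 1
assert fires([1, 1, 0], 3) == 0
```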
Can anyone explain why the threshold for tautology at 11:02 is zero?
Tautology implies that the output is TRUE (1) always, irrespective of the inputs. So the threshold is 0, since even if both x1 and x2 are 0, the output is still 1.
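This can be verified in a couple of lines (a sketch assuming the usual sum-and-threshold rule): with threshold 0, the input sum (never negative for binary inputs) always satisfies the condition, so the neuron fires on every input.

```python
def tautology(x1, x2):
    """Threshold 0: the binary input sum is always >= 0, so it always fires."""
    return 1 if x1 + x2 >= 0 else 0

# Fires for all four input combinations.
assert all(tautology(a, b) == 1 for a in (0, 1) for b in (0, 1))
```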
But how will the equation look for NAND? Can you kindly explain?
It's the same as AND!
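The claim above can be checked by enumeration (a sketch under the plain sum-and-threshold rule with both inputs excitatory at unit weight, which is my assumption): no single threshold reproduces NAND this way, because firing on (0, 0) requires a threshold of 0 or less, which then also fires on (1, 1). Representing NAND in the MP model needs inhibitory inputs or a different construction.

```python
def fires(x1, x2, threshold):
    """Plain MP sum-and-threshold rule with two excitatory inputs."""
    return 1 if x1 + x2 >= threshold else 0

# NAND truth table: fires on everything except (1, 1).
nand = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Firing on (0, 0) needs threshold <= 0, but any such threshold also
# makes (1, 1) fire, so no single threshold matches NAND here.
for theta in range(-1, 4):
    ok = all(fires(a, b, theta) == out for (a, b), out in nand.items())
    print("threshold", theta, "reproduces NAND:", ok)
```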
Why were weights not included?
This is not a perceptron; this is the MP neuron model, which doesn't have weights.
@@thanioruvan4556 True. Different videos on YouTube suggest otherwise, but the fact is the MP neuron does not have weights.
Why were weights not included?
You don't need weights if there is only one connection going out from x_i?
Because MP neurons don't have weights
Because MP neurons are the early model, which doesn't have the concept of weights