Teachers teach from notes. But if we bring our own notes to the exam, we are accused of cheating! It should be the other way around 😜 things the teacher can't remember after many years are expected from students after just 6 months of studying before exams.
Hi Ma'am,
One small doubt. At 33:26, shouldn't it be (o(j) - t(j)) * t(j) * (1 - t(j))?
Thanks
Nah! Ma'am is correct.
Δwᵢ = η(y − ŷ)xᵢ
xᵢ is the input, so that becomes the output, right? That's what ma'am explained when she broke the perceptron down into the summation and the transfer function φ = Oⱼ.
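The update rule being discussed can be sketched in a few lines. This is a minimal illustration of the delta rule as written in the comment, Δwᵢ = η(y − ŷ)xᵢ; the function name, example weights, and values are assumptions for demonstration, not from the lecture.

```python
# Minimal sketch of the delta rule from the comment:
#   delta_w_i = eta * (y - y_hat) * x_i
# where x_i is the input feeding weight w_i (illustrative names/values).

def delta_rule_update(weights, x, y, y_hat, eta=0.1):
    """Return weights updated by one delta-rule step for a single example."""
    return [w + eta * (y - y_hat) * xi for w, xi in zip(weights, x)]

# Example: target y = 1.0, prediction y_hat = 0.3, so the error is 0.7
# and each weight moves in the direction of its input.
w = delta_rule_update([0.5, -0.2], x=[1.0, 2.0], y=1.0, y_hat=0.3, eta=0.1)
```

Note how each weight's change is proportional to its own input xᵢ, which is why the input shows up in the final update expression.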
The laptop she is reading from could have been placed near the camera to avoid her tilting her head every 30 seconds. But good lecture.
what a concept!
Ma'am, it looks like you invented Machine Learning Neural Networks.
:p
Thank you very much, Prof Sudeshna Sarkar for an amazing lecture. It was very efficiently delivered with a clear explanation.
Don't lie.
Thank you so much ma'am.
Clean and clear...
Why is Oᵢ (i.e. ∂netⱼ/∂wᵢⱼ) not considered in the final expression?
How many hidden-layer nodes are created for 3 input nodes and 1 output node?
Is there a formula for choosing the number of hidden-layer nodes?
Great videos... very detailed, in-depth treatment of the concepts.
Why do we use hidden layers in a multilayer neural network?
Thank you very much, ma'am. Very useful class.
Thank you very much, ma'am, for such a clear explanation of the concepts.
Can't believe IITs have teachers with such poor teaching skills.
Very useful, and the prof. is very clear in conveying the concepts. Thanks.