Probably the best one... I tried many videos that try to explain this, and he does it best. Well done.
For anyone trying to learn, this video is full of errors...
2:19 / 4:10 Those are not the output neurons; that is the second hidden layer. There is an implied output layer via the connections going out to the right; you can tell because the setting is at "2 hidden layers".
5:00 That is a multi-layer perceptron, because there is 1 hidden layer. Just using a perceptron would mean setting the number of hidden layers to 0 (then there are only 2 layers, input and output), which TensorFlow Playground allows you to do (and which does not solve the problem). You *do* need deep learning in this case to solve the circle problem, because the data is clearly not linearly separable if you're only using the x1 and x2 features.
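To make the linear-separability point concrete, here is a minimal scikit-learn sketch (not from the video; the dataset size, noise level, and layer width are my own assumptions): a linear classifier on raw x1, x2 can't separate circle-shaped data, while a single hidden layer can.

```python
# Sketch: circle data is not linearly separable on raw x1, x2,
# but one hidden layer is enough to solve it.
from sklearn.datasets import make_circles
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)

# 0 hidden layers: a purely linear decision boundary cannot separate
# a ring from its centre, so accuracy stays near chance.
linear = LogisticRegression().fit(X, y)
print("linear model accuracy:", linear.score(X, y))

# 1 hidden layer: the network can bend the boundary into a closed curve.
mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print("one-hidden-layer MLP accuracy:", mlp.score(X, y))
```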
If you press "Regenerate" (over towards the bottom left), it will reload randomized training and test data without removing the neural network that has already developed. Changing the inputs seems to really get the kinks out of the spiral.
Thank you for the wonderful explanation. I want to know how we can use our own input function, such as cos(X1) instead of sin(X1). And can we enter our own training data set here?
Thank you for this video. My question is: which way is more efficient? (i) increasing the number of neurons in a single layer, or (ii) adding a new layer with fewer neurons?
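One rough way to compare the two options, assuming fully connected layers, is to count trainable parameters; the layer sizes below are arbitrary examples of mine, not anything from the video.

```python
# Sketch: compare parameter counts (weights + biases) for two fully
# connected architectures on a 2-feature input with 1 output.
def param_count(layer_sizes):
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weights plus biases
    return total

wide = [2, 16, 1]    # one hidden layer with 16 neurons
deep = [2, 8, 8, 1]  # two hidden layers with 8 neurons each
print("wide:", param_count(wide), "parameters")
print("deep:", param_count(deep), "parameters")
```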
I just watched a person performing some actions without any reasoning.
5:05 Incorrect... You should say: "I don't need a deep neural network." Even if you use a shallow neural network, you are still doing deep learning.
7:40 Incorrect... You cannot call this the "initial input layer". It is a hidden layer.
Thanks for the video. I couldn't make sense of the Playground because of the X1/X2 thing... you referred to X2 as Y, which helped me get it.
Does changing the activation change all layers, including the output?
5:10 It is still a multilayer perceptron by definition: an input, a hidden, and an output layer. en.wikipedia.org/wiki/Multilayer_perceptron
Same here! The orange/blue images are kind of misleading, because it is only a 2-node input layer.
Could we import the dataset using a CSV?
Please let me know if we can import our CSV data. My email address is mgokul2596@gmail.com
What does the X1X2 input mean?
It's the product of the two variables (x1 multiplied by x2). Normally you would see x and y used to represent those points, but in classification the x's are the input variables and the y is the output. Since we normally have many variables and only one output, for n variables we use x1, x2, ..., xn as their names.
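If it helps to see it concretely, here is a small NumPy sketch (mine, not part of the Playground code) building the Playground-style derived inputs from raw x1 and x2, including the X1X2 product; the sample values are made up.

```python
# Sketch: how the Playground's derived inputs can be built from raw x1, x2.
# The feature list mirrors the checkboxes in TensorFlow Playground:
# x1, x2, x1^2, x2^2, x1*x2, sin(x1), sin(x2).
import numpy as np

x1 = np.array([0.5, -1.0, 2.0])
x2 = np.array([1.5, 0.25, -0.5])

features = np.column_stack([
    x1,           # X1
    x2,           # X2
    x1 ** 2,      # X1^2
    x2 ** 2,      # X2^2
    x1 * x2,      # X1X2 -- the product feature asked about above
    np.sin(x1),   # sin(X1)
    np.sin(x2),   # sin(X2)
])
print(features.shape)  # (3, 7): three sample points, seven input features
```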
Cool, now I understand....thank you.
Very nice...
nice one thx