Deep Learning Lecture 3: Hands-On in the Playground

  • Published: 13 Dec 2024

Comments • 20

  • @shameemmohamed3090 · 1 year ago · +1

    Probably the best one... I tried many videos that try to explain this, and he does it best. Well done.

  • @jackhurwitz · 3 years ago · +6

    For anyone trying to learn, this video is full of errors. At 2:19 and 4:10, those are not the output neurons; that is the second hidden layer. There is an implied output layer via the connections going out to the right; you can tell because the setting is at "2 hidden layers". At 5:00, that is a multi-layer perceptron, because there is 1 hidden layer. Using just a perceptron would mean setting the number of hidden layers to 0 (then there are only 2 layers, input and output), which TensorFlow Playground allows you to do (and which does not solve the problem). You *do* need deep learning in this case to solve the circle problem, because the data is clearly not linearly separable if you are only using the x1 and x2 features.
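
    The linear-separability point above can be checked outside the Playground. Below is a minimal sketch (not from the video; it assumes scikit-learn, with make_circles standing in for the Playground's circle dataset) contrasting a perceptron with a one-hidden-layer network:

    ```python
    # Sketch: a plain perceptron (no hidden layer, raw x1/x2 features) cannot
    # separate circle-shaped data, while one small hidden layer can.
    from sklearn.datasets import make_circles
    from sklearn.linear_model import Perceptron
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Synthetic stand-in for the Playground's "circle" dataset.
    X, y = make_circles(n_samples=1000, noise=0.05, factor=0.5, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Hidden layers = 0: a purely linear boundary, which cannot enclose the inner circle.
    linear = Perceptron(random_state=0).fit(X_train, y_train)

    # One hidden layer with a handful of neurons is enough to bend the boundary.
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X_train, y_train)

    print("perceptron (no hidden layer):", linear.score(X_test, y_test))  # around chance level
    print("one hidden layer (8 units)  :", mlp.score(X_test, y_test))     # typically close to 1.0
    ```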

  • @HowToOverthink · 6 years ago · +3

    If you press regenerate (over towards the bottom left), it will reload randomized input test and training data without removing the neural network that has already developed. Changing the inputs seems to really get the kinks out of the spiral.

  • @suzy6091 · 1 year ago · +1

    Thank you for the wonderful explanation. I want to know how we can give our own input function, such as Cos(X1) instead of Sin(X1). And can we enter our own training data set here?

  • @sachinjoshi171 · 2 years ago · +1

    Thank you for this video. My question is: which way is more efficient, (i) increasing the number of neurons in a single layer, or (ii) adding a new layer with fewer neurons?
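
    One way to explore this question is to train both shapes on the same data and compare accuracy and training time. A rough sketch (assuming scikit-learn; the layer sizes below are arbitrary illustrations, not a recommendation, and the answer generally depends on the dataset):

    ```python
    # Compare "wider single layer" vs "more layers, fewer neurons" empirically
    # by training both on the same data and reporting held-out accuracy and fit time.
    import time
    from sklearn.datasets import make_moons
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = make_moons(n_samples=2000, noise=0.2, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Two shapes; the specific sizes are chosen only for illustration.
    configs = {
        "wide: 1 hidden layer, 16 neurons": (16,),
        "deep: 2 hidden layers, 6 + 6":     (6, 6),
    }

    for name, layers in configs.items():
        start = time.perf_counter()
        clf = MLPClassifier(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
        clf.fit(X_train, y_train)
        elapsed = time.perf_counter() - start
        print(f"{name}: test accuracy {clf.score(X_test, y_test):.3f}, fit time {elapsed:.2f}s")
    ```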

  • @Pierluigi-ns4ms · 3 years ago · +2

    I just watched a person performing some actions without any reasoning.
    5:05 Incorrect... You should say: "I don't need a Deep Neural Network." So even if you use a "Shallow Neural Network", you are still doing deep learning.
    7:40 Incorrect... You cannot call this the "initial input layer". It is a hidden layer.

  • @lalalalalafify · 5 years ago · +2

    Thanks for the video. I couldn't make sense of the Playground because of the X1/X2 thing... you referred to X2 as Y, which helped me get it.
    Does changing the activation change all layers, including the output?
    5:10 It is still a multilayer perceptron by definition: an input, a hidden, and an output layer. en.wikipedia.org/wiki/Multilayer_perceptron

    • @peterfrankenstone2875 · 2 days ago

      Same here! The orange/blue images are kind of misleading, because it is only a 2-node input layer.

  • @weiweitan4011 · 5 years ago

    Could we import the dataset using CSV?

    • @gokulm5005 · 4 years ago

      Please let me know if we can import our CSV data. My mail id is mgokul2596@gmail.com

  • @NikosKatsikanis · 5 years ago

    What does the X1X2 input mean?

    • @arnaldosantoro6812 · 4 years ago

      It's the multiplication of the two variables.
      Normally you would see x and y used to represent those points, but in classification the x's are the input variables and the y is the output.
      Since we normally have many variables and only one output, for n variables we name them x1, x2, ..., xn.
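
      To make the explanation above concrete: on the Playground's XOR-style dataset, where the class depends on which quadrant a point falls in, a linear model on x1 and x2 alone cannot separate the classes, but the single product feature x1*x2 can. A small sketch under that assumption, using numpy and scikit-learn (not tools shown in the video):

      ```python
      # The X1X2 input is just the product of the two coordinates.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(1000, 2))
      y = (X[:, 0] * X[:, 1] > 0).astype(int)      # label by quadrant, XOR-like

      cross = (X[:, 0] * X[:, 1]).reshape(-1, 1)   # the Playground's X1X2 feature

      raw_model = LogisticRegression().fit(X, y)        # features: x1, x2 only
      cross_model = LogisticRegression().fit(cross, y)  # feature: x1*x2 only

      print("x1, x2 only :", raw_model.score(X, y))        # around 0.5 (chance)
      print("x1*x2 alone :", cross_model.score(cross, y))  # close to 1.0
      ```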

  • @SGTRitacca · 5 years ago

    Cool, now I understand....thank you.

  • @magica2z · 6 years ago

    Very nice...

  • @NikosKatsikanis · 5 years ago

    nice one thx