Simple Neural Network in 3 Minutes

  • Published: Dec 31, 2024

Comments •

  • @3-minutedatascience
    @3-minutedatascience  7 months ago +1

    To support more videos like this, please check out my O'Reilly books.
    Essential Math for Data Science
    amzn.to/3Vihfhw
    Getting Started with SQL
    amzn.to/3KBudSY
    Access all my books, online trainings, and video courses on O'Reilly with a 10-day free trial!
    oreillymedia.pxf.io/1rJ1P6

  • @Harduex
    @Harduex 11 months ago +2

    Very well explained. Keep up the good work!

  • @lonelomessi
    @lonelomessi 1 year ago +2

    beautiful

  • @antonlinares2866
    @antonlinares2866 11 months ago +2

    I can't believe it, this is so well explained

  • @vanuatu2027
    @vanuatu2027 1 year ago +4

    Love your ML videos, keep up the effort.

  • @Baekho-nyang
    @Baekho-nyang 1 year ago +4

    I WANT MOREEE

  • @sevgikorkmaz
    @sevgikorkmaz 1 year ago +1

    Great videos, thank you.

  • @terenzjomardelacruz8012
    @terenzjomardelacruz8012 1 year ago +1

    You just earned yourself a sub. Good job!

  • @shakilkhan4306
    @shakilkhan4306 1 year ago +2

    GREAT...❤
    MORE Videos ON Deep Learning Would Be Awesome __
    Waiting

  • @wangpenge
    @wangpenge 1 year ago +1

    Good job! Thanks for your code!

    • @3-minutedatascience
      @3-minutedatascience  1 year ago

      You're welcome! If you like Manim, please help out the Manim community wherever you can. They're great folks doing amazing work.

  • @lex8799
    @lex8799 5 days ago

    Thank you 🙏 Could you make more content, please?

  • @HughDeviluke
    @HughDeviluke 1 year ago +1

    Can you make a video talking about ridge and lasso regression?
    Amazing work!!

    • @3-minutedatascience
      @3-minutedatascience  1 year ago

      Thank you! I can definitely try to bring an animated spin to that. If you have not already, I would check out Josh Starmer's great videos on those topics.
      ruclips.net/video/Xm2C_gTAl8c/видео.html

  • @henryash38
    @henryash38 1 year ago

    How do you get the weights to start with, and are the weights always the same (per hidden-node input)?

    • @3-minutedatascience
      @3-minutedatascience  1 year ago +2

      Good question! I briefly mentioned this when the hidden weights were passed to the nodes. A large amount of labelled training data is provided and then the weights and biases are adjusted (typically using stochastic gradient descent) until the neural network predictions match the training data as closely as possible. This is similar to a linear regression where you fit a line through some data points, but is a rougher process that uses random-based sampling. And yes, each node is going to end up with different weights but will stay fixed until the neural network is "updated" with new training data (which does not happen when using it to predict).
      I do plan on talking about gradient descent and stochastic gradient descent later. I do also cover that in full in my book.
      oreillymedia.pxf.io/rQYLrD
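[Editor's note: the fitting process this reply describes can be sketched in a few lines. The network below is the simplest possible case (a single sigmoid neuron), and the data, learning rate, and seed are illustrative assumptions, not values from the video.]

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy labelled training data: inputs (x1, x2) -> binary label (label follows x1)
data = [((0.0, 0.0), 0), ((0.0, 1.0), 0), ((1.0, 0.0), 1), ((1.0, 1.0), 1)]

random.seed(42)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # weights start random
b = 0.0                                            # bias starts at zero
lr = 0.5                                           # learning rate (assumed)

for step in range(2000):
    x, y = random.choice(data)        # "stochastic": one random sample per step
    z = w[0] * x[0] + w[1] * x[1] + b  # weighted sum plus bias
    p = sigmoid(z)                     # activation -> prediction in (0, 1)
    grad = p - y                       # dLoss/dz for log loss
    w[0] -= lr * grad * x[0]           # nudge each weight against the gradient
    w[1] -= lr * grad * x[1]
    b -= lr * grad

# After training, the predictions should match the labels
for x, y in data:
    print(x, y, round(sigmoid(w[0] * x[0] + w[1] * x[1] + b), 2))
```

A full network repeats this update for every weight and bias in every layer (backpropagation), but the loop structure is the same.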

  • @TonKaPoon
    @TonKaPoon 1 month ago

    Why are the values I calculated not equal to the values shown in the video?
    Equation (3.56*0.00)+(8.49*0.77)+(1.59*0.43)+(-6.67)
    I calculated the value to be 0.5510. In your video, the value was 0.571.
    Equation (4.29*0.00)+(8.36*0.77)+(1.37*0.43)+(-6.34)
    I calculated the value to be 0.6863. In your video, the value was 0.704.
    Equation (3.72*0.00)+(8.13*0.77)+(1.48*0.43)+(-6.11)
    I calculated the value to be 0.7865. In your video, the value was 0.812.
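[Editor's note: the commenter's arithmetic checks out. One possible explanation, an assumption not confirmed in the thread, is that the video displays weights rounded to two decimal places while computing with full-precision values. A quick check of the three weighted sums as written:]

```python
# Weighted sum + bias for each node, using the on-screen (rounded) weights
# from the comment above and the inputs (0.00, 0.77, 0.43).
inputs = (0.00, 0.77, 0.43)
nodes = [
    ((3.56, 8.49, 1.59), -6.67),
    ((4.29, 8.36, 1.37), -6.34),
    ((3.72, 8.13, 1.48), -6.11),
]
sums = [round(sum(w * x for w, x in zip(ws, inputs)) + b, 4)
        for ws, b in nodes]
print(sums)  # reproduces the commenter's 0.551, 0.6863, 0.7865
```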

  • @hamzarageh463
    @hamzarageh463 9 months ago +1

    My question: is a neural network good for predicting a binary outcome?

  • @prajjwalgarag8815
    @prajjwalgarag8815 7 months ago

    But how are weights and biases calculated?

    • @3-minutedatascience
      @3-minutedatascience  7 months ago

      Good question! I briefly mentioned this when the hidden weights were passed to the nodes. A large amount of labeled training data is provided and then the weights and biases are adjusted (typically using stochastic gradient descent) until the neural network predictions match the training data as closely as possible. This is similar to a linear regression where you fit a line through some data points, but is a rougher process that uses random-based sampling. Each node is going to end up with different weights but will stay fixed until the neural network is "updated" with new training data (which does not happen when using it to predict).
      I do plan on talking about gradient descent and stochastic gradient descent later. I do also cover that in full in my book.
      oreillymedia.pxf.io/rQYLrD