To support more videos like this, please check out my O'Reilly books.
Essential Math for Data Science
amzn.to/3Vihfhw
Getting Started with SQL
amzn.to/3KBudSY
Access all my books, online trainings, and video courses on O'Reilly with a 10-day free trial!
oreillymedia.pxf.io/1rJ1P6
Very well explained. Keep up the good work!
beautiful
I can't believe it, this is so well explained
Love your ML videos, keep up the effort.
I WANT MOREEE
Great videos, thank you.
You just earned yourself a sub. Good job!
GREAT...❤
MORE Videos ON Deep Learning Would Be Awesome
Waiting
Good job! Thanks for your code!
You're welcome! If you like Manim, please help out the Manim community wherever you can. They're great folks doing amazing work.
Thank you 🙏 Could you make more content, please?
Yes, I'm strategizing to allocate more time in 2025!
Can you make a video talking about ridge and lasso regression?
Amazing work!!
Thank you! I can definitely try to bring an animated spin to that. If you have not already, I would check out Josh Starmer's great videos on those topics.
ruclips.net/video/Xm2C_gTAl8c/видео.html
How do you get the weights to start with, and are the weights always the same (per hidden node input)?
Good question! I briefly mentioned this when the hidden weights were passed to the nodes. A large amount of labeled training data is provided and then the weights and biases are adjusted (typically using stochastic gradient descent) until the neural network predictions match the training data as closely as possible. This is similar to a linear regression where you fit a line through some data points, but it is a rougher process that relies on random sampling. And yes, each node is going to end up with different weights but will stay fixed until the neural network is "updated" with new training data (which does not happen when using it to predict).
I do plan on talking about gradient descent and stochastic gradient descent later. I do also cover that in full in my book.
oreillymedia.pxf.io/rQYLrD
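For anyone who wants to see that loop in code, here is a minimal sketch of the idea (not the video's or the book's exact code): a single sigmoid neuron whose weights and bias start random and get nudged by stochastic gradient descent until its predictions match some made-up labeled data. A full network repeats this per layer via backpropagation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical training data: 3 inputs -> binary label
X = rng.random((100, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0.7).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and bias start random, then get nudged toward the training data
w = rng.standard_normal(3)
b = 0.0
learning_rate = 0.1

for step in range(10_000):
    i = rng.integers(len(X))           # "stochastic": one random sample per step
    pred = sigmoid(X[i] @ w + b)
    error = pred - y[i]                # from the derivative of squared error
    grad = error * pred * (1 - pred)   # chain rule through the sigmoid
    w -= learning_rate * grad * X[i]   # nudge weights against the gradient
    b -= learning_rate * grad

accuracy = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"training accuracy: {accuracy:.2%}")
```

Once training stops, `w` and `b` stay fixed; prediction is just the forward pass `sigmoid(X @ w + b)` with no further updates.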
Why is the value I calculated not equal to the value you showed in the video clip?
Equation (3.56*0.00)+(8.49*0.77)+(1.59*0.43)+(-6.67)
I calculated the value to be 0.5510. In your video clip, the value was calculated to be 0.571.
Equation (4.29*0.00)+(8.36*0.77)+(1.37*0.43)+(-6.34)
I calculated the value to be 0.6863. In your video clip, the value was calculated to be 0.704.
Equation (3.72*0.00)+(8.13*0.77)+(1.48*0.43)+(-6.11)
I calculated the value to be 0.7865. In your video clip, the value was calculated to be 0.812.
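Typing those three sums into Python confirms the commenter's arithmetic, so the gap versus the video is most likely rounding: the on-screen weights and inputs appear to be displayed to two decimal places while the video computes with full-precision values (an assumption, not confirmed by the source).

```python
# Reproducing the three weighted sums exactly as typed above
sums = [
    (3.56 * 0.00) + (8.49 * 0.77) + (1.59 * 0.43) + (-6.67),
    (4.29 * 0.00) + (8.36 * 0.77) + (1.37 * 0.43) + (-6.34),
    (3.72 * 0.00) + (8.13 * 0.77) + (1.48 * 0.43) + (-6.11),
]
for s, shown in zip(sums, [0.571, 0.704, 0.812]):
    print(f"typed-out math: {s:.4f}   video: {shown}")
# typed-out math: 0.5510   video: 0.571
# typed-out math: 0.6863   video: 0.704
# typed-out math: 0.7865   video: 0.812
```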
My question is: is a neural network good for predicting a binary outcome?
It can be, as was shown in this example!
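For anyone curious what a binary prediction looks like numerically, here is a tiny sketch with made-up output-layer weights (not the video's values): the output node squashes its weighted sum through a sigmoid to get a value between 0 and 1, and thresholding at 0.5 yields the binary outcome.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical output-layer weights and bias (for illustration only)
w, b = [3.1, -2.4, 0.8], -0.5
hidden = [0.571, 0.704, 0.812]  # e.g., the hidden-node outputs discussed above

probability = sigmoid(sum(wi * hi for wi, hi in zip(w, hidden)) + b)
prediction = 1 if probability > 0.5 else 0  # threshold gives the binary outcome
print(probability, prediction)
```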
But how are weights and biases calculated?
Good question! I briefly mentioned this when the hidden weights were passed to the nodes. A large amount of labeled training data is provided and then the weights and biases are adjusted (typically using stochastic gradient descent) until the neural network predictions match the training data as closely as possible. This is similar to a linear regression where you fit a line through some data points, but it is a rougher process that relies on random sampling. Each node is going to end up with different weights but will stay fixed until the neural network is "updated" with new training data (which does not happen when using it to predict).
I do plan on talking about gradient descent and stochastic gradient descent later. I do also cover that in full in my book.
oreillymedia.pxf.io/rQYLrD
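Since the reply compares training to fitting a line, here is a minimal sketch of plain gradient descent on a linear regression with made-up points; stochastic gradient descent is the same idea, except each step uses one random sample (or a small batch) instead of the full dataset.

```python
import numpy as np

# Made-up points roughly on y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

m, b = 0.0, 0.0   # slope and intercept start at zero
lr = 0.01         # learning rate

for _ in range(10_000):
    pred = m * x + b
    # gradients of mean squared error with respect to m and b
    dm = (2 / len(x)) * np.sum((pred - y) * x)
    db = (2 / len(x)) * np.sum(pred - y)
    m -= lr * dm
    b -= lr * db

print(f"fitted line: y = {m:.2f}x + {b:.2f}")  # should land close to y = 2x + 1
```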