i need to say this: you are the gamechanger here!!
as a data scientist with 2+ years of experience, i ALWAYS learn something new from your content! please Nick, never stop doing these things, and also never lose the smile on your face, even when you're hitting bugs!!
thanks for everything
Thank you so much for your kind words @javierjdaza!
EXTREMELY educational.
This should get like a million views. It's a very good starting point for beginners.
Set the time limit to 20 mins from next time,
because you're explaining everything to us as you go.
This is really awesome!!
Thanks a million @Lakshman!! I try to keep it pretty tight so it's a good challenge, otherwise I know I'll just talk for 22 minutes anyway 😅
Love the channel Nicholas, I recently graduated with an NLP Master's degree, and seeing you explain stuff in a simpler way along with your coding challenges is really helping me connect with the material I've learned! Keep it up and I'll keep watching!
Woah congrats @Ally 🎊 🎉 glad you’re enjoying the challenges, plenty more to come!!
Once you initialized lr to 0.0, I knew you were going to forget to change it lol. Love the challenges tho, keep doing them, I think it would be cool to see how you implement a neural network from scratch
I'm still kicking myself that it was the lr that tripped me up 😅. Coding under pressure is just so different, stuff that should just flow goes right out the window. OHHH yeah, I thought up a good challenge for building NNs while I was at the gym, stay tuned!
@@NicholasRenotte did you ever make that video? A NN from scratch for handwritten digit (MNIST) classification would be so awesome!
the zoom in on the unsaved icon was personal 💀
one of the reasons why I use autosave
😅 I was angry at myself when editing, I had to make a point of it lol😂
Hey Nicholas! Love your channel and I'm really appreciating these 15 minute coding challenges - please keep it up! Also, you can disable those annoying VS Code popups you ran into at 8:35 by going to Code > Preferences > Settings, then typing "editor.hover.enabled" and unchecking the "Editor > Hover: Enabled" option. Hope that's useful!
You are a lifesaver @Spencer, will do it next time i'm on the streaming rig!
wow. you make the subject come alive with excitement and simplicity. you are really gifted. i will take you over hard-to-understand but smart Ph.D. professors from the Ivy League any day.
Awesome video!! It's pretty cool to see such theoretical concepts coded and explained like this. Keep going Nick!!
YESSSS, right?! Glad you liked it Miguel!
I've been following your channel for a while now and I always find new cool stuff here. Keep up the good work, it's really helpful. Also, I love your positive personality, you really make complex stuff look entertaining.
the essence of Deep learning in a few lines of code... awesome
This is a very novel and cool way to teach coding. I really enjoyed it, and it was good to see you troubleshoot and get stuff wrong.
This was oddly intense. Great job Nicholas! Even though you ran out of time, this video is still a win to me. 😉
It definitely felt intense at the time Leonard 😅, the pressure is definitely real. I don't know what it is, but coding under pressure is just a completely different beast. Thanks a million, I'll take the win and thanks for checking it out!
Wow. This YouTuber has only 197k subscribers for these absolutely high-quality videos. You deserve more than 1M+. Only thing to say is keep grinding, and you'll get there.
Great Video!
Would be cool to come back to this and add visualization during gradient descent using matplotlib to show what is actually happening.
For example: drawing the data points, the regression line, the individual losses between the line and the data points, and showing stats like current step, w, b, and total loss! :)
OHHHH MANNN, I thought about doing that but I was debating whether I'd hit the 15 minute deadline already. Good suggestion @Julian!
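(For anyone who wants to try @Julian's idea, here's a minimal sketch, not the video's actual code: plain NumPy gradient descent with a matplotlib redraw every few steps. The data, learning rate, and step counts are placeholder assumptions.)

import numpy as np
import matplotlib.pyplot as plt

# placeholder data: 10 points around y = 2x with a little noise
x = np.random.rand(10, 1)
y = 2 * x + np.random.rand(10, 1) / 10

w, b, lr = 0.0, 0.0, 0.1

plt.ion()  # interactive mode so the figure updates during training
fig, ax = plt.subplots()

for step in range(200):
    yhat = w * x + b
    # gradients of mean squared error w.r.t. w and b
    dldw = -2 * np.mean(x * (y - yhat))
    dldb = -2 * np.mean(y - yhat)
    w -= lr * dldw
    b -= lr * dldb

    if step % 10 == 0:
        loss = np.mean((y - yhat) ** 2)
        ax.clear()
        ax.scatter(x, y, label="data")
        line_x = np.array([0.0, 1.0])
        ax.plot(line_x, w * line_x + b, "r-", label="regression line")
        # individual losses: vertical segments between each point and the line
        for xi, yi in zip(x.ravel(), y.ravel()):
            ax.plot([xi, xi], [yi, w * xi + b], "k--", linewidth=0.5)
        ax.set_title(f"step {step}  w={w:.3f}  b={b:.3f}  loss={loss:.5f}")
        ax.legend()
        plt.pause(0.05)

plt.ioff()
plt.show()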
You are so good at explaining these complicated concepts. Also, if you want to close the Explorer tab in VS Code, try: Ctrl + B
Legend, thanks a million @Sergio!!
@@NicholasRenotte :D I can give you more shortcuts if you tell me where I can learn more about the Machine Learning concepts you explained
@@sergioquijano7721 DONE, fair trade!! Been studying this book in a ton of depth this week: themlbook.com/ I threw my own spin on the grad descent example but the fundamentals are in there!
i'll give you half a win, since it was a small detail
Cheers @brunospfc!!
Amazing! I'm learning so much watching you code. Thank you for sharing.
Thanks a mil @einsteinboi!!
Lots of Thanks, Nick :)
ChatGPT won this challenge instantaneously lol :
import numpy as np

# Set the learning rate
learning_rate = 0.01

# Set the number of iterations
num_iterations = 1000

# Define the data points
X = np.array([[0, 1], [1, 0], [1, 1], [0, 0]])
y = np.array([1, 1, 0, 0])

# Initialize the weights
weights = np.zeros(X.shape[1])

# Train the model
for i in range(num_iterations):
    # Compute the predicted values (sigmoid activation)
    y_pred = 1 / (1 + np.exp(-1 * np.dot(X, weights)))

    # Compute the error
    error = y - y_pred

    # Update the weights
    weights += learning_rate * np.dot(X.T, error)

# Print the weights
print("Weights:", weights)
A.I. description of the code: "This script defines a simple dataset with four data points and trains a model using the gradient descent algorithm to learn the weights that minimize the error between the predicted values and the true values. The model uses a sigmoid activation function to make predictions.
The script initializes the weights to zeros, and then iteratively updates the weights using the gradient descent algorithm, computing the predicted values, the error, and the gradient of the error with respect to the weights. The learning rate determines the size of the step taken in each iteration.
After training the model, the final weights are printed out. You can use these weights to make predictions on new data points by computing the dot product of the data points and the weights, and applying the sigmoid function."
Really nice video! Love the energy and the enthusiasm. Thanks for the help!
I think you missed dividing the derivative by 2. In the formula for the cost function we have (1/(2 * no. of training examples)) * sum of squared errors, so when we take the derivative, the 2 from dldw and the 1/2 from the cost function cancel each other. Anyway, it was a cool video, keep up the good work brother
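(Writing out the cancellation this comment describes, assuming the common 1/(2n) MSE convention:)

$$L = \frac{1}{2n}\sum_{i=1}^{n}\bigl(y_i - (w x_i + b)\bigr)^2, \qquad \frac{\partial L}{\partial w} = \frac{1}{2n}\sum_{i=1}^{n} 2\bigl(y_i - (w x_i + b)\bigr)(-x_i) = -\frac{1}{n}\sum_{i=1}^{n} x_i\bigl(y_i - (w x_i + b)\bigr)$$

The 2 from differentiating the square cancels the 1/2 in the cost; with the plain 1/n convention the gradient just carries an extra factor of 2, which the learning rate absorbs.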
Are you reading my mind or something? Every time I'm stuck on a topic, you drop a video about it...
Ayyyy, so glad you like it @Patrick. For the last two weeks I've just been making videos on stuff I find hard or want to get my head around. I figure it's not just me staring at some of these concepts like huh?!? Thanks for checking it out!!
Could you please provide the whole code, maybe in the description or elsewhere? Thank you! Your videos are a lifesaver.
oh god! you forgot to save and i involuntarily kept shouting SAVE IT! SAVE IT!
Pretty impressive. This is awesome. Cheers
I really like this video. It is great!
That's so informative thank you so much
Glad you enjoyed it @Kashish!!
This is cool, seeing it realtime.
Glad you enjoyed it @NHMI!
Great video, I like this kind of video where you code some AI task against the clock, teach us the concepts, and show us the reality of implementing it 👏
Well explained 😄👍
how amazing it is that he set a timer for 15 mins and the vid is 22 mins long
You should create a model to reduce the pressure during the last minutes, such as finding an optimal time tolerance (15 ± b) 😂😂😂😂. 😢 But we need more videos like this to have a good dataset 😂😂🎉. Thanks man
I really love your video, the idea of the video is insane and I really like it
So stoked you liked it 🙏
this is gold!
Great video 🎉🎉
Thee learning raaate haha cool vid !
Thanks for the video, subscribed! A suggestion: this small change to your code would demonstrate a real-world gradient descent solution for linear regression with noisy data. E.g.:
x = np.random.randn(20,1)
noise = np.random.randn(20,1)/10
# w = 5.8, b = -231.9
y = 5.8*x - 231.9 + noise
Man, you actually made it, unless you say tuning hyperparameters is part of the challenge lol
You're my new best friend @Kai-Hua, I could've just written it off and said "So that's a regression model with gradient descent...and nooooowww, we'll tune it!"
Nice implementation bro
Amazing video!! Thank you so much
Can you please do a TensorFlow instance segmentation video using Mask R-CNN? There isn't much YouTube content related to this online.
U R GOD MAN, so much thanks
Nick, I thought there are existing algorithms that u can just feed your data into? I love the way you're doing it, but is it better to code it your way or use the existing ones??
100% use the prebuilt ones in sklearn, this is more to understand how they work and to provide intuition for tuning and preprocessing!! Good question 👍
@@NicholasRenotte that’s why I call u Khalid of deep learning
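(Re the sklearn point above, a minimal sketch of the prebuilt route for comparison; the data is a stand-in, not the video's:)

import numpy as np
from sklearn.linear_model import LinearRegression

# stand-in data: 10 noisy points around y = 2x
x = np.random.rand(10, 1)
y = 2 * x + np.random.rand(10, 1) / 10

# no manual gradient loop -- fit() does all the work
model = LinearRegression().fit(x, y)
print("w:", model.coef_, "b:", model.intercept_)

(Note that LinearRegression solves least squares in closed form; sklearn's SGDRegressor is the closer gradient-descent analogue.)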
Love it!
Are there any other machine learning/NVIDIA Jetson video tutorials you would recommend?
I wonder how long the backpropagation algorithm takes?
man i am new to this. Why are the updates not zero when the learning rate is? What does a learning rate of 0 mean? If it does not learn, what is the purpose of building it?
Edit: Nvm. Saw the rest of the video, lol.
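(For anyone with the same question, a tiny Python sketch of the mechanics, with hypothetical values, assuming the video's w -= lr * dldw update form:)

# hypothetical numbers, just to show the update rule
lr = 0.0      # the learning rate Nick forgot to change
w = 0.5       # some initial weight
dldw = 3.2    # some nonzero gradient
w = w - lr * dldw
print(w)      # still 0.5 -- with lr = 0.0 every step is zero, so nothing is learned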
why is it necessary for x and y to be lists of lists?
Can you explain the NOTEARS algorithm? It would be a great help.
Please do a video building a NN from scratch!!
hey man! I have a friend from Lyon and you guys have the same surname, haha
Any chance you have roots from there?
Thanks, waiting for part 5. Forza!
so can you please do this algorithm for multiple variables
Bro, how do you implement gradient descent as weights in K-nearest neighbors?
Please check out the Auto Save option in the File drop-down list, it's a real time saver 😃
I need to watch the video many times to understand what you are doing
But great work
I love everything you do
Thumbs up 👍👍
Thanks for the suggestion @MrElectrecity!
Could you please upload the correct code to GitHub? I lost track of your logic after "def descend()" etc.
Correct code is on there @Quadrophenia, not working?
Does gradient descent work for polynomial, multi-variable problems?
yes
where is it used? why?
👍👍👍
Great video. Set time to 20 mins.
Gift card not valid :(
But it was fun!
You are amazing!!
Got claimed super fast this time @Lakshman!!
@@NicholasRenotte My bad
I have turned on notifications for your channel!
Waiting for the next code challenge!!!!
Hope you win next time! 🤞🤞🤞
😂😂😂
Was too fast for me
Very nice
HEYYYYY PHIL!! Long time no see, thanks a mil!!
I can do this more efficiently
Where's my $50 gift card? Lol
LOLLLL
hi. i have a question: why didn't you use y = 2*x + np.random.rand(10,1)? in your version we have 10 points that lie on a line with a fixed slope. with y = 2*x + np.random.rand(10,1), every point would have a different slope and intercept.