You are one of the rare educators who can make their viewers smile in between learning, which makes learning effortless. I believe I could watch your 1-hour-long content without stopping, too. Thanks for making learning easy and funny.
Holy crap, I can say with confidence this is the funniest introduction to hyperparameter optimisation there will ever be. Ever. Genius work. You don't call any more, but that's OK. Live your life, enjoy it! Be free! Be yourself!
Every time I visit the page, I learn a new technique. Thanks Siraj
Yo dang, I heard you like optimizers, so I made an optimizer to optimize your optimizer
I heard he liked plagiarism
I can optimize your optimizer optimizer
dude, you make learning so awesome!! great work
Give this man a raise!
Your geeky / cringey jokes are the best! Don't stop. Seriously.
Love the energy Siraj
Pretty cool video... good job Siraj... thank you!
Et tu, Brute Force... I laughed so hard at that point.
glad you liked it :)
Just figured something out with nodes: the number of nodes per layer is cleverness, the number of layers is smartness.
Thanks for the effort you put into these videos. I respect it. :)
This is the coolest channel on YouTube!
Can we train a neural network to optimize hyperparameters?
I've never read a paper that's done that, but it's totally possible! All functions are neural networks if you stare at them long enough; you should definitely try it out.
-----------> . (tactical dot in case OP wants to share results)
Yes, I guess. Try GPyOpt, which is basically a black-box function optimization library written in Python.
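A minimal sketch of what that looks like, assuming you want to tune an SVM's C and gamma (the dataset and bounds here are placeholders; swap in your own model):

```python
# Rough sketch: tuning an SVM's C and gamma with GPyOpt.
# GPyOpt minimizes, so we return negative cross-validation accuracy.
import GPyOpt
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

def objective(params):
    C, gamma = params[0]          # GPyOpt passes a 2-D array, one row per point
    acc = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=5).mean()
    return -acc

domain = [{'name': 'C', 'type': 'continuous', 'domain': (0.01, 100.0)},
          {'name': 'gamma', 'type': 'continuous', 'domain': (1e-4, 1.0)}]

opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain)
opt.run_optimization(max_iter=20)
print('best (C, gamma):', opt.x_opt, 'best CV accuracy:', -opt.fx_opt)
```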
Yes.
If you have hundreds of hyperparameters, this would be better than a GP, but usually we don't.
I was drinking my tea when I heard Biggie and 2Pac. Jesus, I almost spat my tea out.
Bayes is not as random as you seem to think around 5 minutes in. But I did learn a lot here, thanks.
Loved this pace; at least it lets us understand what's going on, compared to the previous videos, which were quantity over quality.
This is amazing! Thanks for the video :)
I enjoyed this so much!
Bayesian optimization itself has hyperparameters, like the kernel window size for Gaussian process fitting... all these BO libraries ship these parameters at some default value, which almost always does not work for the model at hand...
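For example, a sketch with scikit-learn's GP, where the Matérn kernel's length scale plays the role of that window size (kernel choice here is an assumption, not what any particular BO library defaults to):

```python
# Sketch: the surrogate GP inside Bayesian optimization has its own knobs,
# e.g. the choice of kernel and its length scale (scikit-learn API).
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# length_scale controls how far apart two points can be while still
# counting as similar; the default of 1.0 rarely suits every problem
kernel = Matern(length_scale=1.0, nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
# gp.fit(X_observed, y_observed) fits the surrogate; scikit-learn then
# refines length_scale by maximizing the marginal likelihood
```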
So... gradient descent is a special case of Bayesian optimization, right?
Good explanation, illuminated a few things for me, thank you.
Thank you for this video. I am currently testing Scikit-Optimize to tune the network I am working on.
It supports Bayesian optimization and is simple to implement, whereas Hyperas likes to throw errors.
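For anyone curious, a minimal Scikit-Optimize sketch (the hyperparameter names, bounds, and the quadratic stand-in objective are hypothetical; plug in your own training loop):

```python
# Sketch: Bayesian optimization with Scikit-Optimize's gp_minimize.
from skopt import gp_minimize
from skopt.space import Real, Integer

space = [Real(1e-5, 1e-1, prior='log-uniform', name='learning_rate'),
         Integer(16, 256, name='hidden_units')]

def objective(params):
    lr, units = params
    # train your network here and return its validation loss;
    # this quadratic keeps the sketch self-contained and runnable
    return (lr - 0.01) ** 2 + ((units - 64) / 256) ** 2

result = gp_minimize(objective, space, n_calls=30, random_state=0)
print('best params:', result.x, 'best loss:', result.fun)
```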
What about genetic algorithms? Can they be used to optimize hyperparameters? For example, using the TPOT library.
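TPOT does use genetic programming for exactly this; a sketch on a toy digits run (generations and population kept tiny so it finishes quickly; scale up in practice):

```python
# Sketch: evolutionary search over pipelines + hyperparameters with TPOT.
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=5, population_size=20,
                      verbosity=2, random_state=42)
tpot.fit(X_train, y_train)             # evolves pipelines and their settings
print(tpot.score(X_test, y_test))      # accuracy of the best pipeline found
tpot.export('best_pipeline.py')        # writes the winner out as Python code
```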
Nice explanation
It just clicked how a random forest really works, less than 1 minute into this video. I feel sick because the world is so interesting.
Hey Siraj, can you tell us about Replika?
I've only ever seen the Kernel Trick glossed over. I'd love it if you could find an opportunity to spend a few minutes on it.
Here I was thinking early in the video, 'would a Monte Carlo approach work?' When you got into talking about exploration/exploitation, I thought it might.
I don't get the higher-level math you are doing here (or maybe I need someone else to explain it), but Monte Carlo is something I've used before, and I think it might be good enough.
You could seed in a set of likely values and let it add new ones when it heads toward an upper or lower bound. The nice thing about Monte Carlo is that it would explore possibilities as the model matures and switch over to something else if it winds up performing better.
This obviously works better for integer parameters than for continuous values.
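A bare-bones sketch of that Monte Carlo idea (pure random search, no surrogate model; the score function is a stand-in for a real train-and-validate cycle):

```python
# Sketch: plain Monte Carlo (random search) over two hyperparameters.
import random

random.seed(0)

def validation_score(lr, units):
    # stand-in for training a model and returning validation accuracy
    return -((lr - 0.01) ** 2 + ((units - 64) / 256) ** 2)

best_params, best_score = None, float('-inf')
for _ in range(100):
    lr = 10 ** random.uniform(-5, -1)   # log-uniform learning rate
    units = random.randint(16, 256)     # integer-valued layer width
    s = validation_score(lr, units)
    if s > best_score:
        best_params, best_score = (lr, units), s

print('best params:', best_params, 'score:', best_score)
```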
Hi Siraj, thanks a ton for the video! I am unsure what you meant by the utility of the expectation of the function f. You said it tells us which regions of the domain of f are best to sample from, but I can't quite follow what you mean by that. Would highly appreciate some help with this!
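For reference, the 'utility' in question is usually an acquisition function such as expected improvement; the standard form from the BO literature (not specific to this video) is

$$\mathrm{EI}(x) = \mathbb{E}\big[\max\big(f(x) - f(x^{+}),\, 0\big)\big],$$

where $x^{+}$ is the best point sampled so far. Under the GP posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$, this has the closed form

$$\mathrm{EI}(x) = \big(\mu(x) - f(x^{+})\big)\,\Phi(Z) + \sigma(x)\,\varphi(Z), \qquad Z = \frac{\mu(x) - f(x^{+})}{\sigma(x)},$$

so regions with either a high posterior mean (exploitation) or high uncertainty (exploration) score highly, which is what 'best to sample from' means.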
I want to make a neural network that converts fiction books to movie scripts, then, based on the character descriptions in the book, finds the best actor in a database, and, based on the information in the book, finds good filming locations. I'm very new to AI and don't know anything. Is this possible with AI? Should I train on 3 different datasets, and how? And what NN should I use to do all of that at the same time?
Would be great; use the IMDb dataset.
Any schemes for initializing the likelihoods?
Is it possible for me to optimize the number of neurons inside a convolution layer for image classification?
Just when you think you've heard every pronunciation of Gaussian possible...
haha always something new
How do we do it in TensorFlow?
Love your videos.
Thank you for the hyperparameters video.
For tuning hyperparameters, how does Bayesian optimization compare to PSO? Is there any risk of overfitting when tuning the hyperparameters?
To answer the second question: yes, overfitting is still an open problem in hyperparameter optimization. You can find information about some methods adopted to avoid it in Section 1.6.4 of this book: www.automl.org/wp-content/uploads/2019/05/AutoML_Book_Chapter1.pdf
Cool!
Can you please suggest a good algorithm for video summarisation?
How can we use this to predict new parameters?
How about using evolutionary algorithms to search for optimal hyperparameter values? I'm not sure how well it works in comparison to Bayesian optimization, though.
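For intuition, a bare-bones (1+1) evolution strategy sketch (the score function is a stand-in; a real run would train a model per evaluation, which is why people question the efficiency):

```python
# Sketch: a (1+1) evolution strategy for two hyperparameters --
# mutate the incumbent, keep the child only if it scores better.
import random

random.seed(0)

def score(lr, units):
    # stand-in for a real train/validate cycle
    return -((lr - 0.01) ** 2 + ((units - 64) / 256) ** 2)

lr, units = 0.1, 128                 # initial guess
best = score(lr, units)
for _ in range(200):
    child_lr = lr * 10 ** random.gauss(0, 0.2)             # log-scale mutation
    child_units = max(1, units + random.randint(-16, 16))  # integer mutation
    s = score(child_lr, child_units)
    if s > best:
        lr, units, best = child_lr, child_units, s

print('best:', (lr, units), 'score:', best)
```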
Can we talk about the amazing Bayesians vs. frequentists song?
For a classification model, how do you optimize hyperparameters using CAP curve analysis?
Docker tutorial please... much needed!!
Is there going to be a video about feedforward neural nets?
What are you doing in Amsterdam, brother? Do you work there now?
Why should TF-IDF be a better strategy than bag-of-words? I think it depends on the application.
Why not use a binary search algorithm to eliminate half of the possible hyperparameter values rather than brute force?
Man, you have to make new videos 🙌. Don't lose hope.
For a tutorial on how to install and use Spearmint (an awesome Bayesian Optimization library by Jasper Snoek) check out this link: bitbucket.org/uhasseltmachinelearning/spearmint
great link thanks
Is this already implemented in some library like sklearn or Keras? I never read about this before, and it looks very promising.
Frequentist and Bayesian results are almost the same for the first 20% of the data, but Bayesian also includes uncertainty, so there's that.
Great video.
Thanks.
I am looking for clarification on the homework this week because I think I have gotten confused between Bayesian regression and Bayesian optimization for finding hyperparameters. Is it correct to say that in a linear regression the hyperparameter is the gradient descent learning rate, and not the slope coefficients? So we first use Bayesian optimization to find a good learning rate, and then run gradient descent to estimate the coefficient parameters? If this is true, I imagine we still want to minimize the sum of squared errors?
Someone let me know if I am on the right track, thx.
Hey Hammad! Great question. You can choose to do either; both are really cool ideas. Example of Bayesian regression: github.com/tdomhan/pyblr. And for Bayesian optimization for linear regression, what you said is correct: it's used to first find the optimal learning rate, while gradient descent estimates the coefficient parameters.
Thanks for the clarification, Siraj. I am going to do the Bayesian linear regression notebook; hopefully someone else does the Bayesian optimization to find the gradient descent parameter.
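For anyone following this thread, a minimal sketch of that two-level setup on synthetic data (Scikit-Optimize's gp_minimize standing in as the Bayesian optimizer; the data and bounds are made up):

```python
# Sketch: outer loop = Bayesian optimization over the learning rate
# (the hyperparameter); inner loop = gradient descent fitting slope and
# intercept (the parameters) by minimizing the sum of squared errors.
import numpy as np
from skopt import gp_minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, 200)
y = 3.0 * X + 1.0 + rng.normal(0, 0.1, 200)

def sse_after_gd(params):
    lr = 10 ** params[0]              # search the learning rate on a log scale
    w, b = 0.0, 0.0
    for _ in range(100):              # inner loop: plain gradient descent
        err = w * X + b - y
        w -= lr * 2 * np.mean(err * X)
        b -= lr * 2 * np.mean(err)
    return float(np.sum((w * X + b - y) ** 2))

res = gp_minimize(sse_after_gd, [(-4.0, -0.5)], n_calls=20, random_state=0)
print('best learning rate:', 10 ** res.x[0], 'final SSE:', res.fun)
```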
Bias routines involve illusions
Diverge or continue
Where does Bayesian optimization get the initial values of C and gamma from?
They come from a prior belief, meaning you (or whoever is coding it) assume their initial values.
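In Scikit-Optimize terms, a quick sketch of that (the objective is a placeholder loss; x0 seeds your own starting guesses, n_initial_points adds random warm-up draws):

```python
# Sketch: seeding Bayesian optimization with explicit initial guesses.
from skopt import gp_minimize

def objective(params):
    C, gamma = params
    return (C - 10.0) ** 2 / 100.0 + (gamma - 0.1) ** 2  # placeholder

res = gp_minimize(objective,
                  [(0.01, 100.0),      # C
                   (1e-4, 1.0)],       # gamma
                  x0=[[1.0, 0.01]],    # your prior guess as an explicit start
                  n_initial_points=5,  # plus a few random warm-up draws
                  n_calls=25,
                  random_state=0)
print('best (C, gamma):', res.x, 'loss:', res.fun)
```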
interesting and useful
thank you for making data science entertaining for reals, would you be able run some more examples with the concepts as you explain them in future videos?:)
Came here for hyperparameter optimization, found an SVM explanation.
Isn't the kernel trick that you don't really transform the data points at all? You just use a similarity function that is equivalent to the inner product calculation that _would_ happen after transforming to a high-dimensional space with some feature map: the kernel trick is that you never compute the transformation.
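A quick numerical check of that, using the degree-2 polynomial kernel on 2-D inputs (the explicit feature map is written out only to show both sides agree):

```python
# Sketch: verifying the kernel trick for k(x, y) = (x . y)^2 --
# same number either way, no explicit mapping needed.
import numpy as np

def phi(v):
    # the explicit degree-2 feature map, for comparison only
    x1, x2 = v
    return np.array([x1 * x1, np.sqrt(2) * x1 * x2, x2 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

print((x @ y) ** 2)      # kernel evaluation: 16.0
print(phi(x) @ phi(y))   # inner product after mapping: 16.0
```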
Siraj, where can I go to get the latest deep learning publications so I can replicate the results? Thank you! You are the shit!
Can I get the subtitles? Thanks.
You have a problem: your videos have no learning sequence. I don't understand where to start or where to go.
You have to train your biological neural network to learn new things based on past experiences. xD
The best way to start is to... start. I mean, start by building the most basic thing, and then as you watch new videos you start to mess with new things; at least that is what I have been doing. That being said, the most important videos for building the most basic thing are the "Math of Intelligence" videos. Hope it helps, and good luck ;)
I felt the same way when I first found this channel and was watching random videos in no order. Currently you are watching the 7th video in this series; have you watched 1-6 already? He does have a sequence, and it's getting better and more connected over time. If you go to his channel and look at playlists, he has 1) Python for Data Science, 2) Math of Intelligence; I think these would be the starting points.
@Akujin yes it does.
But what do you mean by "Math of Intelligence"? This playlist or something else?
Yes, I meant the playlist.
what ran domness said
Hey man, I've got a sort of unrelated question... Have you heard of Useaible, and what do you think? I've heard some pretty crazy stuff, but I can't really find much on it... Is it legit? Anyway, thanks, great video as always.
1:35 Getting kind of edgy, Siraj.
Please, I need to understand SVM and PSO.
Quantum computers may be very, very useful for this kind of task; they'd parallelize the entire process and allow REALLY BIG data to be handled much better.
Isn't this a P problem?
Am I right?
What about gradient descent?
The problem is how you calculate the gradient.
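Right; a sketch of why that's painful: even a finite-difference estimate of the gradient of validation loss with respect to one hyperparameter costs two complete training runs (train_and_validate here is hypothetical):

```python
# Sketch: finite-difference gradient of validation loss w.r.t. a hyperparameter.
# Every probe of train_and_validate (hypothetical) is a full training run,
# which is what makes gradient descent over hyperparameters so expensive.
def finite_diff_grad(train_and_validate, lr, eps=1e-4):
    return (train_and_validate(lr + eps) - train_and_validate(lr - eps)) / (2 * eps)
```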
Hi Siraj, I was thinking you might like to make a video for all of us new CS students out there on "good-to-know basics".
I am from Denmark, so we don't have quite the same educational system. I am coming from the equivalent of high school and have just been accepted to the Technical University of Denmark, where I will study software technology. This is a bachelor's, which I will get in three years, then continue with a two-year candidate/master's. I have no prior knowledge of programming or discrete math whatsoever 😱
ty
Edit: I will be starting September 5th :D
that song is a jam
I've already seen people using genetic algorithms to find hyperparameters, but I think that's not very efficient :/
Great video! It really made me laugh.
Cool skunk, thank you
HAHAHA that "mmm look at that Gaussian" meme has a pic from a McGill prof I knew
Background 🙃
Gauss, as in louse, not Gauss, as in boss.
humor and intelligence
💯
Equally interesting and ridiculous.
Yey
Can you share your collection of memes please?
No, you see, his memes change over time. He doesn't even find memes anymore; he has software to crawl the web and predict which memes Siraj will most enjoy.
what spark said
@SirajRaval waiting for a new video
I somehow find all that animation distracting from getting the point across. Mehhh.
If you think Siraj is exciting, have a look at this awesome dude on the same topic :
ruclips.net/video/con_ONbhD2I/видео.html 😂
Who did you copy to make this video lol
G-owwww-sian, not G-awwwww-sian.
Igotattitude93 That's what I said.
Igotattitude93 I know plenty of Americans who say it correctly. Same for Euler. Shit, even Nietzsche.
Pronunciation depends on your hyperparameter selection :P
thank you
Headache!!! :(
Please clarify: what specifically gave you a headache? Thanks.
Boi I'm early
I suck
First :p
congrats
I want to thank my parents, teachers, brother, sister and my dog for this great opportunity. Without them, this would not be possible.
I can't tell you how much I hate this guy, but this is the only video that explains what I want to know :'(
Second