Hyperparameter Optimization - The Math of Intelligence #7

  • Published: 28 Dec 2024

Comments • 139

  • @surajthapa4160
    @surajthapa4160 3 years ago +5

    You are one of the rare educators who can make their viewers smile in between learning, which makes learning flawless. I believe I could watch your 1 hr long content without any stop too. Thanks for making learning easy and funny.

  • @JulianHarris
    @JulianHarris 5 years ago +7

    Holy crap, I can say with confidence this is the funniest introduction to hyperparameter optimisation there will ever be. Ever. Genius work. You don't call any more, but that's ok. Live your life, enjoy it! Be free! Be yourself!

  • @DaredevilGotU
    @DaredevilGotU 5 years ago +1

    Every time I visit the page, I learn a new technique. Thanks Siraj

  • @Skythedragon
    @Skythedragon 7 years ago +49

    Yo dang, I heard you like optimizers, so I made an optimizer to optimize your optimizer

  • @tumul1474
    @tumul1474 6 years ago +7

    Dude, you make learning so awesome!! Great work.

  • @abhilashjoy
    @abhilashjoy 1 year ago

    Give this man a raise!

  • @peretz7
    @peretz7 7 years ago +19

    Your geeky / cringey jokes are the best! Don't stop. Seriously.

  • @manmoffatt8497
    @manmoffatt8497 4 years ago

    Love the energy Siraj

  • @s3wannabesaliha238
    @s3wannabesaliha238 5 years ago

    Pretty cool video... good job Siraj... thank you...

  • @rajatmaheshwari916
    @rajatmaheshwari916 7 years ago +58

    Et tu, Brute Force... I laughed so hard at this point.

  • @phillipotey9736
    @phillipotey9736 4 years ago

    Just figured something out with nodes. The number of nodes along the length is cleverness; the height of the nodes is smartness.

  • @dexterdev
    @dexterdev 7 years ago +3

    Thanks for the effort you are taking for these videos. I respect it. :)

  • @powerrabbit
    @powerrabbit 7 years ago

    This is the coolest channel on YouTube!

  • @luck3949
    @luck3949 7 years ago +11

    Can we train a neural network to optimize hyperparameters?

    • @SirajRaval
      @SirajRaval  7 years ago +5

      I've never read a paper that's done that, but it's totally possible! All functions are neural networks if you stare at them long enough; you should definitely try it out.

    • @freediugh416
      @freediugh416 7 years ago +1

      -----------> . (tactical dot in case OP wants to share results)

    • @AkashSwamyBazinga
      @AkashSwamyBazinga 7 years ago

      Yes, I guess. Try using GPyOpt, which is basically a black-box function optimization library written in Python (see the sketch after this thread).

    • @deeplearningpartnership
      @deeplearningpartnership 6 years ago

      Yes.

    • @hfkssadfrew
      @hfkssadfrew 6 years ago

      If you have hundreds of hyperparameters, this would be better than a GP, but usually we don't.
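
A minimal sketch of the GPyOpt suggestion above, assuming pip install gpyopt; the toy quadratic objective and the domain bounds are made-up placeholders for a real model's validation loss:

    import numpy as np
    import GPyOpt

    # Stand-in objective: pretend this trains a model and returns validation loss.
    # GPyOpt passes a 2-D array of candidate points, one row per point.
    def objective(x):
        lr, reg = x[:, 0], x[:, 1]
        return ((np.log10(lr) + 2) ** 2 + (np.log10(reg) + 3) ** 2).reshape(-1, 1)

    domain = [
        {'name': 'learning_rate', 'type': 'continuous', 'domain': (1e-5, 1e-1)},
        {'name': 'l2_reg', 'type': 'continuous', 'domain': (1e-6, 1e-2)},
    ]

    opt = GPyOpt.methods.BayesianOptimization(f=objective, domain=domain)
    opt.run_optimization(max_iter=20)
    print(opt.x_opt, opt.fx_opt)  # best point found and its objective value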

  • @solidsnake013579
    @solidsnake013579 5 years ago +4

    I was drinking my tea when I heard Biggie and 2Pac. Jesus, I almost spat my tea out.

  • @FinanceLogic
    @FinanceLogic 2 years ago

    Bayes is not as random as you seem to think around 5 minutes in. But I did learn a lot here. Thanks.

  • @Ninja-iq2xt
    @Ninja-iq2xt 7 years ago

    Loved this pace; at least it makes us understand what's going on, compared to the previous videos, which were quantity over quality.

  • @elsmith1237
    @elsmith1237 6 years ago +1

    This is amazing! Thanks for the video :)

  • @keturananny6559
    @keturananny6559 4 years ago

    I enjoyed this so much!

  •  6 years ago +2

    Bayesian optimization itself has hyperparameters, like the kernel width for Gaussian process fitting... all these BO libraries leave these parameters at some default value, which almost always does not work for the model at hand...
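
True enough; most libraries do at least expose the optimizer's own knobs as arguments. A minimal scikit-optimize sketch (pip install scikit-optimize) with a stand-in objective, showing a few of those settings:

    from skopt import gp_minimize

    # Stand-in for an expensive validation-loss evaluation.
    def objective(params):
        (c,) = params
        return (c - 0.3) ** 2

    result = gp_minimize(
        objective,
        dimensions=[(0.0, 1.0)],  # search space for the one hyperparameter
        n_calls=25,               # total evaluation budget
        n_initial_points=10,      # random points before the GP takes over
        acq_func='EI',            # acquisition function: expected improvement
        xi=0.05,                  # exploration/exploitation trade-off for EI
        random_state=0,
    )
    print(result.x, result.fun)  # best point and best objective value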

  • @swamysriman7147
    @swamysriman7147 4 years ago

    So..... gradient descent is a special case of Bayesian optimization, right?

  • @randompast
    @randompast 5 years ago

    Good explanation, illuminated a few things for me, thank you.

  • @phil.s
    @phil.s 7 years ago

    Thank you for this video. I am currently testing Scikit-Optimize on the network I am working on.
    It supports Bayesian optimization and is simple to implement, whereas hyperas likes to give errors.
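
Along those lines, a minimal Scikit-Optimize sketch using BayesSearchCV, a drop-in replacement for sklearn's GridSearchCV; the SVM and its search ranges are illustrative choices:

    from skopt import BayesSearchCV
    from sklearn.datasets import load_digits
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    search = BayesSearchCV(
        SVC(),
        {'C': (1e-3, 1e3, 'log-uniform'),
         'gamma': (1e-4, 1e1, 'log-uniform')},
        n_iter=32,  # number of Bayesian optimization steps
        cv=3,
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)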

  • @ketanpandey2561
    @ketanpandey2561 6 years ago +1

    What about genetic algorithms? Can they be used to optimize hyperparameters? For example, using the TPOT library.
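
They can; TPOT evolves whole sklearn pipelines with a genetic algorithm. A minimal sketch, assuming pip install tpot:

    from tpot import TPOTClassifier
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Evolve pipelines for 5 generations with a population of 20 candidates.
    tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
                          random_state=0)
    tpot.fit(X_train, y_train)
    print(tpot.score(X_test, y_test))
    tpot.export('best_pipeline.py')  # writes the winning pipeline as plain sklearn code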

  • @floopybits8037
    @floopybits8037 2 years ago

    Nice explanation

  • @FinanceLogic
    @FinanceLogic 2 years ago

    It just clicked how a random forest really works less than 1 minute into this video. I feel sick because the world is so interesting.

  • @vijaykoravi7583
    @vijaykoravi7583 7 years ago

    Hey Siraj, can you tell us about Replika?

  • @antonylawler3423
    @antonylawler3423 7 years ago

    I've only ever seen the Kernel Trick glossed over. I'd love it if you could find an opportunity to spend a few minutes on it.

  • @justinwhite2725
    @justinwhite2725 3 years ago

    Here I was, thinking early in the video, 'would a Monte Carlo approach work?' When you got into talking about exploration/exploitation, I decided it might.
    This higher-level math you are doing here I don't get (or maybe I need someone else to explain it), but Monte Carlo is something I've used before, and I think it might be good enough.
    You could seed in a set of likely values and let it add new ones when it heads toward an upper or lower bound. The nice thing about Monte Carlo is that it would explore possibilities as the model matures and switch over to something else if that winds up performing better.
    This obviously works better for integer parameters than for gradual values.
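
The simplest Monte Carlo take on this is plain random search over the hyperparameter space. A minimal sklearn sketch, with the model and the sampling distributions chosen arbitrarily for illustration; note it handles integer and continuous parameters alike:

    from scipy.stats import loguniform, randint
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import RandomizedSearchCV

    X, y = load_digits(return_X_y=True)

    search = RandomizedSearchCV(
        RandomForestClassifier(random_state=0),
        param_distributions={
            'n_estimators': randint(50, 500),       # integer parameter
            'max_features': loguniform(0.05, 1.0),  # continuous parameter
        },
        n_iter=40,  # number of random samples to try
        cv=3,
        random_state=0,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)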

  • @_jiwi2674
    @_jiwi2674 5 years ago +1

    Hi Siraj, thanks tons for the video! I am unsure of what you meant by the utility of the expectation of the function f. You said it tells us which regions of the domain of f are best to sample from, but I can't quite follow what you mean by that. Would highly appreciate some help with this!

  • @Nola1222Piano
    @Nola1222Piano 7 years ago +3

    I want to make a neural network that converts fiction books to movie scripts. And then, based on the character descriptions in the book, finds the best actor in a DB. And based on the information in the book, finds good filming locations. I'm very new to AI and don't know anything. Is this possible with AI? Should I train on 3 different datasets, and how? And what NN should I use to do all of that at the same time?

    • @SirajRaval
      @SirajRaval  7 years ago +1

      Would be great, use the IMDB dataset.

  • @gbengaomotara2102
    @gbengaomotara2102 6 years ago

    Any schemes for initializing the likelihoods?

  • @irenesforeheadforlyve5820
    @irenesforeheadforlyve5820 6 years ago

    Is it possible for me to optimize the neurons inside a convolution layer for image classification?

  • @alvincay100
    @alvincay100 7 years ago +8

    Just when you think you've heard every pronunciation of Gaussian possible...

  • @karankatiyar5414
    @karankatiyar5414 6 years ago

    How do we do it in TensorFlow?

  • @zihanqiao2850
    @zihanqiao2850 4 years ago

    Love your videos.

  • @larryteslaspacexboringlawr739
    @larryteslaspacexboringlawr739 7 years ago

    Thank you for the hyperparameters video.

  • @RaymondWong
    @RaymondWong 6 years ago

    For tuning hyperparameters, how does Bayesian optimization compare to PSO? Any risk of overfitting when tuning the hyperparameters?

    • @lucasnildaimon7598
      @lucasnildaimon7598 5 years ago

      To answer the second question: yes, overfitting is still an open problem in hyperparameter optimization. You can find information about some methods adopted to avoid it in Section 1.6.4 of this book: www.automl.org/wp-content/uploads/2019/05/AutoML_Book_Chapter1.pdf
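
A standard guard against that kind of overfitting is nested cross-validation: tune in an inner loop, then score on outer folds the tuner never saw. A minimal sklearn sketch, with the grid chosen arbitrarily:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Inner loop: pick C and gamma by 3-fold cross-validation.
    inner = GridSearchCV(
        SVC(),
        {'C': [0.1, 1, 10, 100], 'gamma': [1e-4, 1e-3, 1e-2]},
        cv=3,
    )

    # Outer loop: estimate how the whole tuning procedure generalizes
    # on folds the inner search never touched.
    outer_scores = cross_val_score(inner, X, y, cv=5)
    print(outer_scores.mean())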

  • @vipulsonar4561
    @vipulsonar4561 6 years ago

    Cool!!
    Can you please tell me the best algorithm that can be used for video summarisation?!

  • @gunjannaik7575
    @gunjannaik7575 7 years ago

    How can we use this to predict new parameters?

  • @adityashinde6202
    @adityashinde6202 7 years ago

    How about using evolutionary algorithms to search for optimal values of hyperparameters? I'm not sure how well that works in comparison to Bayesian optimization, though.

  • @NataliaRevenga
    @NataliaRevenga 2 months ago

    Can we talk about the amazing song of Bayesians vs frequentists?

  • @deepak3303
    @deepak3303 7 years ago

    For a classification model, how do you optimize hyperparameters using CAP curve analysis?

  • @jasneetsingh4018
    @jasneetsingh4018 7 years ago

    Docker tutorial please... much needed!!

  • @kaikewesleyreis
    @kaikewesleyreis 7 years ago

    Is there going to be a video about feedforward neural nets?

  • @manwa5192
    @manwa5192 7 years ago

    What are you doing in Amsterdam brother? You work there now?

  • @sonOfLiberty100
    @sonOfLiberty100 7 years ago

    Why should TF-IDF be a better strategy than bag-of-words? I think it depends on the application.

  • @deepak3303
    @deepak3303 7 years ago

    Why not use a binary search algorithm to eliminate half of the possible hyperparameters rather than brute force?
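
Binary search needs the objective to be monotone in the parameter, which validation loss generally isn't; the related idea that does work is successive halving, which throws away the worse half of the candidates at each round. A minimal sketch using sklearn's (still experimental) HalvingGridSearchCV:

    from sklearn.datasets import load_digits
    from sklearn.experimental import enable_halving_search_cv  # noqa: F401
    from sklearn.model_selection import HalvingGridSearchCV
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)

    # Each round trains the surviving candidates on more data,
    # then keeps only the best 1/factor of them.
    search = HalvingGridSearchCV(
        SVC(),
        {'C': [0.1, 1, 10, 100], 'gamma': [1e-4, 1e-3, 1e-2]},
        factor=2,  # keep the best half of the candidates each round
        cv=3,
    )
    search.fit(X, y)
    print(search.best_params_)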

  • @AIwithAniket
    @AIwithAniket 3 years ago

    Man, you have to make new videos 🙌. Don't lose hope.

  • @Egop3105
    @Egop3105 7 years ago +3

    For a tutorial on how to install and use Spearmint (an awesome Bayesian Optimization library by Jasper Snoek), check out this link: bitbucket.org/uhasseltmachinelearning/spearmint

  • @deniscandido4116
    @deniscandido4116 7 years ago

    Is this already implemented in some library like sklearn or Keras? I had never read about this before, and it looks very promising.
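
On the Keras side, the KerasTuner package ships a Bayesian optimization tuner. A minimal sketch, assuming pip install keras-tuner and that x_train / y_train already exist; the layer sizes and learning-rate range are illustrative:

    import keras_tuner as kt
    from tensorflow import keras

    def build_model(hp):
        model = keras.Sequential([
            keras.layers.Dense(hp.Int('units', 32, 256, step=32),
                               activation='relu'),
            keras.layers.Dense(10, activation='softmax'),
        ])
        model.compile(
            optimizer=keras.optimizers.Adam(
                hp.Float('learning_rate', 1e-4, 1e-2, sampling='log')),
            loss='sparse_categorical_crossentropy',
            metrics=['accuracy'],
        )
        return model

    tuner = kt.BayesianOptimization(build_model, objective='val_accuracy',
                                    max_trials=10)
    # tuner.search(x_train, y_train, epochs=5, validation_split=0.2)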

  • @planktonfun1
    @planktonfun1 6 years ago

    Frequentist and Bayesian results are almost the same for the first 20% of the data, but Bayesian also includes uncertainty, so there's that.

  • @masteronepiece6559
    @masteronepiece6559 7 years ago

    Great video.
    Thanks.

  • @hammadshaikhha
    @hammadshaikhha 7 years ago

    I am looking for clarification on the homework this week because I think I have gotten confused between Bayesian regression and Bayesian optimization for finding hyperparameters. Is it correct to say that in a linear regression the hyperparameter is the gradient descent learning rate, and not the slope coefficients? So we first use Bayesian optimization to find a good learning rate, and then run gradient descent to estimate the coefficient parameters? If this is true, I imagine we still want to minimize the sum of squared errors?
    Someone let me know if I am on the right track, thanks.

    • @SirajRaval
      @SirajRaval  7 years ago

      Hey Hammad! Great question. You can choose to do either; both are really cool ideas. Example of Bayesian regression: github.com/tdomhan/pyblr & for Bayesian optimization for linear regression, what you said is correct: it's used to first find the optimal learning rate, while gradient descent estimates the coefficient parameters (see the sketch after this thread).

    • @hammadshaikhha
      @hammadshaikhha 7 years ago

      Thanks for the clarification, Siraj. I am going to do the Bayesian linear regression notebook; hopefully someone else does the Bayesian optimization to find the gradient descent parameter.

    • @laidbackmedia
      @laidbackmedia 1 year ago

      Bias routines involve illusions
      Diverge or continue
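
A minimal sketch of that recipe under stated assumptions: gp_minimize from scikit-optimize picks the learning rate, and plain gradient descent fits the slope coefficients by minimizing the sum of squared errors; the synthetic data is a placeholder:

    import numpy as np
    from skopt import gp_minimize

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 2))
    y = X @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=100)

    def gradient_descent(lr, steps=200):
        w = np.zeros(2)
        for _ in range(steps):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        return w

    # Outer loop: Bayesian optimization over the learning rate only.
    def objective(params):
        (lr,) = params
        w = gradient_descent(lr)
        sse = np.sum((X @ w - y) ** 2)  # sum of squared errors
        return sse if np.isfinite(sse) else 1e10  # guard against divergence

    result = gp_minimize(objective, dimensions=[(1e-4, 1.0)],
                         n_calls=20, random_state=0)
    print('best learning rate:', result.x[0])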

  • @shreysharma7806
    @shreysharma7806 6 years ago

    Where does Bayesian optimization get the initial values of C and gamma from?

    • @lucasnildaimon7598
      @lucasnildaimon7598 5 years ago

      It's a prior belief, so it means that you, the person coding, should choose their initial values.

  • @qunchongqa
    @qunchongqa 8 months ago

    interesting and useful

  • @rahulahoop1
    @rahulahoop1 4 years ago +2

    Thank you for making data science entertaining, for real. Would you be able to run some more examples of the concepts as you explain them in future videos? :)

  • @normannborg
    @normannborg 4 years ago

    Came here for hyperparameter optimization, found an SVM explanation.

  • @williamchamberlain2263
    @williamchamberlain2263 6 years ago

    Isn't the Kernel Trick that you don't really transform the data points at all? You just use a similarity function that is equivalent to the inner product calculation that _would_ happen after transforming to a high-dimensional space with some feature map: the Kernel Trick is that there is no explicit transformation.
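
Right; the similarity function is the kernel, and what you skip is the explicit feature map. A small numeric check of the equivalence for the degree-2 polynomial kernel k(x, z) = (x·z)^2:

    import numpy as np

    def phi(v):
        # Explicit degree-2 feature map for 2-D input: (x1^2, sqrt(2)*x1*x2, x2^2)
        return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

    x = np.array([1.0, 2.0])
    z = np.array([3.0, 4.0])

    explicit = phi(x) @ phi(z)  # inner product after transforming
    tricked = (x @ z) ** 2      # kernel evaluated directly, no transformation
    print(explicit, tricked)    # both print 121.0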

  • @heri_prieto
    @heri_prieto 7 years ago

    Siraj, where can I go to get the latest in deep learning publications so I can then replicate the results?? Thank you! You are the shit!

  • @chengjunli4518
    @chengjunli4518 7 years ago +1

    Can I get the subtitles? Thanks.

  • @shuvendubikash3792
    @shuvendubikash3792 7 years ago +6

    You have a problem. Your videos have no learning sequence. I don't understand where to start and where to go.

    • @diogojvc
      @diogojvc 7 years ago +6

      You have to train your biological neural network to learn new things based on past experiences. xD
      The best way to start is to... start. I mean, start by building the most basic thing, and then as you watch new videos you start to mess with some new things; at least that is what I have been doing. That being said, the most important videos for starting with the most basic things are the "Math of Intelligence" videos. Hope it helps and good luck ;)

    • @hammadshaikhha
      @hammadshaikhha 7 years ago +2

      I felt the same way when I first found this channel and was watching random videos in no order. Currently you are watching the 7th video in this series; have you watched 1-6 already? He does have a sequence, and it's becoming better and more connected over time. If you go to his channel and look at the playlists, he has 1) Python for Data Science and 2) Math of Intelligence; I think these would be the starting points.

    • @shuvendubikash3792
      @shuvendubikash3792 7 years ago

      @Akujin Yes, it does.
      But what do you mean by "Math of Intelligence"? This playlist or something else?

    • @diogojvc
      @diogojvc 7 years ago +1

      Yes, I meant the playlist.

    • @SirajRaval
      @SirajRaval  7 years ago +1

      What randomness said.

  • @benbenjamin5
    @benbenjamin5 7 years ago

    Hey man, I've got a sort of unrelated question... Have you heard of Useaible, and what do you think? I've heard some pretty crazy stuff, but I can't really find much on it... Is it legit? Anyway, thanks, great video as always.

  • @chasegraham246
    @chasegraham246 7 years ago +3

    1:35 Getting kind of edgy, Siraj.

  • @rawiasammout5833
    @rawiasammout5833 7 years ago

    Please, I need to understand SVM and PSO.

  • @colox97
    @colox97 7 years ago

    Quantum computers may be very, very useful for this kind of task; they'd parallelize the entire process and allow REALLY BIG data to be handled much better.
    Isn't this a P problem?
    Am I right?

  • @matthewdaly8879
    @matthewdaly8879 7 years ago +4

    What about gradient descent?

    • @eloyeligon6676
      @eloyeligon6676 7 years ago

      The problem is how you calculate the gradient.

  • @thoughtsofapeer
    @thoughtsofapeer 7 years ago

    Hi Siraj, I was thinking you might like to make a video for all of us new CS students out there on "good-to-know basics".
    I am from Denmark, so we don't have quite the same educational system. I am coming from the equivalent of high school and have just been accepted to the Danish University of Technology, where I will study software technology. This is a bachelor's, which I will get in three years, then continue with a two-year candidate/master's. I have no prior knowledge of programming or discrete math whatsoever 😱
    ty
    Edit: I will be starting September 5th :D

  • @tommyeastman2999
    @tommyeastman2999 6 years ago

    that song is a jam

  • @edouarddelaire1939
    @edouarddelaire1939 7 years ago

    I've already seen people using genetic algorithms to find hyperparameters, but I think that's not very efficient :/

  • @xumeixi382
    @xumeixi382 7 years ago

    Great video! It really made me laugh.

  • @АнтонДостоевский-ж2ш

    Cool skunk, thank you

  • @GuillaumeVerdonA
    @GuillaumeVerdonA 6 years ago

    HAHAHA that "mmm look at that Gaussian" meme has a pic from a McGill prof I knew

  • @CrazyGamerSidh
    @CrazyGamerSidh 7 years ago +1

    Background 🙃

  • @michaelvarney.
    @michaelvarney. 7 years ago

    Gauss, as in louse, not Gauss, as in boss.

  • @JosephQPham
    @JosephQPham 6 years ago

    humor and intelligence

  • @chicken6180
    @chicken6180 7 years ago

    💯

  • @jmoz
    @jmoz 5 years ago

    Equally interesting and ridiculous.

  • @blindfoldchessabhi
    @blindfoldchessabhi 6 years ago +1

    Yey

  • @killordie2412
    @killordie2412 7 years ago

    Can you share your collection of memes please?

    • @chicken6180
      @chicken6180 7 years ago

      No, you see, his memes change over time. He doesn't even find memes anymore; he has software to crawl the web and predict which memes Siraj will most enjoy.

    • @SirajRaval
      @SirajRaval  7 years ago

      what spark said

    • @Ur.Podcast_Buddy
      @Ur.Podcast_Buddy 4 years ago

      @@SirajRaval waiting for a new video

  • @MrPanthershah
    @MrPanthershah 4 years ago

    I somehow find all that animation distracting from getting the point across. Mehhh.

  • @Privacy-LOST
    @Privacy-LOST 5 years ago +1

    If you think Siraj is exciting, have a look at this awesome dude on the same topic:
    ruclips.net/video/con_ONbhD2I/видео.html 😂

  • @exec9292
    @exec9292 2 years ago +1

    Who did you copy to make this video, lol?

  • @nickmcneely5601
    @nickmcneely5601 7 years ago +11

    G-owwww-sian, not G-awwwww-sian.

    • @nickmcneely5601
      @nickmcneely5601 7 years ago

      Igotattitude93 That's what I said.

    • @nickmcneely5601
      @nickmcneely5601 7 years ago

      Igotattitude93 I know plenty of Americans who say it correctly. Same for Euler. Shit, even Nietzsche.

    • @_sudipidus_
      @_sudipidus_ 7 years ago +3

      Pronunciation depends on your hyperparameter selection :P

    • @SirajRaval
      @SirajRaval  7 years ago

      thank you

  • @averageengineeer
    @averageengineeer 7 years ago

    Headache !!! :(

    • @SirajRaval
      @SirajRaval  7 years ago

      Please clarify: what specifically gave you a headache? Thanks.

  • @ismaelgoldsteck5974
    @ismaelgoldsteck5974 7 years ago

    Boi I'm early

  • @yb801
    @yb801 6 years ago

    I suck

  • @AkarshSundareswar
    @AkarshSundareswar 7 years ago +5

    First :p

    • @SirajRaval
      @SirajRaval  7 years ago +2

      congrats

    • @AkarshSundareswar
      @AkarshSundareswar 7 years ago +7

      I want to thank my parents, teachers, brother, sister and my dog for this great opportunity. Without them, this would not be possible.

  • @PCCoooler
    @PCCoooler 3 years ago

    I can't tell you how much I hate this guy, but this is the only video that explains what I want to know :'(

  • @vaptua4109
    @vaptua4109 7 years ago

    Second