187 - Hyperparameter tuning for learning rate and momentum

  • Published: 15 Jan 2025

Comments • 19

  • @thevgancheetah • 3 years ago +2

    When you said the answer to life is 42, I subscribed immediately! Thank you for the great content!

  • @emmanouil2586 • 2 years ago

    I can't write this for each and every video I'd like to, since there are too many, but thank you for the quality content!

  • @Mohomedbarakat • 4 years ago

    Thanks for your great efforts! Wonderful explanation.

  • @jumanatte • 3 months ago

    Great video!! Thank you!

  • @lalitsingh5150 • 4 years ago

    Another great video. Thank you

  • @khushpatelmd • 4 years ago +3

    Thank you for the amazing tutorials. Can you please make a video on Bayesian optimization? Thank you.

  • @sadeghsalehi9103 • 2 years ago +1

    @DigitalSreeni
    Hello there, thanks for your great tutorials. I have a question: shouldn't we focus more on the cv argument of GridSearchCV and use something like RepeatedKFold, to address both data uncertainty and algorithm uncertainty (in this case, MLP uncertainty)? For example, at least 10 folds repeated 10 times, or even 30 times? I know it takes a really long time, which is painful, but isn't that the only way we can really trust the result for the best hyperparameters? Or, in your experience, is it OK to just use cv=5 or cv=10 without repetition (i.e., 10 folds, 1 repeat)?
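
    A minimal sketch of what this comment suggests, with scikit-learn's MLPClassifier standing in for the video's Keras model (the dataset, grid values, and split counts below are illustrative assumptions): a RepeatedKFold object can be passed directly as the cv argument of GridSearchCV.

        from sklearn.datasets import make_classification
        from sklearn.model_selection import GridSearchCV, RepeatedKFold
        from sklearn.neural_network import MLPClassifier

        X, y = make_classification(n_samples=500, random_state=42)

        # 10-fold CV repeated 10 times averages over both the data split
        # (data uncertainty) and the model's random initialization
        # (algorithm uncertainty); at 100 fits per candidate it is slow.
        cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=42)

        param_grid = {
            "learning_rate_init": [0.001, 0.01, 0.1],
            "momentum": [0.5, 0.9, 0.99],
        }

        grid = GridSearchCV(
            MLPClassifier(solver="sgd", max_iter=500),
            param_grid=param_grid,
            cv=cv,
            n_jobs=-1,
        )
        grid.fit(X, y)
        print(grid.best_params_)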

  • @AzizallahFarhadi • 9 months ago

    As you can see, the search chose the maximum value of the range for the learning rate, which suggests the selected range was not an appropriate one, right?

    • @DigitalSreeni • 9 months ago

      Yes, if the selected value is either the maximum or the minimum, then we have no information on how values outside of this range would perform. You need to search again with a changed range.
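
      A hedged sketch of that re-search step, reusing the sklearn stand-in from the earlier sketch (the specific values are placeholders): if the first grid topped out at 0.1 and the search picked 0.1, extend the grid past that boundary and run again.

          from sklearn.datasets import make_classification
          from sklearn.model_selection import GridSearchCV
          from sklearn.neural_network import MLPClassifier

          X, y = make_classification(n_samples=500, random_state=42)

          # Suppose the first search over [0.001, 0.01, 0.1] returned
          # learning_rate_init=0.1, the top of the range. Search a new
          # grid that extends beyond that boundary.
          param_grid = {"learning_rate_init": [0.1, 0.3, 1.0]}

          grid = GridSearchCV(
              MLPClassifier(solver="sgd", max_iter=500),
              param_grid=param_grid,
              cv=5,
              n_jobs=-1,
          )
          grid.fit(X, y)
          # Repeat, widening or shifting the grid, until the best value
          # lies strictly inside the searched range.
          print(grid.best_params_)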

  • @helimehuseynova6631 • 3 years ago

    Hello, how can we do hyperparameter optimization with transfer learning?

  • @evyatarcoco • 4 years ago

    Awesome 👏🏻

  • @rohinigaikar4117 • 4 years ago

    Is this the common process to get the best hyperparameters, or will it be different for segmentation?
    Please guide. Thank you.

  • @frankdearr2772 • 2 years ago

    I use Colab. Do you think it will run under cuDNN...LSTM? Have you tried it? If so, do you have an example to get me on the right path, please? Thanks a lot :)

  • @lindajordan8950 • 2 years ago

    Can you please make a video on hyperparameter optimization using a genetic algorithm for U-Net?

  • @omarabdelwahab3722 • 3 years ago

    Hi,
    Thank you for the tutorials.
    I did hyperparameter tuning for my model and got the best parameters, but when I plot the validation loss against the training loss, and the validation accuracy against the training accuracy, for the model trained with those best parameters, it shows that the best model is not fitting well. I calculated the performance indices (accuracy, AUC, kappa, recall, and precision) for both the training and test sets, and all of them are OK (> 0.9). I also visualized the results, and they look OK. Do you have a recommendation for overcoming this problem with the model?
    I used to use early stopping, but I couldn't find a way to include it (or callbacks in general) in the hyperparameter tuning process (someone on StackOverflow recommended adding it to the Keras classifier, but that didn't work for me). Is there a way to include it? (One possible approach is sketched below.)
    I am looking forward to your response, and thanks in advance.
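
    A hedged sketch of one commonly suggested way to do this with the legacy Keras scikit-learn wrapper (removed in recent TensorFlow releases; SciKeras is the modern replacement): keyword arguments passed to GridSearchCV.fit are forwarded to the wrapped model's fit, so an EarlyStopping callback can ride along. The model, data, and grid below are placeholder assumptions.

        import numpy as np
        from tensorflow import keras
        from tensorflow.keras.wrappers.scikit_learn import KerasClassifier  # legacy wrapper
        from sklearn.model_selection import GridSearchCV

        X = np.random.rand(200, 20)
        y = np.random.randint(0, 2, 200)

        def build_model(learning_rate=0.01):
            model = keras.Sequential([
                keras.layers.Dense(16, activation="relu", input_shape=(20,)),
                keras.layers.Dense(1, activation="sigmoid"),
            ])
            model.compile(
                optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
                loss="binary_crossentropy",
                metrics=["accuracy"],
            )
            return model

        clf = KerasClassifier(build_fn=build_model, epochs=100, batch_size=32, verbose=0)
        grid = GridSearchCV(clf, param_grid={"learning_rate": [0.001, 0.01, 0.1]}, cv=3)

        early_stop = keras.callbacks.EarlyStopping(
            monitor="val_loss", patience=5, restore_best_weights=True
        )

        # fit kwargs are forwarded to model.fit inside every CV fit;
        # EarlyStopping needs validation data, hence validation_split.
        grid.fit(X, y, callbacks=[early_stop], validation_split=0.1)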

  • @ifeanyiokwuazu3225 • 3 years ago

    Is this the same thing as finding a global minimum?

  • @СергейКулаженко-л3и

    random.seed(42): I always knew why it's 42. BTW, thanks for all the fish!