When you said the answer to life is 42 I subscribed immediately! Thank you for great content!
Welcome!
I can't write this for each and every video I want to, since there are too many, but thank you for the quality content!
Wow, thank you!
Thanks for your great efforts! Wonderful explanation.
Great video!! Thank you!
Another great video. Thank you
Thank you for the amazing tutorials. Can you please make a video on Bayesian optimization? Thank you.
@DigitalSreeni
Hello there, thanks for your great tutorials. I have a question: shouldn't we focus more on the cv argument of GridSearchCV and use something like RepeatedKFold, to address both data uncertainty and algorithm uncertainty (in this case, MLP uncertainty)? For example, at least 10-fold repeated 10 or 30 times? I know it takes a really long time (and that makes us sad), but isn't that the only way we can really trust the result for the best hyperparameters? Or, in your experience, is it fine to just use cv=5 or 10-fold without the repeating process (i.e. 10-fold, 1 repeat)?
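A rough sketch of what the comment above is describing: passing a RepeatedKFold splitter to the cv argument of GridSearchCV instead of a plain fold count. The toy dataset and logistic-regression estimator here are only stand-ins for whatever model is being tuned, not the model from the video:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RepeatedKFold

# stand-in data and estimator, only to show the cv mechanics
X, y = make_classification(n_samples=300, random_state=42)
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=42)  # 10-fold CV, repeated 10 times
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    cv=cv, n_jobs=-1)
grid_result = grid.fit(X, y)
print(grid_result.best_params_, grid_result.best_score_)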
As you can see, the search picked the maximum value in the range for the learning rate, which suggests the selected range is not an appropriate one, right?
Yes, if the selected value is either the maximum or the minimum, then we have no information on how values outside of this range would perform. You need to search again with a changed range.
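To illustrate the reply above with made-up numbers (not values from the video): if the first grid's best learning rate comes back as the largest value in the grid, the next search should extend the grid past that edge.

# hypothetical grids, only to show the idea of re-searching with a shifted range
first_grid  = {'learning_rate': [0.0001, 0.001, 0.01]}   # suppose best_params_ returns 0.01, the maximum
second_grid = {'learning_rate': [0.01, 0.05, 0.1, 0.3]}  # so extend beyond 0.01 and run GridSearchCV again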
Hello, how can we do hyperparameter optimization with transfer learning?
Awesome 👏🏻
Is this the common process to get the best hyperparameters, or will it be different for segmentation?
Please guide. Thank you.
I use Colab; do you think it will run with CuDNN...LSTM? Have you tried it? If so, do you have an example to get me on the right path, please? Thanks a lot :)
Can you please make a video on hyperparameter optimization using a genetic algorithm for U-Net?
Hi,
Thank you for the tutorials.
I did hyperparameter tuning for my model and got the best parameters, but when I plot val_loss against training loss and val_accuracy against training accuracy for the model built with those best parameters, it shows that the best model is not fitting well. I calculated the performance indices (accuracy, AUC, kappa, recall and precision) for both the training and testing data sets and all of them are fine (> 0.9). I also visualized the results and they look fine. Do you have a recommendation to overcome this problem with the model?
I used to use early stopping, but I couldn't find a way to include it (or callbacks in general) in the hyperparameter tuning process (I found someone on Stack Overflow who recommended passing it to the Keras classifier, but it didn't work for me). Is there a way to include it?
I am looking forward to your response, and thanks in advance.
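One way callbacks such as early stopping can be routed into the grid search, sketched below under the assumption that the legacy tf.keras scikit-learn wrapper from this series is used. create_model, X and y are placeholders for your own build function and data, and the exact mechanism may differ in newer wrappers such as SciKeras (where callbacks go into the KerasClassifier constructor instead):

from sklearn.model_selection import GridSearchCV
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier

model = KerasClassifier(build_fn=create_model, verbose=0)   # create_model is your own build function
param_grid = {'batch_size': [16, 32], 'epochs': [100]}
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)

grid = GridSearchCV(estimator=model, param_grid=param_grid, cv=3)
# extra keyword arguments given to fit() are forwarded to Keras model.fit()
# for every candidate, so each fit gets a validation split and early stopping
grid_result = grid.fit(X, y, validation_split=0.2, callbacks=[early_stop])
print(grid_result.best_params_)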
Is this the same thing as finding the global minimum?
random.seed(42): I always knew why it's 42. By the way, thanks for all the fish!