Hello there, thanks for your great tutorials. I have a question: shouldn't we focus more on the cv argument of GridSearchCV and use something like RepeatedKFold, so that we address both the data uncertainty and the algorithm uncertainty (in this case, the MLP's uncertainty)? For example, at least 10-fold repeated 10 or even 30 times? I know that takes a very long time, but isn't it the only way we can really trust the resulting best hyperparameters? Or, in your experience, is it OK to just use cv=5 or 10-fold without any repetition (i.e., 10-fold, 1 repeat)?
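For reference, a minimal sketch of what passing a RepeatedKFold splitter to GridSearchCV could look like. The MLP settings, parameter grid, and synthetic data below are placeholder assumptions, not taken from the video:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RepeatedKFold
from sklearn.neural_network import MLPClassifier

# Synthetic data as a stand-in for the tutorial's dataset
X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Placeholder grid; real values depend on the problem
param_grid = {
    "hidden_layer_sizes": [(50,), (100,)],
    "alpha": [1e-4, 1e-3],
}

# 10-fold CV repeated 10 times: more stable score estimates, but ~10x the cost of cv=10
cv = RepeatedKFold(n_splits=10, n_repeats=10, random_state=42)

search = GridSearchCV(MLPClassifier(max_iter=500), param_grid, cv=cv, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

In practice, plain cv=5 or cv=10 is often used first to narrow down the grid, and the repeated scheme is reserved for confirming the final candidates, since the repeats only reduce the variance of the score estimate, not the search time per candidate.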
As usual, a great video, which makes the fine-tuning work more comfortable and easy.
Recently I started following this channel. Great content. It is really interesting to use the conv features to train an SVM model.
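As an illustration, a minimal sketch of that idea, assuming a pre-trained VGG16 as the feature extractor (the ImageNet weights are downloaded on first use) and hypothetical arrays `X_images` (preprocessed RGB images) and `y` (labels):

```python
import numpy as np
from sklearn.svm import SVC
from tensorflow.keras.applications import VGG16

# Hypothetical data: X_images with shape (n_samples, 224, 224, 3), already preprocessed
X_images = np.random.rand(20, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=20)

# Pre-trained CNN without the classification head; global average pooling gives one vector per image
extractor = VGG16(weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
features = extractor.predict(X_images)  # shape: (n_samples, 512)

# Train a classic SVM on the conv features
svm = SVC(kernel="rbf", C=1.0)
svm.fit(features, y)
```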
Sir, what are cubic SVM and medium SVM?
Hi Sreeni sir,
XGBoost hyperparameter tuning with an image segmentation example (in your style) would be helpful.
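For anyone who wants a head start, a minimal sketch of tuning an XGBClassifier with GridSearchCV. The feature matrix here is synthetic; in a segmentation workflow it would be replaced by per-pixel features and pixel labels, and the grid values are placeholders:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

# Stand-in for per-pixel segmentation features and labels
X, y = make_classification(n_samples=1000, n_features=30, random_state=42)

# Placeholder grid; real ranges depend on the dataset
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 6],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(XGBClassifier(eval_metric="logloss"), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```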
Is it possible to train the model batch-wise using grid search?
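One way to read this: the estimator itself does the mini-batch training, so the batch size can simply be another entry in the grid. A short sketch using the same GridSearchCV pattern as above with scikit-learn's MLPClassifier, whose batch_size parameter controls mini-batch SGD (values are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# batch_size is just another hyperparameter GridSearchCV can search over
param_grid = {
    "batch_size": [32, 64, 128],
    "hidden_layer_sizes": [(50,), (100,)],
}

search = GridSearchCV(MLPClassifier(max_iter=300), param_grid, cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_)
```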
Thank you, best teacher.