Hindi-Types Of Cross Validation In Machine Learning|Krish Naik
- Published: 10 Dec 2024
- Cross-validation, also referred to as an out-of-sample technique, is an essential element of a data science project. It is a resampling procedure used to evaluate machine learning models and assess how a model will perform on an independent test dataset. Below are the types of cross-validation (a short code sketch follows the list).
Leave one out cross-validation
Holdout cross-validation
Repeated random subsampling validation
k-fold cross-validation
Stratified k-fold cross-validation
Time Series cross-validation
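A minimal sketch of how these techniques map to scikit-learn splitters, assuming scikit-learn is available (the toy data and split counts are illustrative only):

```python
# Minimal sketch: the cross-validation types above as scikit-learn splitters.
# Assumes scikit-learn; the toy data and parameter values are illustrative only.
import numpy as np
from sklearn.model_selection import (
    train_test_split,  # holdout
    KFold,             # k-fold
    StratifiedKFold,   # stratified k-fold (preserves class ratios per fold)
    LeaveOneOut,       # leave-one-out (LOOCV)
    ShuffleSplit,      # repeated random subsampling
    TimeSeriesSplit,   # time series (train on the past, test on the future)
)

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features
y = np.array([0, 1] * 5)          # balanced binary labels

# Holdout: a single train/test split, no resampling.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# The remaining techniques each yield multiple train/test splits.
splitters = {
    "k-fold": KFold(n_splits=5, shuffle=True, random_state=42),
    "stratified k-fold": StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    "leave-one-out": LeaveOneOut(),
    "repeated random subsampling": ShuffleSplit(n_splits=5, test_size=0.3, random_state=42),
    "time series": TimeSeriesSplit(n_splits=3),
}
for name, cv in splitters.items():
    print(f"{name}: {cv.get_n_splits(X, y)} train/test splits")
```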
#crossvalidation #machinelearning #datascience
github: github.com/kri...
Subscribe @krishnaik06 Channel For Data Science Videos In English.
channel link: bit.ly/3aeve4r
ML playlist in hindi: bit.ly/3NaEjJX
Stats Playlist In Hindi: bit.ly/3tw6k7d
Python Playlist In Hindi: bit.ly/3azScTI
Connect with me here:
Twitter: /krishnaik06
Facebook: /krishnaik06
Instagram: /krishnaik06
Krish, your videos are really good. If I had the option to subscribe multiple times, I would have done that. You deserve it. Thanks.
I liked the video; it was easy to understand. Eager to watch the implementation part.
Very nice, bro. Really like your style. Regards, Dr. Vishwanath Bijalwan
Please continue this hindi series
Sir, please make videos on practical implementation of all these cross validation techniques.
Informative Sir 💯
Thank you, sir, for this.
Sir, thank you for your effort.
Thank you so much from Nepal.
Can you do a video on applying different cross-validation techniques to an image classification problem, and on how to extract the fold or split where model performance is best, which can then be used to test different models on that best fold?
@krish Could you please tell me which pen tablet and software you use to create these videos?
big fan sir...🤩🤩🤩
Thank you very much Sir😇😇
Watch at 2x for better learning.
Sir, you said some types of cross-validation do not work with an imbalanced dataset, so how do we check whether a dataset is imbalanced?
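One common way to check is simply to inspect the label frequencies. A sketch assuming pandas (the labels and the 20% cutoff below are illustrative, not a standard threshold):

```python
# Sketch: inspect label frequencies to judge class balance.
# Assumes pandas; the toy labels and the cutoff are illustrative only.
import pandas as pd

y = pd.Series([0] * 9 + [1])  # toy labels: 9 of class 0, 1 of class 1
ratios = y.value_counts(normalize=True)
print(ratios)  # class 0 -> 0.9, class 1 -> 0.1
if ratios.min() < 0.2:  # illustrative cutoff, not a standard
    print("Dataset looks imbalanced; prefer stratified k-fold.")
```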
Question:
For hold-out CV, what should the number of iterations be? For LOOCV, the number of iterations is the number of records, and for k-fold, it is the value of 'k'.
For time series CV, if there is only one way to split the data, how can this be considered cross-validation? It would be just like a train-test split. And if there are multiple ways to split, what should the number of iterations (number of experiments) be?
For the hold-out approach using train_test_split:
Number of iterations: 1
You perform only one iteration of splitting the dataset into a training set and a test set.
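A minimal sketch, assuming scikit-learn: train_test_split gives the single hold-out split, while TimeSeriesSplit shows that time series CV can produce several ordered splits (an expanding training window), so it still yields multiple experiments rather than a single train-test split:

```python
# Sketch, assuming scikit-learn. Holdout: exactly one split.
import numpy as np
from sklearn.model_selection import train_test_split, TimeSeriesSplit

X = np.arange(12).reshape(12, 1)  # 12 toy time-ordered samples
y = np.arange(12)

# Holdout: one iteration; shuffle=False preserves temporal order.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, shuffle=False)
print("holdout train/test sizes:", len(X_train), len(X_test))

# Time series CV: each split trains on the past and tests on the next block,
# so there are multiple experiments (here, 3), not just one.
for i, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=3).split(X)):
    print(f"split {i}: train={train_idx.tolist()} test={test_idx.tolist()}")
```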
What can we do about an imbalanced dataset?
All the models are overfitting.
Isn't the LOOCV similar to K-Fold CV (where K=1)?
yes 😂
Not k=1; LOOCV is k-fold with k equal to the total dataset size (the number of experiments).
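A quick check, assuming scikit-learn, that LOOCV matches k-fold with k = n (the number of samples):

```python
# Sketch, assuming scikit-learn: LOOCV == k-fold with k = number of samples.
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut

X = np.arange(5).reshape(5, 1)  # 5 toy samples
loo_folds = [test.tolist() for _, test in LeaveOneOut().split(X)]
kfold_folds = [test.tolist() for _, test in KFold(n_splits=len(X)).split(X)]
print(loo_folds == kfold_folds)  # True: each fold holds out exactly one sample
```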
Fantastic!