What I love most about this YouTube channel is that the quality of the free tutorials is much better than many paid ones. I must admit that you have a talent for turning such a complex topic into a very easy method. A true professor you are. I really don't know how to thank you.
I would be so grateful if you created a tutorial on machine learning using the tidymodels package.
From the bottom of my heart, thank you.
❤❤❤❤
Wow, thanks! The most positive feedback I've ever gotten! 🙏🙏🙏 I already tried to get my hands on tidymodels, but it is still a bit chaotic for me. I plan to do it in the future though. So, stay tuned!
I have been watching R tutorials for 7 years now. Your visual representations and explanations are the *best* I've ever seen, so far.
Wow. Precocious child. When I was seven, I watched cartoons. 🤡
@@chacmool2581 Not since I was seven years old 🙂
Wow, thank you, Hans! That means the world to me!
I must agree. Even paid courses don't show this quality.
Thank you! It helped me a lot! Excellent explanation!
Glad you enjoyed it!
Even though I know very little about modeling it is clear that this package kicks ass! The animations on this video illustrating the concepts being described is superb!
Thank you for a nice feedback and for watching, Brian!
Excelent! Great animations and clear explanations. Thanks!
Glad you liked it! Thank you for watching!
Incredibly useful channel, love your work.
Much appreciated! That motivates me to make more!
Very well explained 👍 Thanks for sharing 🙏 Just to be clear: for cross-validation, which model is going to run over the test data created at the very beginning?
Yes, correct. The model created on the training set will be run over the test set. Thank you for the nice feedback and for watching!
@@yuzaR-Data-Science Thanks for the feedback 👍 I was asking you to specify, if you may: since there will be N models for the N folds, there will be a statistic for each parameter of the model, won't there? If so, which one?
Nice lecture for resampling. Please make a video for simulation study
Thanks 🙏 Sunil, I'll do that. But it'll take some time, because I first want to cover frequentist stats, then come to simulation.
Thanks for the vid, sir. Can you create a video about the use of tidymodels in time series modeling and analysis?
Thanks, Josh! Sure, it'll take some time, but they are definitely on my list.
Fantastic videos
Glad you like them! Thanks for watching!
Good to know about Monte Carlo CV in that package. Where does Monte Carlo fit on the variance|bias continuum vis-a-vis bootstrapping and k-fold cross-validation?
I tend to use 'caret' instead, where I dial in these model performance tests via trainControl.
We need a vid on sub-space clustering. 😉
thanks for the many suggestions! I'll do my best to cover ML in tidyverse ASAP, including sub-space clustering ;). For now, I am not sure about your Monte Carlo question; I think it will depend on the data. The tidymodels packages were created by Max Kuhn, the same guy who created caret. So, they should contain everything caret has.
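For anyone curious, the rsample package (part of tidymodels) exposes all three resampling schemes side by side; a minimal sketch, using the built-in mtcars data purely for illustration:

```r
library(rsample)

set.seed(1)
# Monte Carlo CV: `times` random train/assessment splits;
# assessment sets may overlap between repeats
mc <- mc_cv(mtcars, prop = 0.75, times = 25)

# k-fold CV: the data is partitioned into disjoint assessment sets
cv <- vfold_cv(mtcars, v = 5)

# bootstrap: sampling with replacement, assessed on the out-of-bag rows
bt <- bootstraps(mtcars, times = 25)
```

Roughly speaking, bootstrapping tends toward lower variance but more (pessimistic) bias in the performance estimate, k-fold CV toward the opposite, and Monte Carlo CV lets you tune that trade-off via `prop` and `times`.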
@@yuzaR-Data-Science Another topic suggestion: Tweedie GLMs, i.e. working with hard right-skewed distributions with and without (hard & soft) zeros.
Zero inflated models.
thanks, mate! Zero-inflated models are definitely on the list, because I use them often. Not sure I'll do Tweedie anytime soon though, because nobody in my field (medicine) is using them.
@@yuzaR-Data-Science Yeah, no need to look at the whole Tweedie family, but if you are faced with zeros frequently and use zero-inflated models, the Tweedie family with power parameter 1 < p < 2, for zero and positive continuous data, may be worth a look.
@@chacmool2581 cool! thanks for your ideas, Chac! I'll keep it in mind and would explore when I have positive continuous data.
Kudos to our boss, thanks a million, sir, the data science guru of the universe!
Sir, could you make a tutorial on handling class imbalance when dealing with binary classification?
Good suggestion. I just did a random forest with a massive class imbalance (lots of data, many zeros and few ones) and solved it this way:
library(randomForest)
set.seed(1)
fit <- randomForest(response ~ predictor1 + predictor2 + ...,
                    data = data, importance = TRUE,
                    mtry = 5, ntree = 10000, sampsize = c(1000, 1000))
So, "sampsize" is the argument that helps: here it draws 1000 rows per class for every tree. I then got results similar to the logistic regression in terms of the importance of variables and their interactions.
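In the tidymodels ecosystem the same idea (balancing the classes before fitting) can be sketched with the themis package; the data frame `df` and its factor outcome `class` here are hypothetical placeholders:

```r
library(recipes)
library(themis)  # extra recipe steps for class imbalance

# hypothetical data frame `df` with a factor outcome column `class`
rec <- recipe(class ~ ., data = df) |>
  step_downsample(class)  # drop majority-class rows until the classes are balanced
```

By default step_downsample() is skipped when the recipe is applied to new data, so only the training set gets balanced and the test set keeps its original class proportions.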
Thanks so much, sir, for throwing more light. You are one in a million!
you are very welcome!
Could I use this approach to remove some bias caused by different sampling efforts? Example: I have a dataset with monthly video records of animal interactions with vertebrate latrines. However, some spots have recordings for different months, because some latrines weren't there yet when I installed my equipment, meaning different latrines were sampled with different effort. I am looking for a way to correct for it, but since stats are something new for me, I was wondering whether the approach you use could work in my situation.
Please don't stop doing this; we need these kinds of informative and didactic videos 🙏🙏🙏. Thank you again for this one!
Thanks for the cool feedback! With the month, it depends. It sounds to me like you want to capture the variance (the difference) from month to month; then the "strata =" argument is the way to go. That's for resampling. But it is also useful to check out mixed-effects models, where Month would be your random effect; the model then accounts for the month. And if you don't care about the month and just want to average out the monthly effect, then bootstrapping 1000-2000 times would do the trick. Cheers!
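To make that concrete, a minimal sketch of both options; the data frame `latrines` and its columns `interactions`, `effort` and `month` are hypothetical names standing in for the commenter's data:

```r
library(rsample)  # resampling
library(lme4)     # mixed-effects models

# option 1: stratified resampling, so every bootstrap sample keeps
# the monthly proportions of the original data
boots <- bootstraps(latrines, times = 1000, strata = month)

# option 2: a mixed-effects model with month as a random effect,
# so monthly differences are absorbed by the random intercept
fit <- lmer(interactions ~ effort + (1 | month), data = latrines)
```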
At 13:25 you used rand_forest() instead of lm() on code line 124. Could you please clarify?
thanks for noticing! I did want to use random forest for classification; I just used "lm" in the name of the object where I saved the model. Thanks for watching!
Hello. Can you do a demo on how to do bootstrapping in R? Also, the website link is not working. Thank you.
hey, I actually did a video on bootstrapping before doing this one, so just browse my YouTube channel and you'll find it. And the link is working perfectly; I just checked it. Thank you for watching!
Regarding this, yuzaR, I was wondering whether something exists along the lines of keeping the % of classes in our train-test split, but for numerical values, in order to draw two samples with the most similar means, SDs and all of that. Thank you :D
oh, it's a great question! I honestly never tried or needed it; I would be interested myself. I've never seen this in the tutorials and could ask the creators of the package, but at the moment I don't know of a case where I would need it. The thing is, if you stratify by the numeric predictor and use a few breaks (strata = horsepower, breaks = 10), you'll make the testing and training sets (their distributions, actually) very similar, and then the means and SDs will be very similar too. Hope that helps. Thank you for watching!
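A minimal sketch of that numeric stratification with rsample, using the built-in mtcars data and its hp column as a stand-in for horsepower (with only 32 rows, fewer breaks than 10 are sensible):

```r
library(rsample)

set.seed(1)
# bin the numeric hp column into quartiles and stratify the split on those bins
split <- initial_split(mtcars, prop = 0.8, strata = hp, breaks = 4)

train <- training(split)
test  <- testing(split)

# the two sets should now have similar hp distributions
c(mean(train$hp), sd(train$hp))
c(mean(test$hp),  sd(test$hp))
```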
@@yuzaR-Data-Science Yep, I just edited it with the thank-you because I had just watched that part! You, sir, are really fast at answering comments!! Thank you again and have a good day.
@@galan8115 you are welcome!
So how can we use CV-resampled or bootstrapped models for prediction on a predefined test set?
As at the beginning of the video with the train-test split: CV and bootstrapping can be used to find the best model using ONLY the training set, but then you apply this model to the test set to predict our response variable and get a final R2 or RMSE. Hope that helps!
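In tidymodels code, that workflow might be sketched like this; mtcars and a plain linear regression are just stand-ins for whatever data and model you actually use:

```r
library(tidymodels)

set.seed(1)
split <- initial_split(mtcars, prop = 0.8)
folds <- vfold_cv(training(split), v = 5)

wf <- workflow() |>
  add_formula(mpg ~ .) |>
  add_model(linear_reg())

# resampling estimates performance using ONLY the training set
fit_resamples(wf, folds) |> collect_metrics()

# last_fit() refits on the whole training set and evaluates once on the test set
last_fit(wf, split) |> collect_metrics()
```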
@@yuzaR-Data-Science Exactly, that definitely helps. I wish there were a method to combine the values obtained from CV, a kind of ensembling. Thanks for the reply and for the videos you share.
you are very welcome! thank you for watching!