So do these thresholds correspond to the probabilities that are used to separate obese vs. not obese? Is there a way to convert the thresholds back to the actual weights that are used as the cutoff?
The thresholds, with the exception of -infinity and +infinity, are the exact same as the probabilities. -infinity corresponds to a probability of 0 and +infinity corresponds to a probability of 1. Thus, you can compare thresholds to the original glm.fit$fitted.values and match those to the original array of "weight" values.
@@statquest Many thanks for a great video. Could you kindly explain how exactly we can do this? I am looking to convert these threshold to actual cut-off values
@@redgreenskittles First, I would look at the ROC curve to find my threshold. For example, we might pick a False Positive Percentage of 20 to be the threshold. Then I would look in roc.info to find the threshold associated with that false positive percentage. We can do that by just printing roc.info to the screen and looking at it, or with the command: roc.df[min(which(roc.df$fpp < 20)),]
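Spelling that lookup out with toy numbers (the data frame below is a made-up stand-in for the roc.df built in the video):

```r
# Made-up stand-in for the roc.df data frame built in the video:
roc.df <- data.frame(
  tpp        = c(100, 100, 87.5, 75.0, 50.0, 0),
  fpp        = c(100, 97.8, 50.0, 18.2, 4.5, 0),
  thresholds = c(-Inf, 0.013, 0.35, 0.62, 0.88, Inf)
)

# First row whose false positive percentage drops below 20, and the
# probability threshold that goes with it:
row <- roc.df[min(which(roc.df$fpp < 20)), ]
row$thresholds
```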
Is it expected that the number of sensitivity/specificity values determined by the roc function (that we stored in the data frame) may not match the number of predictor/response values that I input? For example, my input predictor/response vectors contained 46 objects, but the roc function returned only 12 sensitivity/specificity values.
I believe this is possible if there are fewer thresholds that make a difference. In other words, some thresholds might result in the same number of false positives, true positives etc. and in that case, those "duplicate" thresholds will be omitted.
@@statquest Okay great this is exactly what I thought was happening--just wasn't sure if that was a possible outcome. Thanks so much for your reply and for all the great videos!!!
13:47 - sorry, I don't understand why, in `rf.model$votes`, we choose column 1 (the column of zeros) and not column 2 (the column of ones)
Hey Josh, is there a way to make inferences on more than two ROC curves and to perform multiple comparisons? (A generalization of DeLong's test? And maybe a method to adjust alpha for multiple comparisons too?)
Thank you for another great video. I have a question: what if we have a multi-class classification problem, not just two classes (obese and not obese)? For example, we want to classify 10 cell types (say, cell type 1, cell type 2, ..., cell type 10) as present or not in a tissue sample. How can we use this roc() function to plot the ROC curve?
@@statquest I have made my own function to plot the ROC curve for the situation I mentioned. However, I need another function to calculate the AUC, and was hoping I could use the roc() function, which seems to provide much more information, such as the AUC and partial AUC as well. 😰
You can get a copy of the code from the StatQuest GitHub, here: github.com/StatQuest/roc_and_auc_demo/blob/master/roc_and_auc_demo.R
Support StatQuest by buying my book The StatQuest Illustrated Guide to Machine Learning or a Study Guide or Merch!!! statquest.org/statquest-store/
Hi Josh,
Love your content. Has helped me to learn a lot & grow. You are doing an awesome work. Please continue to do so.
Wanted to support you but unfortunately your Paypal link seems to be dysfunctional. Please update it.
The code would not run when I downloaded it from github
@@ryanmckenna2047 What part didn't run? I just re-ran it and it worked fine.
did u make this in python too??
@@ashishdayal172 not yet
Complex things in simple and understandable language. I have never met a better teacher!
Thank you very much! :)
"The only man who never makes mistakes is the man who never does anything."
Thank you ;)
No, thank you! Your comment was very helpful and spared me a lot of future embarrassment. The video had only been seen by 100 or so people (not 1,000s) before you pointed out the error.
Dude, your videos are great. I never found something so clearly on the internet. Congratulations!!!
Your explanation of the process and logic behind each function and line are so helpful. I hope you'll make more of these videos. Thank you so much, this content is uniquely valuable.
Thanks!
Just to let you know that I found your channel via Claude and I am not disappointed! 91 videos left BAM BAM !
You're making great progress! :)
Crazy how good you are at explaining. You explain the little things I always start to struggle with other teachers/tutors! Thank you so much for these Videos
Happy to help!
These are the best videos. When I need to relax, I watch your videos
Glad you like them!
@@statquest If you have two output neurons in an ANN (for a two-class classification problem {1,0; 0,1}), is it okay to build the ROC just by comparing the output of one of those neurons with its corresponding target?
Thanks Josh, I changed it to {1,0} as output as the AUC for the two neurons {1or0} in the {1,0;0,1} architecture were not the same.
Impressive video. Theory and examples with software are the best way to learn. There is a lot going on in this video, one of the best ever. Thank you, Josh, greetings from Italy, and a happy new year to you, your loved ones, and all the people who follow your amazing lessons.
Wow, thanks!
Such an awesome channel I came across! ....gonna share it with everyone under my umbrella !!! You are doing really great bro!
Thank you! :)
Best song ever, Josh. StatQuest keeps gettin’ better and better! Many thanks.
Thank you so much! :)
Great explanation of everything including each parameter in the graphs. Loved it!
Thank you!
The silly songs, the calm voice and the bams give this the vibe of a course narrated by Forrest Gump.
Love it.
Thanks! :)
Hey..... Love the way you present ❤️
Thank you so much 😀
I thank God I found this channel 2 years ago... 😇
bam!
@@statquest 😄😄😄
Thank you sooooo much Josh! You are a life saver!!😄
Happy to help!
hey bro, i love your videos so much, please hang in and i will continue to support you!
Thank you very much!
Hey Josh, Ty again. During my studies I reproduced everything using R in Colab (really recommended for anyone studying Josh's R code)
You solve my headache. Thanks a lot
Happy to help!
Thanks for your wonderful and detailed videos!
Thank you so much for supporting StatQuest! BAM! :)
Good job and well-done. I like your style of teaching, it's great!!!
Thank you! 😃
Wonderful tutorial!!.....thank you so much Josh :)
Thanks! :)
Waaa.. I'm so thankful I found this video. Thanks a lot. Stay healthy, cool people :)
Thanks!
I mean, the stats tutorial is indeed very well done, but the intro song alone was enough to make me immediately click the like button.
bam! :)
Thank you for this informative video. It helped me a lot. Great work!
Glad it helped!
Incredibly helpful, thank you!
Thanks!
thank you for such an informative tutorial
Glad it was helpful!
Dear, I have enjoyed your video very much; great clarity of thought
Thank you so much 🙂
YOU ARE MARVELOUS, EXTRAORDINARY. I WISH YOU COULD HAVE EXPLAINED IN PYTHON
One day I will.
Thank you sooooo much for your lessons. Super helpful
Thanks!
Thank you for helping me with my credit risk class :)
You are amazing man, thanks for the video and keep making more videos like these. BAM!!
Double BAM!!!! Thanks for the encouragement! :)
Double Bam!!
You are amazing, man! Thanks!!!
Thanks
Thanks man, very clear and helpful
Thanks! :)
Keep up the good work .. Thank u🤩
Thanks!
Thank you so much for this video!
Glad it was helpful!
Thanks a lot sir! You are very helpful!
Most welcome!
Hey Josh, great videos on ROC curves, your teaching is refreshingly concise and clear. I just have one question that I hope you could expand on. When we first generate 100 samples from a normal distribution, why do we need to sort them from low to high? And what would the dangers be if we didn't do this?
Thanks for the great content!
What time point in the video, minutes and seconds, are you asking about?
@@statquest roughly around 2:55
@@PeterKidd-s5c Technically, you don't need to sort them, but it makes it easier to look at the data. When we print out the values for the "obese" variable at 4:11, the output is way easier to interpret because the values for weight were sorted.
VERY helpful - thank you!
Glad it was helpful!
Many Thanks Josh!
You're welcome! :)
That ROC you had really tied the room together
:)
So good Thanks for the video
Glad you enjoyed it!
Bam! Good tutorial.
Thanks! :)
Please make more Videos with R! :)
:)
Hi Sir, your videos are very helpful. Hope that you can make a video on mean decrease Gini of Random Forest
I'll keep that in mind.
thank you very much !!!! 😁😁
Great again! I would be interested to see how to make combined ROCs for, say 2-4 different biomarker candidates. This would be to see if their combined use would result in higher AUCs than that of individual markers.
Noted
Great video thanks
Thank you!
Ah, the pirate's favorite programming language!
:)
Hahaha.. That cute confession that you have a hard time remembering what sensitivity and specificity mean made me laugh, because it is so confusing to me too.. These really are confusion metrics🤣
Since I had so much trouble remembering about sensitivity and specificity, I wrote a little song to help me out: ruclips.net/user/shortsPWvfrTgaPBI
Wow.. You are so creative at making things easy.. I am impressed!
You are the savior of us little humans, thank you god! I have a silly question, sometimes you use
R is funny about the "
Hi, this video solved a puzzle for me. I was searching how to make a ROC curve using R. Now, can u demonstrate how to calculate the other statistics like precision, negative predictive value, accuracy etc, using R, and how to plot these
I'll keep those topics in mind.
Thank you very much!
great video!!
Thank you! :)
Thank you so much!
Do you consider make a video about limited dependant variables models (tobit, heckman...)?
It will be very helpful for us! All the best.
OK. I'll put it on the to-do list, but it will be a while before I get to it.
Thank you! This is a short bibliography about the topic:
J. Scott Long, Regression Models for Categorical and Limited Dependent Variables
Alfred DeMaris, Regression With Social Data: Modeling Continuous and Limited Response Variables
Wooldrige, Introductory Econometrics
I can share you the books if needed.
@@Davidravaux OK. However, just know that my to-do list is huge (it has about 200 things on it - I get about 3 or 4 requests every day), so it might take me a long time to get to it. However, if a lot of people start asking for a certain topic, that topic gets moved closer to the top of the to-do list. So, if you know of a ton of people interested in this subject, you should have them add to this comment.
Ok, I totally understand, thank you for clarifying.
Great video! Very helpful. BTW, there is a discrepancy between this clip and the code shared on your website regarding the object roc.df (line 78). Nothing has been assigned to the object yet, so running line 78 gives an error message. Overall, very clear and handy. Thank you!
Thanks for catching that! The problem had to do with how wordpress interprets the ">" and "<" characters.
@@statquest I see. Good to know! Thank you~ :>
You gotta stop saying BAM!!! it's really funny :D
Great video! One quick question. Do you know how to plot ROC-AUC graph for SVM and adaboost?
This video is absolutely amazing! But how can I determine the threshold/cut-off weight from the threshold probability that decides whether a subject is obese or not, using code rather than reading it directly off the logistic curve?
Thank you sir
Excellent video
Thanks!
YOU DA BEST
Thank you!
Thanks a lot !
Thumb up for the "number-of-exclamation-points-on-the-BAM" track record.
haha! :)
For anyone searching for how to make the plot square: put par(pty = "s") before running the line with the graph. And if you get huge margins in your graph, you need one more argument in roc(), which is asp=NA. You can also easily print your AUC in the plot.
My code looks like this:
roc(df_mod_cand$clase, mod_cand$fitted.values, plot=T, asp=NA, col="red", lwd=3, legacy.axes=T, print.auc=TRUE)
thank you soooo much!!!
:)
thank you for your video. btw, can you make one for python?
I'll work on it. I'm doing a lot more Python coding these days, so it makes sense.
@@statquest A year later, I suddenly wake up to StatQuest. Python implementation please. Perhaps SciKit Learn also has built-in computations for these and other metrics. I'll check...
Many thanks Josh, you are doing a great job.
In my study, I would like to calculate and plot pROCs for a couple of maxent scenarios and glm model scenarios using 1000 iterations and a 5% omission error using pROC package in R, would be really grateful if you can guide me a bit. Thanks in advance.
Let me know how it goes! :)
@@statquest May I get the R code for the scenario I mentioned? I am still trying to figure out how to prepare data from the maxent output and then use it with pROC package to calculate and plot AUCs. I am relatively a newbie in R. Theory wise I think I am pretty clear, but struggling with codes and commands to get this job done with pROC package.
@@rahulg1504 The code for this video is here: github.com/StatQuest/roc_and_auc_demo/blob/master/roc_and_auc_demo.R
Thanks for the video and explanations! What statistical test would you use to compare 2 ROC curves?
There are a bunch of options. This tool (in R) implements them: bmcbioinformatics.biomedcentral.com/articles/10.1186/1471-2105-12-77
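If the comparison is just pairwise, the pROC package itself ships roc.test(), which implements DeLong's test among others. A minimal sketch with simulated data (the labels and the two model scores below are made up just so the example runs):

```r
library(pROC)  # assumed installed; provides roc() and roc.test()

set.seed(42)                          # reproducible toy data
labels <- rbinom(200, 1, 0.5)         # made-up binary outcome
score1 <- labels + rnorm(200)         # two hypothetical model scores
score2 <- labels + rnorm(200, sd = 2)

roc1 <- roc(labels, score1)
roc2 <- roc(labels, score2)

# DeLong's test for a difference between two correlated ROC curves:
test.result <- roc.test(roc1, roc2, method = "delong")
test.result$p.value
```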
Thank you
:)
The Answer to the Ultimate Question of Life, the Universe, and Everything :D F*cking loved the reference (hope it is not casual ^_^´)
:)
Hello, thanks for the video, really useful. In this example you come up with a method to classify obese and not obese; what about when you don't know a threshold for the initial classification of obese or not obese? Does the pROC function test different thresholds?
That's the whole idea of an ROC graph to begin with - it's used to determine the optimal threshold.
AUC for Logistic regression is more than AUC for RF, but if you consider only corner most points for both, RF does better, so who is the winner in this case ?
Which corner are you looking at? I don't see RF doing better in either one. Or are you looking at the very edges?
StatQuest with Josh Starmer
At the north west corner
Rf at a point has better tpp and fpp
So isn’t rf better than logistic regression?
@@animeshkansal7746 North east? You are right. RF is a little better up there. This is a good example of when a Partial AUC might be more informative.
Thank you so much, your videos are really great
@@animeshkansal7746 Thanks!
I am a big fan of yours! Can you make a survival analysis video?
Yes! I will make one this spring. Many people have asked for this topic, so it is at the top of my to-do list.
savior
Thank you for the video. It was very easy to follow. May I know how do i obtain optimal cut off points using the ROC curve?
I answer that question in my video that explains ROC and AUC: ruclips.net/video/4jRBRDbJemM/видео.html
@@statquest Thank you for your reply! I was actually wondering how to interpret the threshold numbers seen on 09:51. After head(roc.df), you get a list of TPP, FPP and thresholds. For example in the 2nd row TPP 100 FPP 97.77, what does threshold of 0.01349 mean?
I also have a separate question: is it always necessary to create a linear model first for the ROC curve? For example, I am comparing the ROC curves of age and co-morbidities against non-cancer mortality; do I have to create a linear regression for age using glm()?
Didn't realize you also did videos with R code. I am now truly out of a job, as my students are completely covered. Maybe I could take up the guitar...what...he does that also....dammit.
Ha! :)
Great video! I was wondering if it is possible to plot this graph in a Multinomial Logistic Regression?
Hmmm...I'm not sure.
@@statquest Ah okay, what about a multiple logistic regression? Any ideas about that one?
@@mathiasschmidt93 As long as your predicted value is binary, it shouldn't matter how many variables you use to make predictions - the process is the exact same as illustrated in this video. To see how it is done in R, see: ruclips.net/video/qcvAqAH60Yw/видео.html
thanks you bro
You're welcome! :)
Here's a counter-intuitive trick that helps me keep the two straight:
SENSITIVITY = True POSITIVE Rate, even though the term does not have a P but does have an N.
SPECIFICITY = True NEGATIVE Rate, even though the term does not have an N but does have a P.
Nice! :)
I am following Python for data science so far and got stuck after seeing this video. The best people, like you, use the R language instead of Python, so what should I do? Which one is best for data science and for the future, R or Python? Kindly let me know and enlighten me.
Thanks in advance ..! little BAM
They are both very useful. Python is a great language used in a lot of different situations and has a lot of good machine learning libraries. In contrast, R is very useful for doing statistics.... So I would recommend learning both if you have time.
Thank you very much for a great video. Is there a way to get the sensitivity and specificity when setting the threshold manually to a certain level (e.g. a weight cut-off for obese of 30g)? Thanks
If you already have a threshold in mind, you can just calculate everything directly by running your data through the model with that threshold.
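A minimal base-R sketch of that direct calculation (the weights, the 0/1 labels, and the 30g cut-off below are all made up for illustration):

```r
# Known labels and a manually chosen weight cut-off (both made up):
weight <- c(22, 25, 28, 31, 34, 37)
obese  <- c( 0,  0,  1,  0,  1,  1)
cutoff <- 30

predicted <- as.numeric(weight > cutoff)   # classify by the cut-off

# Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP):
sensitivity <- sum(predicted == 1 & obese == 1) / sum(obese == 1)
specificity <- sum(predicted == 0 & obese == 0) / sum(obese == 0)
```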
Love your content! Quick q: from a conceptual standpoint, are you just testing the hypothesis that the underlying distribution of the weights (which you defined as a gaussian) is not a uniform distribution
ROC graphs give us a sense of how accurate our models are given different thresholds for making decisions. For more details, see: ruclips.net/video/4jRBRDbJemM/видео.html
Loved the video! How do you relate the threshold back to the data? I.e. make a statement like the threshold between obese and not obese is 140lb
First, you find the threshold you are interested in (these are in roc.df), then we look at weight associated with the largest glm.fit$fitted.values < the threshold. For example, if the threshold is 0.5, then the weight is: max(weight[glm.fit$fitted.values < 0.5])
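Spelled out with toy numbers (the weights and fitted probabilities below are made up; in the video they would be weight and glm.fit$fitted.values):

```r
# Toy weights and the probabilities a logistic model might assign them:
weight <- c(20, 24, 28, 32, 36, 40)
fitted <- c(0.05, 0.15, 0.35, 0.60, 0.85, 0.95)

threshold <- 0.5
# Heaviest mouse still classified "not obese" at this threshold,
# i.e. the weight cut-off implied by the probability cut-off:
cutoff.weight <- max(weight[fitted < threshold])
cutoff.weight
```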
Yeah sure 420 made better looking data 😁😂🤣😄🤗
Yep.... ;)
Thanks for your great lectures! The thresholds that you derive here are between 0 and 1. Can we translate these thresholds to the actual cut-off values?
In these examples, the thresholds are the actual cut-off values. In other words, if the logistic regression predicts that the probability that a mouse is obese is 0.9, then we would compare that to the threshold that we obtained from the ROC graph to make a final classification.
Thanks for the videos Josh! I have a question about AUC. Even though in this video the AUC for the random forest is lower than for logistic regression, isn't the forest a better alternative here, since there exists a threshold that generates a higher true positive rate for the same false positive rate compared to logistic regression? This makes the significance of AUC subjective in comparison.
What you have to do is pick a range of thresholds that are acceptable. Once you do that, you can compare the AUC between those thresholds to determine which method is best.
Hi Josh, your videos are great! I have one question about choosing the best method based on an overlapping ROC graph. If we compare Logistic Regression and Random Forest, we see that Logistic Regression is better because of its bigger AUC. But does it make more sense here to choose Random Forest, because one specific instance of Random Forest (with one specific threshold) gave us the best confusion matrix? I assumed here that accurately classifying the positive and negative class are equally important.
It really depends on your goals. In general, Logistic Regression performs better. However, depending on what threshold works best for you, you may still choose Random Forests if it performs better at that threshold.
BAM !!!!!!! Indeed
:)
Double Bam!!
:)
Love You
:)
I got an error: Error in roc.data.frame(trainData, fitModelTrai$votes[, 1], plot = TRUE, : 'response' argument should be the name of the column, optionally quoted. The only difference between your code and mine is that I have many parameters/columns/features (approx. 35), not only one (weight).
Hey! Wonderful video. I had just one doubt: since the runif() function generates random numbers, I would have expected the values in the obese variable to be different from the ones generated on your machine. However, eerily enough, they came out exactly the same. What sort of sorcery is this? 😮
Did you set the seed of the random number generator? If so, we'll both get the same "random" numbers every time.
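You can see the effect of set.seed() for yourself: resetting the seed before each call makes the "random" numbers come out identically, which is why the demo's obese variable is reproducible across machines.

```r
# With the same seed, runif() produces the same sequence every time:
set.seed(420)
a <- runif(5)

set.seed(420)
b <- runif(5)

identical(a, b)  # TRUE
```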
Is there any way to suppress plotting the top and right axes? I tried bty='n' and axes=FALSE (planning to add them back later using axis(1) and axis(2)), but neither of those worked.
Hi, and thanks for your great videos! Could you please elaborate on the obese variable, and specifically on the "test" part of that line of code? Suppose I already know who is obese and who is not (say, based on some external medical profile, "real"), and I want to evaluate a model that predicts this from some score ("score") that each individual has. Would I just do glm(real ~ score)? And what if I wanted to find the best score, i.e. the score above which I classify someone as "obese" and below which "not obese"? What is the relationship between the probability threshold in the ROC curve and a threshold on the score itself? Thanks!
In order to draw this ROC graph, we have to know who is obese and who is not to begin with. So the situation in this video is no different from yours. If you want to find the "best" score, you have to decide what percentage of false positives and false negatives you are willing to live with - the ROC graph will help you decide that. You can then find the corresponding score by looking at the thresholds and the probabilities predicted by your model for different scores.
So do these thresholds correlate to the probabilities that are used to separate the obese vs. not obese? Is there a way to figure out how to convert the thresholds back to the actual weights themselves that are used as the cutoff?
The thresholds, with the exception of -infinity and +infinity, are exactly the same as the probabilities. -infinity corresponds to a probability of 0 and +infinity corresponds to a probability of 1. Thus, you can compare thresholds to the original glm.fit$fitted.values and match those to the original array of "weight" values.
@@statquest Great thanks for the help!
@@statquest Many thanks for a great video. Could you kindly explain how exactly we can do this? I am looking to convert these threshold to actual cut-off values
@@redgreenskittles First, I would look at the ROC curve to find my threshold. For example, we might pick a False Positive Percentage of 20 to be the threshold. Then I would look in roc.df to find the threshold associated with that false positive percentage. We can do that by just printing roc.df to the screen and looking at it, or with a command like: roc.df[min(which(roc.df$fpp < 20)), ]
@@statquest Wow that was a super quick response. Works like a treat! thank you
Is it expected that the number of sensitivity/specificity values determined by the roc function (that we stored in the data frame) may not match the number of predictor/response values that I input? For example, my input predictor/response vectors contained 46 objects, but the roc function returned only 12 sensitivity/specificity values.
I believe this is possible if there are fewer thresholds that make a difference. In other words, some thresholds would result in the same numbers of false positives, true positives, etc., and in that case those "duplicate" thresholds are omitted.
@@statquest Okay great this is exactly what I thought was happening--just wasn't sure if that was a possible outcome. Thanks so much for your reply and for all the great videos!!!
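The collapse of duplicate thresholds can be sketched directly. As far as I understand pROC's behavior, candidate thresholds sit between consecutive *distinct* predicted values (plus -Inf and +Inf), so tied predictions shrink the threshold list:

```r
# 6 samples but only 3 distinct predicted values:
pred <- c(0.1, 0.1, 0.4, 0.4, 0.4, 0.9)
distinct <- sort(unique(pred))

# Midpoints between distinct values, bracketed by -Inf and +Inf:
thresholds <- c(-Inf, (head(distinct, -1) + tail(distinct, -1)) / 2, Inf)
length(thresholds)  # 4 thresholds, not 7
```

Moving a threshold between two tied predictions cannot change any classification, so only one threshold per gap between distinct values is needed.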
13:47 - sorry, I don't understand why, in `rf.model$votes`, we choose column 1 (the votes for class 0) and not column 2 (the votes for class 1).
Believe it or not, it doesn't matter which column you choose, both will give you the same ROC curve.
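The reason the column choice doesn't matter: the two vote columns are complements (each row sums to 1), so one column ranks the samples in exactly the reverse order of the other, and AUC from one equals 1 minus AUC from the other; pROC detects the reversed direction and reports the same curve either way. A quick base-R check with a rank-based (Wilcoxon) AUC, on made-up votes and labels:

```r
# Rank-based AUC: probability that a random positive outranks a random negative.
auc.rank <- function(labels, scores) {
  r <- rank(scores)
  n1 <- sum(labels == 1)
  n0 <- sum(labels == 0)
  (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

labels <- c(0, 0, 1, 1, 0, 1)
p <- c(0.2, 0.4, 0.7, 0.9, 0.1, 0.6)  # hypothetical "votes" for class 1

# AUC from p and AUC from its complement always sum to 1:
auc.rank(labels, p) + auc.rank(labels, 1 - p)
```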
Hey Josh, is there a way to make inferences on more than two ROC curves and to perform multiple comparisons? (A generalization of DeLong's test, and maybe a method to adjust alpha for multiple comparisons too?)
Good question! Off the top of my head I don't know if there is or not.
Could you do the same in Python too?
Thank you for another great video. I have a question: what if we have more than two classes, not only obese and not obese? For example, we want to classify 10 cell types (say, cell type 1, cell type 2, ..., cell type 10) and determine whether each cell type is present or not in a tissue sample. How can we use this roc() function to plot the ROC curves?
To be honest, I don't know the answer to that off the top of my head.
@@statquest I have made my own function to plot the ROC curve for the situation I mentioned. However, I still need another function to calculate the AUC, and I was hoping I could use the roc() function, which seems to provide much more information, such as the AUC and partial AUC as well. 😰
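One common workaround for the multi-class case (not from the video, just a standard approach) is one-vs-rest: binarize each class as "this type vs everything else" and compute one ROC/AUC per class. The sketch below uses the rank-based AUC formula rather than pROC, so it is self-contained; the same binarized labels and score columns could equally be fed to pROC's roc() one class at a time. The variable names and the example scores are hypothetical:

```r
# One-vs-rest AUC per class. true.type is a vector of class labels;
# score.matrix has one column of scores per class (columns named by class).
one.vs.rest.auc <- function(true.type, score.matrix) {
  sapply(colnames(score.matrix), function(type) {
    labels <- as.numeric(true.type == type)  # binarize: this class vs rest
    r <- rank(score.matrix[, type])
    n1 <- sum(labels == 1)
    n0 <- sum(labels == 0)
    (sum(r[labels == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
  })
}

# Hypothetical 3-class example with perfectly separating scores:
true.type <- c("a", "a", "b", "b", "c", "c")
scores <- cbind(a = c(0.9, 0.8, 0.2, 0.1, 0.3, 0.2),
                b = c(0.1, 0.2, 0.9, 0.8, 0.1, 0.2),
                c = c(0.1, 0.1, 0.2, 0.1, 0.9, 0.8))
one.vs.rest.auc(true.type, scores)  # one AUC per cell type
```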