This video is terrible. First of all, accuracy has a particular definition, so recall and performance are not a way of measuring accuracy, because they have different definitions. Second, the way they present recall is just the precision of the other class, which is not correct.
well i only realized this AFTER my exam
at least the girl is cute♡
3-minute videos, and learning from them is the most satisfying education in the world!
1:27 Why is it so satisfying to hear someone in a pretty professional context use the term "sucks"? It's not profanity, but it's usually avoided, and the fact that it wasn't avoided here is really refreshing for some reason.
I had the exact same thought lol :)
I DONT UNDERSTAND ITS EXACTLY THE SAME WITH THE CLASSES SWAPPED AAAAAAAAAAAHG
yeah the apple banana thing not an ideal example lmao
There couldn't have been a better explanation in a 3-minute video. Appreciate it. And you have a soothing voice too.
Our sir was teaching this in class, and he took the same example when explaining it. He said you don't want to risk a life on a short confidence interval for spotting cancer. It was a probability class, and it is nice to see that concept being applied here. I don't know how they are linked, but they sure are.
By that insight, recall suited better here, as larger confidence intervals will consider those cells that are suspicious enough to be cancerous. So yeah.
Thank you for the explanation! Broke down what my book was trying to tell me with better visuals.
Great! Thanks for the plain English description!
Keep following us for more tutorials, Seong.
Great explanation. Thank you.
Keep following us for more tutorials, Jisan.
Excellent video - needed to know the basics of these concepts in a pinch. Thank you! :)
Precision -> it focuses more on the quality aspect: of all the instances the model predicted as positive, how many were actually positive. Thus, higher precision results in a lower false positive rate.
Recall -> it focuses more on the quantity aspect: of all the actual positive instances (true positives plus false negatives), how many the model managed to identify. Thus, higher recall results in a lower false negative rate.
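A minimal Python sketch of those two formulas, with counts invented purely for illustration:

```python
# Precision and recall from raw confusion counts (invented numbers).
true_positives = 20   # model said positive, and it really was
false_positives = 10  # model said positive, but it was not
false_negatives = 5   # model said negative, but it really was positive

# Precision: of everything flagged positive, how much was correct?
precision = true_positives / (true_positives + false_positives)

# Recall: of all actual positives, how many did the model find?
recall = true_positives / (true_positives + false_negatives)

print(f"precision = {precision:.2f}")  # 20/30 -> 0.67
print(f"recall    = {recall:.2f}")     # 20/25 -> 0.80
```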
Super awesome explanation!
Great example on precision and recall. F1 takes into account both precision and recall. It would've been helpful if an example of F1 was also presented. Overall, great intro!
Great video
Stay tuned with us for more tutorials!
Accuracy is the fraction correctly classified: (true positives + true negatives) divided by all attempts, (true positives + true negatives + false positives + false negatives).
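In code, that works out to something like this (counts invented for illustration):

```python
# Accuracy from the four confusion-matrix cells (invented counts).
tp, tn, fp, fn = 50, 35, 10, 5

accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"accuracy = {accuracy:.2f}")  # 85 correct out of 100 -> 0.85
```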
Perfect that's exactly what I needed to know 😌
Keep following us for more content!
so is this the same thing as Type 1 & Type 2 error in statistics?
Shouldn't precision be described as avoiding a lot of mistakes in predicting bananas OR apples (not "and")?
Hello, precision is a measure used in statistics and machine learning to describe the accuracy of a model or classifier. It is defined as the ratio of true positive predictions to the total number of positive predictions made by the model.
In the context of predicting bananas or apples, precision would represent the proportion of correct positive predictions made by the model, out of all the positive predictions it made. Precision focuses on the accuracy of positive predictions and is calculated as:
Precision = True Positives / (True Positives + False Positives)
Therefore, precision does not specifically capture the avoidance of mistakes in predicting only bananas or apples. It quantifies the correctness of positive predictions in general, without explicitly distinguishing between bananas and apples.
To provide a more specific description related to avoiding mistakes in predicting bananas or apples, other metrics such as recall or accuracy might be more suitable. Recall, for example, measures the proportion of correctly predicted positive instances out of all the actual positive instances in the dataset. Accuracy, on the other hand, represents the overall correctness of predictions, taking into account both positive and negative instances.
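To make the contrast concrete, here is a small sketch computing all three metrics on a hypothetical banana/apple example, treating "banana" as the positive class (all labels invented for illustration):

```python
# Precision vs. recall vs. accuracy on a tiny invented dataset,
# with "banana" treated as the positive class.
y_true = ["banana"] * 4 + ["apple"] * 4
y_pred = ["banana", "banana", "banana", "banana",  # every banana found
          "banana", "banana", "apple", "apple"]    # but two apples misflagged

tp = sum(t == p == "banana" for t, p in zip(y_true, y_pred))
fp = sum(t == "apple" and p == "banana" for t, p in zip(y_true, y_pred))
fn = sum(t == "banana" and p == "apple" for t, p in zip(y_true, y_pred))
tn = sum(t == p == "apple" for t, p in zip(y_true, y_pred))

print("precision:", tp / (tp + fp))           # 4/6 ~ 0.67
print("recall:   ", tp / (tp + fn))           # 4/4 = 1.00
print("accuracy: ", (tp + tn) / len(y_true))  # 6/8 = 0.75
```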
thank you
couldn't be any simpler than this
Why is there distraction music in background?
To avoid confusing the viewer, I think the term "accuracy" should be swapped with "performance".
definitely
I disagree. Performance can describe accuracy, recall, precision, etc., depending on what you're looking to achieve. Accuracy = n correctly classified / n all classified.
I am a bit confused: why, in the case of cancer, is recall more important than precision? Shouldn't it be the other way around?
I'm afraid yes
I thought the same, but after further research I now know she is correct. You do not want to miss a cancer case, so you should have a low-precision/high-recall model to make sure that, if in doubt, the model classifies the case as cancer. Here is a video that explains this very well: www.coursera.org/lecture/ml-classification/trading-off-precision-and-recall-IMHs2
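One way to see the trade-off concretely is to vary the decision threshold: flag more cases, and recall rises while precision falls. A toy sketch, with scores and labels invented for illustration:

```python
# Toy precision/recall trade-off via the decision threshold.
# Scores and ground-truth labels are invented.
scores = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]  # model's "cancer" scores
labels = [1,    1,    0,    1,    0,    0]     # 1 = actually cancer

def precision_recall(threshold):
    preds = [int(s >= threshold) for s in scores]
    tp = sum(p and l for p, l in zip(preds, labels))
    fp = sum(p and not l for p, l in zip(preds, labels))
    fn = sum(not p and l for p, l in zip(preds, labels))
    return tp / (tp + fp), tp / (tp + fn)

for t in (0.7, 0.5, 0.2):
    p, r = precision_recall(t)
    print(f"threshold={t}: precision={p:.2f}, recall={r:.2f}")
# Lowering the threshold catches every cancer case (recall -> 1.00)
# at the cost of more false alarms (precision drops to 0.60).
```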
Excellent
Would be helpful if you showed equations as well!
Informative, thanks a lot.
This is a brilliant explanation.
very good
Glad you liked it, Candido.
Very helpful
thanks !
Stay tuned with us for more tutorials and talks!
Thank you.
As per the video, precision and recall are the same.
I am also a little confused about whether the definition is correct. What about the case of three classes?
1:02 After several listens, I realized you're saying "bananas IN apples" instead of "bananas AND apples". If you speak these essential words more clearly and lower that music, it'll be easier to understand.
Also, it's not clear which side of the equal sign is your prediction and which side is the truth. You should call that out when presenting the diagram.
One example was just the other one flipped around, which doesn't really help in understanding the difference between recall and precision. And I got only the vaguest idea of what F1 is supposed to be, or where the name even comes from. You should have included a real mathematical definition of what precision, recall, and F1 are supposed to be.
Really confusing explanation
The music is too loud. Very distracting.
Good video, but a bit more on F1 would have been much more helpful...
I still don't get it.
precision (accuracy of positive predictions)
- When the test says someone is sick (positive prediction), how often is it correct?
If the test says 30 people are sick, but only 20 of them really are, your precision would be 66.7%.
recall
- Out of all the people who are actually sick, how many did the test correctly identify?
If there are 40 people who are really sick, and the test only correctly identified 30 of them, your recall would be 75%.
F1 score
- harmonic mean of precision and recall
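Those numbers check out. A quick sketch reproducing them and combining the two rates into F1 (treating the two toy scenarios above as one test's precision and recall):

```python
# Reproduce the toy numbers above and compute F1 as the harmonic
# mean of precision and recall.
precision = 20 / 30  # 20 of the 30 "sick" predictions were correct
recall = 30 / 40     # 30 of the 40 truly sick people were found

f1 = 2 * precision * recall / (precision + recall)
print(f"precision = {precision:.1%}")  # 66.7%
print(f"recall    = {recall:.1%}")     # 75.0%
print(f"F1        = {f1:.1%}")         # 70.6%
```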
very confusing
i love u
#Accuracy and #Precision / #Precision and #Accuracy (in Hindi) Best RUclips Video Link - ruclips.net/video/pR-nba40DQo/видео.html
Worst explanation I have seen so far.
really bad explanation
remove the music
Cute
Terrible precision recall example...
Wooow... you are so beautiful
you should start your ASMR business