This video helped me to pass the azure data scientist associate exam.
Thanks for the video.
super clear explanation! I seldom leave comments, but this video totally amazed me!
That's great to hear, thank you!
Wow, thank you so much for these video. I am a software engineer by trade but increasingly big tech companies have ML system design as one of their interview rounds. Your content was amazingly helpful in preparing for those interviews!
Great to know Diwas! Good luck with your interviews!
Thank you so much, this video helped me understand the metrics in the clearest way possible.
Glad it helped!
*INSANELY* helpful. Thank you *so* much!
You're very welcome!
Thanks a lot I needed this clarification for my presentation
Well explained! thank you soooooo much
You are very welcome. :) - Mısra
love your videos
Thanks for this informative lecture.
Thank you for the informative video
You're very welcome!
Well done 👍
thanks a lot for your explanations.
Seems like the MSE is the same as the Brier score?!
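For binary outcomes they do coincide: the Brier score is exactly the mean squared error between predicted probabilities and the 0/1 labels. A minimal sketch with scikit-learn (the toy numbers here are made up for illustration):

```python
import numpy as np
from sklearn.metrics import brier_score_loss, mean_squared_error

y_true = np.array([0, 1, 1, 0])          # actual 0/1 labels
p_pred = np.array([0.1, 0.8, 0.6, 0.4])  # predicted P(class = 1)

brier = brier_score_loss(y_true, p_pred)
mse = mean_squared_error(y_true, p_pred)
print(brier, mse)  # both 0.0925 for this data
```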
super! thanks!
How do you evaluate clustering algorithms like K-Means and Fuzzy C-Means?
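Clustering has no ground-truth labels, so you'd typically use internal metrics such as the silhouette score or the Davies-Bouldin index. A minimal sketch with scikit-learn on synthetic data (scikit-learn doesn't ship Fuzzy C-Means, so K-Means stands in here):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score, davies_bouldin_score

# Synthetic data with 4 well-separated clusters
X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
labels = KMeans(n_clusters=4, n_init=10, random_state=42).fit_predict(X)

sil = silhouette_score(X, labels)      # in [-1, 1], higher is better
dbi = davies_bouldin_score(X, labels)  # >= 0, lower is better
print(f"silhouette={sil:.3f}, davies-bouldin={dbi:.3f}")
```

The same metrics work for any clustering that produces hard labels; for fuzzy methods you'd first take the argmax of the membership matrix.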
Please ma'am, can you share the code you used to plot the true positive rate vs. false positive rate graph and the PR curve? It looks so beautiful and I can't reproduce it exactly, please help. Thanks in advance
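For anyone looking for this: a minimal sketch with scikit-learn and matplotlib (not the exact code from the video, and the dataset and model here are placeholders). The TPR-vs-FPR plot is the ROC curve; the PR curve plots precision against recall:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, precision_recall_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Both curves need probability scores, not hard predictions
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

fpr, tpr, _ = roc_curve(y_te, scores)                 # ROC: TPR vs FPR
prec, rec, _ = precision_recall_curve(y_te, scores)   # PR: precision vs recall

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.plot(fpr, tpr); ax1.set_xlabel("FPR"); ax1.set_ylabel("TPR"); ax1.set_title("ROC curve")
ax2.plot(rec, prec); ax2.set_xlabel("Recall"); ax2.set_ylabel("Precision"); ax2.set_title("PR curve")
plt.tight_layout()
plt.show()
```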
Interesting how most people jump to the RECALL section. Why? Is it a harder topic?
Thank you for this video, thank you so much. I have questions based on this model evaluation: is there a way to use the confusion matrix to find the exact datapoints in our dataset that our model got wrong? Also, when we deploy our model to a web app using Streamlit, can we use a confusion matrix to figure out which exact datapoints our model predicted wrongly, by applying it to the final predictive system's output in the web app?
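The confusion matrix itself only gives aggregate counts per cell, so it can't point at individual rows. To find the exact misclassified datapoints, compare predictions with true labels elementwise; the same comparison works inside a Streamlit app if you keep the labels around. A minimal sketch (dataset and model are placeholders, not from the video):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)

y_pred = model.predict(X_te)
# Indices (into the test set) where the model was wrong
wrong = np.flatnonzero(y_pred != y_te)
print("misclassified test rows:", wrong)
```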
How about the differences here for a non-linear regression? :)
Awesome content, but something unrelated question, What are your camera settings? I especially like your camera setup, could you give info on that? What lens, what aperture, and anything else is needed to replicate the same light/room setup. Thanks 🙂
thank you dear👏
You're welcome! - Mısra
Good video thanks
You're welcome
How do you calculate the coefficient of determination (R²)?
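R² is one minus the ratio of the residual sum of squares to the total sum of squares. A minimal sketch computing it both by hand and with scikit-learn (the numbers are made up for illustration):

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.3, 8.6])

# R^2 = 1 - SS_res / SS_tot
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2_manual = 1 - ss_res / ss_tot

print(r2_manual, r2_score(y_true, y_pred))  # both 0.985 here
```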
great!
"شكرا جزيلا لك"
This means "thank you so much" in Arabic 🕴
What do you think about repeated random data splitting, e.g. splitting the data 80 percent for train and 20 percent for test on a random basis that preserves the class structure, vs. k-fold cross-validation? Edit: yep, I now know this is worse
Why??? I am considering it now.
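The usual argument: with repeated random splits the test sets can overlap and some samples may never be tested, whereas stratified k-fold guarantees every sample lands in exactly one test fold. Both are available in scikit-learn; a minimal sketch (model and dataset are placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Repeated stratified random 80/20 splits: test sets may overlap,
# and some samples may never appear in any test set.
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
scores_sss = cross_val_score(model, X, y, cv=sss)

# Stratified 5-fold CV: every sample is tested exactly once.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores_skf = cross_val_score(model, X, y, cv=skf)

print("shuffle-split mean:", scores_sss.mean())
print("k-fold mean:       ", scores_skf.mean())
```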
Can you please upload code snippets for these metrics?
Thank you in advance
Depending on the language you're using, you'd need different resources, but for Python and the Scikit-learn library, here is some really good and comprehensive documentation, explaining the implementation of each metric: scikit-learn.org/stable/modules/model_evaluation.html
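To go with that documentation link, here's a minimal sketch of the one-liner versions of the common classification metrics in scikit-learn (the toy labels below are made up for illustration):

```python
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))   # 0.75
print("precision:", precision_score(y_true, y_pred))  # 0.75
print("recall   :", recall_score(y_true, y_pred))     # 0.75
print("f1       :", f1_score(y_true, y_pred))         # 0.75
print(confusion_matrix(y_true, y_pred))  # rows: true class, cols: predicted
```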
Great!
❤️
Good explanation + beautiful face (:
Idk, I somehow can't seem to follow this pace
How do you pronounce "Accuracy"? It's so triggering lmao
That's not American nor British English, right?
Well explained, but it might be good if you added something on threshold setting for binary classification
Thank you!
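On that threshold point: classifiers that output probabilities use 0.5 as the default cutoff, but you can move it to trade precision against recall. A minimal sketch (dataset and model are placeholders, and 0.2 is an arbitrary example threshold):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Imbalanced toy problem: ~80% negatives, ~20% positives
X, y = make_classification(n_samples=1000, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# Default rule: predict positive when P(class=1) >= 0.5
pred_default = (proba >= 0.5).astype(int)
# Lower threshold -> more predicted positives -> recall up, precision typically down
pred_low = (proba >= 0.2).astype(int)

print("recall @0.5:", recall_score(y_te, pred_default))
print("recall @0.2:", recall_score(y_te, pred_low))
```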