Message from the creator:
I hope you've all enjoyed this series of videos. It was fun to collaborate with freeCodeCamp!
If you're interested in more content from me feel free to check out calmcode. Also, I'd like to give a shoutout to my employer, Rasa! We're using scikit-learn (and a whole bunch of other tools) to build open-source chatbot technology for python. If that sounds interesting, definitely check out rasa.com/docs/rasa/.
I'm busy for the next 2h.
Me too
Way to go!
+=1
This is by far the most beginner friendly introduction to sk-learn I've seen
This is the way everything should be taught!
I love that you present concepts in a structured and systematic way, speaking slowly and clearly, using as few words as possible...
- starting with the concept and talking through drawing a logical diagram (which is so important for developing abstract thinking in terms of high level concepts, which is how we think when we are experienced in something).
- then writing clean, concise code to implement each part of the concept
- showing plots that directly demonstrate the effects of the entire iteration
Too many tutorials make the mistake of talking too much. A lot of videos also either assume too much or too little about the viewer's knowledge.
This seems to confidently strike the nail on the head!
Thanks!
Amazing review!
Exactly 👍
Are you serious???
The instructor didn't even show the dataset. How would anyone understand what's going on like this?
I must agree with others: this is a great lecture. I mean... REALLY good. Vincent, do you have any more of these? This stuff is not only informative, but also pleasant to watch and listen to. Good, correct, and clear English is rather rare these days. Sadly. This lecture is good because it does not shy away from details. It also goes beyond just showing the API. It tries to build something new from the available "Lego" pieces. Which is great as it shows creativity and also how to dig deeper to understand the data. Very, very good exposition. Many thanks.
I feel you about clear and well enunciated English. I HATE having to 'interpret' what I'm hearing....too much extraneous Cognitive Load for an already high Intrinsic Load topic.
OMG! I love all the content that Vincent makes! I must watch this video!
Send me a link to his channel
It is a delicate subject, but I think the question of whether the algorithm is racist is an ill-advised one. The real question underneath it is whether the "% of Black population" parameter affects the house price or not. Is the aim of a data scientist to make the actual prediction, or to make the data fit a point of view (which, by the way, I totally endorse in principle)?
I am trying to learn from this course, but it says that the Boston dataset has been removed from scikit-learn. What should I do?
You can still downgrade your scikit-learn version to 1.0.2 and it should be fine; alternatively, if you don't want to downgrade, you can use fetch_california_housing instead.
This video saved me from a 5K course! Thanks! Loads of Love!
16:00 pipe
23:45 grid search
37:00 standard scaler
42:00 quantiles better
46:55 …
55:00 fraud ex
comeback dude. don't give up.
Just Amazing once again, u guys rock as always...
Thank you very much, much needed for beginners like me❤️,
I hope one day when I'll become expert, I will make free courses for others too❤️
Awesome Tutorial,
I have some suggestions regarding your content:
1. Tutorial on RUST
2. Tutorial on JULIA
3. Tutorial on AWK & SED
(Especially AWK)
4. Tutorial on LUA
What do you guys think????
Could you please explain why the min of recall and precision is lower than both? Could not find appendix.
+1, anyone knows where to find the appendix?
hint: min_both is calculated separately at every train/test split in the cross-validation
+1, same, could not find appendix
Wow - I need to share this with the rest of the class! Thanks for making this video so understandable.
great video series, thanks! In this video @56:56 I think you meant to say that "there are way more cases without fraud than with fraud"
exactly why i came to the comments
The way each dataset complements the associated pitfall you want to bring up at a given moment... wow. What an amazing intro -- it must have taken a lot of forethought and behind the scenes organization to make the flow of this video series seem so effortless. THANK YOU!!
please bro, can you tell me where to find the appendix for the plot answer?
Hello, I just wanted to say, for those who plan to do the videos: the 'Boston house prices' dataset has been removed from scikit-learn, so this tutorial doesn't really work anymore unless you change the dataset.
I did not succeed in reproducing the figure @ 1:16:56. I always get the same figure as the one just before, even though I did the log transformation of the "Amount" column. Has anyone had the same problem?
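A guess, since I can't see your notebook: two common causes of "the plot didn't change" are forgetting to assign the transformed column back to the frame, and using np.log on a column that contains zeros (the fraud data's "Amount" column does, and np.log(0) is -inf). A minimal sketch with a made-up stand-in frame:

```python
import numpy as np
import pandas as pd

# Made-up stand-in for the credit-card data; the real "Amount" column has zeros.
df = pd.DataFrame({"Amount": [0.0, 1.0, 9.0, 99.0]})

# np.log1p handles the zeros, and the result must be assigned back to the frame,
# otherwise any later plot still sees the original values.
df["Amount"] = np.log1p(df["Amount"])

print(df["Amount"].tolist())
```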
please guys, where is this appendix for the plot answer?
Bro, did you get any?
1:11:00 what’s the answer though?
Did anybody figure out why the mean of the min(recall, precision) was below the actual mean of both recall & precision? 1:10:57
The mean is always taken over all 10 splits, for precision, for recall, AND for the minimum separately. In other words, FIRST the minimum is calculated per split, THEN the mean over all these minimums is calculated. If you had only one split, there would be no problem. But starting with two splits, we have: test_precision 1.0 and 0.46 = mean 0.73; test_recall 0.37 and 1.0 = mean 0.685. However, the minimums are 0.37 and 0.46, and if you take the mean of these two, it's 0.415, which is below 0.73 and below 0.685. So it's reasonable that the minimum line is always a bit lower than each of the two others. In fact, I never found the "appendix" Vincent was talking about. I just took the grid results as a dataframe, exported it to Excel, and played around a bit.
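The arithmetic in that explanation can be checked in a few lines (the two splits are the hypothetical ones from the comment above):

```python
import numpy as np

# Scores for two hypothetical CV splits, as in the explanation above.
precision = np.array([1.0, 0.46])
recall = np.array([0.37, 1.0])

# The minimum is taken per split FIRST, then averaged across splits.
min_both = np.minimum(precision, recall)  # [0.37, 0.46]

print(precision.mean())  # 0.73
print(recall.mean())     # 0.685
print(min_both.mean())   # 0.415, below both of the other means
```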
@@meisterpianist Thanks for the explanation!
35:56 as a non-American, it is so satisfying hearing z read as 'zed' not 'zi'. lol
Does Vincent have his own channel? I just love his teaching style!!
google calmcode
you're welcome
Just completed the first part of the lecture. I have been using scikit for a couple of months! Dudeee! This is an eye opener!
The section on Metrics gets confusing for me. Any easy to understand books I can read for understanding metrics?
The metrics section was overwhelming for me as well. There has to be some prerequisite groundwork before going for this.
Hi, what do you guys suggest I watch if I'm totally new to ML?
I find this course a little beyond my knowledge. I thought that because I've got the foundations of DS I could jump into this course, but I think I'll need some intro-to-ML videos first.
StatQuest
@@Caradaoutradimensao Awesome looks good!
Thanks a lot!
@@Caradaoutradimensao thanks bro
i feel i learned so much, great job sir. Thank you :)
Do you guys like..read minds or something?
I was working on a django project yesterday, and you released one. I was stuck on ML today, and here's the video. Wicked!
This is an excellent tutorial. I'm doing the Coursera IBM machine learning cert and supplementing it with this video. Overall this is a much more palatable and easier-to-understand tutorial of scikit-learn, and really of a machine learning model in general. Awesome work!
Sorry, I have a question:
Which versions of Python and OpenCV are compatible?
I have followed a lot of tutorials but have been unable to find matching compatible versions of Python and OpenCV.
Please help me find a solution for my own project. Thank you so much.
sorry... but I totally lost it from metrics onwards... it was too heavy to understand... I did not understand even the purpose of the lecture, let alone the code...
Great video! At 1:49:40 you could use ".values" at the end instead of np.array at the beginning.
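If it helps anyone, the two spellings give the same array (with the caveat that .to_numpy() is the form the pandas docs now recommend over .values):

```python
import numpy as np
import pandas as pd

# Small made-up frame just to show the equivalence.
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

arr_wrapped = np.array(df)  # wrapping the frame in np.array up front
arr_values = df.values      # taking .values at the end

print(np.array_equal(arr_wrapped, arr_values))  # True
```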
Can I ask you how you are able to draw on the screen? I understand you are probably using a Stylus pen over some touch screen surface, which mirrors your display, but what software are you using for that?
I was rewatching the course to improve my basics; there were actually a lot of details, man!!!
Could you please do "Python for Raspberry Pi 4"? I cannot find a proper guide which introduces and explains everything from the very beginning. I would like to experiment with robotics (e.g. a robot arm, etc.), but have no idea how to start programming it. All available guides use irrelevant projects to start with the Raspberry.
Note: Thank you for the tutorial!
I could help with a little info if you are still interested.
what a great course! thank you for opening the gates..
"Building dependencies failed"
error: subprocess-exited-with-error
Cannot import boston housing price dataset.
I have one question about the running time of the GridSearchCV pipeline: how can I minimize it? My model's mean fit time was at least 9 minutes. My processor is an AMD Ryzen 5 5500U with Radeon Graphics, 2.10 GHz and 6 cores. Thank you in advance!
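Not knowing your exact grid, the usual first lever is n_jobs=-1, which fans the fits out over all CPU cores; after that, shrink the grid or lower cv. A sketch on a small built-in dataset (the grid values here are arbitrary examples):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    cv=3,
    n_jobs=-1,  # run the 3 candidates x 3 folds = 9 fits in parallel
)
grid.fit(X, y)
print(grid.best_params_)
```

On a 6-core machine this alone often cuts wall time several-fold. If the grid itself is the bottleneck, scikit-learn also ships a HalvingGridSearchCV variant worth looking into.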
Thanks for this great material about scikit-learn; it is really helpful, and understanding is easier with the educator's beautiful explanations. Huge thanks and keep going...
excellent explanation for a beginner in ML .Thanks for the course.
How did the entirety of setting up and getting Jupyter Notebooks to function...just get skipped? Everything beyond that is useless because JN is the worst software in history.
Awesome! Thank you for sharing!
Boston House Price Dataset is available on Kaggle for those who are saying scikit learn has removed it.
Kudos! Excellent training.
Is GridSearchCV(..., cv=3) doing a nested cross-validation?
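As I understand it, no: cv=3 there is just the single (inner) loop used to pick hyperparameters. It only becomes nested cross-validation when you wrap the grid search in an outer loop, e.g.:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Inner 3-fold CV: only chooses n_neighbors.
inner = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3, 5]}, cv=3)

# Outer 5-fold CV around the whole search: this wrapping is the "nesting",
# and gives an estimate of performance that the tuning hasn't already seen.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```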
What if I want to use PyCharm instead of Jupyter Notebook? Would I still be able to follow this course, or am I better off looking for another one?
(There's this other course on this channel, but this one has better audio quality and overall seems more pleasant to follow, so I'm not sure ruclips.net/video/pqNCD_5r0IU/видео.html)
Note: I understand that Python is used in both places, but I don't know how much of an effect using a different platform would have on the learning experience
Does someone have the credit card fraud .csv similar to the teacher's? The sheet I got from Kaggle can't be converted directly to a dataframe (yes, I tried some pretreatment on the file, but on the last row, if I sum everything up, it returns 0).
Impossible to proceed with this course because of the ethical problem with this dataset
Data leakage? In the introductory section (like at 28:41) we have a grid search that contains a pipeline with the numeric-features transformer. I suspect this is the road to data leakage, because in our pipeline we first transform all the numeric features on the entire dataset and straight after that we start model training through the cross-validation process on the entirely transformed dataset. Our training sets, created during CV, would contain previously standardized data, so the model "knows" something about the examples that are not in the training set and can predict better when processing them in the prediction step. Thus we should exclude any numeric-feature transformation from our grid search. Am I right? If I'm not, please explain the mechanism.
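If I read the video's setup right, the answer should be no leakage: when the scaler sits inside the Pipeline and the Pipeline is the estimator being cross-validated, scikit-learn refits the scaler on each training fold only. Leakage happens in the other ordering, where you transform the full dataset before the CV. A sketch contrasting the two (synthetic data and naming are my own):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)

# Leaky ordering: the scaler sees every row, test folds included.
X_leaky = StandardScaler().fit_transform(X)
leaky_scores = cross_val_score(LogisticRegression(), X_leaky, y, cv=5)

# Safe ordering: the pipeline refits the scaler inside each training fold.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression())])
safe_scores = cross_val_score(pipe, X, y, cv=5)

print(leaky_scores.mean(), safe_scores.mean())
```

With plain standardization the numeric difference is usually tiny; the mechanism matters much more with transformers that use the target or heavy-tailed statistics.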
Is it still worth watching this video? How much has changed in 2 years? Thank you
i hate python so much, just errors after errors
I actually agree with you. I am having a hard time switching from R using Caret. Good Luck
Time series needed these polynomial parameters, I think. Cool tutorial though!
If i get a high paying job, i will donate at least 5000 rupees to freecodecamp
Amazing presentation !!
25:50 using space instead of tab .... stops watching :) (joke) great video
This video is awesome! Your narration style is fantastic.
Is it just me, or does everyone say that every language and library is extremely popular and is the main thing when it comes to building the best things in the world?
For the Titanic example: 76% of the women survived, whereas just 16% of the men survived; that would have been a really good classifier to start with.
vincent chansard
for better learning you could also provide links to the data used in this course, sir, if you can
B for blacks is wild.
00:19 I did not understand why, after changing the k value from 5 to 1, the prediction diagram changed. KNN is a classification algorithm, but here it was like a regression.
Does this video contain something about ML algorithms?
I am not getting this chart at this point in time ruclips.net/video/0B5eIE_1vpU/видео.html
I get something more like the original, but the dotted line is between a class weight of 10 and 12.5.
very useful... I ran the code in IDLE but it didn't work well; there are some things that need revising, like a library being imported after it is used.
This far into the video, I don't see the data split into train and test samples. Does that mean the model is tested on seen data? If so, how reliable are these metrics?
Someone shed some light, please.
What do you mean watch all these videos? Are there different videos series?
I was wondering why I got the huge red warning when running load_boston; it's ridiculous how real that 30:40 moment is.
In the metrics part, when you plot mean recall and mean precision, how is it that I got the same results for the train and test sets?
Very good teacher. Thanks for the content I learned a lot.
thanks, my namesake Vincent; you inspire me to do machine learning
what are the prerequisites for scikit-learn??
6:07
Where can we find the dataset ?
if you find it tell me
The GitHub code link is 404; please fix it.
where is that make_plots function from, at 1:31:00
Very interesting, Thank you very much
thank you so much! I am slowly digesting this stuff and most likely will have to review it 2 or more times.
Wow such an awesome course, cant believe this is free
Wow thank u this really clarified my doubts :)
Where are the datasets for the sklearn metric tutorial (credit card dataset, etc)? Thank you!
Is there a way for KNN to skip the closest nearest neighbor?
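There's no flag for that as far as I know, but you can get the effect with the lower-level kneighbors call: ask for one extra neighbor and drop the first column. (When querying the training points themselves, the closest neighbor is the point itself, at distance 0.)

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0], [1.0], [2.0], [10.0]])

# Ask for one extra neighbor, then discard the closest one (column 0).
nn = NearestNeighbors(n_neighbors=3).fit(X)
dist, idx = nn.kneighbors(X)
dist, idx = dist[:, 1:], idx[:, 1:]

print(idx[0].tolist())  # neighbors of X[0] once itself is skipped
```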
I loved the end chapter that joined machine learning with expert systems I've used 30 years ago...
great series of demo videos. well explained for a beginner to learn from zero.
@43:00 where you perform the QuantileTransformer step and plot it... shouldn't the scatter-plot fn take the X (non-transformed) and X_new (transformed) data as params? I'm a little confused why we passed X_new[:, 0] and X_new[:, 1]. It seems like we plotted 2 different features (indexed by 0 and 1) after the transformation step?
No, it is actually NumPy indexing syntax:
X[rows, cols] => choose the given rows and the given columns.
So X_new[:, 0] chooses all rows of column 0, and X_new[:, 1] chooses all rows of column 1. The plot shows feature 0 against feature 1 after the transformation.
Hope this helps
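A quick check of that slicing (plain NumPy, nothing video-specific):

```python
import numpy as np

X_new = np.array([[1, 10],
                  [2, 20],
                  [3, 30]])

col0 = X_new[:, 0]  # all rows, first column
col1 = X_new[:, 1]  # all rows, second column

print(col0.tolist())  # [1, 2, 3]
print(col1.tolist())  # [10, 20, 30]
```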
Thank you for uploading this video!
Well explained and high quality video and audio. Unlike some other videos out there.
Hello,
I run into an attribute error when I try to access .cv_results_ on my model:
'GridSearchCV' object has no attribute 'cv_results_'
df = pd.DataFrame(mod1.cv_results_)  # is the line of code, where mod1 is my model.
Does anyone know if there is a bug? I am using version 1.1.1 of scikit-learn
I'm having the very same error here as well; I have installed the specific version scikit-learn==0.23.0.
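Not a bug as far as I can tell: attributes ending in an underscore (like cv_results_) are only created by .fit(). My guess, without seeing the rest of the notebook, is that the frame is being built before the search has been fitted:

```python
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
mod1 = GridSearchCV(KNeighborsClassifier(), {"n_neighbors": [1, 3]}, cv=3)

print(hasattr(mod1, "cv_results_"))  # False: not created yet

mod1.fit(X, y)                       # fitting is what creates cv_results_
df = pd.DataFrame(mod1.cv_results_)
print(len(df))                       # one row per parameter candidate
```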
very nice tutorial watched the whole thing
How did you watch a 2 hr video in 27 minutes?
Thanks!
this is one of the best videos I have seen covering sklearn so well. Thanks a lot! Would love to learn sklearn in more depth for different scenarios ..
Hi Vignesh, could you suggest a book which covers the metrics section?
Great video. Helped me with multiple sections that I had been fumbling my way through. Didn't mind going over some things I already knew as well.
Thanks for this..👍
so well explained thank you
The Boston housing prices dataset has an ethical problem: as
investigated in [1], the authors of this dataset engineered a
non-invertible variable "B" assuming that racial self-segregation had a
positive impact on house prices [2]. Furthermore the goal of the
research that led to the creation of this dataset was to study the
impact of air quality but it did not give adequate demonstration of the
validity of this assumption.
The scikit-learn maintainers therefore strongly discourage the use of
this dataset unless the purpose of the code is to study and educate
about ethical issues in data science and machine learning.
the explanations are well detailed; this really helps with understanding the library and knowing exactly what to use and where to use it. You have helped a great community of beginners. 🙏🏾🙏🏾🙏🏾🙏🏾🙏🏾