Hello Prof, I want to thank you for putting together training videos like this one. I have learned more than I have in the last 2 months of my data science MSc programme. You explained every line of code, every symbol, and the reason behind every style of coding; that is what is called knowledge impartation. Thank you very much.
Wow thanks for the kind words! 😆
Thanks so much, this is what a linear regression actually is and how we apply it to our dataset.
Please also make videos about how to apply Logistic Regression, KNN, Random Forest, SVM, Naïve Bayes, and Decision Trees to our dataset using Python.
Very interesting and clear.
Nice video, but how do we interpret the results? IOW, what would be the deliverable to our stakeholders? What are the actual predictions?
Every video you have posted provides value to the audience. Outstanding job. I hope your channel grows exponentially, as it deserves.
Thank you Edwin for the encouraging words 😃
@@DataProfessor Might seem silly but building a model on YT growth would be interesting :)
You are the best professor at explaining things, thanks for your content!
Wow, thanks!
Perfectly well put together videos. Just a little request about the linear regression model performance part: can you elaborate a little bit on what those numbers really mean? Is this model good or bad?
Hi, thanks for the feedback. Pearson's correlation coefficient (R) between the predicted and actual values falls in the range of 0 to 1, where the higher the number, the better the results. In a nutshell: high (good) and low (bad).
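As a toy sketch of that idea (the numbers below are made up just to illustrate):
import numpy as np
# Pearson's R between actual and predicted values (closer to 1 is better)
y_actual = np.array([3.1, 2.0, 4.5, 3.8])
y_predicted = np.array([2.9, 2.2, 4.4, 3.5])
r = np.corrcoef(y_actual, y_predicted)[0, 1]
print(r)  # a high value means the predictions track the actual values well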
@@DataProfessor Thanks, huge fan of your work.
Just found your channel. Thank you from a fellow 🇹🇭
This was really helpful, the most wonderful of all the videos...
Thank you so much, sir.
Thanks for the kind words! 😊
@@DataProfessor welcome sir😊
This was great thank you so much! Really useful and looking forward to using it in my research.
Glad it was helpful!
Glad to have more of your videos to watch than usual 😍
Thanks Marco, glad to hear that😃
@wise guy That's a great question. I might make a future video dedicated to this topic. In the meantime, there are several other linear models that can be computed with the scikit-learn package.
scikit-learn.org/stable/modules/linear_model.html
The coefficients and intercept can be summarized as follows:
Y = m1*x1 + m2*x2 + .... + mn*xn + b
where Y is the dependent variable
x1, x2, ..., xn are the independent variables
m1, m2, ..., mn are the regression coefficients
b is the Y-intercept
Some more about the b value: it is where the regression line crosses the Y axis (the predicted Y when all independent variables are 0). Also, the coefficients tell us the relative importance of the independent variables.
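To make that concrete, here is a minimal sketch (using the built-in diabetes dataset) of where those m and b values live in scikit-learn:
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, Y = load_diabetes(return_X_y=True)   # 10 independent variables, 1 dependent variable
model = LinearRegression()
model.fit(X, Y)
print(model.coef_)       # m1, m2, ..., mn (one coefficient per independent variable)
print(model.intercept_)  # b, the Y-intercept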
Your way of describing things is really helpful to me. Thanks a lot for your videos.
Thank you for the great video! The explanation is very clear. By the way, what software do you use to make the videos?
Thanks, I use Premiere for video editing.
Beautiful presentation. Thank you sir.
Thanks for the kind words!
Great video, looking forward to more videos like these. Also, can you tell me what the R^2 score tells us about the model?
Thanks for watching. R^2 is also known as the goodness of fit, and it indicates the relative performance of a regression model. It is computed from the actual and predicted values, whereby a value approaching 1 suggests good performance.
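A tiny sketch of that computation with scikit-learn, using toy numbers just to illustrate:
from sklearn.metrics import r2_score
y_actual = [24.0, 21.6, 34.7, 33.4]
y_predicted = [25.1, 22.0, 33.2, 34.0]
print(r2_score(y_actual, y_predicted))  # approaches 1 when predictions match the actual values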
Hi @Data Professor, can I ask about minutes 9:50-10:00 where you explain the modulo operator? I'm confused about the 0.523810833536016: where does that number come from? I keep replaying your video but still don't get where that float comes from. At the moment I'm doing some assignments/projects and using your YT tutorial as guidance to grasp this linear regression. Thank you.
This is exactly what I was looking for, thank you so much this was such a big help!!!
Glad it was helpful!
Wonderful presentation! Though I'm struggling a bit with the loss function and the training/iteration principle. How does this work exactly?
For your first example using the diabetes dataset, I would like to train/iterate over the data 1000 times and plot the loss function over a 2-dimensional grid at iterations 100, 300, 700 and 1000. How exactly would you do this? Thank you!
Can we call it a multiple regression model?
As we're predicting a value considering multiple parameters
I am usually too lazy to comment, but this video is delicious. Keep up the good work, just subscribed.
Welcome to the channel, it is certainly nice to hear that, thanks for the kind words 😃
I really do feel that most people just post videos for posting's sake. Most datasets in real life will have characters as values that need to be converted using an encoder, because ML does not use objects for prediction, but floating-point numbers. Please can someone help with a video on how I can build a model from a dataset with character values?
Thank you Professor, well explained.
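To illustrate what I mean, here is a rough sketch using pandas one-hot encoding on a made-up dataset (the column names are hypothetical):
import pandas as pd
# hypothetical dataset with a character (categorical) column
df = pd.DataFrame({'city': ['Bangkok', 'Paris', 'Bangkok'], 'size': [120, 80, 95], 'price': [3.2, 4.1, 2.9]})
X = pd.get_dummies(df.drop('price', axis=1), columns=['city'])  # character values become 0/1 columns
Y = df['price']
print(X)  # now every column is numeric and usable by scikit-learn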
Bless your heart data professor
Greetings from Brazil, professor.
I'm a beginner in data analysis and I'd like to know if it makes any difference whether we turn the dataset into a data frame, and if yes, why?
Thanks
Hi, by datasets perhaps you are referring to the files on your computer, such as CSV files, which need to be read into Python using pandas and converted into a data frame. Such data frames can then be used by machine learning packages such as scikit-learn for model building.
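As a minimal sketch of that workflow (the file name and column name below are placeholders):
import pandas as pd
df = pd.read_csv('your_data.csv')      # read the CSV file into a pandas data frame
X = df.drop('target_column', axis=1)   # independent variables
Y = df['target_column']                # dependent variable
# X and Y can now be passed to scikit-learn, e.g. train_test_split or LinearRegression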
Thank you for making this video. It is very helpful. 👍
It’s a pleasure, glad it is helpful 😆
Could you please explain what root mean square error (RMSE) tells us about the model? Somebody explained to me that the larger the gap between R^2 and RMSE, the better the model is at predicting the effect of the independent variables on the output. But the question is, how much of a gap is good enough? Or is there a better interpretation of RMSE?
Hi Data Professor, can you suggest a machine learning book that I should buy as a beginner?
Hi Salik, I have a couple of recommended books that I normally include in the video description; here they are (includes affiliate links). The Hands-On book is definitely a must-read, it is really all you need to get started and beyond, though the Python Data Science Handbook (a free version is available online from the author, let me find the link and post it below) is also a great read.
Recommended Books:
🌟kit.co/dataprofessor
✅ Hands-On Machine Learning with Scikit-Learn : amzn.to/3hTKuTt
✅ Data Science from Scratch : amzn.to/3fO0JiZ
✅ Python Data Science Handbook : amzn.to/37Tvf8n
✅ R for Data Science : amzn.to/2YCPcgW
✅ Artificial Intelligence: The Insights You Need from Harvard Business Review: amzn.to/33jTdcv
✅ AI Superpowers: China, Silicon Valley, and the New World Order: amzn.to/3nghGrd
The free online version for Python Data Science Handbook is available at jakevdp.github.io/PythonDataScienceHandbook/
@@DataProfessor Thanks for your reply.
I have the first edition of Hands-On Machine Learning.
Is there any difference between the 1st and 2nd editions?
Hello. I am confused about what data is being held in X_train and Y_train. I have only done linear regression with 2 variables before and I am confused about why a 353x10 matrix is being held in X_train and why a 353x1(?) matrix is being held in Y_train. Is Y_train a placeholder for 353 regression line y values that get produced after the 10 variable coefficients are calculated and made into a function? Or is the algorithm solving an overdetermined system of 353 equations with 10 unknowns using linear algebra: (y1=b0 + b1x1...) . . . (yn=b0 + bnxn...)?
X_train and Y_train hold 80% of the input data (refer to the data split section).
For the Boston housing example, X_train is a 404x13 matrix because it holds 80% of the rows (404) and all 13 features (everything except the Y, the 'medv' column, which was dropped).
Y_train is the corresponding 404x1 matrix holding the Y values (the 'medv' column). The same logic gives the 353x10 and 353x1 shapes for the diabetes example (80% of 442 rows, 10 features). These are used to train the model so that it can later predict Y_pred.
@Data Professor
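A minimal sketch of where those shapes come from for the diabetes example (442 rows, 10 features, 80/20 split), using scikit-learn's built-in loader:
from sklearn.datasets import load_diabetes
from sklearn.model_selection import train_test_split

X, Y = load_diabetes(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)
print(X_train.shape, Y_train.shape)  # (353, 10) (353,)
print(X_test.shape, Y_test.shape)    # (89, 10) (89,)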
Thank you. Excellent explanation! :)
Thanks for the kind words!
awesome tutorial!! thank you!
If one trains on 100% of the data (skipping the split/train/test), does sklearn's lin/logreg implementation basically become the same 'classic' implementation as statsmodels or glm (in R)?
I really learned a lot from this video!! Most amazing Data Professor ever!! I was just wondering, in my case I only need to compare 5 machine learning algorithms on a worldwide dataset like CICIDS2017 or KDD; could you please post a video about it? That would be amazing if possible, thank you so much.
Thank you! this video was such a big help
Very well explained, thank you!
Hello DataProfessor. I am a beginner in ML and have learned some basic concepts of linear/logistic regression, SVM, ANN, recommendation systems, and anomaly detection from the ML course by Andrew Ng on Coursera. I am looking for some good walkthrough videos like these for picking up libraries like sklearn, tensorflow, etc. Do you have a set of videos that could help me?
P.S. The walkthrough was amazing, thanks for the content.
Thanks Ankit for the comment and kind words. I've created a playlist called "Python Data Science Projects", available at ruclips.net/video/XmSlFPDjKdc/видео.html, where I give walkthrough tutorials on using the scikit-learn package to solve various problems in machine learning. I've also started to create beginner-friendly videos in the "Python Programming 101" playlist, available at ruclips.net/video/6UcWs33Xti0/видео.html. Thanks for the suggestion, I'm also looking to expand into additional ML packages in Python.
@@DataProfessor: thank you. I really appreciate your help. will go through the playlists.😊😊
@@bhankit1410 It's a pleasure 😃
Thank you 👨🏫 prof
Thank you for the video. I have a question: what is the difference between print(diabetes.DESCR) and just diabetes.DESCR? Thank you.
Very good video!
Thanks 😊
Wonderful!!! Thank you very much for sharing this video :D
A pleasure
thanks professor
Is using R^2 a bad evaluation metric for linear regression? If the R^2 value is really bad (like above one, or like 10%), does that mean the model is not useful, or is it still useful?
Typically, the rule of thumb that I and other researchers use is that anything above 0.6 for the training set and above 0.5 for the test set is considered really good in terms of performance. Anything lower may mean that the model has not captured the X-Y relationship; sometimes exploring feature engineering may help. Hope this helps.
Thank you so much sir🙏🙏
Great tutorials
Hi! Why don't you use StandardScaler for the features? Is it not necessary?
So excellent. Thank you so much
Thanks Frederic for the comment and kind words!
T.T I still couldn't figure out why we need linear regression.
I think I need to read more!!
I'm bad at maths ~~~
Fantastic video, thank you.
Glad you enjoyed it!
What is the purpose of the train_test_split function?
It allows us to split the data into a train subset and a test subset. The train subset is used for model building, and the resulting model is then applied to the test subset to make predictions. The purpose of data splitting is to allow us to assess whether the constructed model will perform well on new, unseen data. I've also written a Medium article with illustrations at towardsdatascience.com/how-to-build-a-machine-learning-model-439ab8fb3fb1
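A minimal sketch of that workflow with the diabetes data from the video: build the model on the train subset, then check how it does on the unseen test subset:
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

X, Y = load_diabetes(return_X_y=True)
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)

model = LinearRegression()
model.fit(X_train, Y_train)       # model building on the train subset
Y_pred = model.predict(X_test)    # predictions on the unseen test subset
print(r2_score(Y_test, Y_pred))   # how well the model generalizes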
@@DataProfessor wow bro! Thank you so much!
A pleasure, glad it was helpful 😁
Got a quick question, are you a professor? I ask because I'm a prof of data science and would wanna chat.
Technically, I'm an Associate Professor of Bioinformatics, I can be reached at hellodataprofessor@gmail.com
Is it OK in linear regression (single variable) if the dependent and independent variables are not normally distributed? If not, what would be the optimum solution for negative skew and negative kurtosis?
Can you tell me whether the coefficients are giving us the weight values?
What are the weight values here?
Hi Professor, I followed the exact same steps as you, but my coef and intercept are different, do you know why? By the way, great presentation.
Thanks for watching. The difference in values is due to the random seed. If the seed number is set to the same value, then the same results should be obtained.
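A minimal sketch of fixing the seed so the split, and therefore the fitted coefficients, come out the same on every run (the value 42 is arbitrary):
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, Y = load_diabetes(return_X_y=True)
# same random_state -> same train/test split -> same coefficients and intercept every run
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=42)
model = LinearRegression().fit(X_train, Y_train)
print(model.coef_, model.intercept_)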
11:50 What is the graph showing? Just dots? What do these dots mean?
Please zoom in a bit professor. Thank you for the video..
Noted, for future videos I have zoomed in on the screen. Thank you for the suggestion. 😊
@@DataProfessor Thank you...
Hi prof, may I ask about the last stage, the scatter plot for the Boston house model: does the x axis represent the y_test values and the y axis the y_pred values? How do I evaluate the model from the scatter plot? Could you explain more about the plt representation? Thank you sir!
Hi Mayglie, for sure, I have written a Medium article in Towards Data Science where one of the sections explains the Python code for making the scatter plot line by line. I also drew an infographic (towards the end of the article) explaining this at a high level; you can check out the article at towardsdatascience.com/how-to-build-a-regression-model-in-python-9a10685c7f09
Hope this helps 😃
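For reference, a minimal sketch of that kind of plot; the numbers here are just toy values standing in for the actual (y_test) and predicted (y_pred) house prices:
import matplotlib.pyplot as plt
y_test = [24.0, 21.6, 34.7, 33.4, 36.2]   # actual values on the x axis
y_pred = [25.1, 22.0, 33.2, 34.0, 35.0]   # predicted values on the y axis
plt.scatter(y_test, y_pred)
plt.xlabel('Actual (y_test)')
plt.ylabel('Predicted (y_pred)')
plt.show()   # points hugging the diagonal indicate a good model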
Thank you professor... I will take a look!
There are 10 variables in the X data and 1 in Y (which is obvious).
How can these 10 variables be combined into a linear function for Y??
Is it really linear regression?
Thank you, sir, for making this so easy :)
#HappyLearning
Thanks for watching and glad it was helpful 😃
Thank you ^^
You're welcome 😊
Awesome, professor!
Thank you!
great video
Thanks 😊
He looks like a mature version of Zoma.
thank you
Where can you get biological activity data from?
Which are the good databases for IC50?
How to search, is there any rule of thumb?
Kindly help, anyone, please.
8:29 is the coefficient the same as the weight?
Hi, yes, the regression coefficients can be said to tell us the relative weight or magnitude by which each variable contributes to the calculation of Y.
@@DataProfessor Sir, are these weights the same as the ones used for improving the accuracy of the Naive Bayes algorithm?
If you find value in this video, please give it a Like 👍and Subscribe ❤️if you would like to see more Data Science videos.
damn this guy looks a lot like joma
Thank you!
7:25 MAE and others
Thank you so much
Thanks for watching!
perfect
#scikitlearn
nice!
Thanks Lucas! If you find value in the video, could you give it a Like? Thanks!
Good only for imitation purposes, and nothing useful for applying to our own projects.
Thanks for the feedback, this video is meant for beginners. There's a playlist showing its application to a bioinformatics project here ruclips.net/p/PLtqF5YXg7GLlQJUv9XJ3RWdd5VYGwBHrP
FRAJOLA
Great video. "from statistics import LinearRegression" did not work for me;
I had to use "from sklearn.linear_model import LinearRegression" to make it work.
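For anyone hitting the same error, a minimal working sketch with the correct import (using the diabetes dataset as a stand-in):
from sklearn.linear_model import LinearRegression   # not "from statistics import LinearRegression"
from sklearn.datasets import load_diabetes

X, Y = load_diabetes(return_X_y=True)
model = LinearRegression()
model.fit(X, Y)
print(model.score(X, Y))   # R^2 on the data the model was fitted on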