I am glad you created this video 4 years ago.
Glad it was helpful!
The concepts I couldn't find anywhere else on YouTube, I end up finding here. This is exactly what I was looking for: the explanation of "ols.summary()". I believed I wouldn't find any, but here I am. I really, really appreciate this, buddy. Thank you so much.
Glad it was helpful!
This is just too good Bhavesh bhai. Please keep this work going. It's so helpful that I can't put it into words. Thanks a lot.
Brilliant! Thank you Bhavesh.
Thank you Bhavesh!! Just what I was looking for and very well explained.
Glad you liked it
Just what I was looking for!! Well explained 🔥🔥🔥🔥
Thanks, Bhavesh Sir, for making the concept of feature selection clear.
Thanks Bhavesh for such a beautiful video
Thanks for the quick and concise explanation.
Amazing video. Really helpful. Simple & clear. Thanks a lot!
You're welcome!
Bhavesh you are great. Very nice interpretation.
It's really amazing thanks for such informative and highly understandable videos.
Thank you very much for this tutorial, it's really helpful!
The clearest explanation! Thank you!
Glad you think so!
Thanks so much for the breakdown of the results table, it was very helpful.
You're very welcome Dave!
This was quite helpful. Thank you so much.
Glad it was helpful!
Super, thanks! Such a simple and accurate explanation. Being from a programming background, the stats summary interpretation was bugging me a lot.
Glad you liked it!
Great Explanation dude!!
Amazing Tutorial
Glad you liked it
awesome explanation
Glad you liked it
Thank you very much, it was really elucidative!
Glad it was helpful!
Thanks great explanation - you're the best!
Glad you think so!
This was absolutely useful. Thank you
You are welcome
You making these videos as early as 6:47 AM shows your hard work and passion for this!
Thank you so much for your Work!
Question: "The condition number is large, 2.9e+04. This might indicate that there are strong multicollinearity or other numerical problems." What do you infer from this?
nice explanation brother!
Appreciate it!
Nice and concise thanks very much
Glad it helped
Hi Bhavesh. Really a big fan of your videos. You have made all topics so easy to understand. Can you please make a video on evaluating a logistic regression model with statistics like the KS test, PSI, and concordance?
Thanks for your kind words Sumit! I'll make a video on how you can evaluate logistic regression soon!
@@bhattbhavesh91 Hi! Is the video published? If yes, can you post the link here?
Great video!
Glad you enjoyed it
Do we have to remove outliers before fitting the OLS model?
Thanks! Extremely helpful. What about log-likelihood, AIC, and BIC?
Will upload soon
Please explain Omnibus, Durbin-Watson, Jarque-Bera, etc.
I will cover this in the next set of videos!
Sir, it's really helpful. But I have a question: if the R-squared value is far from 1 but P>|t| is less than 0.05, then what do we do?
Can we also use VIF to evaluate the features?
Are those the results of causality or only a significant relation between variables?
Hi Bhavesh, thank you for this nice interpretation video. I have two questions though:
#1 You didn't go through Df Residuals, Df Model, Log-Likelihood, etc. It would be good if you could cover those in this video, or at least state the significance of those parameters here in the comment section.
#2 Is there any way to get the SSE (sum of squared errors) and SSR (sum of squares due to regression) values of this fitted model?
I wanted to keep the video simple, so I skipped some parts! I'll cover the remaining topics in future videos! Thanks
Thanks for the video.
What about the column "Std err" ?
Hi, I have a question about my specific results. Would I be able to send you my statsmodels summary results so you can help me interpret them for my case?
Thank you for the explanation, though I have an issue understanding why we should omit feature 3. It is the only feature with a high p-value, so we fail to reject the null hypothesis coef = 0, meaning there is no evidence of a linear relationship between it and the target.
Great video. How do we incorporate measurement uncertainty into the regression?
thank you so much!
Hello sir, one thing I would like to ask: what optimization algorithm does statsmodels.api's OLS use? For example, sklearn's LinearRegression uses a closed-form solution. Thank you in advance 😊
statsmodels OLS also uses a closed-form solution: by default it solves the least-squares problem via the Moore-Penrose pseudoinverse (method='pinv'), with a QR decomposition available as an alternative. It does not use gradient descent.
If I have a categorical variable, how do I identify whether it is significant or not?
Thank you for your clear explanation :>)
Glad it was helpful!
If I have categorical and continuous variables combined in my X and a continuous Y (my target is a continuous variable), what do we do then? Can I check p-values like this for categorical variables as well?
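One way to handle a mix of categorical and continuous predictors is the formula API, which dummy-encodes categoricals via C() and gives each level its own row (and p-value) in the summary. A sketch on synthetic data (the column names, levels, and effect sizes are made up for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
df = pd.DataFrame({
    "group": rng.choice(["a", "b", "c"], size=300),  # categorical feature
    "x": rng.normal(size=300),                       # continuous feature
})
df["y"] = (1.0 + 0.5 * df["x"]
           + df["group"].map({"a": 0.0, "b": 2.0, "c": -1.0})
           + rng.normal(size=300))

# C() expands the categorical into dummy columns, each with its own p-value
results = smf.ols("y ~ x + C(group)", data=df).fit()
print(results.pvalues)

# a single p-value for the whole categorical: joint F-test on its dummy terms
joint = results.f_test("C(group)[T.b] = 0, C(group)[T.c] = 0")
print(joint.pvalue)
```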
Hello sir,
I'm working on a project with a dataset of fewer than 100 values. I have created a regression model, but I am ending up with a huge MSE value. Sir, can you suggest any ideas or techniques to improve my model's performance?
What does the OLS equation look like?
Hi Bhavesh, Can you please tell me about the beta coefficient? Which one is the beta coefficient in the summary results?
What is the method to save a statsmodels model, sir?
I didn't fully understand your question!
Could you please explain how the Durbin-Watson output is interpreted?
I have created a video on Durbin Watson test! Do have a look - ruclips.net/video/FiBBpscb6es/видео.html
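As a rough guide while the video covers the details: the Durbin-Watson statistic sits near 2 for uncorrelated residuals and falls toward 0 under positive autocorrelation (toward 4 under negative). A sketch on simulated residuals (the AR coefficient 0.9 and the seed are arbitrary):

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(5)

# independent residuals: statistic lands near 2
dw_ok = durbin_watson(rng.normal(size=1000))

# positively autocorrelated residuals: statistic drops toward 0
e = rng.normal(size=1000)
auto = np.empty_like(e)
auto[0] = e[0]
for t in range(1, len(e)):
    auto[t] = 0.9 * auto[t - 1] + e[t]
dw_auto = durbin_watson(auto)

print(dw_ok, dw_auto)
```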
how to improve the condition number??
Great explanation :). But you skipped some parts. I also want to know what AIC and BIC at the upper right are, and what the [0.025 0.975] columns after the t-test columns mean. What is that omnibus? Are the skew & kurtosis those of the response distribution? And all the remaining statistics at the bottom.
I wanted to keep the video simple, so I skipped some parts! I'll cover the remaining topics in future videos! Thanks
@@bhattbhavesh91 Thank you for the reply, and please cover that as soon as possible.
@@GauravSharma-ui4yd - I will try! I'm loaded with work at this point in time!
@@bhattbhavesh91 Can you share the link if you have posted the video
thank you
Thanks for the great video! Could you tell me what the numbers [0.025 0.975] represent?
I'll create a new video around this soon!
Thank you so much... that was a really informative video!
I just couldn't understand one thing: why are we adding a constant column and appending it to the actual X columns?
Good one buddy :)
I don't think you should get into the habit of filtering warnings; this may hide important information. The correct way is to hide only the specific types of warnings you want to.