Tutorial 29 - R square and Adjusted R square Clearly Explained | Machine Learning

  • Published: 18 Nov 2024

Comments • 144

  • @Emotekofficial
    @Emotekofficial 5 years ago +68

    The sum of squared residuals, also called the sum of squared errors, is SSE, and the sum of squares due to regression is SSR. Just make sure about this, since new students can get confused.
    Y = individual data points, Yreg = predicted regression points, Ymean = average of the individual data points
    SSE = Σ(Y - Yreg)²
    SSR = Σ(Yreg - Ymean)²
    so,
    SST = SSE + SSR
    = Σ(Y - Ymean)²

    • @porselvans6172
      @porselvans6172 3 years ago

      Thank you, now understood well

    • @porselvans6172
      @porselvans6172 3 years ago

      @Ahmed Kellen didn't they ask for money?

    • @ShashwatAgarwal007
      @ShashwatAgarwal007 3 years ago

      Hey, can you help me with the 'N' here: is it the total number of features or the total number of data points?

    • @GamerBoy-ii4jc
      @GamerBoy-ii4jc 3 years ago +1

      @@ShashwatAgarwal007 Big N is the total size of the population and small n is the number of samples which we take from the population
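The SST = SSE + SSR decomposition in the top comment can be checked numerically. A minimal sketch (Python with numpy assumed; the data is made up, and the fit is ordinary least squares with an intercept, for which the identity holds exactly):

```python
import numpy as np

# Toy data and an ordinary least-squares line fit.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.0, 2.0, 1.5, 3.5, 3.0])
slope, intercept = np.polyfit(x, y, 1)
y_reg = slope * x + intercept        # Yreg: predicted regression points
y_mean = y.mean()                    # Ymean: average of the data points

sse = np.sum((y - y_reg) ** 2)       # sum of squared errors (residuals)
ssr = np.sum((y_reg - y_mean) ** 2)  # sum of squares due to regression
sst = np.sum((y - y_mean) ** 2)      # total sum of squares

print(sse + ssr, sst)                # the two agree up to rounding
print(1 - sse / sst)                 # R^2 = SSR/SST = 1 - SSE/SST
```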

  • @blindprogrammer
    @blindprogrammer 2 years ago +3

    Initially:
    N = 100
    R^2 = 0.85
    p = 5 (initially)
    adjusted R_Squared = 1 - ((1-0.85)(100-1)/(100-5-1)) = 0.8420
    1. Suppose a new non-correlated variable is added (R^2 creeps up only slightly):
    N = 100
    R^2 = 0.851 (suppose new R^2)
    p = 6 (new)
    adjusted R_Squared = 1 - ((1-0.851)(100-1)/(100-6-1)) = 0.8414
    2. Suppose a new correlated variable is added:
    N = 100
    R^2 = 0.92 (suppose new R^2)
    p = 6 (new)
    adjusted R_Squared = 1 - ((1-0.92)(100-1)/(100-6-1)) = 0.9148
    As we can notice, a non-correlated predictor barely raises R^2, so the penalty wins and the adjusted R_squared decreases, while it increases on adding a correlated predictor. Hope it helps!

    • @shivampal9282
      @shivampal9282 2 months ago

      But it decreased from the initial adjusted R^2, so how do we find out that the new feature is correlated?
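The adjusted-R² arithmetic in this thread is easy to reproduce with a small helper; a sketch (Python assumed; the numbers are purely illustrative):

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With R^2 held fixed, raising p (more predictors) can only lower the score:
print(adjusted_r2(0.85, 100, 5))  # ~0.8420
print(adjusted_r2(0.85, 100, 6))  # ~0.8403: same R^2, one more predictor
```

A new predictor therefore helps the adjusted score only if it raises R² by more than this penalty costs.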

  • @aryanudainiya9486
    @aryanudainiya9486 2 years ago

    Best teacher of ML on YouTube

  • @MrPrashanth55
    @MrPrashanth55 5 years ago +10

    SSR means Sum of the Squares of the Residuals
    SST - Sum of the Squares of the Total....

  • @NeerjaChawla
    @NeerjaChawla 11 months ago +1

    Very informative and useful content, lucid explanation

  • @tanvipunjani7096
    @tanvipunjani7096 3 years ago +2

    I am glad I came across this tutorial. Very well explained !

  • @sakshirikhe2869
    @sakshirikhe2869 4 years ago +2

    It's very excellent and detailed explanation for a beginner!!!

  • @kavururajesh1760
    @kavururajesh1760 4 years ago +1

    Explained in a detailed manner, keep it up

  • @bobbypathak123
    @bobbypathak123 3 years ago

    Wow, thanks so much Krish. This was the best explanation I found.

  • @kinnaryraval
    @kinnaryraval 3 years ago +4

    Hi Krish, nicely explained, but I have a query. R-square will always increase, whether calculated against a significant or an insignificant feature. So there is no such thing as R-sq being less for non-correlated features and more for correlated ones; it will increase blindly. So how can you say that R-adj will decrease when the added attributes are non-correlated, since R-sq will still increase, making R-adj = 1 - smaller_number? I hope my question is a bit clear. Thanks and respect, sir!! (v).

  • @nilupulperera
    @nilupulperera 4 years ago +1

    Very interesting Krish. As always you stimulate us to think and learn.

  • @akshaymote3430
    @akshaymote3430 9 months ago +1

    I didn't get one thing: even in adjusted R2, whether there's correlation or not is not taken into consideration. So, by just considering the number of variables, how does the correlation issue get addressed?

  • @praneethcj6544
    @praneethcj6544 4 years ago

    Very intuitive explanation..!!! You have been such an inspirational instructor ..!!!!

  • @sushilpoudel8091
    @sushilpoudel8091 3 months ago

    very helpful video, thank you sir

  • @anishchhabra6085
    @anishchhabra6085 2 years ago +1

    Can you please explain how the SSres will decrease as we try to add a new independent variable?

  • @durgakorde3589
    @durgakorde3589 2 years ago

    Thanks a lot Krish 🙂its really helpful

  • @anuradhadevi1414
    @anuradhadevi1414 2 years ago

    Explained very well, sir, thank you sir

  • @independent7212
    @independent7212 3 years ago

    Thank you so much sir for your great support by making such videos.

  • @hanman5195
    @hanman5195 5 years ago +7

    I have never found this kind of explanation anywhere.
    I will not follow any heroes except Sadhguru and you.

  • @voramb123
    @voramb123 4 years ago

    Very interesting and excellent, but I'd request examples to evaluate such situations

  • @kalyanreddy6260
    @kalyanreddy6260 2 years ago

    R-square means SSR/SST only, right? Why the "1 -" before that? Just asking because some Excel videos show only SSR/SST.

  • @mohammad.anas7777
    @mohammad.anas7777 2 years ago

    Naik sir,
    Is p the total number of independent features, or only those independent features which we have added later?
    Also, can we say that N is the total number of columns in the dataset?
    If so, should we also count those columns which have irrelevant data, like ticket serial number or passenger name in the Titanic dataset?

  • @burhanuddinraja7209
    @burhanuddinraja7209 3 years ago

    Sir, but if p increases, won't N also increase, since both involve the independent variables? Then the denominator would always go to zero.

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      N is the number of samples, not the number of predictors. For a dataframe of shape (m, n), the number of samples is m and the number of predictors is n.

  • @hakkamadan9941
    @hakkamadan9941 3 years ago

    beautiful explanation sirji

  • @kumarvaibhav5325
    @kumarvaibhav5325 3 years ago

    Sir, it would be great if you could complement this with an example

  • @srinagabtechabs
    @srinagabtechabs 3 years ago

    Excellent explanation, thank you very much

  • @akshaykrishnan7985
    @akshaykrishnan7985 5 years ago +5

    Good morning sir. Please do upload a video explaining what exactly the p-value is. I'm getting confused by it. I hope at least your explanation will give more clarity.

  • @balaramg89
    @balaramg89 2 years ago

    N - total sample size, indicates no of rows in the model?

  • @ankursingh5969
    @ankursingh5969 2 years ago

    Krish, R-square will increase in both cases, whether the variable is correlated with the dependent variable or not; hence it results in a decrease in Adj R-square in both cases. However, the magnitude will be different.

  • @abhi9029
    @abhi9029 3 years ago

    Hi Krish, please keep the same rhythm of speech at the end of each sentence of your explanation. What happens here is that at the end of a sentence your voice gets very low, and this creates confusion while listening.

  • @ayushmaheshwari5805
    @ayushmaheshwari5805 4 years ago +2

    Please tell us why SSres decreases as we increase the features.
    Please explain?

  • @shubhamprasad6910
    @shubhamprasad6910 3 years ago +3

    Which variable in the adjusted R^2 equation is related to correlation? It is not R^2, and all the other variables have nothing to do with correlation. Is it the ratio (N-1)/(N-p-1)?

    • @akshaymote3430
      @akshaymote3430 9 months ago

      Even I have the same question. There should be something more in the formula of adjusted R2 that takes correlation into account.

  • @varunkukade7971
    @varunkukade7971 4 years ago +3

    You said, using the 1st formula, that even if an independent feature is not related, the R^2 value increases; that was the drawback. But at 14:18 of the video you say that if the feature is not related then we would get a smaller R^2 value from the 1st formula. I got confused here. Please resolve my confusion, I will be glad. Please 🙌🙌🙏

    • @ayushmishra-sw4po
      @ayushmishra-sw4po 4 years ago

      No. Even if the feature is not correlated to the output variable, the value of R-square will increase; that's why we use the adjusted R-square. If the feature is not correlated, the adjusted value will decrease.
      Maybe he said that by mistake.

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      He meant that for the same features: if they are correlated with the target variable you will get a higher R2 value, and a smaller value if they are uncorrelated.

  • @anubhavgupta8146
    @anubhavgupta8146 4 years ago +1

    Brother, you're amazing; how can anyone teach so simply? 👍

  • @Priyadarshan123
    @Priyadarshan123 4 years ago +3

    Hello sir, I am making a project on income and health expenses, and my R-squared value comes out to less than 1%. What should I interpret from this? Should I change my linear model or try another? What should I do?

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      you should add another feature which is correlated to the target variable. Low R-squared means that your independent feature and target variable are not correlated. You can confirm this by computing the correlation between them

  • @pranjalgupta9427
    @pranjalgupta9427 4 years ago

    Awesome video and explanation

  • @abhinavjain5561
    @abhinavjain5561 3 years ago

    In adjusted R2 there is R2.
    But whether the feature is correlated or not, the R2 value will increase, so how are we able to say something about adjusted R2?

  • @richasharma5949
    @richasharma5949 4 years ago

    Good explanation, but it would be better to add an example. That way it will become more clear :)

    • @deepknowledge2505
      @deepknowledge2505 4 years ago

      Please see if this could help you
      ruclips.net/video/3SoK930HWL0/видео.html

  • @biswajitnayaak
    @biswajitnayaak 2 years ago

    I am not 100% sure this is correct: you say (Actual - Predicted) needs to be squared because of negative values, but I suspect it's for the outliers

  • @sagarpandya7865
    @sagarpandya7865 3 years ago

    Great explanation Thank you

  • @woblogs2941
    @woblogs2941 4 years ago

    Thank you sir, you made the things very easy

  • @rajeshdhyani3114
    @rajeshdhyani3114 3 years ago

    Well Explained

  • @anubhasinha2557
    @anubhasinha2557 4 years ago +2

    Nicely explained... Can you help me with the difference between the sum of residuals and the cost function? Looks like both have the same formula.

    • @ayushmishra-sw4po
      @ayushmishra-sw4po 4 years ago +2

      Actually both are the same: the sum of residuals is the sum of squared differences between the predicted and actual data points, and the cost function is the same.

    • @anubhasinha2557
      @anubhasinha2557 4 years ago

      @@ayushmishra-sw4po Thanks Ayush!!!

  • @harishgoud6772
    @harishgoud6772 5 years ago

    Sir SSR means sum of squares of residuals.

  • @sahilbhatia2671
    @sahilbhatia2671 3 years ago

    very well explained

  • @ruchiyadav1334
    @ruchiyadav1334 3 years ago +5

    I didn't understand anything

  • @firta_banjara
    @firta_banjara 4 years ago

    Hi Krish,
    if we add features with high error then SSres increases, but if we add features with low error then SSres decreases

  • @reddy764
    @reddy764 5 years ago +1

    Can you suggest a good book for Machine Learning?

  • @hemantdas9546
    @hemantdas9546 4 years ago +1

    What does it mean that R square will always increase when a feature is added? Does this mean that when features are increased, predictions are better? Is it so?

    • @kulpreetsingh9064
      @kulpreetsingh9064 4 years ago

      No bro, that will depend on whether the features being added are correlated or not. If the features being added are not correlated with the target variable then the adjusted R square will decrease; however, if they are correlated then naturally the adjusted R square will also increase.

    • @ayushmishra-sw4po
      @ayushmishra-sw4po 4 years ago +1

      Adding more features will automatically increase R square, as increasing features decreases the value of SSres, even if the feature is not related to the output variable. A model with many features can perform better in-sample than when tested out of sample, so in such cases adjusted R square works.

  • @adylmanulat2465
    @adylmanulat2465 2 years ago

    Good day sir, I just wanted to ask: if an independent variable is not significant and has no explanatory power in the model, but removing it lowers the adjusted R-square, what does this imply? So far the reason I know of is that the t-statistic is greater than one. With this information, what can we infer?

  • @alishaparveen5226
    @alishaparveen5226 2 years ago

    Could you please explain with an example from scratch with multiple outputs in regression? I want to predict 2 outputs (distance travelled and velocity) from the dataset.

  • @SandeepKumar-ie1ni
    @SandeepKumar-ie1ni 4 years ago

    Sir, you said that in order to avoid negative values in the residuals we square the terms in SSres and SStot, but sir, if we apply the modulus to both values instead of squaring, what will the change in the R value be? On squaring, the R value gets larger and reaches towards 1 more easily, which depicts that our model has fitted well. Please answer, sir.

  • @hemachand5617
    @hemachand5617 4 years ago

    Let's say I have 10 features and some R-square value is calculated. Later it is found that 4 of the features are uncorrelated with the target. Now the 1-R2 value is not going to change, and so neither does the adjusted R2 value. Can you correct me if I'm analyzing it wrong? I'm assuming it follows the simple linear regression model, not lasso.

  • @Kmrabhinav569
    @Kmrabhinav569 2 years ago

    Well done

  • @rachanagovekar1683
    @rachanagovekar1683 3 years ago +2

    What are these 33 dislikes for? Is your language different? :-D Awesome explanation Krish, hats off

    • @adityasagarr
      @adityasagarr 3 years ago

      Maybe they were in search of Hindi content

  • @utkarshsalaria3952
    @utkarshsalaria3952 3 years ago +1

    Sir, at the end of the video you said that R^2 never decreases when independent features are added, even if the feature is not correlated. Then how can you say that adjusted R^2 will decrease when R^2 is less (at 14:16)? That can never be true given the fact that R^2 is always increasing, so how can it be less? It has actually confused me; please help if anyone knows.

    • @rohandogra5421
      @rohandogra5421 3 years ago +1

      Yup I also have the same problem

    • @tiverekarrahul
      @tiverekarrahul 2 years ago +1

      1) If the added features are correlated with the target, R2 grows much faster compared to the denominator term containing the number of features (p). Hence Adj. R2 also increases.
      2) If the added features are not correlated or less correlated with the target, then R2 grows slower compared to the denominator term containing the number of features (p). Hence Adj. R2 will increase a little, but will not have any significant rise. (NOTE: Adj. R2 does not decrease.) That is what is called penalized: it is not allowed to grow at the same rate as in the correlated-features case.

  • @mawais2560
    @mawais2560 4 years ago +1

    What are possible interpretations and justifications for low R-square values in management science?

  • @mahalerahulm
    @mahalerahulm 4 years ago

    Wonderful Explanation !!

  • @kewalagrawal6539
    @kewalagrawal6539 4 years ago +2

    This is the problem with our education system: everything is just formula-based. You started off with the formula without even giving any intuition about what R2 and adjusted R2 actually mean. What does a 50% R2 tell you? Formula and maths always come last. You should first make your students visualize what these terms mean without using any maths at all; once they are good with that, then you bring in the formula.

  • @tannurohela6192
    @tannurohela6192 2 years ago

    Hey, I didn't get the term "penalizing". In the video, just before explaining adjusted R square, it was said that "it is not penalizing the newly added features". Can someone please elaborate?

  • @ayantikabhowmik1261
    @ayantikabhowmik1261 4 years ago

    Great explanation Sir!

  • @adipurnomo5683
    @adipurnomo5683 3 years ago

    Fantastic course! I hope you are doing well, sir.

  • @shaz-z506
    @shaz-z506 5 years ago +1

    Thank you Krish, that's a good explanation.

  • @kishanpandey4798
    @kishanpandey4798 5 years ago

    If I have 10 features and I need to know which feature is affecting the output y and which is not, do I need to find the correlation between y and each feature separately? If yes, then how? If not, then what should I do? Krish, please reply. Thanks

    • @deepakgehani
      @deepakgehani 5 years ago +1

      You can do EDA: do a pairplot, check correlation and put it on a heatmap, and later you can apply a machine learning algorithm

    • @kishanpandey4798
      @kishanpandey4798 5 years ago

      @@deepakgehani thanks a lot. I will apply this and revert back to you in case I face any other issue. Thanks again

    • @praneethcj6544
      @praneethcj6544 4 years ago

      You need to perform a chi-square test if both input & output variables are categorical, ANOVA for categorical-continuous pairs, and finally Pearson correlation when both are continuous!

    • @praneethcj6544
      @praneethcj6544 4 years ago

      You can loop over all the variables and check the correlation.

    • @mranaljadhav8259
      @mranaljadhav8259 4 years ago

      You have many ways to find out: firstly, you can find the correlation between them using a heatmap or the corr method; secondly, you can find the VIF value of the features; lastly, you can check the standard error using the OLS method.

  • @amitanand8485
    @amitanand8485 5 years ago

    Thanks .. Explained beautifully

  • @keerthanpu808
    @keerthanpu808 8 months ago

    How did you take the average line in the graph (on what basis)?

    • @AkashRusiya
      @AkashRusiya 5 months ago +1

      It's simply the arithmetic mean of target variable's "actual" values.

  • @gauravjoshi9764
    @gauravjoshi9764 3 years ago

    I just want to know whether this total sample size is the total number of columns or the total number of rows

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      Sample size is the total number of rows; predictors are the total number of columns

  • @bhavanasree7573
    @bhavanasree7573 4 years ago

    What do we do next if we find that R-square is small? Yes, it says the model isn't a good fit, but is there any way we can improve the model after learning that R-squared is low, or do we use some other method to solve this?

  • @gopakumar138
    @gopakumar138 4 years ago

    very useful video

  • @seemaarya598
    @seemaarya598 4 years ago

    How can we say whether adjusted R-square is significant or not?

  • @datascience6718
    @datascience6718 4 years ago +1

    Sir, what is the meaning of penalize in terms of machine learning?

    • @ayushmishra-sw4po
      @ayushmishra-sw4po 4 years ago +4

      Here penalize means we are adding an extra predictor which is of no use, so it will decrease the value of adjusted R sq

    • @datascience6718
      @datascience6718 4 years ago

      @@ayushmishra-sw4po thank you so much

  • @tonnysaha7676
    @tonnysaha7676 3 years ago

    Thank you sir🙏

  • @ParallelUniverse550
    @ParallelUniverse550 3 years ago

    Can R square be considered as training accuracy?

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      Yes, it is a performance metric. In practice, adjusted R-square is used more often

  • @ganesanr2307
    @ganesanr2307 4 years ago

    Since R-square is the squared value of r, how can it get a negative value?
    R-square is always 0 to 1. It can never be a negative number

    • @linuxrhel6107
      @linuxrhel6107 4 years ago +1

      There is no such value of R; "R square" is only the terminology used for this formula. Check out the formula for R square.

    • @ganesanr2307
      @ganesanr2307 4 years ago

      R is the Correlation Coefficient

    • @meetmeraj2000
      @meetmeraj2000 4 years ago +1

      R-squared can be a negative value if the model is worse than the average (horizontal mean) line.

  • @sangitakhade1730
    @sangitakhade1730 3 years ago

    What is the meaning of penalize?

  • @ravitadiboina6065
    @ravitadiboina6065 4 years ago

    Why is the R2 value not decreasing when features are increasing? Is there any theory behind it?

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      Yes: each added feature can only leave SSres the same or reduce it (because of the squared least-squares fit), so R2 will either remain the same or increase.

  • @vjukulkarni6057
    @vjukulkarni6057 5 years ago

    Hi Krish, can you please suggest how to explain the algorithm in an interview?

  • @snigdharay8847
    @snigdharay8847 5 years ago

    If these two are different, then why does everyone say that R-square and adjusted R-square are the same, and why, when looking at the output, do we always look at the adjusted R-square?

    • @generationwolves
      @generationwolves 5 years ago +9

      R-Squared and Adj R-Squared are NOT the same.
      For Simple Linear Regression, the R-Squared and Adj. R-Squared values will almost be similar. You can just check the R-Squared value to evaluate your model's goodness of fit.
      For multiple Linear Regression, you will find that no matter what, the R-Squared value will keep increasing as you add new features (even if the new feature is not correlated to the dependent variable). This leads you to believe that the new feature (independent variable) you've added is contributing to building a better model, which is not the case. The adjusted R-Squared function provides a penalty mechanism that reduces the overall value if the new feature is not contributing to the model. This metric is usually considered to evaluate the goodness of fit (in the case of Multiple Linear Regression), especially when you're using a Feature Selection method like Step-Wise Regression.
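The claim above, that R² keeps increasing no matter what you add, can be verified empirically. A sketch (Python with numpy assumed; synthetic data, OLS fit via least squares, appending pure-noise columns):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=(n, 1))
y = 3 * x[:, 0] + rng.normal(scale=0.5, size=n)  # y depends only on x

def r_squared(features: np.ndarray) -> float:
    """R^2 of an OLS fit (with intercept) of y on the given feature matrix."""
    X = np.column_stack([np.ones(n), features])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_res = np.sum((y - X @ beta) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_values = []
features = x
for _ in range(5):  # keep appending useless noise columns
    r2_values.append(r_squared(features))
    features = np.column_stack([features, rng.normal(size=n)])

# R^2 never decreases as pure-noise features are added:
print(all(b >= a - 1e-12 for a, b in zip(r2_values, r2_values[1:])))
```

Adding a column can never increase the least-squares residual, which is why the printed check holds; the adjusted score is what introduces a trade-off against p.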

  • @subhamsaha2235
    @subhamsaha2235 3 years ago

    Still not clear to me, can anyone help me out?
    In the case of an uncorrelated or correlated variable: if p increases then N will also increase, and R2 obviously increases, so how is it penalizing?

    • @kitagrawal3211
      @kitagrawal3211 2 years ago

      N is constant here because it's the number of samples, whereas p is the number of predictors.

  • @emilyme9478
      @emilyme9478 3 years ago

    Awesome

  • @mayurisagiraju7928
    @mayurisagiraju7928 5 years ago

    thank you so much...It helped

  • @saifsalim6084
    @saifsalim6084 5 years ago

    In which condition will SSR be greater than SST?

    • @ayushmishra-sw4po
      @ayushmishra-sw4po 4 years ago +1

      As we increase the number of independent feature the value of SSR will increase

    • @nilaykushawaha2666
      @nilaykushawaha2666 3 years ago

      If the model prediction is worse than the average prediction we assumed in SST

  • @sardarsahib3993
    @sardarsahib3993 5 years ago

    superb

  • @nileshsuryavanshi8792
    @nileshsuryavanshi8792 4 years ago +1

    very well explained, thank you sir.

  • @shubhamkundu2228
    @shubhamkundu2228 3 years ago

    A little confused about the use of adjusted R-square! So when we add more independent variables to the model, R-square will always increase; then adjusted R-square checks whether an independent variable is not correlated to the target variable and reduces the score accordingly.
    Does that mean that during feature selection we should take those independent features that are correlated with the target/output variable and drop the others?
    Aren't we supposed to take independent variables that are not correlated with each other, so why penalize the ones not correlated with the target? For independent variables that are correlated with each other, we could drop them!

  • @manzarabbas6312
    @manzarabbas6312 4 years ago

    Amazing!!!!!

  • @Dyslexic_Neuron
    @Dyslexic_Neuron 1 year ago +1

    Not a satisfactory explanation of how adjusted R2 takes care of non-correlated variables; just hacking the formula doesn't make it very clear. The intuition and the reason for adding the sample size are not explained properly.
    Overall not a good explanation

  • @tejas4054
    @tejas4054 1 year ago

    When will you stop saying "particular"?

  • @machinelearningchefs3525
    @machinelearningchefs3525 4 years ago

    Correct yourself: R-squared = SumSquareRegression/SumSquareTotal, and this quantity cannot be negative.
    SST = SSR + SSE.
    So SST > SSE; there is no chance of R-squared being negative. This is what happens when you teach without a good understanding of the concepts behind it. You have more than 150K subscribers; do not mislead them.
    From a mathematical standpoint, R-square is the ratio of the variation explained by the model to the variation in the data.

    • @jagannathgirisaballa
      @jagannathgirisaballa 4 years ago +2

      R² compares the fit of the chosen model with that of a horizontal straight line (the null hypothesis). If the chosen model fits worse than a horizontal line, then R² is negative. Note that R² is not always the square of anything, so it can have a negative value without violating any rules of math. R² is negative only when the chosen model does not follow the trend of the data, so it fits worse than a horizontal line.
      Example: fit data to a linear regression model constrained so that the Y intercept must equal 1500.
      i.stack.imgur.com/CHpzE.png
      The model makes no sense at all given these data. It is clearly the wrong model, perhaps chosen by accident.
      The fit of the model (a straight line constrained to go through the point (0, 1500)) is worse than the fit of a horizontal line. Thus the sum-of-squares from the model (SSreg) is larger than the sum-of-squares from the horizontal line (SStot). R² is computed as 1 − SSreg/SStot. When SSreg is greater than SStot, that equation computes a negative value for R².
      With linear regression with no constraints, R² must be positive (or zero) and equals the square of the correlation coefficient, r. A negative R² is only possible with linear regression when either the intercept or the slope is constrained so that the "best-fit" line (given the constraint) fits worse than a horizontal line. With nonlinear regression, R² can be negative whenever the best-fit model (given the chosen equation and its constraints, if any) fits the data worse than a horizontal line.
      Bottom line: a negative R² is not a mathematical impossibility or the sign of a computer bug. It simply means that the chosen model (with its constraints) fits the data really poorly.

    • @jagannathgirisaballa
      @jagannathgirisaballa 4 years ago +3

      This person has put in a great deal of time and effort, which is an indication of his passion. The reason he has 150K subscribers is that the followers are able to make sense of what he is saying. And dude, logically, what would he gain by misleading them? Is he preaching some religion? I checked your RUclips channel... surprised that you are commenting without having uploaded a single video? I recommend that first of all we learn to appreciate the person, and even if there is a mistake in something he is saying (to err is human!), let's show some humility in pointing it out.

    • @machinelearningchefs3525
      @machinelearningchefs3525 4 years ago

      @@jagannathgirisaballa Hi, I understand that you have no idea about ML or stats. I don't need to have uploaded videos to comment on others' videos. Anyway, I have a PhD in ML/Computer Vision. I don't want to get into a fight with you. Chill and follow his videos.

    • @krishnaik06
      @krishnaik06  4 years ago +2

      Buddy, chill... whatever I explain is based on practical experience, so that means I have proof of everything I do. Anyhow, you are highly qualified; I think you should share your knowledge with everyone, and I would also love to see some implementations from your end. And yes, I do not mislead anyone. You can check my LinkedIn profile, and these videos have helped people clear interviews. So, in conclusion, "misleading" is a very wrong term to use here; being a highly qualified guy, it doesn't suit you at all. Cheers, stay safe and healthy. I would also suggest you go through this link:
      stats.stackexchange.com/questions/12900/when-is-r-squared-negative

    • @jagannathgirisaballa
      @jagannathgirisaballa 4 years ago +1

      @@machinelearningchefs3525 Bro, I will be the first person to accept that I have no idea about ML or stats, and that's my excuse for being here and watching the video. So, bro with a PhD, what's your excuse for being here and watching the video? :-) Anyways, peace, brother. I am here for learning and would love to learn from anyone. Apologies if my comment hurt your feelings; it was not intentional.
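The disputed point in this thread, whether R² can go negative, can be settled in a few lines using the 1 − SSres/SStot definition (Python with numpy assumed; the data and the deliberately bad "model" are made up):

```python
import numpy as np

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """R^2 = 1 - SSres/SStot, with SStot taken against the mean of y_true."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

y = np.array([1.0, 2.0, 3.0, 4.0])
pred_mean = np.full_like(y, y.mean())  # the horizontal-line baseline
pred_bad = np.full_like(y, 10.0)       # a constrained/wrong model

print(r_squared(y, pred_mean))  # 0.0: predicting the mean exactly
print(r_squared(y, pred_bad))   # negative: fits worse than the mean line
```

So nothing in this definition forbids a negative value; it simply means the model underperforms the horizontal mean line, exactly as argued above.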

  • @harisjoseph117
    @harisjoseph117 3 years ago

    Thank you Krish. Nice explanation.

  • @prateeksachdeva1611
    @prateeksachdeva1611 2 years ago

    Very well explained

  • @SACHINKUMAR-px8kq
    @SACHINKUMAR-px8kq 2 years ago

    Thank you so much sir

  • @pratiknabriya5506
    @pratiknabriya5506 5 years ago

    Thanks...very well explained.