Part 1 - Decision Tree Classifier In-depth Intuition In Hindi | Krish Naik

  • Published: 2 Oct 2024

Comments • 66

  • @krishnaikhindi
    @krishnaikhindi  2 years ago +8

    We are happy to announce that iNeuron is coming up with a 6-month Live Full Stack Data Analytics batch with job assistance and internship, starting from 18th June 2022. The instructors of the course will be me and Sudhanshu. The course price is really affordable: Rs 4000 INR including GST.
    The course content will be available for lifetime, along with prerecorded videos.
    You can check the course syllabus below.
    Course link: courses.ineuron.ai/Full-Stack-Data-Analytics
    From my side you can avail an additional 10% off by using the Krish10 coupon code.
    Don't miss this opportunity and grab it before it's too late. Happy Learning!!

  • @netviz8673
    @netviz8673  1 month ago +4

    Decision trees work for both classification and regression; here the focus is classification. There are two techniques: ID3 and CART. In CART the decision tree splits into binary trees. (a) Entropy and Gini index (purity of a split), (b) Information gain (which feature the decision tree splits on). To check for a pure split, two measures called entropy and Gini impurity are used, and a second technique called information gain decides how the features are selected.
    When H(S) is zero the split is pure, and when H(S) is 1 the split is impure, i.e. an equal distribution (e.g. 3 yes and 3 no). The range of entropy is 0 to 1.
    For a fully impure split the Gini impurity comes out to 0.5 and for a pure split it is 0, so Gini impurity ranges between 0 and 0.5.
    Gini impurity is preferable over entropy because the log involved in entropy may slow computation down.
    If you have multiple features, you use information gain to decide how to build the tree from the given features, i.e. which feature to start with and which ones follow later. The feature whose split yields the highest information gain is the one the decision tree should start with. (These measures are sketched in code below.)
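    A minimal code sketch of the measures summarized above (illustrative only, not from the video; the function names are my own), computing entropy, Gini impurity and information gain with NumPy:

    ```python
    import numpy as np

    def entropy(labels):
        # H(S) = -sum(p * log2(p)) over the class probabilities p
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    def gini(labels):
        # G(S) = 1 - sum(p^2); 0 for a pure node, at most 0.5 for two classes
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return 1.0 - np.sum(p ** 2)

    def information_gain(parent, children, impurity=entropy):
        # IG = impurity(parent) - weighted average impurity of the child nodes
        n = len(parent)
        return impurity(parent) - sum(len(c) / n * impurity(c) for c in children)

    data = ["yes"] * 3 + ["no"] * 3                           # 3 yes / 3 no -> fully impure
    print(entropy(data))                                      # 1.0
    print(gini(data))                                         # 0.5
    print(information_gain(data, [["yes"] * 3, ["no"] * 3]))  # 1.0 for a perfect split
    ```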

  • @jasonbourn29
    @jasonbourn29  1 year ago +5

    Thanks sir, in the Hindi explanations you tend to cover topics better (the English videos are also of far better quality than anyone else's).

  • @DrZubairulIslam
    @DrZubairulIslam  8 days ago

    Thanks Krish, best ever video, wow.

  • @utkarsh5165
    @utkarsh5165  5 months ago

    Thank you Krish for Crystal Clear Explanation.❤

  • @sauravsahay8803
    @sauravsahay8803  5 months ago

    You make everything look so easy

  • @akashlinganwar4810
    @akashlinganwar4810  1 year ago +1

    hello krish sir... your explanation is easy to understand and anyone can learn easily..thank you sir...😊

  • @abhiWorldIN
    @abhiWorldIN  1 month ago

    Awesome video

  • @ketanitaliya9493
    @ketanitaliya9493  1 year ago

    Thanks it is really helpful and easy to understand

  • @dhavalsukhadiya8654
    @dhavalsukhadiya8654  5 months ago

    Great explanation sir

  • @jaiprakashsingh756
    @jaiprakashsingh756  1 year ago

    sir, I really find your videos very helpful. thanks a lot.

  • @krishj8011
    @krishj8011  4 months ago

    nice tutorial

  • @BikashKonwar-w7q
    @BikashKonwar-w7q  11 months ago +1

    In the entropy formula it is the negative of the summation of p(x) * log2(p(x)), i.e. H(S) = -Σ p(x) log2 p(x)

  • @tuhinbarai7592
    @tuhinbarai7592  1 year ago

    Thank you sir... it's easy to understand...

  • @jasanimihir4994
    @jasanimihir4994  2 years ago +6

    As always, very well explained.
    I have one query, sir. You said that if the dataset is very big then use the Gini index, otherwise entropy is fine. But finding the entropy seems to be a must for information gain, since there is no mention of the Gini index in the information gain formula. So is it possible to use the Gini index to find information gain?
    Kindly throw some light on that. 😊

    • @shaiqmahmood
      @shaiqmahmood  1 year ago

      There is a way to calculate the Information Gain using Gini index as well.
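      A self-contained sketch of that idea (illustrative only, assuming the same 3-yes/3-no example discussed above): replace entropy with Gini impurity in the same weighted-average gain formula.

      ```python
      import numpy as np

      def gini(labels):
          # Gini impurity: 1 - sum(p^2) over the class probabilities
          _, counts = np.unique(labels, return_counts=True)
          p = counts / counts.sum()
          return 1.0 - np.sum(p ** 2)

      parent = ["yes"] * 3 + ["no"] * 3
      left, right = ["yes"] * 3, ["no"] * 3
      gini_gain = gini(parent) - (len(left) / len(parent)) * gini(left) \
                               - (len(right) / len(parent)) * gini(right)
      print(gini_gain)  # 0.5 for this perfectly separating split
      ```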

  • @meenalpande
    @meenalpande  1 year ago

    Nice explanation

  • @pkumar0212
    @pkumar0212  2 months ago

    👌

  • @abhishekbhad4029
    @abhishekbhad4029  8 months ago

    Very nice explanation sir. I have one question: how do we get an internship, as no one is hiring freshers?

  • @razashaikh8698
    @razashaikh8698  1 year ago +2

    Sir, can we find information gain using Gini impurity?

  • @neelkalyani7849
    @neelkalyani7849  4 months ago

    In calculating information gain, can we use gini impurity instead of entropy?

  • @harshdeepjaggi9715
    @harshdeepjaggi9715  10 months ago +2

    Wonderful explanation sir.... I'm already enrolled in Data Science with one of the edtechs of India... no doubt the teachers there also teach well, but the English content doesn't fully sink in in one go... this Hindi content went into my head in such a way that now I will always remember it... Thank you for your efforts.

  • @VikasSingh-nq5yx
    @VikasSingh-nq5yx  2 years ago +1

    Sirrrr.... ❤ I have a question 🙋!
    If an interviewer asks why we are using the minus ( - ) sign in entropy, what should we say? Please reply........ ❤

    • @saranshsehrawat8544
      @saranshsehrawat8544  2 years ago

      It's the formula.

    • @neeraj.kumar.1
      @neeraj.kumar.1  2 years ago

      Don't worry, they don't ask these types of mathematical formulas.
      They may ask what Gini impurity is.
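      For reference, the reason for the minus sign: each class probability p lies between 0 and 1, so log2(p) is zero or negative; the minus sign flips the sum so that entropy is non-negative. Worked example for a 50/50 split: H = -(0.5·log2(0.5) + 0.5·log2(0.5)) = -(-0.5 - 0.5) = 1.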

  • @prathameshekal7308
    @prathameshekal7308  1 year ago +4

    The group is grateful (मंडळ आभारी आहे)

  • @pritamrajbhar9504
    @pritamrajbhar9504  1 year ago +2

    It is one of the best and simplest explanations so far.

  • @sugandhaarora8174
    @sugandhaarora8174  1 month ago

    is this required for data analyst role?

  • @RiteshBhalerao-b2v
    @RiteshBhalerao-b2v  1 year ago +1

    Great explanation... hard to find anywhere else 👌👌

  • @patelanjali2909
    @patelanjali2909  1 year ago +1

    Your teaching skill is awesome.

  • @sandeepbehera3605
    @sandeepbehera3605  1 year ago +1

    Wonderful explanation sir, no one can explain like you... 🙏🙏🙏
    Thank you, sir 😇

  • @SumitJindal-e3k
    @SumitJindal-e3k  1 year ago +1

    That was awesome

  • @IrfanSaleem541
    @IrfanSaleem541  2 months ago

    Thanks a lot. Love and Respect from Oman

  • @maukaladka4100
    @maukaladka4100  2 years ago +1

    Worth watching😍😍😍

  • @adityatiwari7287
    @adityatiwari7287  6 months ago

    Awesome Explanation....Thanks A Lot....Keep It Up !!!

  • @anusuyaghosal6961
    @anusuyaghosal6961  5 months ago

    Sir, you said H(S) is the entropy of the root node, but I think it is the entropy of the target attribute.

  • @SpectreWarfare
    @SpectreWarfare  1 year ago

    The video volume is very low. It is difficult to listen to.

  • @ruhiraj9551
    @ruhiraj9551  1 year ago

    Hello Krish sir... thank you so much 🙏 for an excellent explanation.

  • @arunchougale5927
    @arunchougale5927  2 years ago

    Very well explained by you. It helps me a lot. Thank you very much.

  • @ShivamSharma-if1oh
    @ShivamSharma-if1oh  2 years ago

    My answer for the entropy is coming out to 0.6, not 1.

  • @bryan4592
    @bryan4592  2 months ago

    Amazing video sir

  • @priyanshupokhariya8866
    @priyanshupokhariya8866  1 year ago

    0 * log(0) is undefined, so how is it coming out to 0??
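    For reference: in the entropy formula the term 0 · log2(0) is taken to be 0 by convention, because p · log2(p) tends to 0 as p approaches 0, so classes with zero probability contribute nothing to the entropy.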

  • @maths_impact
    @maths_impact  1 year ago

    Wonderful explanation given by you sir in Hindi.

  • @osamaosama-vh6vu
    @osamaosama-vh6vu  2 years ago

    You're a legend, dear sir. Thank you, be happy 😍

  • @nisho404
    @nisho404  9 months ago

    in one word bosssss

  • @Pankaj_Khanal_Joshi
    @Pankaj_Khanal_Joshi  9 months ago

    Sir, please make a video on sklearn and seaborn. Thank you.

  • @GopalKrishna-p4m
    @GopalKrishna-p4m  11 months ago

    Even after an era no one will beat you, sir!! Incredible explanation, thank you so much sir.

  • @gurpreetkaur-pf1bf
    @gurpreetkaur-pf1bf  4 months ago

    Amazing ❤

  • @mainakseal5027
    @mainakseal5027  1 year ago

    what an amazing tutorial...hats off sirji!!!...

  • @girishgogate2733
    @girishgogate2733  2 years ago

    Thankyou krish sir ........

  • @Er_IT_prashantjha
    @Er_IT_prashantjha  4 months ago

    You explain very well, Sir 👌🏻💯

  • @aamiransarii
    @aamiransarii  9 months ago

    Many, Many Thanks .....so lovely of you

  • @katw434
    @katw434  1 year ago

    Thanks sir please continue this series

  • @vidvaanwithromi5213
    @vidvaanwithromi5213  1 year ago

    wonderfully explained sir!

  • @maths_impact
    @maths_impact  1 year ago

    Wonderful

  • @SonuK7895
    @SonuK7895  2 years ago

    Very well explained sirjiiii

  • @rushikeshwaghmare3446
    @rushikeshwaghmare3446  9 months ago

    Sir, according to external sites, Gini impurity ranges from 0 to 1.
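    For reference: the maximum Gini impurity for k equally likely classes is 1 - 1/k, so it is 0.5 for the two-class case used in the video and approaches 1 only as the number of classes grows; sources quoting a 0-1 range are describing the general multi-class case.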

  • @vatsalshingala3225
    @vatsalshingala3225  1 year ago

    ❤❤❤❤❤❤❤❤❤❤

  • @prerakchoksi2379
    @prerakchoksi2379  2 years ago

    wow

  • @mrityunjayupadhyay7332
    @mrityunjayupadhyay7332  2 years ago

    Great explanation

  • @aaqibafraaz6215
    @aaqibafraaz6215  2 years ago

    Hello sir

  • @Keep_Laughfing
    @Keep_Laughfing  2 years ago

    Sir, keep making videos like this. You probably don't even know how big a help this is for DATA SCIENCE lovers.
    Heartfelt thanks 🙏🙏🙏