XGBoost for Regression | XGBoost Part 2 | CampusX

  • Published: 19 Oct 2024

Comments • 48

  • @sunnyghangas4391
    @sunnyghangas4391 1 month ago +2

    This is the best explanation I have ever seen for any ML algorithm. I was very confused about XGBoost and couldn't find a detailed and complete explanation of it anywhere. You have connected all the dots and explained it as a story. Keep up this amazing work. Now I will definitely go and check out the other concepts here. Thanks! :)

  • @purubhatnagar483
    @purubhatnagar483 6 months ago +5

    For multiple features, I believe it will evaluate each feature separately and find the best split, the one with the highest gain. If a categorical variable is present, we have to do one-hot encoding; in the binary (0/1) case it can work directly.

  • @TemporaryForstudy
    @TemporaryForstudy 1 year ago +6

    If we have two features, one thing we can do is compute the gain for a split using each feature, and the feature that gives the maximum gain is selected for splitting.
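    The feature-scan idea in this and the previous comment can be sketched in Python. This is only a toy illustration of the XGBoost regression split rule discussed in the video (similarity score = (sum of residuals)² / (n + λ), gain = left + right − root), not the library's actual implementation; all function and variable names here are made up:

    ```python
    # Toy sketch of XGBoost-style split selection for regression.

    def similarity(residuals, lam=1.0):
        # Similarity score = (sum of residuals)^2 / (n + lambda)
        s = sum(residuals)
        return s * s / (len(residuals) + lam)

    def best_split(X, residuals, lam=1.0):
        """Scan every feature and every threshold; return the split with max gain."""
        n_features = len(X[0])
        root = similarity(residuals, lam)
        best = None  # (gain, feature_index, threshold)
        for j in range(n_features):
            values = sorted({row[j] for row in X})
            # Candidate thresholds: midpoints between consecutive unique values.
            for lo, hi in zip(values, values[1:]):
                thr = (lo + hi) / 2
                left = [r for row, r in zip(X, residuals) if row[j] < thr]
                right = [r for row, r in zip(X, residuals) if row[j] >= thr]
                gain = similarity(left, lam) + similarity(right, lam) - root
                if best is None or gain > best[0]:
                    best = (gain, j, thr)
        return best

    X = [[2.0, 10.0], [3.0, 20.0], [6.0, 30.0], [7.0, 40.0]]
    residuals = [-5.0, -3.0, 4.0, 6.0]
    gain, feature, threshold = best_split(X, residuals)
    ```

    On this toy data both features happen to induce the same best partition, so the scan keeps the first feature that reaches the maximum gain.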

  • @anindyasen8414
    @anindyasen8414 11 months ago +6

    Absolutely love this series. Can you please make videos on LightGBM regressors and classifiers as well? 🙏

  • @nxlamik1245
    @nxlamik1245 1 year ago +6

    Happy birthday, sir. I love your commitment, discipline and dedication ❤

  • @ubaidurrehman5566
    @ubaidurrehman5566 1 year ago +16

    Sir, kindly upload the full list of 400 data science interview questions that you told us about in the data science interview questions video.

  • @krish_toon4572
    @krish_toon4572 1 year ago +1

    Happy Birthday Nitish Sir. One of the best teachers I have ever seen.

  • @balrajprajesh6473
    @balrajprajesh6473 1 year ago +1

    Many, many happy birthday wishes and congratulations, Sir.

  • @KeyurkumarPanchal
    @KeyurkumarPanchal 5 months ago

    Loved this one ❤, simply explained. Thank you very much.

  • @1111Shahad
    @1111Shahad 1 month ago

    Thank you, Nitish.

  • @sharksinvestment9864
    @sharksinvestment9864 2 months ago +1

    Good explanation, sir ❤❤

  • @shakilkhan4306
    @shakilkhan4306 11 months ago +1

    Thank you sir..
    You did your best..

  • @AYUSH-KHAIRE
    @AYUSH-KHAIRE 3 months ago +1

    43:18
    We can scan all features and try all possible splits for each feature, then calculate the gain and similarity score, and select the feature whose split has the maximum gain.
    43:56
    For a categorical column with two unique values, identify them and consider both as a potential split point, then calculate the gain and similarity scores and select the split with the maximum gain.
    43:59
    For a categorical column with more unique values, identify them and consider every one as a potential split point, then calculate the gain and similarity scores and select the split with the maximum gain.
    A split can be made in two ways:
    1. One node can always hold a single category and the other node the rest of the categories as a group, like { small } and { medium, large }.
    2. Divide the categories into non-empty, non-overlapping subsets in all possible ways, like { very small, small } and { large, very large }.
    Now, for each possible split:
    divide the data based on the subset of categories,
    calculate the similarity scores and gain, and go with the maximum gain :)
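    The subset-based categorical splitting described in the notes above can be sketched as follows. This is a hypothetical toy, exponential in the number of categories (real implementations use smarter orderings); the similarity score is the XGBoost regression one, (sum of residuals)² / (n + λ), and all names are invented for illustration:

    ```python
    from itertools import combinations

    def similarity(residuals, lam=1.0):
        # XGBoost regression similarity score: (sum r)^2 / (n + lambda)
        s = sum(residuals)
        return s * s / (len(residuals) + lam)

    def best_categorical_split(categories, residuals, lam=1.0):
        """Try every non-empty, non-overlapping two-way partition of the
        category set and return the partition with the maximum gain."""
        uniques = sorted(set(categories))
        root = similarity(residuals, lam)
        best = None  # (gain, left_subset)
        # Fix the first category on the left side so each partition is
        # enumerated exactly once.
        rest = uniques[1:]
        for k in range(len(uniques)):
            for combo in combinations(rest, k):
                left_set = {uniques[0], *combo}
                if len(left_set) == len(uniques):
                    continue  # right side would be empty
                left = [r for c, r in zip(categories, residuals) if c in left_set]
                right = [r for c, r in zip(categories, residuals) if c not in left_set]
                gain = similarity(left, lam) + similarity(right, lam) - root
                if best is None or gain > best[0]:
                    best = (gain, left_set)
        return best

    cats = ["small", "small", "medium", "large", "large"]
    res = [-4.0, -2.0, 1.0, 3.0, 5.0]
    gain, left_subset = best_categorical_split(cats, res)
    ```

    The "one category vs. the rest" splits from way 1 above are just the size-one subsets in this enumeration, so way 2 subsumes way 1.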

  • @tanmaygupta8288
    @tanmaygupta8288 1 month ago

    best explanation

  • @sachinnaik1267
    @sachinnaik1267 1 year ago +1

    I was waiting for this... Thank you sir.

  • @sudhiraos4149
    @sudhiraos4149 1 year ago +2

    Sir, please make a video on which statistics topics are needed for which algorithms, so we can learn them in parallel.

  • @LUFFY-nk9wu
    @LUFFY-nk9wu 1 year ago +1

    Awesome content, sir.
    Sir, please give one piece of life advice: should I take the GATE 2024 exam or focus on my placements?
    Please tell us the pros and cons of both GATE and placements.
    Much appreciated.

  • @ParthivShah
    @ParthivShah 6 months ago +1

    Thank You Sir.

  • @flawlessvideos
    @flawlessvideos 1 year ago +1

    Happy Birthday Nitish Sir

  • @alokmishra5367
    @alokmishra5367 10 months ago +1

    Excellent lecture ❤

  • @PokemonMasterLeonado
    @PokemonMasterLeonado 1 year ago

    Thank you so much, sir. Can I please request a video on Fourier transforms for deep learning and computer vision (mainly CNNs)?

  • @minalgupta7456
    @minalgupta7456 4 months ago

    Thanks

  • @rajdeep_6736
    @rajdeep_6736 1 year ago

    Happiest Birthday Nitish Sir🎉

  • @thequietkid5212
    @thequietkid5212 1 year ago +4

    For categorical labels, you take the log odds and convert them back to probabilities, and train the model on the new probability values, equivalent to what we did with residuals in the regression case. You train it with decision trees. Since we are using regression trees, the model assumes it is a regression problem, which is not good when we are dealing with categorical data. The regression tree outputs are not proper probabilities in the 0–1 range; they can go beyond it, so to counter that we use log odds again to convert those values. Then we train another decision tree with the newly converted (log-odds) values as the label and repeat the process.

    • @Tusharchitrakar
      @Tusharchitrakar 8 months ago +4

      What you mentioned is not right. You are talking about categorical variables in a classification problem, whereas the question was about a regression problem. The concept of log odds makes sense only for a classification problem. To use categorical variables in a regression problem, you would use some form of encoding; the resulting numeric values would then be split, after which similarity scores would be evaluated. Look at the CampusX video on GBM for classification, where the log-odds/probability inter-conversion is used for the classification problem.
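      For reference, the log-odds/probability inter-conversion this thread talks about (used in gradient boosting for classification, not regression) is just the logit and sigmoid functions; a minimal sketch:

      ```python
      import math

      def prob_to_log_odds(p):
          # logit: maps a probability in (0, 1) to the full real line
          return math.log(p / (1 - p))

      def log_odds_to_prob(z):
          # sigmoid: maps any real-valued tree output back into (0, 1)
          return 1 / (1 + math.exp(-z))

      # Round trip: a probability survives conversion both ways.
      p = 0.8
      z = prob_to_log_odds(p)
      restored = log_odds_to_prob(z)
      ```

      Because raw tree outputs live on the log-odds scale, the sigmoid is what guarantees the final prediction lands back in the 0–1 range.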

  • @Black_Hawk007
    @Black_Hawk007 1 year ago

    Happy Birthday Nitish Sir 🎉🎂🎊🎈

  • @romiopal8124
    @romiopal8124 11 months ago +1

    Please upload XGBoost for classification as well.

  • @math_section
    @math_section 3 months ago

    Which device do you use as a writing pad?

  • @afroman1611
    @afroman1611 1 year ago +1

    Happy birthday sir

  • @maaleem90
    @maaleem90 1 year ago

    Hi sir, we hope you are doing well.
    I have a query, could you please answer it? I worked with Keras Tuner after watching your video, I just watched your video on KerasClassifier, and now I have also heard of NAS (neural architecture search). Can you please help me with the differences among these three?

  • @friends53392
    @friends53392 1 year ago

    Sir, please make lectures on the Maths for Machine Learning book; it will set our lives on track, sir, no one will be able to stop us.

  • @illusions8101
    @illusions8101 1 year ago

    Happy birthday sir 🎉🎉🎉

  • @saadsn2985
    @saadsn2985 1 year ago

    Thanks Sir😊

  • @shantanudas6319
    @shantanudas6319 11 months ago

    Awesome

  • @devamsingh9252
    @devamsingh9252 1 month ago

    After making the playlist in this much detail, you could apply machine learning algorithms even in your sleep 😁😁

  • @TonyStark-qc8ow
    @TonyStark-qc8ow 4 months ago

    Why is the sum of residuals not coming out to zero, since we are subtracting the mean?

  • @AnkitKumar-sz3by
    @AnkitKumar-sz3by 11 months ago

    Thanks, great video.

  • @aiforeveryone
    @aiforeveryone 1 year ago

    Super❤

  • @namnguyen9370
    @namnguyen9370 1 year ago

    Why aren't your videos automatically translated?

  • @nitinrawat-g6t
    @nitinrawat-g6t 1 year ago +1

    Is this ML?

  • @shakilkhan4306
    @shakilkhan4306 11 months ago

    Upload the next please!

  • @Prathameshhh_
    @Prathameshhh_ 1 year ago

    THANK YOU..! 3000❤

  • @SatyamBonaparte
    @SatyamBonaparte 6 months ago +1

    I'd recommend you upload videos at 2x, because you speak remarkably slowly and consequently your videos become enormously long, which really doesn't make any sense at all.

  • @ryando4556
    @ryando4556 1 year ago +2

    Man, your content is great, but your accent really discourages the audience...

    • @Tusharchitrakar
      @Tusharchitrakar 8 months ago +2

      I beg to differ. Most of the audience here love the content, and rightfully so: first of all it's free, and secondly it's better than all the paid courses, especially because he goes deep into the fundamentals and his pedagogical technique is stellar. If you are referring to the language of choice (and not the accent), then you can send him an email directly, which is understandable since not everyone's Hindi is strong. But commenting on his accent and saying it is discouraging is not representative of the majority.

    • @sumankumar01
      @sumankumar01 5 months ago +3

      Don’t watch

    • @khizerismail9684
      @khizerismail9684 4 months ago

      Don't kid around here. There is nothing wrong with his Hindi accent or anything as such. He teaches all these topics, which are highly paid elsewhere, so well. Only a good teacher/human could go through a research paper and teach the material from it like this.

  • @pxxthik
    @pxxthik 1 year ago

    Happy Birthday Nitish Sir