Machine Learning Tutorial Python - 11 Random Forest

  • Published: 15 Oct 2024

Comments • 363

  • @codebasics
    @codebasics  2 years ago +4

    Check out our premium machine learning course with 2 Industry projects: codebasics.io/courses/machine-learning-for-data-science-beginners-to-advanced

    • @vipulsemwal478
      @vipulsemwal478 1 year ago

      Hi, I have a question about fitting the model. Every time we call model.fit, a random subset of samples is used for training. For example, on the iris dataset I fit my model and then fine-tune with n_estimators = 10, 20, 100, etc. Sometimes it gets a 1.0 score at 20, but if I run it again it gets 0.98. How can I fix X_train and y_train so they do not change every time?
      And I am really thankful for your lectures; I am learning day by day.
      Thank you.
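      A minimal sketch of the usual fix, pinning random_state in both the split and the model so every run is identical (an illustrative addition, assuming scikit-learn and the iris dataset):
      from sklearn.datasets import load_iris
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import train_test_split
      iris = load_iris()
      # a fixed random_state gives the same split (and the same forest) on every run
      X_train, X_test, y_train, y_test = train_test_split(
          iris.data, iris.target, test_size=0.2, random_state=42)
      model = RandomForestClassifier(n_estimators=20, random_state=42)
      model.fit(X_train, y_train)
      print(model.score(X_test, y_test))  # identical on every rerun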

  • @sanchit0542
    @sanchit0542 5 years ago +46

    Keeping the tutorial part aside (which is great), I really love your sense of humor and it's an amazing way to make the video more engaging. Kudos!!
    Also, thank you so much for imparting such great knowledge for free.

    • @codebasics
      @codebasics  5 years ago +6

      Thanks for your kind words and appreciation shankey 😊

  • @zerostudy7508
    @zerostudy7508 5 years ago +5

    Let's promote this channel.
    I am just a humble Python hobbyist who took a local course, yet I still didn't understand most of what the lecturer said. Thanks to this channel I've finally found the fun in Python. In just two weeks (and a bit) I'm already at this level? Man...! Can't wait for Neural Networks, but only from this channel.

  • @panagiotisgoulas8539
    @panagiotisgoulas8539 2 years ago +5

    It is good practice to loop over n_estimators and check the score for each value:
    scores=[ ]
    n_estimators=range(1,51) #example
    for i in n_estimators:
        model=RandomForestClassifier(n_estimators=i)
        model.fit(X_train,y_train)
        scores.append(model.score(X_test,y_test))
        print('score:{}, n_estimator:{}'.format(scores[i-1],i))
    plt.plot(n_estimators,scores)
    plt.xlabel('n_estimators')
    plt.ylabel('testing accuracy')
    And then you can sort of see what's going on. This practice is also very useful for choosing k in k-nearest neighbors.
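    A variant that is less sensitive to one lucky train/test split would average over cross-validation folds instead of scoring a single split; a rough sketch (illustrative, assuming scikit-learn's cross_val_score and the same X, y as above):
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    cv_scores = []
    for i in range(1, 51):
        model = RandomForestClassifier(n_estimators=i)
        # mean accuracy over 5 folds rather than one split
        cv_scores.append(cross_val_score(model, X, y, cv=5).mean())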

    • @cololalo
      @cololalo 2 years ago

      Thank you! I was looking for something like this. I think in the fourth line the i is missing, as in model=RandomForestClassifier(n_estimators = i)

    • @panagiotisgoulas8539
      @panagiotisgoulas8539 2 years ago

      @@cololalo Yep, forgot it, thanks.

    • @fathoniam8997
      @fathoniam8997 3 months ago +1

      Thank you, I had been trying to find something like this since the previous video!

  • @adityahpatel
    @adityahpatel 3 years ago +2

    I cannot quite express how amazing your teaching is. I am doing a master's at one of the finest universities in America, and this is better than the supervised learning class I am taking there. Kudos! Please keep it up. I appreciate that you are making this available for free, although I would be willing to watch your lectures even for a fee.

    • @codebasics
      @codebasics  3 years ago

      Thanks for leaving the feedback, Aditya

  • @srujanjayraj9490
    @srujanjayraj9490 5 years ago +7

    The way you teach and explain the concepts is completely different. Thanks a lot!!!!!! Please make more videos.

  • @roodrakanwar3300
    @roodrakanwar3300 4 years ago +3

    I achieved an accuracy of 0.9736. Earlier I got an accuracy of 0.9 when the test size was 0.2, and changing the number of trees wasn't changing the accuracy much. So I tweaked the test size to 0.25 and tried different numbers of trees. The best I got was 0.9736 with n_estimators = 60; criterion = 'entropy' gives a better result.
    Thank you so much sir for the series. This is the best YouTube series on Machine Learning out there!!

    • @lokeshplssl8795
      @lokeshplssl8795 4 years ago +1

      xlabel is the truth
      and
      ylabel is the prediction,
      but in the video it is reversed...
      Am I right?
      Because we take "confusion_matrix(y_test,y_predicted)"

    • @panagiotisgoulas8539
      @panagiotisgoulas8539 2 years ago

      @@lokeshplssl8795 I think I know why you are probably confused. This is not a plot chart; you should not assume that because you passed y_test as the first argument you would see it horizontally, the way you would with xlabel.
      Unfortunately the confusion matrix is printed out unlabeled. True/actual (test) values run down the rows, and predicted ones run across the columns.
      A couple of videos back he used another library to display the matrix with labels.
      If you have any questions regarding the confusion matrix, this is by far the best video: ruclips.net/video/8Oog7TXHvFY/видео.html
      A similar use case comes up in Bayesian statistics; another great example: ruclips.net/video/-1dYY43DRMA/видео.html
      You don't have to get into it since the software does it for you, but it helps to understand what is going on.
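      A small sketch of how to display it labeled (illustrative, assuming seaborn as sns, matplotlib.pyplot as plt, and cm = confusion_matrix(y_test, y_predicted) as in the video):
      import seaborn as sns
      import matplotlib.pyplot as plt
      # rows of cm are the true classes, columns the predicted ones
      sns.heatmap(cm, annot=True)
      plt.xlabel('Predicted')
      plt.ylabel('Truth')
      plt.show()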

  • @kausikkar2587
    @kausikkar2587 1 year ago +3

    Sir, I am damn impressed by you!!!! You are the best ML instructor here on YT!!!!

  • @chrismagee5845
    @chrismagee5845 1 year ago +9

    FYI: if you are using scikit-learn version 0.22 or later, the default value of n_estimators changed from 10 to 100.
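    To reproduce the video's behavior on a newer version, the old default can be passed explicitly; a one-line sketch (assuming scikit-learn):
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=10)  # the pre-0.22 default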

  • @Tuoc_Nguyen
    @Tuoc_Nguyen 5 months ago +1

    For the iris dataset I got a score of 1 for n_estimators = 40, 50, 60.
    Thank you very much, sir.

  • @tanishqrastogi1011
    @tanishqrastogi1011 9 months ago

    OK, so I read one comment and put test_size = 0.25 and n_estimators = 60. I reran my test-sample cell as well as the model.fit and model.predict cells and got an accuracy of 100%. I am having a god complex right now. Thank you for this amazing series.

  • @rameshthamizhselvan2458
    @rameshthamizhselvan2458 5 years ago

    Frankly speaking, your videos are neater and clearer than any other videos on YouTube.

    • @codebasics
      @codebasics  5 years ago

      Thanks, Ramesh, for your valuable feedback :)

  • @AbhishekSingh-og7kf
    @AbhishekSingh-og7kf 3 years ago +1

    I can watch this type of video all day without taking any break. Thank you!!!

  • @devendragohare5221
    @devendragohare5221 4 years ago +21

    I got 100% accuracy!... by changing criterion = "entropy"

    • @lokeshplssl8795
      @lokeshplssl8795 4 years ago +2

      xlabel is the truth
      and
      ylabel is the prediction,
      but in the video it is reversed...
      Am I right?
      Because we take "confusion_matrix(y_test,y_predicted)"

    • @carti8778
      @carti8778 2 years ago +2

      @@lokeshplssl8795 It doesn't change much; I mean, you are just transposing the confusion matrix. The info still remains the same.

  • @pablu_7
    @pablu_7 4 years ago +3

    Thank you Sir for this awesome explanation of RandomForestClassifier. I got a score of 1.0 for every increased value of n_estimators.

  • @maruthiprasad8184
    @maruthiprasad8184 2 years ago

    I got 93.33% accuracy at n_estimators = 30; after that, accuracy did not increase as n_estimators increased. Thank you very much for the simply great explanation.

  • @motox296
    @motox296 5 years ago +2

    Great Video! I'm working on my first project using machine learning and am learning so much from your videos!

    • @codebasics
      @codebasics  5 years ago +2

      Hey Alex, good luck on your project buddy. I am glad these tutorials are helpful to you :)

  • @iaconst4.0
    @iaconst4.0 3 months ago

    You are an excellent teacher! Thank you for sharing your knowledge! Greetings from Peru!

  • @sumitkumarsain5542
    @sumitkumarsain5542 5 years ago +12

    Just love your videos. I was struggling with Python; with your videos I was able to get everything in a week's time. I also completed the pandas and NumPy series. I would highly encourage you to start a machine learning course with some real-life projects.

  • @MazlumDincer
    @MazlumDincer 4 years ago +1

    n_estimators = 1 (and also 290 or bigger) even gives 100% accuracy, but as we all know, this type of dataset is prepared for learning purposes, so reaching 100% accuracy is easy.

  • @Pacificatorrr
    @Pacificatorrr 3 months ago

    You sir, are a gem! Thank you for this series!
    I managed to get an accuracy of 98%!

  • @vijaykumarlokhande1607
    @vijaykumarlokhande1607 3 years ago

    This is a crash course; if you are in a hurry, this is the best series out there on YouTube.

  • @bhaskarg8438
    @bhaskarg8438 2 years ago

    Your teaching is superb, and your knowledge sharing with the Data Science community is noble.
    I tried the exercise with criterion = "entropy" and got a score of 1.

  • @geethanjaliravichandhran8109
    @geethanjaliravichandhran8109 3 years ago +4

    Hi sir, I did your exercise on the iris data and got an accuracy of 1.0 with n_estimators = 80.

  • @VivekKumar-li6xr
    @VivekKumar-li6xr 5 years ago +2

    Hello Sir, I have started learning pandas and ML from your channel, and I am amazed at the way you teach.
    For the iris dataset I got a score of 1 for n_estimators = 30.

    • @codebasics
      @codebasics  5 years ago

      Great, Vivek. I am glad you are working on the exercises. Thanks 😊

  • @abhinavsharma6633
    @abhinavsharma6633 3 years ago +1

    I got an accuracy of 0.982579 with n_estimators = 100 (well, 100 is the default value now). Sir, big fan of your teaching 🙂

    • @codebasics
      @codebasics  3 years ago +1

      Good job Abhinav, that’s a pretty good score. Thanks for working on the exercise

    • @abhinavsharma6633
      @abhinavsharma6633 3 years ago

      @@codebasics sir, I just wished to get in contact with you, to get proper guidance

  • @praveenkamble89
    @praveenkamble89 4 years ago

    I got 100% accuracy with the default estimator and random_state=10. Thanks a lot, Sir.

    • @codebasics
      @codebasics  4 years ago

      Good job Praveen, that’s a pretty good score. Thanks for working on the exercise

  • @spicytuna08
    @spicytuna08 3 years ago +1

    again, just spectacular graphics and easy to understand explanation. thank you so much.

  • @codebasics
    @codebasics  4 years ago +11

    github.com/codebasics/py/blob/master/ML/11_random_forest/Exercise/random_forest_exercise.ipynb
    Complete machine learning tutorial playlist: ruclips.net/video/gmvvaobm7eQ/видео.html

    • @lokeshplssl8795
      @lokeshplssl8795 4 years ago +2

      xlabel is the truth
      and
      ylabel is the prediction,
      but in the video it is reversed...
      Am I right?
      Because we take "confusion_matrix(y_test,y_predicted)"

    • @jiyabyju
      @jiyabyju 3 years ago

      @@lokeshplssl8795 I have the same question

    • @lokeshplssl8795
      @lokeshplssl8795 3 years ago

      @@jiyabyju I figured it out

    • @jiyabyju
      @jiyabyju 3 years ago

      @@lokeshplssl8795 hope there is no mistake in the code..

    • @lokeshplssl8795
      @lokeshplssl8795 3 years ago

      @@jiyabyju no mistake,
      he took y_predicted as the model's prediction on X_test.

  • @ajaykumaars2154
    @ajaykumaars2154 4 years ago +4

    Hi Sir,
    Can we use any other model (e.g. SVM) with the random forest approach, that is, by creating an ensemble of 10 SVM models and getting a majority vote?
    Thank you for the wonderful video.
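    A minimal sketch of that idea, using bagging with an SVM base estimator (illustrative, assuming scikit-learn >= 1.2, where the parameter is named estimator, and the iris data):
    from sklearn.datasets import load_iris
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    X_train, X_test, y_train, y_test = train_test_split(
        *load_iris(return_X_y=True), test_size=0.2, random_state=0)
    # 10 SVMs, each trained on a bootstrap sample; their predictions are aggregated
    ensemble = BaggingClassifier(estimator=SVC(), n_estimators=10, random_state=0)
    ensemble.fit(X_train, y_train)
    print(ensemble.score(X_test, y_test))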

  • @harishdange9048
    @harishdange9048 3 years ago

    from sklearn.ensemble import RandomForestClassifier
    rf = RandomForestClassifier(n_estimators=30)
    rf.fit(X_train, Y_train)
    Output: RandomForestClassifier(n_estimators=30)
    rf.score(X_test, Y_test)
    output: 1.0
    Y_pred = rf.predict(X_test)  # needed for the confusion matrix below
    from sklearn.metrics import confusion_matrix
    cm = confusion_matrix(Y_test, Y_pred)
    cm
    output:
    array([[11,  0,  0],
           [ 0,  8,  0],
           [ 0,  0, 11]], dtype=int64)

  • @javadkhalilarjmandi3906
    @javadkhalilarjmandi3906 4 years ago +1

    I've done all the exercises till here. But I was planning not to do the one for this video, until I saw your last picture! I don't want you to be angry! So I am going to do it right now!

    • @codebasics
      @codebasics  4 years ago +1

      Ha ha, nice, Javad. Wish you all the best 🤓👍

  • @lakshyasharma24
    @lakshyasharma24 4 years ago

    Sir, I got score = 1.0 for n_estimators = 10
    and random_state = 10.
    Very nice explanation 👌👌👌

    • @codebasics
      @codebasics  4 years ago

      Great score. Good job 👌👏

  • @sagnikmukherjee8954
    @sagnikmukherjee8954 4 years ago

    n_estimators = 10, criterion = 'entropy' led to a 100% accurate model !! Thanks!

    • @codebasics
      @codebasics  4 years ago +1

      Great job, Sagnik :) Thanks for working on the exercise

    • @sagnikmukherjee8954
      @sagnikmukherjee8954 4 years ago

      @@codebasics My pleasure! Amazing tutorials!! It's been a great learning experience so far! Cheers :)

  • @adarshkesarwani6775
    @adarshkesarwani6775 4 years ago +2

    Thanks a lot sir for the videos. I want to know: when should we use a random forest versus just a single decision tree?

  • @alokpratap2094
    @alokpratap2094 5 years ago +2

    Again a nice video from you.
    Sir, I have one general question: what is random_state, and why do we sometimes set it to 0 and sometimes assign another value? What is its significance?

  • @anji1164
    @anji1164 5 years ago +1

    Another great video. Thanks for that. I got 1.0 as the score with n_estimators=1000. Keep making these kinds of great videos. Thank you.

    • @codebasics
      @codebasics  5 years ago +1

      Anji, it's great that you are getting such an excellent score. Good job 👍👏

  • @usmanafridi9668
    @usmanafridi9668 3 years ago

    Best explanation of Random Forest!!!!!!

    • @codebasics
      @codebasics  3 years ago

      I am happy this was helpful to you.

  • @vijaydas2962
    @vijaydas2962 5 years ago +1

    Thanks for another post; it's really helpful. Just a question: considering that a random forest takes the majority decision from multiple decision trees, does that imply a random forest is always better than the decision tree algorithm? How do we decide when to use a decision tree versus a random forest?

  • @RustemShaimagambetov
    @RustemShaimagambetov 5 years ago +1

    Man, it's great! Your videos are the best I have ever seen about machine learning; very helpful material. I am waiting for you to make tutorials about gradient boosting and neural networks. I think you could explain them easily. Thanks!

  • @meenakshimalik7102
    @meenakshimalik7102 2 years ago

    Hi Sir, we are blessed to have found your videos on YouTube. Your videos are unmatchable. I am interested in your upcoming Python course. When can I expect the course to start?

    • @codebasics
      @codebasics  2 years ago +1

      The Python course is launching in June 2022; not sure about the exact date though.

  • @jaydhumal2610
    @jaydhumal2610 1 month ago

    I got a perfect score of 1 when I set n_estimators to 40, although the particular train/test split would also have contributed to the model's accuracy.

  • @shivamtyagi5614
    @shivamtyagi5614 4 years ago

    With the default 100 n_estimators or with 20 n_estimators, each case gives 1.0 accuracy. Well, after landing on this channel, I can feel the warmth at the tips of my fingers.

  • @jyothishp143
    @jyothishp143 5 years ago

    This is the only channel I have subscribed to.

    • @codebasics
      @codebasics  5 years ago

      J Es, thanks. I am happy to have you as a subscriber 👍😊

  • @Moukraan
    @Moukraan 3 years ago +3

    Thank you very much! This tutorial is really amazing!

  • @freecodecamp
    @freecodecamp 5 years ago +86

    This is a great series! Would you be interested in allowing us to repost it on our channel? We'll link to your channel in the description and comment section. Send me an email to discuss further: beau [at] [channelname]

    • @ShubhamSharma-to5po
      @ShubhamSharma-to5po 4 years ago +1

      mega.nz/file/LaozDBrI#iDkMIu6v-aL9fMsl-X1DETkOqnMqwptkn54Z51KINyw (like the data in this file) // help if anyone understands.

    • @ShubhamSharma-to5po
      @ShubhamSharma-to5po 4 years ago +2

      Sir, can you tell me how to plot a random forest classification with multiple independent variables? I'm so confused about that.

    • @codebasics
      @codebasics  3 years ago +31

      Yes, sure, go ahead. You can post it.

  • @talharauf3111
    @talharauf3111 2 years ago

    Sir, I have done the exercise with 100% accuracy.

  • @dickson9877
    @dickson9877 7 months ago

    I see many people saying that on the iris data they got 1.0 with 50+ estimators. I am just starting with ML, but to me 4 features in the iris data means we don't need many estimators: there are actually only 6 unique pairs of features (10 if we also count single columns), which I presume is not what's happening. Am I correct that anything beyond 6 estimators shouldn't improve the model?
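    For what it's worth, n_estimators counts whole trees rather than feature combinations: each tree is grown on a bootstrap resample of the rows, with a random subset of features considered at every split, so even with 4 features every tree can differ, and more than 6 trees can still help (with diminishing returns). A quick way to see that the trees differ (a sketch, assuming scikit-learn and the iris data):
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    X, y = load_iris(return_X_y=True)
    forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
    # each fitted tree has its own structure because of the random resampling
    print([tree.tree_.node_count for tree in forest.estimators_])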

  • @rajmohammed8134
    @rajmohammed8134 2 years ago

    Thank you for such wonderful videos. I got an accuracy score of 1 on the exercise question.

  • @James-pe3wl
    @James-pe3wl 4 years ago

    Maybe I am a bit late jumping on the train; even so, I still want to say thank you for everything you have been doing. Your videos are much better for understanding the field than the courses of top-class universities such as MIT. I have to say that you outperform all your competitors in a very simple way. As far as I know you had some problems with your health, and I hope everything is good now. Wish you good luck; stay healthy, at least for your YouTube community. ^_^

    • @codebasics
      @codebasics  4 years ago +1

      Hey James, thanks for checking on my health. You are right, I was suffering from chronic ulcerative colitis, and 2019 had been pretty rough. But guess what, I cured it using a raw vegan diet, ayurveda and homeopathy. I have been 100% all right and symptom-free for almost 10 months and am back in full force doing YouTube tutorials :)

    • @prvs2004
      @prvs2004 4 years ago

      @@codebasics Good to hear that things are working out in a positive way! Be safe, and I pray everything works well in the long run.
      Jai SriRam

  • @ashishsinha8893
    @ashishsinha8893 5 years ago +1

    It's nice to see you again, bhaiya.

  • @allahbakshsheikdawood466
    @allahbakshsheikdawood466 4 years ago

    Nice to watch your videos.. you make us understand things end to end !!

  • @pranaymitra7565
    @pranaymitra7565 3 years ago +2

    Great content!! I have a question though: shouldn't the xlabel be 'Truth' and the ylabel be 'Predicted'?

  • @shukur533
    @shukur533 8 months ago

    With train_test_split test size at 20% and random_state = 32:
    1. With the default n_estimators, the test score is 0.96.
    2. The best test score is 1.0, at n_estimators = 3.

  • @veeek8
    @veeek8 1 year ago

    You made that so simple, thank you so much!

  • @dineshjangra7413
    @dineshjangra7413 4 years ago

    Your way of teaching is very good... sir, please make a video on how to feed our own images to it... how to convert our images into a dataset like MNIST, since the real benefit comes when we can use our own images.

    • @codebasics
      @codebasics  4 years ago

      Sure I am going to add image classification tutorial.

    • @dineshjangra7413
      @dineshjangra7413 4 years ago

      @@codebasics thanks sir

  • @harshalbhoir8986
    @harshalbhoir8986 1 year ago

    This is such an awesome explanation!! Thank you so much!!!

  • @mycreations3452
    @mycreations3452 5 years ago +2

    Please upload frequently..we will wait for you

  • @VIVEK-ld3ey
    @VIVEK-ld3ey 2 years ago +1

    Sir, how do you decide the xlabel and ylabel in the heatmap?

  • @igorsmet1123
    @igorsmet1123 3 years ago

    Thank you so much for the very dynamic and clear content, with the ideal depth on the topic's details.

  • @late_nights
    @late_nights 4 years ago +1

    The default value of n_estimators changed from 10 to 100 in version 0.22 of sklearn. I got an accuracy of 95.56 with n_estimators = 10, and the same for 100.

  • @kpl_sh
    @kpl_sh 4 years ago

    Thank you sir... I got 100% accuracy with n_estimators = 90.

    • @codebasics
      @codebasics  4 years ago

      Good job Kapil, that’s a pretty good score. Thanks for working on the exercise

  • @AlvinHampton-rz2iz
    @AlvinHampton-rz2iz 1 year ago +1

    What makes you put truth on the y label and predicted on the x label?

  • @rajatbhalla1455
    @rajatbhalla1455 5 years ago +1

    Sir, you are great. Thanks for these kinds of videos. Please make more videos 😊😊😊😊

  • @aishwaryakilledar1742
    @aishwaryakilledar1742 3 years ago

    Very nice sir.... Expecting more videos 😀

  • @vaishalibhat3741
    @vaishalibhat3741 6 months ago

    1)
    test_size=0.2
    RandomForestClassifier(n_estimators=10, criterion='gini')
    model.score = 1
    2)
    RandomForestClassifier(n_estimators=10, criterion='entropy')
    model.score = 1
    3)
    test_size=0.35
    RandomForestClassifier(n_estimators=10, criterion='gini')
    model.score = 0.9811
    4)
    test_size=0.35
    RandomForestClassifier(n_estimators=10, criterion='entropy')
    model.score = 0.9811

  • @ashish-blessings
    @ashish-blessings 2 years ago

    You are amazing, brother. I really loved this. You made it so simple. Thank you so much.

  • @ousmanelom6274
    @ousmanelom6274 3 years ago

    Thank you for this tutorial. How can we visualize a random forest and a decision tree?
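    One option (a sketch, assuming matplotlib and a fitted RandomForestClassifier named model): scikit-learn can draw any single tree out of the forest.
    import matplotlib.pyplot as plt
    from sklearn.tree import plot_tree
    # estimators_ holds the individual trees; draw the first one
    plot_tree(model.estimators_[0], filled=True)
    plt.show()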

  • @vishank7
    @vishank7 4 years ago +1

    This is sooo awesome! Amazing work sir💎

  • @ahmedakmal1545
    @ahmedakmal1545 2 years ago

    I got 100% accuracy after tuning the parameters and the train/test split for the iris dataset:
    test_size=0.2, n_estimators=20, random_state=2

  • @tk1215
    @tk1215 5 years ago +1

    Amazing; I like how you explain things simply.

  • @hanfeng32
    @hanfeng32 5 years ago +2

    Very great video!!!!! Thanks

  • @GerConGdeGato
    @GerConGdeGato 4 years ago

    Awesome channel.
    I have a question though.
    To find the optimal n_estimators I made a loop that went from n_estimators = 1 up to a number of my choice (number_trees).
    But I thought that a lucky train_test_split could give a very good score to a poor model, so I made an inner loop that reruns the split, trains the model, scores it, and keeps the best and worst scores, up to a number of my choice (number_sets).
    The result is that I see absolutely no trend in the score as n_estimators changes.
    For example, with n_estimators = 3 and doing the split 5 times, the worst I get is 0.97 accuracy, which is great.
    But with n_estimators = 4, the worst I get is 0.89, which is worse.
    But then again, with n_estimators = 10 I get 0.97.
    And so on and so forth.
    My question is: why don't I see a trend in the score as n_estimators grows? I was expecting the score to go up until a certain n_estimators and then stop changing.
    CODE (YouTube doesn't allow copy-paste, so there might be a typo)
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    # X, y assumed already loaded
    number_trees = 100
    number_sets = 5
    pd.set_option("display.max_rows", None)
    results = pd.DataFrame(columns=["min_score", "max_score"])
    for i in range(1, number_trees + 1):
        modeli = RandomForestClassifier(n_estimators=i)
        min_score = 1
        max_score = 0
        for j in range(number_sets):
            X_train, X_test, y_train, y_test = train_test_split(X, y)
            modeli.fit(X_train, y_train)
            score = modeli.score(X_test, y_test)
            if score > max_score:
                max_score = score
            if score < min_score:
                min_score = score
        results.loc[i, "min_score"] = min_score
        results.loc[i, "max_score"] = max_score
    results

  • @granothon8054
    @granothon8054 2 years ago

    Excellent. Thank you.

  • @usamarehmanyousaf2010
    @usamarehmanyousaf2010 3 years ago

    Hi, I just want to ask: when we split a dataset, why do we drop the target column? That is the actual/final result (whether the row is true or false), so why do we have to drop it when splitting?

  • @sohamnavadiya992
    @sohamnavadiya992 5 years ago +2

    Amazing, man. Keep it up and share more tutorials like this.

  • @iradukundapacifique987
    @iradukundapacifique987 4 years ago

    100% accuracy on the given exercise. I used n_estimators = 1

    • @codebasics
      @codebasics  4 years ago

      That's the way to go, Iradukunda. Good job working on the exercise.

  • @Ateeq10
    @Ateeq10 7 months ago

    Good evening sir,
    I hope you are doing well.
    At n_estimators = 50 I am getting the best score.
    Thank you.

  • @izharkhankhattak
    @izharkhankhattak 3 years ago

    Nice work.

  • @RubiPandey-l6j
    @RubiPandey-l6j 11 months ago

    You are a godsend for helping me with my PhD.

    • @codebasics
      @codebasics  11 months ago

      🙌Woohoo! So glad it hit the mark for you! 😃

  • @nomanshaikhali3355
    @nomanshaikhali3355 4 years ago

    Hey, hope you're doing well! I have a query regarding the random forest algorithm: I trained a random forest with a 70/30 split, but how can I specify the prediction for 30 days? Is there any variable or parameter?
    Looking forward to hearing from you soon!
    Thank you!

  • @muhammedrajab2301
    @muhammedrajab2301 4 years ago +2

    I am not afraid of you, but I respect you!
    So I am gonna do the exercise right now!

  • @asmitakalaa
    @asmitakalaa 7 months ago

    I got a hundred percent score by using n_estimators = 50... I took the test size as 0.2!!!!

  • @leooel4650
    @leooel4650 5 years ago +1

    I can't get any better than 93.3333333% on the exercise, even with more n_estimators.

  • @anujvyas9493
    @anujvyas9493 4 years ago

    Solved the exercise problem.
    With model = RandomForestClassifier(n_estimators=10) I got an accuracy of 0.96667,
    and with model = RandomForestClassifier(n_estimators=20) I got 1.0.

    • @codebasics
      @codebasics  4 years ago +1

      Anuj, good job 👍👏👌

    • @anujvyas9493
      @anujvyas9493 4 years ago

      @@codebasics Thanks, sir! It's all because of you 😊

  • @HarshithaMortha
    @HarshithaMortha 5 years ago +1

    Thank you so much for the tutorials, sir. You have made learning machine learning easy for me. Can you please make tutorials on chatbots using Python?
    Thank you

    • @codebasics
      @codebasics  5 years ago +2

      Hey Harshitha, thanks for your kind words of appreciation, and sure, I will note down the topic you suggested 👍

  • @boooringlearning
    @boooringlearning 3 years ago

    excellent lesson!

  • @ericwr4965
    @ericwr4965 4 years ago +1

    Thank you so much. I needed some help with this classifier for my data set, and this helped a lot.

  • @bandhammanikanta1664
    @bandhammanikanta1664 4 years ago

    On Iris data exercise,
    best_score = 0.95
    best_parameters = {'criterion': 'gini', 'n_estimators': 95}

  • @fahadabdullah510
    @fahadabdullah510 2 years ago

    I got a 1.0 score on my training set as well as my test set by setting the number of trees to 5 and the criterion to 'gini'.

  • @supra20000000
    @supra20000000 2 years ago

    The R2 we got is for the test set (R2_test); what about the model's R2 on the training data, generally termed R2_train?

  • @erinwolf1563
    @erinwolf1563 5 years ago

    Thanks a lot bro, great videos 👍👍👍... Where can I get more exercises for machine learning?

  • @arijitRC473
    @arijitRC473 5 years ago +3

    Result of the exercise:
    the score is always 96.66 percent;
    if I change or increase n_estimators, the score does not change.

    • @IntegralKing
      @IntegralKing 4 years ago

      Are you re-running the fit? The fit doesn't automatically rerun after you change the parameters.

  • @snehasneha9290
    @snehasneha9290 4 years ago

    Sir, suppose we consider 4 decision trees, and 2 trees give one output while the other 2 give a different output. Which output is taken when both sides have an equal majority? Please clarify this doubt.
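    For what it's worth, scikit-learn's RandomForestClassifier does not take a hard majority vote: it averages predict_proba across all trees and predicts the class with the highest mean probability, so a 2-vs-2 split is resolved by the averaged probabilities. A small sketch of that rule (illustrative, assuming a fitted forest named model and test data X_test):
    import numpy as np
    # mean of the per-tree class probabilities, shape (n_samples, n_classes)
    avg_proba = np.mean([t.predict_proba(X_test) for t in model.estimators_], axis=0)
    labels = model.classes_[np.argmax(avg_proba, axis=1)]  # matches model.predict(X_test)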

  • @austossen
    @austossen 4 years ago

    First of all, your video is amazing; it simplifies image analysis to a 15-minute task. May I ask you a question regarding brain tumor data? I want to apply a random forest to the BraTS data set. I have a 4D array with 4 modalities (flair, t1, t1ce, t2) => [modalities, image slices, x-plane, y-plane], and the labels are just 2D. Your video is amazing, but I don't know what to do with these 2D labels, because the target variable in your video is 1D. Might you be able to give me an idea of how to deal with my labels, or how to approach this problem generally?

  • @sonikusum3
    @sonikusum3 1 year ago

    I am new to machine learning. Why am I not getting the same numbers as you did for the confusion matrix or the scores? I used exactly the same code as in the video.

  • @flamboyantperson5936
    @flamboyantperson5936 5 years ago +1

    Hey, your videos are great. But why do you disappear in the middle of making videos and then come back after a long time?

  • @mvcutube
    @mvcutube 3 years ago

    Very nice tutorial

  • @fezkhanna6900
    @fezkhanna6900 1 year ago

    Fantastic!

  • @tharunkumar8507
    @tharunkumar8507 2 years ago

    YOU NEED MORE AND MORE VIEWS SIR