13.4.4 Sequential Feature Selection (L13: Feature Selection)

  • Published: Dec 18, 2024

Comments • 33

  • @leoquentin2683
    @leoquentin2683 1 year ago +2

    At 9:20, wouldn't there be a total of 29 subsets (each containing 28 variables), and not 28 subsets? Each subset consists of the entire dataset minus one feature. So there would in total be one subset for each feature in the original dataset, right? That makes 29 subsets each consisting of 28 features. Am I missing something, or is this an error?
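
A quick counting sketch supports this reading (a minimal illustration, not taken from the lecture material): removing one feature at a time from a 29-feature set yields exactly one candidate subset per feature, i.e., 29 subsets of 28 features each.

```python
from itertools import combinations

n_features = 29

# In the first backward-selection round, each candidate subset is the full
# feature set minus exactly one feature, i.e., every combination of size 28.
candidate_subsets = list(combinations(range(n_features), n_features - 1))

print(len(candidate_subsets))     # 29 candidate subsets
print(len(candidate_subsets[0]))  # each containing 28 features
```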

  • @djsocialanxiety1664
    @djsocialanxiety1664 20 days ago +1

    Sebastian, you are a blessing

  • @rezajalali8684
    @rezajalali8684 2 years ago +1

    I'm using the product of your hard work, mlxtend, in my research, and it's awesome to have your lecture along with the package; can't ask for more! Please continue the series.
    Big thanks from the water sciences community!

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +2

      Thanks for the kind words, glad to hear that both are useful to you (and the water sciences community!). I am hoping I'll find more time to make additional videos in the future for sure!

  • @TrainingDay2001
    @TrainingDay2001 2 years ago +4

    I appreciate the video, your explanation style and your mlxtend package. Thank you very much for all the work you do!!

  • @nguyenhuuuc2311
    @nguyenhuuuc2311 2 years ago +3

    High-end video quality & thorough content 💙 Really enjoy your lecture 👨‍💻 Thanks for posting, Dr. Sebastian!

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +1

      Thanks a lot, I am really happy to hear that! I spent the last couple of days learning Final Cut Pro and how to improve the audio quality (removing background noise and room echoes). Glad to hear it was time well spent :)

  • @guliteshabaeva8922
    @guliteshabaeva8922 1 year ago

    This video is really helpful. Highly recommended!

  • @oshanaiddidissanayake2840
    @oshanaiddidissanayake2840 2 years ago +1

    In the sequential backward selection (time = 10:55), stage 02, though we remove 1 feature, making the feature count 28, we still get 29 feature subsets, right? (It says 28.) Can you help me clarify this?

    • @SebastianRaschka
      @SebastianRaschka  2 years ago

      Hm, I am not quite sure I understand the question correctly. So if we have 29 features, we have feature subsets of 28 features each after the first round.

    • @hahndongwoo2433
      @hahndongwoo2433 1 year ago

      @@SebastianRaschka I have the same question.
      If we have 5 features, the subsets will be [x,2,3,4,5], [1,x,3,4,5], [1,2,x,4,5], [1,2,3,x,5], [1,2,3,4,x]. We can see that we get 5 subsets, and each subset has 4 features.
      Therefore, if we have 29 features at the beginning, 29 subsets would be obtained, while the number of features in each subset is 28.
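
To make the counting above concrete, here is a minimal sketch of one backward-selection round; the synthetic data and the logistic-regression scorer are illustrative assumptions, not the exact setup from the lecture. Each of the 29 features is left out once, so 29 candidate subsets of 28 features each are evaluated.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Toy data standing in for a 29-feature dataset.
X, y = make_classification(n_samples=200, n_features=29, random_state=1)
clf = LogisticRegression(max_iter=1000)

remaining = list(range(X.shape[1]))

# One round of sequential backward selection: leave out each feature in turn,
# score the resulting 28-feature subset, and remove the feature whose absence
# yields the best score.
scores = {}
for f in remaining:
    subset = [i for i in remaining if i != f]  # 28 features
    scores[f] = cross_val_score(clf, X[:, subset], y, cv=5).mean()

print(len(scores))  # 29 candidate subsets were evaluated
feature_to_remove = max(scores, key=scores.get)
remaining.remove(feature_to_remove)
print(len(remaining))  # 28 features remain for the next round
```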

  • @juliangermek4843
    @juliangermek4843 1 year ago

    During Sequential Forward Selection (around 16:30): Do you also add a new feature to the set if the performance is worse than without any new feature?

  • @nitishkumar7952
    @nitishkumar7952 2 years ago +1

    Great explanation, very clear. Keep going!

  • @sf88-r6l
    @sf88-r6l 2 years ago +2

    Hi, thank you for the excellent explanation.
    One question, please: in sequential floating forward selection, when does the floating round happen?
    Does the algorithm do it every round after adding a new feature, or does it do it randomly?

  • @russwedemeyer5602
    @russwedemeyer5602 2 years ago +1

    Thanks so much for the videos. Great presentation.
    I believe in your Feature Permutation Importance video you stated that the process was model agnostic.
    Is SFS also model agnostic? I would like to use this with an LSTM model but am not sure if it would be a correct application.

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +1

      Yes, SFS is also model agnostic. My implementation at rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/ is only scikit-learn compatible at the moment, but the concept itself is model agnostic. (A minimal usage sketch follows at the end of this thread.)

    • @russwedemeyer5602
      @russwedemeyer5602 2 years ago

      @@SebastianRaschka I've attempted several different implementations and done some research, but it always seems that these wrapper methods only work with 2D arrays and not with the 3D arrays an LSTM expects.
      Is there an existing feature selection process, other than correlation, that you like best for LSTMs? Thanks

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +1

      @@russwedemeyer5602 I wish I had a good answer for that, but I have not tried something like this yet.
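
Following up on the reply above about the mlxtend implementation, here is a minimal usage sketch of SequentialFeatureSelector with a scikit-learn estimator; the dataset, estimator, and parameter values are illustrative assumptions rather than the setup shown in the video.

```python
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.datasets import load_wine
from sklearn.neighbors import KNeighborsClassifier

# Any scikit-learn-compatible estimator works here; KNN is just an example.
X, y = load_wine(return_X_y=True)
knn = KNeighborsClassifier(n_neighbors=3)

sfs = SFS(knn,
          k_features=5,      # stop once 5 features have been selected
          forward=True,      # sequential forward selection
          floating=False,    # set True for the floating variant (SFFS)
          scoring='accuracy',
          cv=5)

sfs = sfs.fit(X, y)
print(sfs.k_feature_idx_)    # indices of the selected features
print(sfs.k_score_)          # cross-validated score of that subset
```

Because the wrapper only interacts with the model through fitting and scoring inside cross-validation, the concept carries over to any estimator exposing that interface, which is what makes the approach model agnostic.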

  • @rolfjohansen5376
    @rolfjohansen5376 2 years ago +1

    What if you have a mix of several categorical and continuous variables?

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +1

      That's a good question. In this case, you can one-hot encode the categorical variables. And then, optionally, you can treat each set of binary variables (those that belong to the original categorical variable) as a fixed feature set (a minimal sketch follows at the end of this thread). I have an example for that at the bottom here: rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/

    • @rolfjohansen5376
      @rolfjohansen5376 2 years ago

      @Sebastian Raschka Thank you very much, I will certainly read up on this. I attacked this problem (not sure if this was the most optimal approach) by separating the categorical and continuous variables (the key being to scan them separately):
      First scanning through the categoricals only (removing the less significant ones), then doing the same for the continuous variables, using in both cases scipy.stats.pointbiserialr.
      What do you think about my approach? Thanks
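
A minimal sketch of the one-hot encoding approach described in the reply above, assuming a recent mlxtend version in which SequentialFeatureSelector accepts a feature_groups argument for keeping the dummy columns of one categorical variable together (the linked user guide shows the canonical example); the data frame here is hypothetical.

```python
import pandas as pd
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
from sklearn.linear_model import LogisticRegression

# Hypothetical mixed data: one categorical column plus two continuous ones.
df = pd.DataFrame({
    'color':  ['red', 'blue', 'green', 'red', 'blue', 'green'] * 20,
    'width':  [float(i) for i in range(120)],
    'height': [float(i % 7) for i in range(120)],
    'label':  [0, 1] * 60,
})

# One-hot encode the categorical column; pandas appends the dummy columns
# after the untouched continuous columns: width, height, color_blue,
# color_green, color_red.
X = pd.get_dummies(df[['color', 'width', 'height']], columns=['color'], dtype=float)
y = df['label']

# Treat the three dummy columns as one group so they are added or removed as
# a unit (feature_groups is assumed to be available; see the user guide for
# how k_features interacts with grouped features).
sfs = SFS(LogisticRegression(max_iter=1000),
          k_features=2,
          forward=True,
          scoring='accuracy',
          cv=3,
          feature_groups=[[0], [1], [2, 3, 4]])

sfs = sfs.fit(X.values, y.values)
print(sfs.k_feature_idx_)
```

For the screening approach described in the comment directly above, the point-biserial correlation between a binary (e.g., one-hot) feature and a continuous variable can be computed with scipy; the toy arrays are again hypothetical.

```python
from scipy.stats import pointbiserialr

binary_feature = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
continuous_var = [2.1, 4.8, 5.0, 1.9, 4.5, 2.3, 2.0, 5.2, 4.9, 2.2]

r, p_value = pointbiserialr(binary_feature, continuous_var)
print(r, p_value)  # correlation coefficient and its p-value
```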

  • @abhishek-shrm
    @abhishek-shrm 2 years ago +1

    Hi Sebastian, thank you so much for the videos. I really loved watching them. Just a few questions on feature selection techniques.
    1. How do I pick one of the wrapper methods? I mean, how do I select a feature selection technique (wrapper)? 😅
    2. Why do I even have to use wrapper methods? Can't I simply put all the features into a random forest model, use its feature importances to select features, and train a new model with the selected features? It seems a lot simpler and faster to me than training 10-20 different models in any wrapper method.

    • @SebastianRaschka
      @SebastianRaschka  2 years ago +1

      Glad you found them useful! Sure, if you use a random forest, then you can use the random forest feature importance for selection (a short sketch follows at the end of this thread). However, the advantage of wrapper methods is that they work with any model, not just random forests.

    • @abhishek-shrm
      @abhishek-shrm 2 years ago

      @@SebastianRaschka Thanks for replying. Just a follow-up question on feature selection.
      Let's say I'm working on a binary classification problem on tabular data. Now, how do I decide whether I should use Forward Selection, Recursive Feature Elimination, Permutation Importance, or simple feature importance from a Random Forest? There are multiple ways to do the same thing, but the end results won't be the same. We can end up with different combinations of features from different approaches. So, should I try all of them or select only one? If the latter, then how do I pick one in this scenario?
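
To make the reply above concrete, here is a minimal sketch of the random-forest-importance route mentioned there; the dataset and the top-10 cutoff are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative tabular binary-classification data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

forest = RandomForestClassifier(n_estimators=200, random_state=1)
forest.fit(X_train, y_train)

# Rank features by impurity-based importance and keep, say, the top 10.
top_idx = np.argsort(forest.feature_importances_)[::-1][:10]

# Retrain a new model on the reduced feature set.
forest_small = RandomForestClassifier(n_estimators=200, random_state=1)
forest_small.fit(X_train[:, top_idx], y_train)
print(forest_small.score(X_test[:, top_idx], y_test))
```

The trade-off from the reply still applies: this route is fast, but it is tied to the random forest's own importance scores, whereas a wrapper such as SFS evaluates candidate subsets with whichever model you actually intend to deploy.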

  • @ViriatoII
    @ViriatoII 2 years ago +2

    Very good, thank you

  • @Lost4llen
    @Lost4llen 2 years ago

    Excellent video, thank you!

  • @chandrimadebnath362
    @chandrimadebnath362 9 months ago

    It's a bit of an out-of-the-box question, but if we treat feature selection as a multi-objective optimization problem (one of the wrapper-based approaches), where most people use objectives such as sensitivity and specificity, or accuracy and the number of features, what other objectives might we look into? Thank you in advance, sir.

  • @Jay-eh5rw
    @Jay-eh5rw 2 years ago +1

    Thank you so much, sir!!

  • @riptorforever2
    @riptorforever2 1 year ago

    Great lesson, thanks!!