Self-training with Noisy Student improves ImageNet classification (Paper Explained)

  • Published: 31 Jan 2025

Comments • 46

  • @bluel1ng 4 years ago +13

    24:15 I think it is very important that they reject images with high-entropy soft pseudo-labels (= low model confidence) and only use the most confident images per class (>0.3 probability). Images the model is confident about improve generalization the most, since they get classified correctly and then extend the class region through noise and augmentation, especially when previously unseen images lie at the "fringe" of the existing training set or closer to the decision boundary than other samples. Since the whole input space is always mapped to class probabilities, a region can be assigned to a different/wrong class even though the model has seen little evidence there; through new examples that space can be "conquered" by the correct class. And of course each correctly classified new image also yields new augmented views, which amplifies the effect.
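
    A minimal sketch of that filtering step in PyTorch (toy tensors, my own illustration; the paper additionally balances the number of images per class):

        import torch
        import torch.nn.functional as F

        # Hypothetical teacher logits for a batch of unlabeled images.
        logits = torch.randn(8, 10)              # toy: 8 images, 10 classes
        probs = F.softmax(logits, dim=-1)        # soft pseudo-labels

        # Low entropy <=> high confidence; the paper's proxy is simply
        # "max class probability > 0.3" (per-class balancing omitted here).
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        confidence = probs.max(dim=-1).values
        keep = confidence > 0.3                  # reject high-entropy images
        print(f"kept {int(keep.sum())} of {len(keep)} images")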

    • @emilzakirov5173 4 years ago

      I think the problem here is that they use softmax. If you used sigmoid, then for unconfident predictions the model would simply output near-zero probabilities for every class. That would remove any need for rejecting images.
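
      A toy illustration of that difference (my own example, not from the paper): softmax must distribute probability mass summing to 1, while independent sigmoids can all stay near zero when the model is unsure:

          import torch

          logits = torch.full((5,), -3.0)        # model unsure about every class
          print(torch.softmax(logits, dim=0))    # forced to sum to 1: 0.2 each
          print(torch.sigmoid(logits))           # all ~0.047: no confident class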

  • @mohamedbahabenticha8624 3 years ago

    Your explanation is amazing and very clear for a very interesting work! Inspiring for my own work!

  • @omarsilva924 4 years ago +1

    Wow! What a great analysis. Thank you

  • @AdamRaudonis 4 years ago +1

    Super great explanation!!!

  • @herp_derpingson 4 years ago +6

    11:56 Never heard of stochastic depth before. Interesting.

    After the pandemic is over, have you considered giving talks at conferences to gain popularity?

    • @YannicKilcher 4 years ago +1

      Yea I don't think conferences will have me :D

    • @herp_derpingson 4 years ago

      @@YannicKilcher It's a numbers game. Keep swiping right.

  • @sanderbos4243 4 years ago +6

    39:12 I'd love to see a video on minima distributions :)

  • @veedrac 4 years ago +1

    This is one of those papers that makes so much sense that once they tell you the method, the results might as well be implicit.

  • @MrjbushM 4 years ago

    Crystal clear explanation, thanks!!!

  • @mehribaniasadi6027 4 years ago +1

    Thanks, great explanation. I have a question though.
    At 14:40, when the steps of Algorithm 1 (the Noisy Student method) are explained, it goes like this: step 1 is to train a noised teacher, but then in step 2, for labelling the unlabelled data, they use a non-noised teacher for inference.
    So I don't get why they train a noised teacher in step 1 when they eventually use a non-noised teacher for inference.
    I get that in the end the final network is noised, but during the intermediate steps they use non-noised teachers for inference, so how are the noised teachers trained in those intermediate iterations actually used?

    • @YannicKilcher 4 years ago

      It's only used via the labels it outputs.
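
      Putting the steps together as a rough sketch (the helpers below are hypothetical stand-ins, not the paper's code): the teacher is trained with noise, pseudo-labeling runs without noise, and each new student is trained with noise again:

          # Sketch of Algorithm 1 (Noisy Student), under the assumptions above.
          def train_noised(model, data):
              # Real version: train with dropout, stochastic depth, RandAugment.
              return model

          def predict_clean(model, x):
              # Real version: a plain forward pass, no noise at inference time.
              return 0  # placeholder pseudo-label

          def noisy_student(labeled, unlabeled, models, iterations=3):
              teacher = train_noised(models[0], labeled)  # step 1: noised teacher
              for i in range(iterations):
                  # Step 2: the teacher is used only through the labels it
                  # outputs, so inference is run clean (un-noised).
                  pseudo = [(x, predict_clean(teacher, x)) for x in unlabeled]
                  # Steps 3-4: train an equal-or-larger student WITH noise on
                  # labeled + pseudo-labeled data; it becomes the next teacher.
                  teacher = train_noised(models[i + 1], labeled + pseudo)
              return teacher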

  • @alceubissoto 4 years ago

    Thanks for the amazing explanation!

  • @blanamaxima 4 years ago

    I would not say I am surprised after the double descent paper... I would have thought someone had done this already.

  • @JoaoVitor-mf8iq 4 years ago

    That deep-ensembles paper could be used here (38:40), for the multiple local minima that are almost as good as the global minimum.

  • @roohollakhorrambakht8104 4 years ago +1

    Filtering the labels based on the confidence level of the model is a good idea, but the entropy of the predicted distribution is not necessarily a good indicator of that. This is because the probability outputs of the classifier are not calibrated and only express confidence relative to the other labels. There are many papers on ANN uncertainty estimation, but I find this one from Kendall to be a good example: arxiv.org/pdf/1703.04977.pdf
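
    To make the calibration point concrete, a toy example of my own: temperature-scaling the logits changes the entropy of the output distribution without changing the prediction, so a raw entropy threshold is only meaningful relative to the model's calibration:

        import torch
        import torch.nn.functional as F

        logits = torch.tensor([2.0, 1.0, 0.5])
        for T in (0.5, 1.0, 2.0):              # temperature rescaling
            p = F.softmax(logits / T, dim=0)
            H = -(p * p.log()).sum()
            # Same argmax every time, very different entropy/"confidence".
            print(f"T={T}: argmax={p.argmax().item()}, entropy={H.item():.3f}")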

  • @kanakraj3198 3 years ago

    During the first training, the "real" teacher model (EfficientNet-B5) was trained with augmentations, dropout, and stochastic depth, so the model becomes "noisy", but for inference it was said to use a "clean", non-noised teacher. Then why did we train with noise in the first place?

  • @Fortnite_king954 4 years ago

    Amazing review. Keep going

  • @BanditZA 4 years ago +1

    If it's just due to augmentation and model size, why not just augment the data the teacher trains on and increase the size of the teacher model? Is there a need to introduce the "student"?

    • @YannicKilcher 4 years ago

      It seems like the distillation itself is important, too

  • @aa-xn5hc 4 years ago

    Great great channel....

  • @hafezfarazi5513 4 years ago +1

    @11:22 You explained DropConnect instead of Dropout!

  • @samanthaqiu3416 4 years ago +1

    @Yannic please consider making a video on RealNVP/NICE and generative flows, and on what this fetish of having tractable log-likelihoods is about.

  • @karanjeswani21 4 years ago +1

    With a PGD attack, the model is not dead. It's still better than random. Random classification accuracy for 1000 classes would be 0.1%.

  • @MrjbushM 4 years ago

    Cool video!!!!!!

  • @pranshurastogi1130 4 years ago

    Thanks, now I have some new tricks up my sleeve.

  • @muzammilaziz9979 4 years ago

    I personally think this paper is more hacking than actual novel contribution. It's researcher bias that made them push the idea more and more. The hacks seem to have had more to do with getting the SOTA than with the main idea of the paper.

  • @cameron4814 4 years ago +1

    @11:40 "depth dropout"??? I think this paper describes it: users.cecs.anu.edu.au/~sgould/papers/dicta16-depthdropout.pdf

  • @Alex-ms1yd 4 years ago

    At first it sounds quite counter-intuitive that this might work; I would expect the student to become more confident about the teacher's mistakes. But thinking it over, maybe the idea is that by using soft pseudo-labels with big batch sizes we are kind of bumping the student's top-1 closer to the teacher's top-5, and teacher mistakes are balanced out by other valid data points.
    The paper itself leaves mixed feelings. On one side, all those tricks distract from the main idea and its evaluation; on the other, that is what they need to do to beat SOTA, because everyone does it. But they tried their best to minimize this effect with many baseline comparisons.
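
    For the top-1-vs-top-5 intuition, this is roughly what training on soft pseudo-labels looks like (toy tensors, not the paper's code): the student is fit to the teacher's full output distribution, so probability mass the teacher spreads over its top 5 classes shapes the student directly:

        import torch
        import torch.nn.functional as F

        teacher_probs = F.softmax(torch.randn(4, 1000), dim=-1)   # soft pseudo-labels
        student_logits = torch.randn(4, 1000, requires_grad=True)

        # Cross-entropy against the whole soft distribution, not a hard argmax:
        loss = -(teacher_probs * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
        loss.backward()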

  • @tripzero0 4 years ago

    Trying this now: resnet50 -> efficientnetB2 -> efficientnetB7. The only problem is that it's difficult to increase the batch size as the model size increases :(.

    • @mkamp 4 years ago

      Because of your GPU memory limitations? Have you considered gradient accumulation?

    • @tripzero0 4 years ago +1

      @@mkamp didn't know about them until now. Thanks!
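
      For anyone hitting the same memory wall, a minimal gradient-accumulation sketch (toy model and data, my own illustration): gradients from several micro-batches are summed before a single optimizer step, emulating a larger batch in the same GPU memory:

          import torch
          import torch.nn.functional as F

          # Toy stand-ins; in practice these are your real model and dataloader.
          model = torch.nn.Linear(32, 10)
          optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
          loader = [(torch.randn(4, 32), torch.randint(0, 10, (4,)))
                    for _ in range(16)]

          accum_steps = 8                  # effective batch = 8 * micro-batch
          optimizer.zero_grad()
          for step, (x, y) in enumerate(loader):
              loss = F.cross_entropy(model(x), y)
              (loss / accum_steps).backward()  # accumulate scaled gradients
              if (step + 1) % accum_steps == 0:
                  optimizer.step()             # one update per accum_steps batches
                  optimizer.zero_grad()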

    • @tripzero0 4 years ago

      I think this method somewhat depends on having a large-ish "good" initial dataset for the first teacher. I got my resnet50 network to 0.64 recall and 0.84 precision on a multilabel dataset, and the results were still very poor. Relabeling at a 0.8 threshold produces only one or two labels per image to train students from, so a lot of labels get missed from there on. The certainty of getting those few labels right increases, but I'm not sure that trade-off is worth it.

  • @dmitrysamoylenko6775 4 years ago

    Basically they achieve more precise learning on smaller data, and without labels, only from the teacher. Interesting.

  • @shivamjalotra7919 4 years ago

    Great

  • @impolitevegan3179 4 years ago

    Correct me if I'm wrong, but if you trained a bigger model with the same augmentation techniques on ImageNet and performed the same trick described in the paper, you probably wouldn't get a much better model than the original, right? I feel like it's unfair to have a non-noised teacher and then say the student outperformed the teacher.

    • @YannicKilcher 4 years ago

      Maybe. It's worth a try

    • @impolitevegan3179 4 years ago

      @@YannicKilcher Sure, just need to get a few dozen GPUs to train on 130M images.

  • @michamartyniak9255 4 years ago

    Isn't it already known as Active Learning?

    • @arkasaha4412 4 years ago

      Active learning involves a human in the loop, doesn't it?

  • @thuyennguyenhoang9473 4 years ago

    Top-2 in the classification ranking; top-1 is FixEfficientNet-L2.

  • @48956l 3 years ago

    This seems insanely resource-intensive lol

  • @mehermanoj45 4 years ago

    1st, thanks

  • @Ruhgtfo 4 years ago

    M pretty sure m silent