How to handle Uncertainty in Deep Learning #2.1

  • Published: 11 Sep 2024

Comments • 14

  • @nguyenxuanthanh6988 • 1 year ago +2

    Brilliant!!! These videos help me a lot in understanding uncertainty. Could you make more videos regarding this topic? Thank you so much.

  • @salonikothari7265 • 1 year ago

    Clear explanation with awesome visuals and examples! Thank you😊

  • @pawezawistowski4437 • 2 years ago +1

    Great explanation!
    As I understand it, the models mentioned here are able to assess the uncertainty stemming from the approximation of the parameters (weights). What remains, however, is the uncertainty about the overall model architecture, which might be suboptimal, right?

    • @DeepFindr • 2 years ago +1

      Thanks!
      Yes, this is typically called structural uncertainty. I think it's the most complex of all, because you need to express uncertainty over the hypothesis/function space, which is huge.
      It is also closely related to structure learning. I've seen some papers in that direction, for example "Measuring Uncertainty through Bayesian Learning of DNN Structure".
      Thanks for the hint! (A rough sketch of one way to approximate this follows below.)
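
      One illustrative way to approximate structural uncertainty is to ensemble over different architectures rather than only different weight initializations. This is a minimal sketch assuming PyTorch; the architectures and dimensions are made up for the example and are not from the video:

      # Minimal sketch (assumes PyTorch): ensemble over *different
      # architectures*; member disagreement serves as a crude proxy
      # for structural uncertainty. All sizes are illustrative.
      import torch
      import torch.nn as nn

      def make_mlp(hidden_sizes, in_dim=8, out_dim=1):
          layers, prev = [], in_dim
          for h in hidden_sizes:
              layers += [nn.Linear(prev, h), nn.ReLU()]
              prev = h
          layers.append(nn.Linear(prev, out_dim))
          return nn.Sequential(*layers)

      # Each member represents a different hypothesis (architecture).
      architectures = [[32], [64, 64], [128, 64, 32]]
      ensemble = [make_mlp(a) for a in architectures]

      # ... train each member on the same data ...

      x = torch.randn(16, 8)
      with torch.no_grad():
          preds = torch.stack([m(x) for m in ensemble])  # (members, batch, 1)

      mean = preds.mean(dim=0)  # ensemble prediction
      var = preds.var(dim=0)    # disagreement across architectures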

  • @navyanthkusampudi6605 • 1 year ago

    Thanks for making this very clear :)

  • @torstenschindler1965 • 2 years ago +1

    Nice explanation. Similar to the lecture “MIT 6.S191: Evidential Deep Learning and Uncertainty” by Alexander Amini.
    Can you comment on the uncertainty in the interpretability of neural networks?

    • @DeepFindr • 2 years ago +1

      Uncertainty quantification and interpretability are closely related in my eyes. If the model is uncertain about a prediction (and we are able to measure this), then the interpretation is also less reliable for that specific input.
      Some packages, for example InterpretML, also report confidence intervals for feature attributions, which captures the aleatoric uncertainty. But so far I've seen no examples that consider epistemic uncertainty (a sketch of one way to do this follows below).
      I believe all interpretability techniques should also be uncertainty-aware. This is also discussed in this paper: arxiv.org/abs/2105.11828
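
      As an illustration, here is a minimal sketch of an epistemic-uncertainty-aware attribution, assuming PyTorch: a gradient-times-input attribution is repeated under MC dropout, and the spread across samples flags unreliable attributions. The model and the choice of attribution method are made up for the example:

      # Minimal sketch (assumes PyTorch): repeat a gradient-x-input
      # attribution under MC dropout; the per-feature spread indicates
      # how reliable the attribution is. The model is illustrative.
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.2),
          nn.Linear(64, 1),
      )
      model.train()  # keep dropout active at inference (MC dropout)

      x = torch.randn(1, 10, requires_grad=True)

      samples = []
      for _ in range(50):  # 50 stochastic forward passes
          x.grad = None
          model(x).sum().backward()
          samples.append((x.grad * x).detach())  # gradient x input

      attr = torch.stack(samples)   # (samples, 1, features)
      attr_mean = attr.mean(dim=0)  # point attribution
      attr_std = attr.std(dim=0)    # epistemic spread: a large std means
                                    # the attribution is less trustworthy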

  • @hizircanbayram9898 • 2 years ago

    Great video and references, thanks! Could you add a list of the tutorials you mentioned to the description? Throughout this series you shared great tutorials that teach the first principles of Bayesian neural networks, MLE, etc. It's hard to find such rich content. If you have more, please share.

    • @DeepFindr • 2 years ago +1

      Hi! I don't have a specific list; I read through many resources. By far the best ones are gathered in the description, but there are certainly many others. I'll consider this for the next videos :)

  • @deepTh00ught • 10 months ago

    omg super useful video thanks a lot!!!

  • @felipemello1151 • 2 years ago

    great video, thanks for sharing!

  • @jonimatix • 2 years ago

    It would be interesting to see how the methods compare to each other on a couple of datasets, and how the output of such models is used/displayed.

    • @DeepFindr • 2 years ago +1

      The next video has some details on this :) but only on one dataset.

  • @hassaannaeem4374 • 2 years ago

    Awesome vids.