Uncertainty Quantification and Deep Learning ǀ Elise Jennings, Argonne National Laboratory

  • Published: 27 Oct 2024

Comments • 16

  • @EigenA
    @EigenA 1 year ago +4

    Love how she handled the questions in the middle of the presentation. Great work on the research too!

  • @KarriemPerry
    @KarriemPerry 1 year ago

    Outstanding presentation!!

  • @jiongwang7645
    @jiongwang7645 8 months ago

    At around 10:00, last line: that should be an integration over theta, correct?

  • @alexandterfst6532
    @alexandterfst6532 3 years ago +2

    That was an excellent explanation.

  • @a2002
    @a2002 2 years ago +2

    Great presentation. Can we get a copy of the code or the github link? Thank you

  • @corentink3887
    @corentink3887 3 years ago +4

    Good presentation. Do we have access to the code?

  • @nickrhee7178
    @nickrhee7178 9 months ago

    I guess the size of the uncertainty will depend on the dropout rate. How can I determine the optimal dropout rate?
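
    The dependence the commenter guesses at is easy to demonstrate. Below is a minimal NumPy sketch of Monte Carlo dropout (dropout left active at test time) on a hypothetical pre-trained one-layer network; the weights are made up for illustration and are not from the talk. The predictive standard deviation grows with the dropout rate, which is why the rate is usually tuned (e.g. by grid search on held-out predictive log-likelihood) rather than chosen a priori.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical fixed weights standing in for a trained network
    # (in practice these come from training with dropout enabled).
    W1 = rng.normal(size=(1, 64))
    W2 = rng.normal(size=(64, 1)) / 8.0

    def mc_dropout_predict(x, p_drop, n_samples=2000):
        """MC dropout: sample many random dropout masks at test time
        and summarize the spread of the resulting predictions."""
        h = np.tanh(x @ W1)                      # hidden activations, shape (1, 64)
        preds = []
        for _ in range(n_samples):
            mask = rng.random(h.shape) > p_drop  # Bernoulli keep-mask
            h_d = h * mask / (1.0 - p_drop)      # inverted-dropout scaling
            preds.append((h_d @ W2).item())
        preds = np.array(preds)
        return preds.mean(), preds.std()         # predictive mean and uncertainty

    x = np.array([[0.5]])
    for p in (0.1, 0.3, 0.5):
        mu, sigma = mc_dropout_predict(x, p)
        print(f"p_drop={p:.1f}  mean={mu:+.3f}  std={sigma:.3f}")
    ```

    Running this shows the predictive std increasing monotonically with p_drop, confirming the commenter's intuition that the reported uncertainty is partly a modeling choice.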

  • @saderick52
    @saderick52 11 months ago

    I feel there is a big gap between the lecture and the audience. Variational inference is a pretty complicated process by itself; it's difficult to introduce BNNs without talking about how variational inference works.
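
    For readers left behind at the same point, the core identity of variational inference (standard textbook material, not from the talk) is that the log evidence splits into a tractable lower bound plus a KL gap:

    ```latex
    \log p(\mathcal{D})
      = \underbrace{\mathbb{E}_{q(\theta)}\!\big[\log p(\mathcal{D}\mid\theta)\big]
        - \mathrm{KL}\!\big(q(\theta)\,\|\,p(\theta)\big)}_{\text{ELBO}}
      \;+\; \mathrm{KL}\!\big(q(\theta)\,\|\,p(\theta\mid\mathcal{D})\big)
    ```

    Since the last KL term is non-negative, maximizing the ELBO over the variational family q(θ) simultaneously tightens the bound and pushes q(θ) toward the true posterior p(θ|D); that optimization is what trains a Bayesian neural network in practice.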

  • @ivotavares6576
    @ivotavares6576 2 years ago

    This was a really interesting presentation!!!

  • @masisgroupmarinesoftintell3299
    @masisgroupmarinesoftintell3299 3 years ago +1

    How do you interpret the uncertainties in the predictions?

  • @michaelsprinzl9045
    @michaelsprinzl9045 2 years ago

    "How do you parameterize a distribution?" Answer: "Like you parameterize every distribution." OK, I got it.

  • @charilaosmylonas5046
    @charilaosmylonas5046 3 years ago +2

    13:31 - It's a really interesting mistake that she mixes up "Laplace" (the distribution she meant) with the Poisson distribution! It has to do with PDEs: the Laplace PDE is the homogeneous version of the Poisson PDE! hehe (I could easily make the same mistake)

    • @siddhantrai7529
      @siddhantrai7529 2 years ago

      Hi Charilaos,
      Could you please explain how L1 corresponds to Poisson, as she mentioned, and how Laplace, as you mentioned, corrects this? I understand why L2 and the normal distribution go together, but for L1 I feel a bit clueless. I would really appreciate your guidance on this. Thank you.

    • @charilaosmylonas5046
      @charilaosmylonas5046 2 years ago +2

      @@siddhantrai7529 Check any reference on "Bayesian interpretation of regularization" - I answered before but the comment seems to have disappeared for some reason! Also, note the exp(-|X|^2) dependence in the Gaussian PDF versus the exp(-|X|^1) dependence in the Laplace (not Poisson!) PDF. There is a "Poisson" distribution, but it's not relevant to L1 regularization! She makes an honest mistake because of the connection between Poisson and Laplace in the diffusion PDEs! (There is also a Poisson and a Laplace PDE - that's what my comment was about!)

    • @siddhantrai7529
      @siddhantrai7529 2 years ago

      @@charilaosmylonas5046 Thank you for the reply; it makes sense now. For sure, I will look into "Bayesian interpretation of regularization" as you mentioned. Thanks again. 😁😁
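
    The correspondence discussed in this thread is a standard result (not from the talk itself) and can be stated in one line. Taking the MAP estimate

    ```latex
    \hat\theta \;=\; \arg\max_{\theta}\;\big[\log p(\mathcal{D}\mid\theta) + \log p(\theta)\big],
    ```

    a Gaussian prior $p(\theta_i) \propto \exp\!\big(-\theta_i^2 / 2\sigma^2\big)$ contributes a penalty $\tfrac{1}{2\sigma^2}\|\theta\|_2^2$ (L2 / weight decay), while a Laplace prior $p(\theta_i) \propto \exp\!\big(-|\theta_i| / b\big)$ contributes $\tfrac{1}{b}\|\theta\|_1$ (L1 / lasso) - which is why the distribution paired with L1 is Laplace, not Poisson.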