Eric J. Ma - An Attempt At Demystifying Bayesian Deep Learning

  • Published: 7 Aug 2024
  • PyData New York City 2017
    Slides: ericmjl.github.io/bayesian-de...
    In this talk, I aim to do two things: demystify deep learning as essentially matrix multiplications with weights learned by gradient descent, and demystify Bayesian deep learning as placing priors on weights. I will then provide PyMC3 and Theano code to illustrate how to construct Bayesian deep nets and visualize uncertainty in their results.
    00:00 Welcome!
  • Science
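The two claims in the abstract — that a deep net is just matrix multiplications with nonlinearities, and that "going Bayesian" means placing priors on the weights — can be sketched in plain NumPy. The layer sizes, prior choice, and sampling loop below are illustrative, not the code from the talk (which uses PyMC3 and Theano):

```python
import numpy as np

rng = np.random.default_rng(0)

# Point 1: a feed-forward net is matrix multiplications
# interleaved with elementwise nonlinearities.
def forward(x, w1, b1, w2, b2):
    h = np.tanh(x @ w1 + b1)                  # hidden layer
    return 1 / (1 + np.exp(-(h @ w2 + b2)))  # sigmoid output

x = rng.normal(size=(5, 3))  # 5 samples, 3 features (made up)

# Point 2: "going Bayesian" means treating the weights as random
# variables with priors (standard normals here) instead of point
# estimates, and looking at the spread of the resulting predictions.
preds = []
for _ in range(100):  # 100 draws of the weights from the prior
    w1 = rng.normal(size=(3, 4))
    b1 = rng.normal(size=4)
    w2 = rng.normal(size=(4, 1))
    b2 = rng.normal(size=1)
    preds.append(forward(x, w1, b1, w2, b2))
preds = np.stack(preds)  # shape (100, 5, 1)

# The per-sample standard deviation is a crude picture of the
# prior predictive uncertainty; actual Bayesian deep learning
# conditions on data, e.g. with PyMC3's ADVI or NUTS samplers.
print(preds.mean(axis=0).ravel())
print(preds.std(axis=0).ravel())
```

Each draw of the weights defines a different network, so the spread across draws is exactly the weight uncertainty the talk visualizes, just without the inference step.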

Comments • 14

  • @user-nk8ry3xs5u
    @user-nk8ry3xs5u 9 months ago +2

    Great video for developing a simple mental model of neural networks. Bonus: frequentist vs. Bayesian made simple! Great work Eric!

  • @harshraj22_
    @harshraj22_ 2 years ago +11

    1:00 Intro to Linear, Logistic regression, Neural Nets
    9:40 Going Bayesian
    14:32 Implementation Using PyMC3
    24:27 Q&A

  • @mherkhachatryan666
    @mherkhachatryan666 2 years ago +5

    Love the charisma and enthusiasm put into this talk. Well done!

  • @suzystar3
    @suzystar3 8 months ago

    Thank you so much! This has helped me a lot with my project and really helped me understand both deep learning and Bayesian deep learning much better. I really appreciate it!

  • @cnaccio
    @cnaccio 2 years ago

    Huge win for my personal understanding on this topic. I wish every talk was given in this format. Thanks!

  • @BigDudeSuperstar
    @BigDudeSuperstar 2 years ago +1

    Incredible talk, well done!

  • @sdsa007
    @sdsa007 1 year ago

    Great energy! And a nice philosophical wrap-up!

  • @cherubin7th
    @cherubin7th 2 years ago +1

    Great explanation!

  • @HeduAI
    @HeduAI 11 months ago

    Excellent talk! Thank you!

  • @bracodescammer
    @bracodescammer 4 months ago

    I understand the benefit of modelling aleatoric uncertainty, e.g. to be able to deal with heteroscedastic noise.
    However, why do we need to model epistemic uncertainty? The best prediction, after all, lies in the middle of the final distribution. If you sample from the distribution, you will lose accuracy.
    So is uncertainty only useful for certain applications to determine different behaviour based on high uncertainty? For example: If uncertainty is high, drive slower?

  • @catchenal
    @catchenal 2 years ago +2

    The other presentation Eric mentions is that of Nicole Carlson:
    Turning PyMC3 into scikit-learn
    ruclips.net/video/zGRnirbHWJ8/видео.html

  • @vtrandal
    @vtrandal 1 year ago

    Point #1 is wrong. You left out activations.

    • @bonob0123
      @bonob0123 3 months ago

      The tanh and ReLU nonlinearities are the activations. He is not wrong. You are wrong. Learn to be humble.

  • @MiKenning
    @MiKenning 1 year ago

    Was he referring to TensorFlow when he denigrated an unnamed company for its non-Pythonic API? The new TensorFlow is much better!