Sparse Gaussian Process Approximations, Richard Turner

  • Published: 17 Dec 2024

Comments • 8

  • @giannisbdk 10 months ago

    Your teaching skills are truly impressive! Thank you for sharing those insightful tips that make these approximations more understandable at their core, especially the link to the factor graphs. Very useful for someone who wishes to connect the dots and gain a comprehensive understanding of why and how these approaches work, rather than just applying their results.

  • @mohammadjoshaghani5388 2 years ago

    Thanks for sharing, very nice explanation!

  • @kazparala 6 years ago

Brilliant! It's a very niche topic, and this lecture explains the currently available literature in a very intuitive manner.

  • @chenxin4741 4 years ago

    Turner is so great at teaching, too!

  • @mohamadmoghadam4756 6 years ago +1

Amazing explanation. Thanks a lot for uploading. Although I am outside the field, I could still grasp it.

  • @king2176 4 years ago

Great explanation. I'm sure it'll get more views as GPs become more mainstream.

    • @monsterous289 3 years ago

      Unfortunately for GPs, it seems that for nearly every machine-learning-style task the sparse Bernoulli model has been shown to be superior, along with the rTop-k algorithm as of late 2020. From my understanding, of course.

  • @andreashadjiantonis2596 4 years ago

    Thanks a lot!