Machine Learning for Physicists (Lecture 1)

  • Published: 23 Dec 2024

Comments •

  • @Cannongabang · 1 year ago +2

    Your English is very pleasing. Fellow Italian physicist here, beginning my PhD studies in astrophysical techniques. I will certainly benefit from these videos, thank you.

  • @sebastienmartin5183 · 8 months ago +1

    This is a very fine introductory lecture on deep learning. Really looking forward to watching the next ones. Thanks a lot for sharing, Professor!

  • @trigocuantico · 2 years ago +4

    Thanks for uploading this content, and for everything else on your channel.

  • @侑樹藤川 · 3 years ago +1

    1:16:55 Professor, I have one question: you said the vertical axis corresponds to y2 and the horizontal axis to y1. In my opinion, since the code fills the array in row-major order inside the double for loop, the vertical axis should correspond to y1 and the horizontal axis to y2. Am I right? Please let me know.
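    A quick way to settle the indexing question above: in a row-major double loop the outer index selects the row, and plotting functions like matplotlib's `imshow` draw rows along the vertical axis. A minimal sketch, assuming the lecture's y1 is the outer-loop variable (the variable names here are illustrative, not the lecture's actual code):

    ```python
    # Fill a 2D array the row-major way: outer loop over y1, inner over y2.
    n = 4
    out = [[0.0] * n for _ in range(n)]
    for y1 in range(n):        # outer index selects the row
        for y2 in range(n):    # inner index selects the column
            out[y1][y2] = y1   # store the row index to make orientation visible

    # The stored value changes only as you move down the rows, so
    # imshow(out) would show y1 running vertically and y2 horizontally,
    # which matches the commenter's reading.
    print([row[0] for row in out])  # first column: [0.0, 1.0, 2.0, 3.0]
    print(out[0])                   # first row:    [0.0, 0.0, 0.0, 0.0]
    ```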

  • @ahmedsaeed8751 · 1 year ago

    Thanks a lot, Prof.
    I am interested in this field; I am an applied physics undergrad currently studying ML, and I was looking to integrate these two fields with each other.

  • @nguyenngocly1484 · 4 years ago +1

    ReLU is a switch: f(x)=x when connected, f(x)=0 when disconnected. And a composition of dot products is still a dot product. So what is actually happening in a ReLU network? Do you see?
    Also, fast transforms like the FFT and the fast Walsh–Hadamard transform can be viewed as fixed systems of dot products that are obviously very quick to compute. You can think of ways to include them in neural networks. I think there is a blog, AI624 maybe.
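    The "ReLU is a switch" point above can be made concrete: for any fixed input, each hidden unit is either connected (f(h)=h) or disconnected (f(h)=0), and once those switch settings are frozen the whole network collapses to a single linear map, since a composition of linear maps is linear. A minimal sketch with small made-up weights (all numbers here are illustrative):

    ```python
    # Tiny two-layer ReLU net with fixed illustrative weights.
    W1 = [[1.0, -2.0], [0.5, 1.5], [-1.0, 1.0]]   # 3x2 hidden layer
    W2 = [[2.0, -1.0, 0.5], [1.0, 1.0, -2.0]]     # 2x3 output layer

    def matvec(W, x):
        """Plain matrix-vector product: a system of dot products."""
        return [sum(w * xi for w, xi in zip(row, x)) for row in W]

    def relu_net(x):
        h = matvec(W1, x)
        return matvec(W2, [max(v, 0.0) for v in h])

    x = [1.0, 2.0]

    # Record each unit's switch setting (connected or disconnected) at x...
    mask = [1.0 if v > 0 else 0.0 for v in matvec(W1, x)]

    # ...then, with the switches frozen, the network is one linear map.
    def frozen_linear(x):
        h = [m * v for m, v in zip(mask, matvec(W1, x))]
        return matvec(W2, h)

    print(relu_net(x) == frozen_linear(x))  # True: locally, it's all linear
    ```

    So a ReLU network is piecewise linear: the switches partition input space into regions, and on each region the network is just one fixed system of dot products.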