Lecture 2 | The Universal Approximation Theorem

  • Published: 29 Aug 2019
  • Carnegie Mellon University
    Course: 11-785, Intro to Deep Learning
    Offering: Fall 2019
    For more information, please visit: deeplearning.cs.cmu.edu/
    Contents:
    • Neural Networks as Universal Approximators

Comments • 28

  • @debayondharchowdhury2680 • 4 years ago • +10

    Best lecture on Deep Learning.

  • @rajatmann8897 • 4 years ago • +6

    Professor, you are so awesome 🙏

  • @pastrop2003 • 4 years ago • +4

    Outstanding lecture, thank you!

  • @cedricmanouan2333 • 4 years ago • +9

A great explanation... Thank you so much

  • @adhoc3018 • 3 years ago • +1

    Very nice lecture. I feel I understand better why neural networks work.

  • @ian-haggerty • 3 months ago • +1

    Couldn't help but think of the 3B1B videos on Hamming codes while watching this.

  • @gottlobfreige1075 • 2 years ago

    Agreed! Good Lecture!

  • @BigHotCrispyFry • 3 years ago

    very delightful lecture

  • @ogsconnect1312 • 4 years ago • +1

    Thanks

  • @checkout8352 • 3 years ago

    Great lecture, thank you.

  • @pavansaitadepalli6097 • 3 years ago

    Amazing

  • @plutophy1242 • 8 months ago

    really nice systematic lecture!

  • @Learner_123 • 4 years ago • +3

    Such a cool explanation. Can anyone (in particular, any student from this course) provide a link to a mathematical explanation of the content from 35:00 till 45:00? Lecturers usually provide references to such material. Please do not share the reference papers already listed in this video.

  • @dassingh2246 • 2 years ago

    Thank You 😊

  • @samanrazavi5896 • 4 years ago • +1

    A great and very clear lecture. Thank you.

  • @bhargavram3480 • 4 years ago • +2

    At 15:25, isn't the total input L - N if the first L inputs are 0 and the last N - L inputs are 1?

    • @huiliu7013 • 4 years ago

      agree

    • @ambujmittal6824 • 4 years ago • +1

      Think about it this way: let a = L - N. The threshold to be crossed is then (a + 1). Only by adding a positive to a (setting any one of the L inputs to 1) or removing a negative from a (setting any one of the N inputs to 0) can you reach the decision boundary, and only then will the neuron fire.
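
      A minimal sketch of the unit being discussed, under my own assumption (not stated in the thread) that the perceptron at 15:25 has weight +1 on the first L inputs, weight -1 on the remaining N - L, and firing threshold L. With that reading, the worst-case total input is indeed L - N, and the unit fires only for the single pattern where the first L inputs are 1 and the rest are 0:

          # Sketch of the assumed threshold unit (assumption, not taken from the video):
          # weights +1 on the first L inputs, -1 on the remaining N - L, threshold L.
          from itertools import product

          N, L = 5, 3
          weights = [1] * L + [-1] * (N - L)
          threshold = L

          def fires(x):
              # Fire iff the weighted sum of the inputs reaches the threshold.
              return sum(w * xi for w, xi in zip(weights, x)) >= threshold

          # Worst case: first L inputs 0, last N - L inputs 1 -> total input is L - N.
          worst = [0] * L + [1] * (N - L)
          print(sum(w * xi for w, xi in zip(weights, worst)))  # -2, i.e. L - N

          # Enumerate all inputs: only (1, 1, 1, 0, 0) makes the unit fire.
          print([x for x in product([0, 1], repeat=N) if fires(x)])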

  • @husamalsayed8036 • 3 years ago

    Can anyone explain the inequality at 41:30 and the equation at 42:00? Thanks.

  • @vtrandal • a year ago

    ad infinitum ...

  • @ITPCD • 4 years ago • +1

    Is this an undergrad-level course or a grad-level course?

    • @Dayanto • 4 years ago

      _"11-785 is a graduate course worth 12 units."_

    • @ansha2221 • 4 years ago

      grad level course

    • @sansin-dev • 3 years ago • +1

      Why are students so unresponsive in a grad level course?

    • @adamatkins8496 • 2 years ago • +1

      @@sansin-dev their brains are frying

  • @smftrsddvjiou6443 • 9 months ago

    This guy is confusing; no good explanations. I have doubts about the two-circle, one-hidden-layer solution. He needs an OR operation as a third layer, otherwise other regions outside of the two circles will also end up above the threshold.

  • @Ziijiang • 4 years ago • +1

    This teacher's neck is tilted.