The Universal Approximation Theorem for neural networks

  • Published: 8 Sep 2024
  • For an introduction to artificial neural networks, see Chapter 1 of my
    free online book: neuralnetworksa...
    A good series of videos on neural networks is by 3Blue1Brown. Start
    here: • But what is a neural n...
    This video just shows the (very simple!) basic idea of the proof. For the full proof of the universal approximation theorem, including caveats that didn't make it into this video, see Chapter 4 of my book:
    neuralnetworksa...
    This video was made as part of a larger project, on media for mathematics: / magic_paper

Comments • 75

  • @humzahkhan6299
    @humzahkhan6299 1 year ago +14

    Everyone wants to talk about the expressive power of neural networks, but I want to talk about Michael Nielsen's expressive power to make me finally understand expressive power so powerfully and expressively.

  • @NoahElRhandour
    @NoahElRhandour 1 year ago +8

    Man, I would give an arm and a leg to know what amazing software he uses in this vid... bet he coded it himself, the madman.

    • @NemexiaM
      @NemexiaM 3 months ago +1

      That's amazing! If it's available, please tell me what it is.

  • @MysuruBharath
    @MysuruBharath 5 years ago +11

    One of the most intuitive explanations of the approximation theorem; visually this makes it much more accessible.
    Can the rectangular blocks be thought of as the blocks in a Riemann summation? The more blocks you use, the better the approximation.
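
    The Riemann-sum analogy in the comment above can be made concrete. A minimal sketch (not Nielsen's code; the target function, the interval, and the steepness constant are illustrative assumptions): each pair of steep sigmoid hidden neurons builds one rectangular "tower" whose height is the function's value at the sub-interval's midpoint, and using more, narrower towers shrinks the worst-case error, just like refining a Riemann sum.

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

    def tower_network(x, f, n_towers, steepness=1000.0):
        # One hidden layer: each pair of steep sigmoids forms a "tower"
        # of height f(midpoint) over one sub-interval of [0, 1],
        # exactly like one rectangle in a Riemann sum.
        edges = np.linspace(0.0, 1.0, n_towers + 1)
        out = np.zeros_like(x, dtype=float)
        for left, right in zip(edges[:-1], edges[1:]):
            height = f((left + right) / 2.0)      # rectangle height
            out += height * (sigmoid(steepness * (x - left))
                             - sigmoid(steepness * (x - right)))
        return out

    f = lambda t: np.sin(3.0 * t) + 0.5           # arbitrary smooth target
    x = np.linspace(0.01, 0.99, 500)
    errs = []
    for n in (5, 50):
        err = float(np.max(np.abs(tower_network(x, f, n) - f(x))))
        errs.append(err)
        print(f"{n} towers: max error {err:.3f}")
    ```

    Tenfold more towers gives roughly tenfold less error here, mirroring how a finer Riemann partition tracks the function more closely.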

  • @ahmedelsabagh6990
    @ahmedelsabagh6990 2 years ago +19

    I can't express how much I'm impressed by this short amazing video!
    Could you please tell us which software you used for these graphs/drawings?

  • @adosar7261
    @adosar7261 1 year ago +4

    Very nice explanation! Can you make a video on why increasing the number of hidden layers is more efficient for approximating a function than increasing the number of neurons in a single hidden layer?

  • @JulietNovember9
    @JulietNovember9 5 years ago +3

    Wow... just, wow! You explained in a little over 6 minutes what I've spent hours trying to understand while going through different textbooks. Thank you!

  • @medhavimonish41
    @medhavimonish41 3 years ago +2

    Didn't know you had a channel. I started ANNs with books, but your online book with 5 chapters was extremely useful. Thanks for writing that book 👍

  • @quintenone9107
    @quintenone9107 6 years ago +1

    Your last video was 3 years ago, and the moment I checked for a new video to update a project file, you uploaded!

  • @rkgaddipati
    @rkgaddipati 8 months ago

    Thank you for this; it's a beautiful and simple explanation for building intuition about the universal function approximator. Could you please do a follow-on explainer detailing the caveats?

  • @kerem3715
    @kerem3715 9 months ago

    Thank you for this beautiful explanation. I realized that I knew nothing about neural network mathematics.

  • @timberwolf4242
    @timberwolf4242 2 months ago

    One of the greatest mathematicians and one of the most gifted teachers of modern times! Huge props!

  • @livb4139
    @livb4139 2 months ago

    I remember seeing this 6 years ago and loving your explanation. What are you up to lately if you don't mind me asking?

  • @saminchowdhury7995
    @saminchowdhury7995 5 years ago +6

    Thank you for the great video.
    Could you tell me what tool you're using to visualize the neural network?

  • @rickmonarch4552
    @rickmonarch4552 5 years ago +6

    OMG THE BEST EXPLANATION EVER THAT MAKES SENSE :O THANK YOU

  • @xxEndermelonxx
    @xxEndermelonxx 5 years ago +2

    one of the best explanations I've found so far!

  • @JwalinBhatt
    @JwalinBhatt 2 years ago +1

    Very nice explanation!

  • @anibus1106
    @anibus1106 1 year ago

    Huge thank you for the clear and to the point explanation.

  • @raghavendra2096
    @raghavendra2096 3 years ago +1

    Just exactly what I wanted!!!! Thanks so much Michael :)

  • @thingsfromspace
    @thingsfromspace 6 years ago +36

    Awesome! What program are you using during this?

    • @MichaelNielsen
      @MichaelNielsen 6 years ago +35

      It's described here: cognitivemedium.com/magic_paper/

    • @thingsfromspace
      @thingsfromspace 6 years ago +1

      Michael Nielsen thanks!

    • @Papayalexius
      @Papayalexius 6 years ago

      that magic paper is impressive

    • 4 years ago

      @@MichaelNielsen amazing

    • @iriss6143
      @iriss6143 4 years ago

      @@MichaelNielsen Thank you so much for this video, so well explained!
      The app blew my mind as well. Is it available for download at all?

  • @mikasomk34
    @mikasomk34 4 years ago +2

    Dear Michael Nielsen, nice video! I'm wondering about the app you were using in the video to make the graphics; would you mind telling us its name?

  • @tiangolo
    @tiangolo 6 years ago +3

    I love it! Awesome explanation. The interactive and intuitive magic paper makes a great difference.

  • @vivekKumar-qx2tl
    @vivekKumar-qx2tl 3 years ago +1

    Nice explanation 👏👏

  • @curiositytv7424
    @curiositytv7424 4 years ago +1

    Sir, please start classes teaching more about quantum computing... and also share a source of solutions for the problems in your book 😊

  • @omerraw
    @omerraw 1 month ago

    Intuitive and simple!

  • @loveplay1983
    @loveplay1983 2 months ago

    What is the software you were using in the lecture? It seems amazing.

  • @Anujkumar-my1wi
    @Anujkumar-my1wi 3 years ago +1

    Wow! Can you tell me one thing: why does increasing the number of neurons increase the accuracy of the approximation?

  • @muyigan1569
    @muyigan1569 2 years ago +1

    I hope my professors can make things as simple to understand as you do!

  • @lenag3329
    @lenag3329 2 years ago +1

    What is the software you duplicate neurons in?

  • @C4rb0neum
    @C4rb0neum 5 years ago +1

    Great ideas about math and how it could be more dynamic. For pedagogic purposes I agree. For "production" and collaboration I do not (yet). As a CS person I think more fields should use Git for all the benefits that come with it. I don't think good tools exist for collaborating on videos (collaborating on code which generates video seems an even greater mental burden than normal math).

  • @user-gu2fh4nr7h
    @user-gu2fh4nr7h 3 months ago

    What GUI are you using for the neat squares and circles and stuff? Could be useful if code is available for making ODE compartment models.

  • 4 years ago +1

    This is great, please do more

  • @swaralipibose9731
    @swaralipibose9731 3 years ago

    Best video on YouTube

  • @hellotoearth
    @hellotoearth 6 years ago +1

    Isn't the proper terminology for a "tower" function that a sigmoid can 'collapse' into a unit step or 'Heaviside' function?

  • @sushmitajadhav7133
    @sushmitajadhav7133 3 years ago

    AMAZING EXPLANATION! Thank you tons!

  • @omar24191
    @omar24191 3 years ago

    Hey Michael... thanks for the simple explanation! One more thing... how can we use your awesome Magic Paper program?!?! Thanks again

  • @TheSmkngun
    @TheSmkngun 6 years ago +1

    Very cool demonstration.
    But, isn't this basically overfitting with N free parameters?
    N is here: en.wikipedia.org/wiki/Universal_approximation_theorem

  • @lucavisconti1872
    @lucavisconti1872 6 years ago +1

    Clear explanation, thanks.
    I don't know the function to be approximated, but I have a data set of input-output pairs, say [x, f(x)]. Using a trained NN I can find the weights that best approximate the unknown f(x), minimizing the sum of squared errors as far as possible... but then, if I need to use the newly trained NN to find the output for a new input, what should I do? Is there a simple numerical example showing the full process? Thanks for your clarification.

    • @RambutanLaw
      @RambutanLaw 2 years ago

      The end result of the trained NN can be stored as matrices, a Python pickle file, or an R object. When you want the prediction for a new input, just pass the data through the NN.
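
      A minimal sketch of the full train-then-predict process asked about above (not from the video; the stand-in target f(x) = x², the network size, and the learning rate are illustrative assumptions): fit a one-hidden-layer sigmoid network to [x, f(x)] pairs with plain gradient descent on the squared error, then the stored weight matrices *are* the model, and predicting a new input is just another forward pass.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Input-output pairs [x, f(x)]; f is "unknown" to the network
      # and seen only through its samples (f(x) = x^2 as a stand-in).
      x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
      y = x ** 2

      # One hidden sigmoid layer, one linear output neuron.
      n_hidden = 20
      W1 = rng.normal(0.0, 1.0, (1, n_hidden)); b1 = np.zeros(n_hidden)
      W2 = rng.normal(0.0, 1.0, (n_hidden, 1)); b2 = np.zeros(1)

      def forward(inp):
          h = 1.0 / (1.0 + np.exp(-(inp @ W1 + b1)))  # hidden activations
          return h @ W2 + b2, h

      # Plain full-batch gradient descent on the mean squared error.
      lr = 0.5
      for _ in range(5000):
          pred, h = forward(x)
          err = pred - y
          gW2 = h.T @ err / len(x);  gb2 = err.mean(axis=0)
          dh = (err @ W2.T) * h * (1.0 - h)           # backprop to hidden
          gW1 = x.T @ dh / len(x);   gb1 = dh.mean(axis=0)
          W2 -= lr * gW2; b2 -= lr * gb2
          W1 -= lr * gW1; b1 -= lr * gb1

      mse = float(((forward(x)[0] - y) ** 2).mean())

      # "New input": just pass it through the stored weights.
      x_new = np.array([[0.5]])
      y_new = float(forward(x_new)[0])
      print(f"training MSE {mse:.4f}, prediction at 0.5: {y_new:.3f}")
      ```

      The prediction should land close to 0.25. Serializing (W1, b1, W2, b2) to a pickle file and loading them later gives the same forward pass, which is exactly the "store as matrices" point in the reply.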

  • @TPLCreationLoft
    @TPLCreationLoft 6 months ago

    What's the software/program used for this? Thank you for the great video.

    • @user-bb4cv2ho9i
      @user-bb4cv2ho9i 5 months ago

      I have the same little question. The tool used in this video is absolutely going to change online classes.

  • @usama57926
    @usama57926 2 years ago

    What tool is this.... that is amazing

  • @paulcurry8383
    @paulcurry8383 3 years ago

    Great video, a question though: what is going on with the artificial neuron? In all my research I've seen it use a Heaviside step activation function, but this looks like it is using a smooth sigmoid activation or something?
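
    The two activations are closely related, which is the point the video leans on: a sigmoid whose input weight is very large behaves almost exactly like a Heaviside step, so smooth neurons can produce near-rectangular towers. A small sketch of that limit (the weights and grid are illustrative choices):

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -500.0, 500.0)))

    def heaviside(t):
        return (t >= 0).astype(float)

    # sigmoid(w * (x - s)) jumps near x = s; as w grows it approaches
    # the Heaviside step.  Measure the gap away from the jump itself
    # (the step can never be matched exactly AT the discontinuity).
    x = np.linspace(-1.0, 1.0, 400)
    mask = np.abs(x) >= 0.05          # look away from the jump
    gaps = []
    for w in (10.0, 100.0, 1000.0):
        gap = float(np.max(np.abs(sigmoid(w * x[mask]) - heaviside(x[mask]))))
        gaps.append(gap)
        print(f"w = {w:6.0f}: max gap off the jump = {gap:.2e}")
    ```

    The gap shrinks rapidly as the weight grows, so a steep sigmoid is a smooth, trainable stand-in for the step function; this is also one of the caveats the full proof in Chapter 4 handles carefully near the jump point.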

  • @Strausse12
    @Strausse12 3 years ago +1

    Thank you!

  • @christianorlandosilvaforer3451
    @christianorlandosilvaforer3451 2 years ago +1

    What is a linear neuron?

  • @danielhuynh6907
    @danielhuynh6907 3 years ago

    Great explanation!

  • @knowledgeclub4441
    @knowledgeclub4441 8 months ago

    How do you implement this in Simulink?

  • @coolmanlulu
    @coolmanlulu 5 years ago +1

    great job

  • @nathanwailes
    @nathanwailes 6 years ago +2

    Very, very cool.

  • @huseyngorbani6544
    @huseyngorbani6544 1 year ago

    What app are you using for visualisation?

  • @CriticalPhemomenon
    @CriticalPhemomenon 6 years ago +2

    Good stuff..!

  • @loveandroid62
    @loveandroid62 3 months ago

    What is the software you use to draw called?

  • @ImaginaryMdA
    @ImaginaryMdA 6 years ago +2

    Thank you, this was very clear!

  • @carolinefbandeira4493
    @carolinefbandeira4493 10 months ago

    slayed, thank you so much!!

  • @nano7586
    @nano7586 5 years ago

    Hey there, I just managed to install Chalktalk and I was wondering if you would send me your template? I'm giving a presentation about ANNs soon and I would be really thankful to have an illustration like yours for the introduction. I would of course give you credit for it. Best regards!
    (Btw., my topic is "Radial Basis Activation Functions", so I would make sure to use them instead of the sigmoidal type.)

  • @matheushernandes4212
    @matheushernandes4212 6 years ago +1

    Does it work for Recurrent Networks?

  • @sansin-dev
    @sansin-dev 3 years ago

    Can anyone tell me what software he is using?

  • @MikeSieko17
    @MikeSieko17 10 months ago

    What program is that?

  • @demetriusdemarcusbartholom8063

    ECE 449

  • @usama57926
    @usama57926 2 years ago

    That is crazy.. and beautiful... love you

  • @minsookim7402
    @minsookim7402 6 years ago

    I love your voice

  • @leemosbacker276
    @leemosbacker276 3 years ago

    This is backwards. The UAT is a polynomial theorem, and the NN has been shown to be capable of incorporating that theorem.

  • @Fcalysson
    @Fcalysson 3 months ago

    Why are his eyes closed?

  • @shinn-tyanwu4155
    @shinn-tyanwu4155 1 year ago

    You are a genius 😊😊

  • @AleksandrSerov-rn2cn
    @AleksandrSerov-rn2cn 5 years ago +4

    This guy is teaching you with his eyes closed

  • @ste3191
    @ste3191 1 year ago

    It's not a theorem, it's a model.

  • @raihanmomtaz7652
    @raihanmomtaz7652 4 years ago

    cOOOOOOOOOOOOOOOOOOOOOOOOL !!!!!!!!!!!!!!!!!!

  • @SreeramAjay
    @SreeramAjay 6 years ago

    👏👏👏

  • @bismeetsingh352
    @bismeetsingh352 4 years ago

    Does anyone have a link/reference to a better explanation?

  • @Nachiketa_TheCutiePie
    @Nachiketa_TheCutiePie 3 years ago

    This video is like a sleeping pill to me