The Universal Approximation Theorem for neural networks
- Published: Sep 8, 2024
- For an introduction to artificial neural networks, see Chapter 1 of my
free online book: neuralnetworksa...
A good series of videos on neural networks is by 3Blue1Brown. Start
here: • But what is a neural n...
This video just shows the (very simple!) basic idea of the proof. For the full proof of the universal approximation theorem, including caveats that didn't make it into this video, see Chapter 4 of my book:
neuralnetworksa...
This video was made as part of a larger project, on media for mathematics: / magic_paper
Everyone wants to talk about the expressive power of neural networks, but I want to talk about Michael Nielsen's expressive power to make me finally understand expressive power so powerfully and expressively.
Man, I would give an arm and a leg to know what amazing software he uses in this vid... bet he coded it himself, the madman.
That's amazing! If it's available, please please tell me what it is.
One of the most intuitive explanations of the approximation theorem; the visuals make it much more accessible.
Can the rectangular blocks be thought of as the blocks in a Riemann summation? The more blocks you add, the better the approximation.
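That Riemann-sum intuition can be sketched in a few lines of Python. This is only an illustration, not the video's own code: each "tower" is built from a pair of very steep sigmoid neurons (one stepping up, one stepping down), and the hypothetical target function, interval, and steepness are all assumptions.

```python
import numpy as np

def sigmoid(z):
    z = np.clip(z, -500, 500)  # avoid overflow in exp for very steep weights
    return 1.0 / (1.0 + np.exp(-z))

def tower_approximation(f, x, n_towers, steepness=1e5):
    """Approximate f on [0, 1] with n_towers rectangular 'towers',
    each made from a pair of steep sigmoid neurons."""
    edges = np.linspace(0.0, 1.0, n_towers + 1)
    approx = np.zeros_like(x)
    for left, right in zip(edges[:-1], edges[1:]):
        height = f((left + right) / 2)  # tower height = f at the block midpoint
        # step up at `left`, step back down at `right`
        tower = sigmoid(steepness * (x - left)) - sigmoid(steepness * (x - right))
        approx += height * tower
    return approx

x = np.linspace(0.01, 0.99, 500)
f = lambda t: np.sin(4 * np.pi * t)  # assumed example target
err_coarse = np.max(np.abs(tower_approximation(f, x, 10) - f(x)))
err_fine = np.max(np.abs(tower_approximation(f, x, 200) - f(x)))
```

As with Riemann sums, `err_fine` comes out much smaller than `err_coarse`: more, narrower towers give a better approximation.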
I can't express how much I'm impressed by this short amazing video!
Could you please tell us which software you used for these graphs/drawings?
Very nice explanation! Can you make a video on why increasing the number of hidden layers is more efficient for approximating a function than increasing the number of neurons in a single hidden layer?
Wow... just, wow! You explained in a little over 6 minutes what I've spent hours trying to understand while going through different textbooks. Thank you!
Didn't know you had a channel. I started ANNs with books, but your online book with its 5 chapters was extremely useful. Thanks for writing that book 👍
Your last video was 3 years ago, and the moment I checked for a new video to update a project file, you uploaded!
Thank you for this; it's a beautiful and simple explanation that builds intuition for the universal function approximator. Could you please do a follow-up explainer detailing the caveats?
Thank you for this beautiful explanation. I realized that I knew nothing about neural network mathematics.
One of the greatest mathematicians and one of the most gifted teachers in modern time! Huge props!
I remember seeing this 6 years ago and loving your explanation. What are you up to lately if you don't mind me asking?
Thank you for the great video.
Can I ask what tool you're using to visualize the neural network?
OMG, THE BEST EXPLANATION EVER THAT MAKES SENSE :O THANK YOU
one of the best explanations I've found so far!
Very nice explanation!
Huge thank you for the clear and to the point explanation.
Just exactly what I wanted!!!! Thanks so much Michael :)
Awesome! What program are you using during this?
It's described here: cognitivemedium.com/magic_paper/
Michael Nielsen thanks!
that magic paper is impressive
@@MichaelNielsen amazing
@@MichaelNielsen Thank you so much for this video, so well explained!
The app blew my mind as well. Is it available for download at all?
Dear Michael Nielsen, nice video!! I am wondering about the app you were using in the video to make the graphics; would you mind telling us its name?
I love it! Awesome explanation. The interactive and intuitive magic paper makes a great difference.
Nice explanation 👏👏
Sir, please start classes teaching more about quantum computing... And could you also share a source of solutions for the problems in your book? 😊
Intuitive and simple!
What is the software that you were using in the lecture, which seems amazing.
Wow! Can you tell me one thing: why does increasing the number of neurons increase the accuracy of the approximation?
I hope my professors can make things as simple to understand as you do!
What is the software you duplicate the neurons in?
Great ideas about math and how it could be more dynamic. For pedagogic purposes I agree; for "production" and cooperation I do not (yet). As a CS person I think more fields should use Git, with all the benefits that come with it. I don't think good tools for collaborating on videos exist yet (collaborating on code which generates video seems an even bigger mental burden than normal math).
What GUI are you using for the neat squares and circles and stuff? Could be useful if code is available for making ODE compartment models.
This is great please do more
Best video on RUclips
Isn't the proper terminology for a "tower" function that a sigmoid can "collapse" into a unit step, i.e. a "Heaviside" function?
AMAZING EXPLANATION! Thank you tons!
Hey Michael... thanks for the simple explanation! One more thing... how can we use your awesome Magic Paper program?! Thanks again.
Very cool demonstration.
But, isn't this basically overfitting with N free parameters?
N is here: en.wikipedia.org/wiki/Universal_approximation_theorem
Clear explanation. thanks.
I don't know the function that has to be approximated, but I have a data set of input-output pairs, say [x, f(x)]. Using a trained NN I can find the best weights to approximate the unknown f(x), minimizing the sum of squared errors as much as possible... but then, if I need to use the newly trained NN to find the output for a new input, what should I do? Does a simple numerical example exist that shows the full process? Thanks for your clarification.
The end result of the trained NN can be stored as matrices, a Python pickle file, or an R object. When you want a prediction for a new input, just pass that input through the NN.
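That full process can be sketched numerically in plain NumPy. Everything here is an assumed toy setup, not anyone's production code: the "unknown" function is stood in for by f(x) = x², the network is one hidden layer of sigmoid neurons with a linear output, and training is full-batch gradient descent on the mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical (x, f(x)) data set; f(x) = x**2 stands in for the unknown f.
x_train = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
y_train = x_train ** 2

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 20 sigmoid neurons, one linear output neuron.
W1 = rng.normal(0, 1, (1, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 1, (20, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(20000):
    # forward pass
    h = sigmoid(x_train @ W1 + b1)       # hidden activations
    y_hat = h @ W2 + b2                  # linear output
    # backward pass: gradients of the mean squared error
    err = y_hat - y_train
    grad_W2 = h.T @ err / len(x_train)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)      # backprop through the sigmoid
    grad_W1 = x_train.T @ dh / len(x_train)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

def predict(x_new):
    """Prediction for a new input is just one forward pass through the net."""
    x_new = np.asarray(x_new, dtype=float).reshape(-1, 1)
    h = sigmoid(x_new @ W1 + b1)
    return (h @ W2 + b2).ravel()

print(predict(0.5))  # the true value f(0.5) is 0.25
```

After training, the learned weights (W1, b1, W2, b2) are exactly the matrices you would store; `predict` is all that is needed at inference time.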
What's the software/program used for this? Thank you for the great video.
I have the same little question. The tool used in this video is absolutely going to change online classes.
What tool is this? That is amazing.
Great video, one question though: what is going on with the artificial neuron? In all my reading I've seen it use a Heaviside step activation, but this one looks like it is using a smooth sigmoid activation or something?
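The two activations are closely related: scale a sigmoid's weight up and it approaches the Heaviside step (they can never agree exactly at the jump itself). A minimal numeric sketch, with an assumed threshold and assumed weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.linspace(-1.0, 1.0, 1001)
threshold = 0.3                            # assumed jump location
step = (x >= threshold).astype(float)      # Heaviside activation
# Measure agreement away from the jump, where convergence is pointwise
mask = np.abs(x - threshold) > 0.05

gaps = {}
for w in (10.0, 100.0, 1000.0):
    smooth = sigmoid(w * (x - threshold))  # steeper sigmoid as w grows
    gaps[w] = np.max(np.abs(smooth - step)[mask])
# gaps shrink toward zero as the weight w increases
```

So a sigmoid neuron with a large enough weight behaves like a step neuron, while staying differentiable, which is what makes gradient-based training possible.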
Thank you!
what is a linear neuron?
Great explanation!
How do I implement this in Simulink?
great job
Very, very cool.
What App are you using for visualisation?
Good stuff..!
What is the software you use to draw called?
Thank you, this was very clear!
slayed, thank you so much!!
Hey there, I just managed to install Chalktalk and I was wondering if you would send me your template? I'm having a presentation about ANNs soon and I would be really thankful to have an illustration like yours for an introduction. I would of course give you credits for it. Best regards!
(Btw.: my topic is "Radial Basis Activation Functions", so I would make sure to use them instead of the sigmoidal type.)
Does it work for Recurrent Networks?
Can anyone tell me what software he is using?
what program is that?
ECE 449
That is crazy.. and beautiful... love you
I love your voice
This is backwards. The UAT is a polynomial theorem, and NNs have been shown to be capable of incorporating that theorem.
Why are his eyes closed?
You are genius 😊😊
This guy is teaching you with his eyes closed.
It's not a theorem, it's a model.
cOOOOOOOOOOOOOOOOOOOOOOOOL !!!!!!!!!!!!!!!!!!
👏👏👏
Does anyone have a link /reference to a better explanation?
video is like a sleeping pill to me