Activation Functions - EXPLAINED!
- Published: 19 Oct 2024
- We start with the whats/whys/hows. Then delve into details (math) with examples.
Follow me on M E D I U M: towardsdatasci...
REFERENCES
[1] Amazing discussion on the "dying ReLU problem": www.quora.com/...
[2] Saturating functions that "squeeze" inputs: stats.stackexc...
[3] Plot math functions beautifully with Desmos: www.desmos.com/
[4] The paper on Exponential Linear Units (ELU): arxiv.org/abs/...
[5] A relatively new activation function (Swish): arxiv.org/pdf/...
[6] Image of activation functions taken from Pawan Jain's blog: towardsdatasci...
[7] Why bias in neural networks? stackoverflow....
Wow, one of the best overviews of activation functions on the internet. Thank you for making this video.
Awesome as always. Some points to ponder; correct me if I am wrong:
1. ReLU is not just an activation but can also be thought of as a self-regularizer, since it turns off all neurons whose values are negative, so it's a kind of automatic dropout.
2. A neural net with just an input and output layer, with softmax at the output layer, is logistic regression. When we add hidden layers to this network with no hidden activations, it looks more powerful than vanilla logistic regression, since it now takes linear combinations of linear combinations with different weight settings, but it still results in linear boundaries.
Lastly, your contributions to the community are very valuable and clear up a lot of nitty-gritty details in a short time. Keep going like this :)
No, dropout is different: random sets of neurons are turned off in order to make the neurons form redundancies, which can make the model more robust. In the case of dying ReLU, the same neurons are always dead, making them useless. Dropout is desirable and deliberate; dying ReLU is not.
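A minimal NumPy sketch of that difference (all shapes and numbers here are made up purely for illustration): dropout zeroes a fresh random set of neurons on each pass, while a "dead" ReLU neuron, one whose pre-activation is negative for every input, outputs zero (and receives zero gradient) on every pass.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))        # hypothetical input batch
pre = X @ rng.normal(size=(4, 3))     # pre-activations of 3 neurons

# Dropout: a fresh random mask each forward pass, so different
# neurons rest on different passes (deliberate and temporary).
print(rng.random(3) > 0.5)            # mask for pass 1
print(rng.random(3) > 0.5)            # a different mask for pass 2

# Dying ReLU: a large negative bias pushes neuron 2's pre-activation
# below zero for every input, so its output, and hence its gradient,
# is exactly zero on every pass (permanent, not deliberate).
bias = np.array([0.0, 0.0, -100.0])
out = np.maximum(pre + bias, 0.0)
print(out[:, 2].max())                # 0.0 across the whole batch
```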
Straight to the point. Nice and super clean explanation for non-linear activation functions. Thanks!
The screeching noise is irritating... otherwise, nice tutorial.
I agree
agreed. cringe and irritating
I don't agree
One of the best explanations I've come across.
I'm learning deep learning right now, using the deep learning book published by MIT Press. It's kind of complicated for me to understand, especially these parts, since I'm still an undergrad with zero previous experience in this. Thank you for explaining this so well.
Anytime :)
Best explanation of activation functions I've ever seen.
Amazing! Finally I am able to visualise vanishing gradients and dying ReLU.
Glad!
Great video!
The disappointed gestures were a bit too much x'D
A question I did have as a beginner: what does it mean for a sigmoid gradient to "squeeze" values, such that they become smaller and smaller as they backpropagate?
It means that the sigmoid function will always output a value between 0 and 1, regardless of the real-number input. Look at the mathematical formula and the graph of the sigmoid function for better clarity: any real number gets converted to a number between 0 and 1. Hence sigmoid is said to "squeeze" values.
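A quick NumPy illustration of both the squeeze and why it shrinks gradients during backprop (a toy sketch, not code from the video):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-50.0, -5.0, 0.0, 5.0, 50.0])
print(sigmoid(x))        # every output lands in (0, 1): the "squeeze"

# The derivative sigmoid'(x) = sigmoid(x) * (1 - sigmoid(x)) never
# exceeds 0.25, so each sigmoid layer scales the backpropagated
# gradient down; this is how the squeeze shrinks gradients.
s = sigmoid(x)
print(s * (1.0 - s))     # peaks at 0.25 (at x = 0), near 0 for large |x|
```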
Wooo, did I just see the complex explained simply? Thanks! Looking forward to more videos.
The thing I love most about your videos is the fun you add... learning becomes a bit easier.
7:48 "once it hits zero the neuron becomes useless and there is no learning" this explains so much, thank you!
Great video! And what is even greater are the useful references you add in the description. (For me, [1] + [7] answered the questions I asked myself at the end of your video, so it was on point!) Thank you!
Haha. Glad the references are useful! :)
Great explanation of activation functions; I like it so much.
Thanks so much for commenting
I discovered your page just yesterday and might I say, YOU'RE AWESOME! Thanks for such good content bro.
Thanks homie! Will dish out more soon!
Great explanation! And the animations with the math formulas and the visualizations are awesome!! Many thanks!
Wow... the perfect and easiest way to explain it.
Everyone talks about what activations do, but nobody shows how it actually looks behind the algorithms.
And you explain things in the easiest way, so they are simple to understand and remember.
So a big like for all your videos.
Could you make more and more on DL? 😄
Thank you. I'm always thinking of more content :)
Excellent job. There is way too much "mysticism" around neural networks. This shows clearly that for a classification problem, all the neural net is doing is creating a boundary function. Of course it gets complicated in multiple dimensions, but your explanations and use of graphs are excellent.
I hate you for making those noises. I want to learn; comedy is something I would pass on.
With ReLU, f(x) = x is "connect" and f(x) = 0 is "disconnect". A ReLU net is a switched system of dot products, if that means anything to you.
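That switched-system view can be checked numerically. A toy sketch with made-up shapes: once you fix which ReLUs are "on" for a given input, the whole net collapses to a single input-dependent matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(2, 3))
x = rng.normal(size=4)

h = np.maximum(W1 @ x, 0.0)                  # ReLU: connect or disconnect
y = W2 @ h                                   # ordinary forward pass

mask = np.diag((W1 @ x > 0).astype(float))   # which switches are on for this x
y_switched = (W2 @ mask @ W1) @ x            # one dot product picked by the switches
print(np.allclose(y, y_switched))            # True
```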
Better than most professors. Thanks for the great video!
Thanks!!
Awesome vid! Small suggestion: I might check the volume levels; during the scream at 0:56 it was a bit painful to my ears and possibly sounded like audio clipping.
Great... so many thanks... I need more explanations like this.
Great explanation! Had to switch to earphones though :P
Thank you for sharing! This video cleared my doubts and gave me a good introduction to learn further.
Super glad :)
Can you cover tanh activation? (Thanks for making this one so good!)
I wonder if there is enough support that warrants a video on just tanh. Will look into it though! And thanks for the compliments :)
Excellent explanation. Thank you.
thank you very much, this is really helpful
Thanks:)
Can you please explain this: "no gradient means no learning"?
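One way to see it (a toy gradient-descent sketch with hypothetical numbers, not the video's code): a weight only moves by learning_rate * gradient, so a zero gradient means a zero update, forever.

```python
# Toy gradient-descent steps for one weight feeding a ReLU neuron.
w, x, lr = -2.0, 1.0, 0.1

for step in range(5):
    pre = w * x
    relu_grad = 1.0 if pre > 0 else 0.0   # ReLU derivative is 0 on the dead side
    grad_w = relu_grad * x                # chain rule (upstream gradient taken as 1)
    w -= lr * grad_w                      # update = lr * gradient
    print(step, w)                        # w never moves: no gradient, no learning
```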
Beautiful explanation!
Thank you very much for the great, and smooth explanation. This was really perfect.
Much appreciated Malek! Thanks for watching!
What are the axes on these graphs? Are they the inputs, or input*weights + bias for the linear case?
Did you get an answer for it?
But we cannot use ReLU for regression of functions with high-order derivatives!
In that case, we should still go with infinitely differentiable activation functions like tanh, right?
Amazing presentation, easy and captivating to grasp.
Glad you liked it! Thank you!
wonderful explanation!!!
this guy is genius
What do x and y represent in the graph you use to show the cat and dog points?
This was really helpful! Thanks!
Thanks for watching :)
Amazing explanation and also funny 😅👏👏👏
Lovely intro! I am learning at the age of 58!
Great video. Could you explain what U and V are equal to in the equation o = Ux + V? And how did you come up with the decision boundary equation and determine the values of w1 and w2?
Thanks in advance.
Great video and great page :) Which software do you use to make these videos?
Thanks! I use Camtasia Studio for the editing; Photoshop and draw.io for the images.
THIS HELPED SO MUCH! THANK YOU!
good explanation for a beginner
Another great video
🎉🎉🎉!
Thanks so much!
Really awesome video!
That's a great explanation.
Thanks so much for watching !
wow, that was really helpful, thanks a ton!!!!
Glad to hear that. Thanks for watching!
Oh man, amazing explanation. Thanks!
Nicely explained
Thanks for watching this too
What I was looking for. Thanks!
9:03: what do you mean by "most neurons are off during the forward step"?
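For what it's worth, this is easy to measure on a hypothetical random network: with roughly zero-centered inputs and weights, about half of the ReLU pre-activations are negative, so about half of the neurons output exactly zero on any given forward pass.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 100))     # hypothetical input batch
W = rng.normal(size=(100, 100))
acts = np.maximum(X @ W, 0.0)       # one ReLU layer

off = np.mean(acts == 0.0)
print(f"{off:.0%} of neurons output exactly 0")   # roughly 50% are "off"
```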
Then why isn't leaky ReLU or ELU used everywhere in LSTMs, GRUs, Transformers, etc.? Why is ReLU used everywhere?
This was an amazing video!!! Keep up the good work!
Thanks so much!
really helpful..thanks
So how do we know when to use ReLU or leaky ReLU? Do we just use leaky ReLU in all cases?
awesome video
How is softmax a linear function here? Shouldn't it be non-linear?
I've read at least three books on ANNs so far, but it's only now, after watching this video, that I have an intuition of what exactly is going on and how activation functions break linearity!
Great explanation. Just add more contrast to your color selection.
My palette is rather bland, I admit.
1:16: I couldn't see that there were different colors, so I was confused.
Also, I found the voicing of the training neural net annoying. But some people may like what other people dislike, so it's up to you whether to keep voicing them.
The dude is making these videos alone. If you don't like his voice, that's on you; he can't just change his voice.
What decides the shape of the boundary?
nice video ♥
Great video, keep going !
So, we should always use leaky ReLU?
@6:24 How does passing what is a straight line into the softmax function also give us a straight line? Isn't the output, and consequently the decision boundary, a sigmoid?
Or is it the output before passing it into the activation function that counts as the decision boundary?
6:45 - The line corresponds to those points in the feature space (the two feature values) where the sigmoid's height is 0.5.
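A quick numerical check of that point (a made-up 2D example): solving sigmoid(w.x + b) = 0.5 is the same as solving w.x + b = 0, which is a straight line in the two features, so the squashing at the output does not bend the boundary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([2.0, -1.0]), 0.5      # hypothetical weights and bias

x1 = np.linspace(-3, 3, 7)
x2 = 2.0 * x1 + 0.5                    # rearranged from 2*x1 - x2 + 0.5 = 0
pts = np.stack([x1, x2], axis=1)       # points on the candidate boundary

print(sigmoid(pts @ w + b))            # all 0.5: the boundary is this straight line
```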
Can we use a linear activation with hinge loss for a linear SVM for binary classification?
Awesome video man !
Quality video!
good work bro keep it up
Will do homie
It should be possible to let (part of) the net optimize its own activation function, no?
Good explanation, but the noises are a little bit annoying. Thank you anyway, bro.
With the graphical calculator, your explanation is insanely clear!! Thank you!!
Thanks so much for the kind comment! Glad the strategy of explaining is useful :)
Thanks mate
Amazing!
awesome
Benefited a lot
Awesome! Glad!
Thanks for the tutorial. I found the noises very cringe.
Swish: activation function. Swift: programming language. More homework, less sound effects 😀
Nice catch. I misspoke :)
good job thank you
Very welcome!
Thanks man)
thanks my man.
You are oh so welcome
Plot twist: it's not that the boundary no longer changes; the vanishing gradient causes the gradient to become so small that we can assume it is negligible.
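A rough sketch of why it becomes negligible (toy numbers only): backprop multiplies in one sigmoid derivative per layer, and that derivative never exceeds 0.25, so the gradient reaching the early layers shrinks geometrically with depth.

```python
max_sigmoid_grad = 0.25                # sigmoid'(x) peaks at 0.25 (at x = 0)
for depth in [1, 5, 10, 20]:
    # Even in the best case, where every layer contributes the maximum
    # derivative, the gradient reaching layer 1 is at most 0.25**depth.
    print(depth, max_sigmoid_grad ** depth)
# depth 20 gives ~9.1e-13: not literally zero, just negligibly small.
```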
Danana nanana nanana nana
yes! I liked it. Keep it up.
this is helpful, thanks :)
gold, gold, gold.
Amazing!!!!!!!!!!!!!!!!!
Thanks!!!!!!!!!!
thanks brah
thx. subscribed
Perfect explanation!... Thanks
Much appreciated!
What's the +1 node on each layer?
The bias term
I've been trying to make a convolutional autoencoder for MNIST. At first I used sigmoid activations in the convolutional part, and it couldn't produce anything better than a black screen at the output; but when I removed all the activation functions, it worked well. Does anyone have any idea why that happened?
Are the outputs properly scaled back to pixel values after being squeezed by sigmoid?
@fatgnome Yes, otherwise the output wouldn't match the images. Also, I checked model.summary() every time I made changes to the model.
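One hypothesis worth checking (purely a guess, with hypothetical pixel values): if the inputs feeding the sigmoid layers have large magnitude, e.g. raw pixels in [0, 255] instead of [0, 1], the sigmoids saturate, their gradients are nearly zero, and training can stall at a flat, black-looking output.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

raw = np.array([0.0, 64.0, 128.0, 255.0])   # unscaled pixel values
s = sigmoid(raw)
print(s * (1.0 - s))   # ~0 everywhere except at 0: saturated, learning stalls
# Scaling the pixels to [0, 1] before any sigmoid layer keeps the
# pre-activations in the region where the gradient is non-negligible.
```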
I still don't understand what an activation function is.
learn Activation Functions with Dora
But honestly, it is good.
Bro, the video is probably good, but a lot of people may not click just because of your picture on the thumbnail. I'm here just to let you know: avoid putting your face on the thumbnail or in the video, since no one is interested in seeing the educator while watching technical videos.
You clicked. That's all i care about ;)
Great explanation.
Thank you. Really appreciate it
nice
followeeeed
Replieeeeed. Thanks!
I don't understand..
Wtf, what's with the sound effects on the pictures...