C4W1L07 One Layer of a Convolutional Net
- Published: Aug 5, 2024
- Take the Deep Learning Specialization: bit.ly/2IlGB9n
Check out all our courses: www.deeplearning.ai
Subscribe to The Batch, our weekly newsletter: www.deeplearning.ai/thebatch
Follow us:
Twitter: / deeplearningai_
Facebook: / deeplearninghq
Linkedin: / deeplearningai
The thing with his lectures vs. every other source on the planet for machine and deep learning is that he teaches by developing your intuition. Believe me, I have tried every other material and none has made me understand machine and deep learning the way his lectures do. Thanks Andrew, you are the best teacher out there.
True, but you also need to debug your code to understand more about the implementation. I believe that to understand a concept fully you need to build it or reverse engineer existing code (mostly code from GitHub). I was introduced to Andrew Ng's lectures very late in my career, but all those concepts looked familiar because I had already worked on the implementation side, and believe me, the maths looks very satisfying when you look at the Python code.
I had a bad break up, and your deep learning videos are giving me intellectual pleasure and the urge to spend my time as effectively as possible. You touch lives, Andrew. Lots of love for you.
It seems that you are an intellectual, sir. I don't know whether you will read this or not, but still: try rearranging the weights, biases, the number of layers, or whatnot. What's stopping you from patching things up with her again? It's okay to be sad sometimes. You are fucking creating intelligence, what is a human mind?
Only reply when you are again with her.
@@HimanshuMauryadesigners haha this is the energy i need in my life
bro is going to the intellectual gym
@@saladlord7613 lmaooo
Correction: @ 2:05 -- It's 6x6x3 to 4x4x2, not 4x4x4.
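A minimal numpy sketch of those shapes (the random values and variable names are just illustrative): a 6x6x3 input convolved with two 3x3x3 filters, stride 1, no padding, gives a 4x4 map per filter, stacked into a 4x4x2 volume.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((6, 6, 3))       # height x width x channels
filters = rng.standard_normal((2, 3, 3, 3))  # 2 filters, each 3x3x3
biases = rng.standard_normal(2)              # one bias per filter

out = np.zeros((4, 4, 2))
for f in range(2):
    for i in range(4):
        for j in range(4):
            patch = image[i:i+3, j:j+3, :]             # 3x3x3 slice of the input
            out[i, j, f] = np.sum(patch * filters[f]) + biases[f]

print(out.shape)  # (4, 4, 2)
```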
Good and clear explanation.
It's difficult to find anything this good elsewhere.
The explanation in this lecture is something completely different from other sources on the internet when it comes to deep neural networks; it builds the intuition behind the content.
Thanks Andrew, I really appreciate it.
I love your explanations. Thank you. You change lives, greetings from Paraguay.
We should start giving our college money to Andrew instead lads
Nice lectures sir, you are the father of deep learning.
Thank you, Andrew Ng.
Do not worry, guys, if you do not fully understand this part. The next video will make you understand better. I literally left the next video to come and type this here to help anyone who, like me, did not fully understand this particular video. The next one will make it clear.
14:16 Mind blown here, very fun trying to follow. Thank you very much.
Do we need a bias parameter for every kernel at the convolutional layer? I understand the significance of bias at the fully connected layer. As per my understanding (I am probably wrong), the convolutional layers are performing feature detection, e.g. edge detection?
You are really great sir
hoping to meet you one day !
Can we relate using two filters to 2 nodes of a neural net (because each node has its own weight vector)? If so, shouldn't the bias added to the two 4x4 results be the same, since the bias is constant for one layer?
You are a Great sir , thank you
I am a bit confused here. Can anyone say what the value of l will be in n(l-1)? Does "output of the previous layer" mean 4x4x2, or something else? And in the next layer, will it be n(4-1), or what?
Thank you sir..
Thank you Andrew sensei!
Dude, he is Chinese, not Japanese.
are you stupid or are you stupid@@amaan6723
How does the 3x3x3 filter output 4x4? What happens to the colour channels, do they add up? If [23, 5, 1] is the first pixel in the RGB image, is the output a greyscale 29?
ruclips.net/video/KTB_OFoAQcc/видео.html
watch from 2 minutes
How are the 3 layers combined to one layer in the output?
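To the channel questions above: yes, they add up. At each position, all 27 elementwise products of a 3x3x3 filter (3x3 spatial x 3 channels) are summed into one scalar, which is why one filter produces a single 2-D map rather than three. A tiny sketch using the commenter's own pixel values and a hypothetical all-ones filter:

```python
import numpy as np

pixel = np.array([23, 5, 1])  # R, G, B values at one location
filt = np.ones(3)             # hypothetical 1x1x3 all-ones filter

# The products across the channels are summed into ONE number.
print(pixel @ filt)  # 29.0
```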
nice explanation
great explanation
Looks like here the number of channels (like R, G, B, denoted earlier by n_c) and the number of filters (horizontal, vertical, etc.) are both represented by n_c. This is confusing to me. Am I missing a point here?
Thanks
LEGEND
Is it a convolution layer?
It is a very cool resource; it's just that the volume gets quieter and quieter. Take a breath, sir.
@ 10:05 padding should be 2p[l-1], since padding is done on the input, not the filter or output layer... and the same goes for stride.
I am a bit confused here.
Does l increase with each step as it is performed, regardless of which layer it is performed on?
It is 2p[l] because we get layer [l] by applying this padding p to the previous layer. So basically p is a property of layer [l], not [l-1].
For example, in multilayer neural networks you create a weight matrix to apply to the previous layer [l-1] to generate the next layer [l], so the matrix is a property of layer [l], not [l-1].
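The output-size formula being discussed, sketched in Python (the example values are illustrative, not from the video):

```python
import math

def conv_output_size(n_prev, f, p, s):
    """Spatial size of layer l: floor((n[l-1] + 2*p[l] - f[l]) / s[l]) + 1.
    Padding p and stride s belong to layer l but act on the l-1 volume."""
    return math.floor((n_prev + 2 * p - f) / s) + 1

print(conv_output_size(6, 3, 0, 1))  # 6x6 input, 3x3 filter, no pad, stride 1 -> 4
print(conv_output_size(7, 3, 1, 2))  # 7x7 input, 3x3 filter, pad 1, stride 2 -> 4
```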
16 minute is brutal
In the activation notation A = m x n_h x n_w x n_c, can anyone explain what that 'm' stands for?
It's the number of examples since A is the result of stacking each activation matrix.
m here represents the batch size.
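As both replies say: A stacks the activation volumes of all m examples, so m is the first axis. A one-liner sketch with assumed sizes (m = 32 is just an example batch size):

```python
import numpy as np

m, n_h, n_w, n_c = 32, 4, 4, 2   # assumed: 32 examples, each a 4x4x2 volume
A = np.zeros((m, n_h, n_w, n_c))
print(A.shape)  # (32, 4, 4, 2)
```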
Bro nailed it, god bless your mother.
5:02 Good explanation, but the slides are very difficult to read (especially the indices and exponents).
Andrew Ng is a god
Agreed, I absolutely love him.
A god of AI
How many layers does a cnn need to have for 4 class labels?
Reisssss
day 1
280
27 weights + 1 bias = 28 per filter, times 10 filters = 280.
The audio is not good, man... there is some disturbance which continuously hurts the ears.
Press 'M' to fix it
90*3=270
+ 10 biases for each filter
@@l.3890 No, 1 bias for each filter, so 10 biases for 10 filters.
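The 280 answer from the thread, worked out in a couple of lines (ten 3x3x3 filters, one bias each, as in the lecture's quiz):

```python
f, n_c_prev, n_filters = 3, 3, 10          # 3x3 filters over 3 input channels
params = (f * f * n_c_prev + 1) * n_filters  # (27 weights + 1 bias) per filter
print(params)  # 280
```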
I have not seen god, but he probably looks like you, Andrew.