Thank you very much, Mr. Andrew Ng. You made all my tasks super easy; now it's a piece of cake. You can't believe how much I struggled to demystify all the issues around deep learning, especially convolutions.
AlexNet was remarkable when it first came out. Setting up two GPUs for training was very difficult, and communication among GPUs using MPI required a great deal of effort. Those guys were real geeks to figure out such a solution.
Setting up one GPU with TensorFlow today is a feat of engineering.
At 8:40, in AlexNet, why did the number of channels suddenly shrink from 384 to 256?
It's just the number of filters they used. They decided to increase the filter count from 96 to 256 to 384 at first, and probably when they realized their results weren't changing much, they decided to bring the filter count back down. The number of channels is just the number of filters they chose; there is no deeper explanation or math formula for why they picked those jumps (they probably explain it in the paper).
At 13:37, why do we have [CONV 64] ×2? Why do we perform the CONV operation twice (when we get the same dimensions each time)? Also, what is the advantage of increasing the number of channels while decreasing the height and width?
Well, you don't have to change the dimensions every time in order to get results. It was just their way of trying to detect patterns in images; it only looks unusual.
p = 1 here, so the dimensions stayed the same.
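A quick sketch of the dimension arithmetic (plain Python; the function name is mine):

```python
def conv_out_size(n, f, p, s):
    """Spatial output size of a convolution: floor((n + 2p - f) / s) + 1."""
    return (n + 2 * p - f) // s + 1

# VGG-16 uses 3x3 filters, stride 1, and "same" padding (p = 1),
# so convolving never changes the 224x224 spatial size:
print(conv_out_size(224, f=3, p=1, s=1))  # 224
# Without padding, each conv would shrink the size by 2:
print(conv_out_size(224, f=3, p=0, s=1))  # 222
```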
Excellent video, thank you very much. I have a question: when we apply the 2nd set of 16 convolution filters, do we not apply them to the 6 channels we produced in the previous layer? Shouldn't the final output after the 2nd pooling therefore be 5×5 × 16 filters × 6 filters = 400 × 6 = 2400?
I think I understand why. Every filter should be visualized as a 3-D matrix, i.e., a volume, and each slice of the volume operates on one of the input channels. E.g., for an R,G,B picture, each filter in the first layer has 3 matrices: 1 for R, 1 for G, 1 for B. The 3 matrices in a single filter operate on the R,G,B image to produce a single 2-D matrix. Now 6 filters in the first layer produce 6 two-dimensional matrices; think of that stack of 6 as a picture with 6 channels. Therefore, in the subsequent filter layer, your input is made up of 6 two-dimensional matrices, and each of the 16 filters in the second layer has a depth of 6 in the 3rd dimension, i.e., it is a stack of 6 two-dimensional matrices. So 16 of these 6-channeled filters operate on the input (which can be thought of as a 6-channel image produced by the convolution in the first layer).
The previous videos from ANG have the answer to my question. I have summarized it above.
@@sau002 good job:)
Why is the last-layer ŷ at 3:12 drawn as a single node? Shouldn't it be drawn as 1×10, similar to the outputs of the FC layers? And what's the nonlinearity for the FC layers? ReLU?
Do you have the papers the prof mentioned in the lecture?
How does 14×14×6 turn into 10×10×16? I mean, we have 6 filtered 14×14 output images; how do we apply 16 filters to those 6 images?
@@awest11000 So how do you know how many filters fit the next layer?
The critical part is that you SUM over the depth dimension (the 6). So when one 5×5×6 filter is applied to a 14×14×6 tensor, the element-wise products are summed across all 6 channels, yielding a single 10×10×1 map; 16 such filters stacked give 10×10×16. (Note that in these networks a filter's depth always matches the input depth. A shallower filter, such as a 5×5×5 sliding along the depth axis to give 10×10×2, would be a full 3-D convolution, which is not what LeNet does.)
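A minimal sketch of that sum-over-depth behavior in plain numpy (loop version for clarity, not speed; all names are mine):

```python
import numpy as np

def conv2d(x, filters):
    """Valid convolution: each filter spans the FULL input depth, and the
    element-wise products are summed over height, width, AND depth, so one
    filter always produces one 2-D feature map."""
    h, w, c = x.shape                 # e.g. (14, 14, 6)
    fh, fw, fc, n = filters.shape     # e.g. (5, 5, 6, 16); fc must equal c
    out = np.zeros((h - fh + 1, w - fw + 1, n))
    for k in range(n):
        for i in range(h - fh + 1):
            for j in range(w - fw + 1):
                out[i, j, k] = np.sum(x[i:i+fh, j:j+fw, :] * filters[..., k])
    return out

x = np.random.randn(14, 14, 6)        # previous layer's 6-channel output
f = np.random.randn(5, 5, 6, 16)      # 16 filters, each of depth 6
print(conv2d(x, f).shape)             # (10, 10, 16)
```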
After same padding, how did 27×27×96 become 27×27×256?
It's a conv layer: since it uses same padding, the height and width stayed 27×27, but they used 256 filters (output channels), so the dimensions became 27×27×256.
@@alexiafairy Andrew doesn't explicitly mention using 256 filters.
@@rahul25iit That's because if you watch the playlist in order, you learn that such things are assumed by default when he doesn't mention them explicitly.
224 convolved with a 3×3 filter twice should give 220. Help me with this!
1st filter (3×3): n − f + 1 = 224 − 3 + 1 = 222
2nd filter (3×3): 222 − 3 + 1 = 220
Here they use same convolutions, which means padding = 1.
Don't be such an ass as to check every single thing.
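To reproduce both computations side by side (a minimal sketch; the helper names are mine):

```python
def valid_conv(n, f=3):
    """No padding: the size shrinks by f - 1 per conv."""
    return n - f + 1

def same_conv(n, f=3, p=1, s=1):
    """Same padding (p = 1 for a 3x3 filter): the size is preserved."""
    return (n + 2 * p - f) // s + 1

print(valid_conv(valid_conv(224)))  # 220, as computed above
print(same_conv(same_conv(224)))    # 224, what VGG actually does
```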
As we see in LeNet (or in our own conv networks), the dimensions decrease each time we apply a filter. E.g., in LeNet, a 32×32×1 input through a conv layer with 6 filters of 5×5 gives 28×28×6. But in VGG-16 the spatial size stays the same every time, e.g., 224×224×3 results in 224×224×64, and only the number of filters changes. Can someone help me with that or explain it?
The block sizes do not change in VGG because the authors use zero padding throughout. I hope this helps.
I think Joe meant that suitable padding is used to make sure they are both the same size, which should be 1 for a 3×3 filter.
@@ayushyarao9693 p = 1: (n + 2p − f)/s + 1 = (224 + 2(1) − 3)/1 + 1 = 224. Answering after 3 years; I wrote it in case it helps anyone learning from this now who came across the same doubt.
It is confusing: is the kernel 3×3×3 or 3×3? I assume that for RGB images it is 3×3×3.
Each kernel is 3×3×(number of input channels); there is one such kernel per filter.
@@gerrardandeminem are you sure? based on this video it is 3*3*3:
ruclips.net/video/Lakz2MoHy6o/видео.html
@@jacobjonm0511 I think Andrew Ng explains this in previous videos of the series. It is an arbitrary choice.
@@gerrardandeminem it is not arbitrary. Here is another video at 7:23
ruclips.net/video/pDdP0TFzsoQ/видео.html
@@jacobjonm0511 If you are asking about the first input, then yes, it is 3×3×3. After that, a filter's depth always matches the number of channels in the previous layer; it's the number of filters that is the arbitrary choice.
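A minimal sketch of how the kernel depth follows the input channels, assuming TensorFlow/Keras (the layer sizes here mimic VGG's first block):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),             # RGB input: 3 channels
    tf.keras.layers.Conv2D(64, 3, padding="same"),   # 64 kernels of 3x3x3
    tf.keras.layers.Conv2D(64, 3, padding="same"),   # 64 kernels of 3x3x64
])
for layer in model.layers:
    print(layer.kernel.shape)  # (3, 3, 3, 64), then (3, 3, 64, 64)
```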
How do we get from 400 to 120 to 84 in the fully connected layers?
The next layer need not have the same number of nodes as the previous one.
Each of the 400 nodes is connected to each of the 120 nodes, and each of the 120 nodes is connected to each of the 84 nodes. There's no math for it; only experience helps you decide the number of nodes.
120 and 84 are just the chosen numbers of nodes for LeNet-5.
They chose those numbers because they worked best for the model; you have to find out what number of neurons works well for you.
Thank you all for helping out
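A small sketch of what those fully connected steps mean in numbers (plain Python; the 10-unit output is LeNet-5's digit classes):

```python
# Each fully connected step is just a weight matrix plus biases;
# the layer widths 400 -> 120 -> 84 -> 10 are design choices.
sizes = [400, 120, 84, 10]
for n_in, n_out in zip(sizes, sizes[1:]):
    params = n_in * n_out + n_out  # weights + biases
    print(f"{n_in:>3} -> {n_out:>3}: {params} parameters")
# 400 -> 120: 48120
# 120 ->  84: 10164
#  84 ->  10: 850
```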
I have a question: how do we get from 9216 parameters to 4096 parameters? And how do I know how many layers I need?
I haven't followed this video yet; did you mean layers or parameters?
As for the layers, the count came from experimenting with the given architecture.
9216 (= 6×6×256) results from flattening the last conv layer. 4096 is not a parameter count but the number of hidden units in that layer. Lastly, the number of layers is a hyperparameter, meaning you have to experiment to find what works best for your problem.
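A minimal numpy sketch of that flattening step (assuming AlexNet's last conv volume of 6×6×256):

```python
import numpy as np

last_conv = np.zeros((6, 6, 256))   # final conv/pool volume in AlexNet
flat = last_conv.reshape(-1)
print(flat.size)                    # 9216 units after flattening
W = np.zeros((4096, flat.size))     # first FC layer maps 9216 -> 4096 units
print((W @ flat).shape)             # (4096,)
```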
The number is just the product width × height × channels. As for how many you need, they just test (or their PhD students test) and report to the advising professor, kek.
How are 6 channels converted to 16 channels?
By the number of filters applied: previously 6 filters were used, then 16 filters were applied.
Too many things are skipped in the AlexNet part.
Nice. Thanks.