C4W2L02 Classic Networks

  • Published: Jan 1, 2025

Comments • 53

  • @rubel7829
    @rubel7829 7 years ago +21

    Thank you very much, Mr. Andrew Ng... you made all my tasks super easy... now it's like a piece of cake... you can't believe how much I struggled to demystify all the issues around deep learning, especially conv nets.

  • @sandyz1000
    @sandyz1000 5 years ago +24

    AlexNet was remarkable when it first came out. Setting up two GPUs for training was very difficult, and communication among GPUs using MPI required a great deal of effort. Those guys were real geeks to figure out such a solution.

    • @SuperSmitty9999
      @SuperSmitty9999 2 years ago +2

      Setting up one GPU with TensorFlow today is a feat of engineering.

  • @yinghong3543
    @yinghong3543 5 years ago +8

    At 8:40, in AlexNet, why did the channel count suddenly shrink from 384 to 256?

    • @CyborgGaming99
      @CyborgGaming99 5 years ago +14

      It's just the number of filters they used. They decided to go up from 96 to 256 to 384 filters at first, and probably when they realized their results weren't changing much, they decided to bring the filter count back down. The number of channels is just the number of filters they decided to use; there is no "explanation" or math formula for why they chose those jumps (they probably explain it in the paper).
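
      A minimal Keras sketch of that point (my illustration, not from the video; the 13x13x384 -> 13x13x256 step is the one on the AlexNet slide): the output channel count is simply whatever filters= value you pick.

      ```python
      import tensorflow as tf

      x = tf.random.normal((1, 13, 13, 384))  # one 13x13x384 activation volume
      conv = tf.keras.layers.Conv2D(filters=256, kernel_size=3, padding="same")
      print(conv(x).shape)  # (1, 13, 13, 256) -- channels = number of filters
      ```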

  • @shwethasubbu3385
    @shwethasubbu3385 6 years ago +4

    At 13:37, why do we have [CONV 64] x 2? Why do we perform the CONV operation twice (when we get the same dimensions each time)?
    Also, what is the advantage of increasing the number of channels while decreasing the height and width?

    • @CyborgGaming99
      @CyborgGaming99 5 years ago +1

      Well, you don't have to change dimensions every time in order to get results. It was just their way of trying to detect patterns in images; it just looks unusual.

    • @MuhannadGhazal
      @MuhannadGhazal 4 years ago

      p = 1 here, so the dimensions stayed the same.
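
      A quick sanity check (my sketch, not from the lecture) using the output-size formula: with f = 3 and p = 1 ('same' padding), the height/width survive both convolutions.

      ```python
      def conv_out(n, f, p=0, s=1):
          """Output height/width of a conv layer: floor((n + 2p - f) / s) + 1."""
          return (n + 2 * p - f) // s + 1

      n = 224
      for _ in range(2):             # the [CONV 64] x 2 block in VGG-16
          n = conv_out(n, f=3, p=1)  # 'same' padding for a 3x3 filter is p = 1
      print(n)                       # 224 -- unchanged after both convolutions
      ```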

  • @sau002
    @sau002 6 years ago +3

    Excellent video, thank you very much. I have a question: when we apply the 2nd set of 16 convolution filters, do we not apply them to the 6 channels produced in the previous layer? Shouldn't the final output after the 2nd pooling therefore be 5x5 x 16 filters x 6 filters = 400x6 = 2400?

    • @sau002
      @sau002 6 years ago +3

      I think I understand why. Every filter should be visualized as a 3D matrix, i.e. a volume, where each slice of the volume operates on one of the channels. E.g., in the case of an R,G,B picture, each first-layer filter has 3 matrices: 1 for R, 1 for G, 1 for B. The 3 matrices in a single filter operate on the R,G,B image to produce a single 2D matrix. The 6 filters in the first layer thus produce 6 two-dimensional matrices; think of that output as a picture with 6 channels. Therefore, in the next filter layer, your input picture is made up of 6 two-dimensional matrices, and each of the 16 filters in that second layer has a depth of 6 in the 3rd dimension, i.e. is a stack of 6 two-dimensional matrices. So 16 of these 6-channel filters operate on the input image (which can be thought of as the 6-channel image produced by the first layer's convolution).
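
      A toy NumPy sketch of that picture (my own illustration, using LeNet-5's 14x14x6 -> 10x10x16 step): each filter is a 5x5x6 volume whose elementwise product with a 5x5x6 patch of the input is summed into ONE number, so each filter yields one 2D map and the 16 maps stack into 16 channels.

      ```python
      import numpy as np

      H = W = 14; C_in = 6; f = 5; C_out = 16
      x = np.random.randn(H, W, C_in)            # 6-channel input volume
      filt = np.random.randn(C_out, f, f, C_in)  # 16 filters, each a 5x5x6 volume

      out = np.zeros((H - f + 1, W - f + 1, C_out))  # 10x10x16
      for k in range(C_out):
          for i in range(H - f + 1):
              for j in range(W - f + 1):
                  # multiply over the whole 5x5x6 patch, then sum to one number
                  out[i, j, k] = np.sum(x[i:i+f, j:j+f, :] * filt[k])
      print(out.shape)  # (10, 10, 16) -- not 10x10x16x6
      ```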

    • @sau002
      @sau002 6 years ago +3

      The previous videos from ANG have the answer to my question; I have summarized it in my reply above.

    • @pengsun1355
      @pengsun1355 4 years ago +1

      @@sau002 good job:)

  • @nikilragav
    @nikilragav 5 months ago

    Why is the last-layer ŷ (3:12) drawn as a single node? Shouldn't it be drawn as 1x10, similar to the outputs of the FC layers?
    And what's the nonlinearity for the FC layers? ReLU?

  • @oktayvosoughi6199
    @oktayvosoughi6199 1 year ago

    Do you have the papers that the prof mentioned in the lecture?

  • @sheethalgowda6616
    @sheethalgowda6616 4 years ago +2

    How does 14×14×6 turn into 10×10×16? I mean, we have 6 filtered 14×14 output images; how do we apply 16 filters to those 6 14×14 outputs?

    • @anhphan8643
      @anhphan8643 3 years ago

      @@awest11000 So how do you know how many filters fit the next layer?

    • @LogicalFootball
      @LogicalFootball 2 years ago

      The critical part is that you SUM over the depth (6). When one 5x5x6 filter is applied to a 14x14x6 tensor it yields a single 10x10x1 map, and 16 such filters stacked together give 10x10x16. (In a standard CNN the filter's depth always matches the input's depth; only in a true 3D convolution, where a shallower filter also slides along the depth, would a 5x5x5 filter on a 14x14x6 tensor yield 10x10x2, a 5x5x4 yield 10x10x3, and so on.)
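
      The same "sum over depth" can be verified in one line (my sketch, assuming SciPy is available): a full-depth filter collapses the channel axis to size 1.

      ```python
      import numpy as np
      from scipy.signal import correlate

      x = np.random.randn(14, 14, 6)     # input volume
      k = np.random.randn(5, 5, 6)       # ONE filter; its depth matches the input's
      y = correlate(x, k, mode="valid")  # slides over H and W, sums over all 6 channels
      print(y.shape)                     # (10, 10, 1) -- one 2D map per filter
      # Stacking 16 such maps along the last axis gives the 10x10x16 volume.
      ```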

  • @kiranarun1868
    @kiranarun1868 4 years ago +1

    After same padding, how did 27x27x96 become 27x27x256?

    • @alexiafairy
      @alexiafairy 4 years ago +7

      It's a conv layer. Since it uses same padding, the height and width remained 27x27, but they used 256 filters, i.e. 256 output channels, so the dimensions became 27x27x256.

    • @rahul25iit
      @rahul25iit 1 year ago

      @@alexiafairy Andrew doesn't explicitly mention using 256 filters.

    • @devanshgoel3433
      @devanshgoel3433 1 year ago

      @@rahul25iit That's because if you watch the playlist in order, you learn that such things are assumed by default when he doesn't mention them explicitly.
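
      For the record, a quick check of that step (my arithmetic, not from the video): 'same' padding for a 5x5 filter means p = (f - 1)/2 = 2, so only the channel count changes.

      ```python
      n, f, p, s = 27, 5, 2, 1          # AlexNet's 27x27x96 layer, 5x5 'same' conv
      n_out = (n + 2 * p - f) // s + 1  # (27 + 4 - 5)/1 + 1 = 27
      print(n_out)                      # 27 -- and 256 filters then give 27x27x256
      ```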

  • @aayushpaudel2379
    @aayushpaudel2379 5 years ago +2

    Convolving 224 with a 3*3 filter twice should give 220. Help me with this!

    • @basavarajpatil9821
      @basavarajpatil9821 4 years ago

      1st filter (3x3): n - f + 1 = 224 - 3 + 1 = 222
      2nd filter (3x3): 222 - 3 + 1 = 220

    • @shaunli7001
      @shaunli7001 4 years ago +3

      Here they use same convolutions, which means padding = 1.
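
      Both replies in one sketch (mine, not from the lecture): valid convolutions shrink 224 -> 222 -> 220, while same convolutions (p = 1 for a 3x3 filter) keep 224.

      ```python
      def conv_out(n, f, p=0, s=1):
          return (n + 2 * p - f) // s + 1

      n = 224
      print(conv_out(conv_out(n, 3), 3))            # valid: 224 -> 222 -> 220
      print(conv_out(conv_out(n, 3, p=1), 3, p=1))  # same:  224 -> 224 -> 224
      ```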

    • @trexmidnite
      @trexmidnite 3 years ago

      Don't be such an ass about checking every single thing

  • @navneetchaudhary4842
    @navneetchaudhary4842 4 years ago

    As we see in LeNet (or in our conv network), the dimensions decrease each time we apply a filter. E.g. in LeNet, 32*32*1 through a conv layer of 6 filters of 5*5 gives 28*28*6. But in VGG-16 the spatial size stays the same every time, e.g. 224*224*3 results in 224*224*64, and only the number of filters changes. Can someone help me with that or explain it?

    • @legacies9041
      @legacies9041 4 years ago +2

      The block sizes do not change in VGG because the authors use zero padding throughout. I hope this helps.

    • @ayushyarao9693
      @ayushyarao9693 4 years ago +2

      I think Joe meant that suitable padding is used to make sure they are both the same size, which for a 3*3 filter must be p = 1.

    • @computing_T
      @computing_T 1 year ago

      @@ayushyarao9693 p = 1: (n + 2p - f)/s + 1 => (224 + 2(1) - 3)/1 + 1 = 224. Answering 3 years after the comment; I wrote it in case it helps anyone learning from this now who came here with the same doubt.

  • @jacobjonm0511
      @jacobjonm0511 2 years ago

    It is confusing: is the kernel 3*3*3 or 3*3? I assume for RGB images it is 3*3*3.

    • @gerrardandeminem
      @gerrardandeminem 1 year ago

      It is 3*3*(the number of channels in that layer's input)

    • @jacobjonm0511
      @jacobjonm0511 1 year ago

      @@gerrardandeminem Are you sure? Based on this video it is 3*3*3:
      ruclips.net/video/Lakz2MoHy6o/видео.html

    • @gerrardandeminem
      @gerrardandeminem 1 year ago

      @@jacobjonm0511 I think Andrew Ng explains this in previous videos of the series. The number of filters is an arbitrary choice.

    • @jacobjonm0511
      @jacobjonm0511 1 year ago

      @@gerrardandeminem It is not arbitrary. Here is another video, at 7:23:
      ruclips.net/video/pDdP0TFzsoQ/видео.html

    • @gerrardandeminem
      @gerrardandeminem 1 year ago

      @@jacobjonm0511 If you are asking about the first input, then yes, it is 3*3*3. After that, the kernel depth just matches however many filters the previous layer used.
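
      A small Keras sketch to settle it (my example, not from the video): the kernel's depth is fixed by the number of input channels, and only the filter count is a free choice.

      ```python
      import tensorflow as tf

      m = tf.keras.Sequential([
          tf.keras.Input(shape=(224, 224, 3)),
          tf.keras.layers.Conv2D(64, 3),   # each kernel is 3x3x3  (RGB input)
          tf.keras.layers.Conv2D(128, 3),  # each kernel is 3x3x64 (64 maps coming in)
      ])
      for layer in m.layers:
          print(layer.kernel.shape)  # (3, 3, 3, 64), then (3, 3, 64, 128)
      ```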

  • @roshnisingh8342
    @roshnisingh8342 6 years ago +1

    How do we get from 400 to 120 to 84 in the fully connected layers?

    • @janvonschreibe3447
      @janvonschreibe3447 6 years ago

      The next layer need not have the same number of nodes as the previous one.

    • @pallawirajendra
      @pallawirajendra 5 years ago +2

      Each of the 400 nodes is connected to each of the 120 nodes, and each of the 120 nodes is connected to each of the 84 nodes. There is no math behind the counts; only your experience helps you decide the number of nodes.

    • @ritapravadutta7939
      @ritapravadutta7939 5 years ago +2

      120 and 84 are just the numbers of nodes chosen for LeNet-5

    • @Joshua-dl3ns
      @Joshua-dl3ns 4 years ago +2

      They chose those numbers because they worked best for the model; you have to find out what number of neurons works well for you.

    • @roshnisingh8342
      @roshnisingh8342 4 years ago +1

      Thank you all for helping out
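
      For concreteness, here is what those choices cost in parameters (my arithmetic; the 400/120/84 sizes are from the LeNet-5 slide):

      ```python
      fc1 = 400 * 120 + 120  # weights + biases = 48,120 parameters
      fc2 = 120 * 84 + 84    # weights + biases = 10,164 parameters
      print(fc1, fc2)        # 48120 10164
      ```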

  • @uchungnguyen1474
    @uchungnguyen1474 6 years ago

    I have a question: how do we get from 9216 parameters to 4096 parameters? And how do I know how many layers I need?

    • @adamajinugroho830
      @adamajinugroho830 6 years ago

      I haven't followed this video yet; did you mean layers or parameters?
      The number of layers came from experiments on the given architecture.

    • @muhammadharris4470
      @muhammadharris4470 6 years ago +1

      The 9216 results from flattening the last conv layer (6x6x256 = 9216). The 4096 is not a parameter count but the number of units in that fully connected layer. Lastly, the number of layers is a hyperparameter, meaning you have to experiment with what works best for your problem.

    • @ThePaypay88
      @ThePaypay88 4 years ago

      The number is just the multiplication width*height*channels. As for how many you need, they just test (or the PhD students test) and report to the advising professor, kek
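
      A minimal sketch of that step (mine, assuming Keras): flattening the 6x6x256 volume gives the 9216-dimensional vector, and Dense(4096) means 4096 units, not 4096 layers.

      ```python
      import tensorflow as tf

      x = tf.random.normal((1, 6, 6, 256))    # AlexNet's last conv volume
      x = tf.keras.layers.Flatten()(x)        # shape (1, 9216) = 6*6*256
      x = tf.keras.layers.Dense(4096, activation="relu")(x)
      print(x.shape)                          # (1, 4096)
      ```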

  • @PramodShetty
    @PramodShetty 1 year ago

    How do 6 channels get converted to 16 channels?

    • @payeldas746
      @payeldas746 6 months ago

      It's the number of filters applied: previously 6 filters, then 16 filters were applied.

  • @Dohkim-ni6um
    @Dohkim-ni6um 5 years ago

    He skips too many things in the AlexNet part.

  • @ati43888
    @ati43888 9 months ago

    Nice. Thanks