YOLO-V3: An Incremental Improvement || YOLO OBJECT DETECTION SERIES

  • Published: 24 Nov 2024

Comments • 29

  • @vivekk4525
    @vivekk4525 a year ago +4

    Hi ML For Nerds - this is the best explanation of YOLO V3; I could easily understand how YOLO works from it. Please do more videos on recent topics in computer vision like transformers, CLIP, etc.

  • @giabao2602
    @giabao2602 4 months ago

    Thanks for helping us a lot in learning, truly appreciate your work.

  • @AdnanMunirkhokhar
    @AdnanMunirkhokhar 8 months ago +3

    Why do you have so few subscribers? You deserve millions of subscribers. Best explanation (y)

  • @MuhammadRizwan-z2k5f
    @MuhammadRizwan-z2k5f 7 months ago +3

    I have a presentation on this topic tomorrow; this video was a life saver, bro. Keep up the good work. It would be even better if you had provided a link to the slides and reference material, but thanks anyway.

    • @MLForNerds
      @MLForNerds  7 months ago +1

      Glad it helped. The slides are already on my GitHub.

  • @AndreiChegurovRobotics
    @AndreiChegurovRobotics 6 months ago +1

    Best YoloV3 video!

  • @kvnptl4400
    @kvnptl4400 7 months ago +1

    🌟 Came from the YOLO series; one of the best YouTube videos, with an easy-to-understand explanation of YOLOv3. Keep up the good work. If possible, make a video on the Vision Transformer (ViT) and then DETR. 🙏

    • @MLForNerds
      @MLForNerds  7 months ago +1

      Thanks, will do!

  • @kvzui994
    @kvzui994 5 months ago

    IM EATING THIS UP THANK YOU

  • @saisingireddy2359
    @saisingireddy2359 5 months ago

    underrated af

  • @FuhangMao
    @FuhangMao a year ago +1

    Nice!!! Looking forward to the other YOLO versions!

  • @emoji4652
    @emoji4652 3 months ago

    Thanks

  • @pavanKUMAR-tt4jt
    @pavanKUMAR-tt4jt a year ago +1

    Amazing explanation, sir. This video helped me a lot, thank you.

  • @albertnagapetyan9482
    @albertnagapetyan9482 10 months ago +1

    Nice!

  • @none-hr6zh
    @none-hr6zh 2 months ago

    For tx and ty, they are relative to the grid cell; why are we not multiplying by 64 like in YOLOv1?
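
A minimal NumPy sketch of the box decoding described in the YOLOv3 paper, which is what this question refers to: the sigmoid offsets are added to the cell indices (cx, cy) in grid units and only then scaled by that scale's stride (32, 16, or 8), so there is no fixed 64-pixel multiplier as in the YOLOv1 walkthrough. The helper name `decode_yolov3_box` and the example numbers are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolov3_box(tx, ty, tw, th, cx, cy, pw, ph, stride):
    """Decode raw network outputs into a box center/size in pixels.

    Follows the YOLOv3 paper: bx = sigma(tx) + cx, by = sigma(ty) + cy,
    bw = pw * exp(tw), bh = ph * exp(th), where (cx, cy) is the grid-cell
    offset in grid units, so the center is scaled by the stride
    (32, 16, or 8 depending on the detection scale) to get pixels.
    """
    bx = (sigmoid(tx) + cx) * stride   # center x in pixels
    by = (sigmoid(ty) + cy) * stride   # center y in pixels
    bw = pw * np.exp(tw)               # anchors (pw, ph) are already in pixels
    bh = ph * np.exp(th)
    return bx, by, bw, bh

# Example: cell (7, 4) on the 13x13 scale (stride 32) with anchor (116, 90).
print(decode_yolov3_box(0.2, -0.1, 0.3, 0.1, 7, 4, 116, 90, 32))
```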

  • @MJBZG
    @MJBZG 4 months ago

    I still want to know how to code this stuff up. Theory can be found everywhere, but how do we dive into doing it ourselves?
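
One possible starting point (an assumption on my part, not a route taken in the video): run a pretrained YOLOv3 through OpenCV's DNN module before attempting a training loop. The sketch below assumes `yolov3.cfg` and `yolov3.weights` downloaded from the official Darknet release and a placeholder image `dog.jpg`; the 0.5 confidence threshold is illustrative.

```python
import cv2
import numpy as np

# Assumes yolov3.cfg / yolov3.weights downloaded from the official Darknet release.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")

img = cv2.imread("dog.jpg")   # placeholder image path
h, w = img.shape[:2]

# YOLOv3 expects a 416x416 RGB blob scaled to [0, 1].
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through the three detection scales (13x13, 26x26, 52x52).
outputs = net.forward(net.getUnconnectedOutLayersNames())

for out in outputs:
    for det in out:                  # det = [x, y, w, h, objectness, class scores...]
        scores = det[5:]
        class_id = int(np.argmax(scores))
        conf = det[4] * scores[class_id]
        if conf > 0.5:               # illustrative threshold; add NMS for real use
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            print(class_id, conf, cx, cy, bw, bh)
```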

  • @abdoaborasheed
    @abdoaborasheed a year ago +1

    Hi, thank you very much, it's a very good explanation. I have a question: in the prediction-across-scales slide you explain that the YOLOv3 architecture contains 106 layers, but it is based on Darknet-53, so I think it consists of 53 layers. What is the true architecture?

    • @MLForNerds
      @MLForNerds  a year ago +1

      Yes, absolutely. But Darknet-53 is only the backbone; there are an additional 53 layers on top of it to make 106 layers in total (see the sketch after this thread).

    • @abdoaborasheed
      @abdoaborasheed a year ago

      @@MLForNerds What kind of layers are the others: fully connected or something else?
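
To make the reply concrete: the extra layers on top of Darknet-53 are 1x1/3x3 convolutions, upsampling, and route/concatenation layers; YOLOv3 has no fully connected layers. Below is a minimal PyTorch sketch of one detection branch, not the exact 106-layer listing; the module names, channel counts, and the default of 80 classes are illustrative.

```python
import torch
import torch.nn as nn

def conv_bn_leaky(in_ch, out_ch, k):
    """Conv + BatchNorm + LeakyReLU, the basic block used throughout the head."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.1),
    )

class DetectionBranch(nn.Module):
    """One YOLOv3-style detection branch: alternating 1x1/3x3 convs, then a
    1x1 conv that outputs 3 * (5 + num_classes) channels per grid cell."""

    def __init__(self, in_ch, num_classes=80):
        super().__init__()
        self.convs = nn.Sequential(
            conv_bn_leaky(in_ch, in_ch // 2, 1),
            conv_bn_leaky(in_ch // 2, in_ch, 3),
            conv_bn_leaky(in_ch, in_ch // 2, 1),
            conv_bn_leaky(in_ch // 2, in_ch, 3),
            conv_bn_leaky(in_ch, in_ch // 2, 1),
            conv_bn_leaky(in_ch // 2, in_ch, 3),
        )
        # Final prediction layer: a plain 1x1 conv, no fully connected layer.
        self.pred = nn.Conv2d(in_ch, 3 * (5 + num_classes), 1)

    def forward(self, x):
        return self.pred(self.convs(x))

# Example: the 13x13 scale with 1024-channel features from Darknet-53.
feats = torch.randn(1, 1024, 13, 13)
print(DetectionBranch(1024)(feats).shape)  # torch.Size([1, 255, 13, 13])
```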

  • @haaanhson
    @haaanhson a year ago +1

    Hi ML For Nerds, thank you for your videos!
    I have one question: with YOLOv3 we have 3 anchor boxes of different shapes, but an object has only one shape, which means we must choose one of them. But as I see in the loss function, we calculate the loss for all boxes. With this loss, how do we reduce it when the 3 shapes are different, and how do we get the true bounding box?

    • @MLForNerds
      @MLForNerds  a year ago

      Hi, even though anchor boxes are generated at 3 possible scales, each ground-truth box is matched with only one of them based on IoU, so the loss is calculated only for the matched boxes (see the sketch after this thread).

    • @haaanhson
      @haaanhson a year ago

      @@MLForNerds Thanks for your response. Can you explain in more detail? As I read the loss-function formula, I don't see that the loss is calculated only for the box with the highest IoU; I only see loss = sum(loss over all boxes vs. the ground-truth box).
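
A minimal NumPy sketch of the matching step described in the reply above, using the 9 anchor shapes from the paper: each ground-truth box is assigned to the single anchor whose shape gives the highest IoU, and only that prediction receives the coordinate/objectness "responsible" loss terms (the others contribute only a no-object term and are ignored above an IoU threshold of 0.5 per the paper). The function names are illustrative.

```python
import numpy as np

# The 9 YOLOv3 anchors (w, h) in pixels from the paper, grouped 3 per scale.
ANCHORS = np.array([(10, 13), (16, 30), (33, 23),        # 52x52 scale
                    (30, 61), (62, 45), (59, 119),        # 26x26 scale
                    (116, 90), (156, 198), (373, 326)])   # 13x13 scale

def wh_iou(gt_wh, anchor_wh):
    """IoU of two boxes compared by shape only (both centered at the origin)."""
    inter = np.minimum(gt_wh[0], anchor_wh[0]) * np.minimum(gt_wh[1], anchor_wh[1])
    union = gt_wh[0] * gt_wh[1] + anchor_wh[0] * anchor_wh[1] - inter
    return inter / union

def match_anchor(gt_wh):
    """Return the index of the single anchor responsible for this ground truth."""
    ious = np.array([wh_iou(gt_wh, a) for a in ANCHORS])
    return int(np.argmax(ious)), ious

best, ious = match_anchor((150, 200))   # a fairly large object
print(best, ious.round(2))              # only ANCHORS[best] gets box/objectness loss
```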

  • @neeru1196
    @neeru1196 a year ago

    Hi! Thank you so much for your videos. It would be really intuitive and helpful if you could show what the different operations/transformations look like in the YOLO process. For example, when the image is down-sampled using convolutions and the features from the previous stage are re-introduced and up-sampled, what does the physical image look like?

    • @slobodanblazeski0
      @slobodanblazeski0 11 months ago

      It's not an RGB image anymore that we could just look at; it's the feature map produced by the filters, and the deeper you go in the hierarchy, the more abstract the features become. Google "Visualizing the Feature Maps in Convolutional Neural Networks" for examples.
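
A minimal PyTorch sketch of that kind of visualization, using a pretrained torchvision ResNet-18 as a stand-in backbone (the choice of `layer2`, the image path, and the number of channels shown are illustrative): a forward hook captures an intermediate feature map and a few of its channels are plotted as grayscale images.

```python
import torch
import matplotlib.pyplot as plt
from torchvision import models, transforms
from PIL import Image

# Stand-in backbone; the idea is the same for Darknet-53.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

captured = {}
def hook(module, inputs, output):
    captured["fmap"] = output.detach()

# Hook an intermediate stage; deeper stages give more abstract feature maps.
model.layer2.register_forward_hook(hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
img = preprocess(Image.open("dog.jpg").convert("RGB")).unsqueeze(0)  # placeholder image

with torch.no_grad():
    model(img)

fmap = captured["fmap"][0]            # (channels, H, W)
fig, axes = plt.subplots(2, 4, figsize=(10, 5))
for i, ax in enumerate(axes.flat):    # show the first 8 channels as grayscale maps
    ax.imshow(fmap[i].numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```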

  • @safiullah353
    @safiullah353 a year ago +1

    Please make another video about YOLOv5, sir.

    • @MLForNerds
      @MLForNerds  a year ago +1

      Sure, I will start YOLOv5 this weekend.