Top Object Detection Models in 2023 | Model Selection Guide sponsored by Intel

  • Published: 2 Feb 2025

Comments • 74

  • @juanolano2818
    @juanolano2818 1 year ago +3

    Will go with YOLOv8 for my current microbe identification project :) Thank you Piotr!

    • @Roboflow
      @Roboflow  1 year ago

      If it is an open-source or academic project, good choice! 👍🏻

    • @saurabhgupta7148
      @saurabhgupta7148 1 year ago

      I am also working with images of microorganisms. Did you get good results with YOLOv8?

  • @hawkingradiation3774
    @hawkingradiation3774 1 year ago +16

    Would also like to see a comparison video for segmentation tasks as well.

    • @Roboflow
      @Roboflow  1 year ago +7

      Awesome idea! I'm curious if there are more people who would like to see that. It's a lot of work to create a video like that.

    • @sozno4222
      @sozno4222 1 year ago +2

      @@Roboflow I would also be interested in a video like that.

    • @visuality2541
      @visuality2541 1 year ago

      Indeed

    • @Ibn_Sulaimaan
      @Ibn_Sulaimaan 3 months ago

      @Roboflow, me too!

  • @rperezalejo
    @rperezalejo 1 year ago +1

    Great video! Right now I am working on real-time sports object detection, and this video comes right on time for me. I can test other possibilities I did not have in mind.

    • @Roboflow
      @Roboflow  1 year ago

      Awesome, we came at the right time! I'd love to hear about your results once you finish your tests.

    • @rperezalejo
      @rperezalejo 1 year ago

      @@Roboflow I have to retrain on 4k images first and then see the performance of the models in real time. I am not sure if I am going to be able to test them all, but the idea of having the golden cup gives a hint on where to start. It is a great guide anyway.

  • @AlainPilon
    @AlainPilon 1 year ago +4

    Thanks for the work! I am wondering if the COCO metric is actually useful. It is an interesting comparison point, but I am not sure how it translates in practice, given that most people only train detectors for a small set of classes. Community support, documentation, and how the model integrates into the current ecosystem are much more impactful, and I am glad you added these to your chart.

    • @Roboflow
      @Roboflow  1 year ago

      I'm glad you agree with my methodology. I think the license is also very important; after all, you must be able to use the model in your project. As for mAP, I 100% agree. I'd love to have other metrics I could use. We developed the RF100 benchmark - paperswithcode.com/dataset/rf100 - but I didn't have enough data points to compare all the models.

  • @seanolivieri4829
    @seanolivieri4829 1 year ago +1

    Do you have a video on licenses? I don't understand any of them, or which one I should use if I want to be able to sell my program or call it my own.

  • @mpty2022
    @mpty2022 1 year ago +2

    Thanks for sharing your research.

  • @satellite-image-deep-learning
    @satellite-image-deep-learning 1 year ago +1

    Fantastic summary, thank you for the effort that went into this! MMDetection has come up before and I would love an intro video on it 😊

    • @Roboflow
      @Roboflow  1 year ago

      We already have an MMDetection video. :) Take a look at our channel.

    • @segheysens
      @segheysens 1 year ago

      They already created a video here! 🙌 ruclips.net/video/5kgWyo6Sg4E/видео.html

  • @johannesmokami5760
    @johannesmokami5760 1 year ago +1

    Thanks for the info.
    I will definitely try them as well.

    • @Roboflow
      @Roboflow  1 year ago +1

      Awesome! Which detector are you going to try?

    • @johannesmokami5760
      @johannesmokami5760 1 year ago

      @@Roboflow I'm currently trying out YOLOv8, but I'd like to try YOLOv7 and GroundingDINO.

  • @romroc627
    @romroc627 1 year ago +1

    Excellent video, really useful.

    • @Roboflow
      @Roboflow  1 year ago

      I'm so happy to see such positive feedback!

  • @Seethis-HD
    @Seethis-HD 1 year ago +2

    Excellent video. Thanks for the effort. I was wondering why you didn't include YOLO-NAS in the list?

    • @Roboflow
      @Roboflow  1 year ago

      I considered it but ultimately decided not to include it. I’m pretty confident those models are better choices.

    • @PhilippBlum
      @PhilippBlum 11 months ago

      @Roboflow What was the reason not to include it? Accuracy?

  • @ferneutron
    @ferneutron 1 year ago

    Super great job, thanks!

  • @gz3442
    @gz3442 2 months ago

    What are the minimum hardware/software requirements for a project that checks only documents and some PC screenshots every 1 s?

  • @EliSpizzichino
    @EliSpizzichino 1 year ago +3

    We need a platform to fully compare them on real datasets, with real training, on the same device.
    It is also important to track whether a version change degrades quality. I've noticed, for example, that between one minor version and another of the Ultralytics codebase the quality of the final trained model worsened by a lot.

    • @Roboflow
      @Roboflow  1 year ago

      That's super interesting! Could you share more details on which versions those were? I'd love to investigate further.
      As for the "platform to fully compare them on real datasets", have you seen RF100? paperswithcode.com/dataset/rf100

    • @EliSpizzichino
      @EliSpizzichino 1 year ago

      @@Roboflow The last known-good version I tested was 8.0.103. Unfortunately, I have not had time since then to investigate further myself, but I remember trying a couple of training runs with some later versions and getting worse results.
      I haven't tested RF100 yet. It's a good effort, and I like what you do as a company (I have never left such a good comment for anyone in my life :)

  • @tryingtobeproductive
    @tryingtobeproductive 1 year ago +1

    This video is extremely useful. 10/10

    • @Roboflow
      @Roboflow  1 year ago

      Thank you! Awesome to hear such positive feedback 🔥

  • @gz3442
    @gz3442 2 months ago

    Could you do a video on hardware (SBC) requirements?

  • @KaranBhuva-t8d
    @KaranBhuva-t8d 1 year ago

    Can we use YOLOv8 pretrained weights for commercial use?

  • @rafael.gildin
    @rafael.gildin 1 year ago +1

    Great video 🎉

  • @pleison111
    @pleison111 11 months ago +1

    I have a question. I am working on an OCR project: I am using a Fast R-CNN with a ResNet-50 backbone as the object detector, and then I need something like a conv + GRU or a ViT to decode the text. Do you have any suggestions regarding OCR?

    • @Roboflow
      @Roboflow  11 months ago

      First of all, why Fast R-CNN? As for OCR, did you try Tesseract?
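
      A minimal sketch of the Tesseract route, assuming the Tesseract binary and the pytesseract package are installed; the image path and box coordinates are placeholders:

        # Run Tesseract on a text region cropped from a detector's bounding box.
        from PIL import Image
        import pytesseract  # requires the Tesseract binary on the system

        image = Image.open("document.jpg")          # placeholder input image
        x1, y1, x2, y2 = 100, 40, 620, 90           # box returned by the text detector
        crop = image.crop((x1, y1, x2, y2))         # cut out the detected text region
        text = pytesseract.image_to_string(crop)    # OCR just that region
        print(text)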

  • @VasimRaja-bw3fe
    @VasimRaja-bw3fe 5 months ago

    @Roboflow I'm developing an exam proctoring system for an institution. I need a real-time object detection model to flag cell phone and book usage. Currently using a t3.medium server. What's the best model for this purpose? I'm open to upgrading the server if necessary.

  • @EduardoGarnica-h2h
    @EduardoGarnica-h2h 1 year ago

    I was looking for inference-time performance on edge devices. I was trying to use YOLOv8 for edge deployment on an STM32, but in the end I realized this model was too big for that board. What do you think is a good model with a good ratio between inference time and model size? Thanks for your response.

  • @mr_tpk
    @mr_tpk 1 year ago

    Thank you ❤

  • @tyronetyrone2652
    @tyronetyrone2652 1 year ago +1

    Which framework is better to use on embedded chips?

    • @Roboflow
      @Roboflow  1 year ago

      Which board are you using?

    • @sarathkumar-gq8be
      @sarathkumar-gq8be 11 months ago

      Which model will perform better on a Raspberry Pi?

  • @shubh722
    @shubh722 1 year ago +2

    I am doing an object detection task and get 97.4% accuracy on the dataset using YOLOv5, and I will be running it on an edge device. Is YOLOv5 too old, and should I train a YOLOv8 model for faster inference? I think accuracy will be almost the same, as it's already 97.4%. Or is it task specific? If YOLOv5 is performing well, is there any need to change? If anyone can suggest, please do.

    • @Roboflow
      @Roboflow  1 year ago +1

      I don't think you can expect better accuracy than that. The main issue here is that YOLOv5 did not have proper Python packaging, so integrating it into larger projects was problematic (see the short sketch after this thread).

    • @omigator
      @omigator 1 year ago +2

      We switched from YOLOv8 to YOLOv5 because it gave better performance without any loss in accuracy on our edge devices.

    • @shubh722
      @shubh722 1 year ago

      @omigator What do you think about the inference times of v8 versus v5? Is it real time? Also, I found YOLOv5 easier to use. I was training it in Azure ML, so it was much easier to tweak the files for v5 to train there rather than v8. And the accuracy is pretty good as well.
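
      For reference, a minimal sketch of the pip-packaged YOLOv8 workflow mentioned above (pip install ultralytics); the dataset YAML and image path are placeholders:

        from ultralytics import YOLO

        # Load a small pretrained checkpoint and fine-tune it on a custom dataset.
        model = YOLO("yolov8n.pt")
        model.train(data="my_dataset.yaml", epochs=50, imgsz=640)

        # Run inference on a single image and read back the boxes.
        results = model("test.jpg")
        print(results[0].boxes.xyxy)

        # Export for edge deployment, e.g. to ONNX.
        model.export(format="onnx")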

  • @titusfx
    @titusfx 1 year ago +1

    At 2:28, why not just do asymptotic analysis (computational complexity analysis)?

    • @Roboflow
      @Roboflow  1 year ago

      Hi 👋🏻! You mean use FLOPs to assess complexity?
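
      As a rough illustration of that kind of complexity estimate (parameter and FLOP counts rather than true asymptotic analysis), a sketch using PyTorch with the fvcore package; ResNet-50 stands in for any backbone:

        import torch
        import torchvision
        from fvcore.nn import FlopCountAnalysis  # assumes fvcore is installed

        # ResNet-50 is only an example; swap in any torch.nn.Module.
        model = torchvision.models.resnet50(weights=None).eval()

        n_params = sum(p.numel() for p in model.parameters())
        print(f"parameters: {n_params / 1e6:.1f}M")

        flops = FlopCountAnalysis(model, torch.randn(1, 3, 224, 224))
        print(f"GFLOPs at 224x224: {flops.total() / 1e9:.1f}")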

  • @zskater1234
    @zskater1234 1 year ago +2

    I'm currently porting GroundingDINO to the transformers library, so buckle up.

    • @Roboflow
      @Roboflow  1 year ago

      Uuu! Awesome! I can't wait to see that happen. Being able to set up GroundingDINO with a single pip install would be great.

    • @zskater1234
      @zskater1234 1 year ago

      @@Roboflow The model is already ported; now I'm tackling the tests and documentation. If everything goes well, by next weekend I'll have finished it and will await the HF review.
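
      A sketch of what the single-pip-install usage could look like once the port is merged; the checkpoint id, class names, and thresholds below are assumptions based on the transformers zero-shot object detection API, not the final interface:

        import torch
        from PIL import Image
        from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

        checkpoint = "IDEA-Research/grounding-dino-tiny"  # hypothetical checkpoint id
        processor = AutoProcessor.from_pretrained(checkpoint)
        model = AutoModelForZeroShotObjectDetection.from_pretrained(checkpoint)

        image = Image.open("example.jpg")        # placeholder image path
        text = "a cat. a remote control."        # free-text prompt, phrases separated by periods

        inputs = processor(images=image, text=text, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        results = processor.post_process_grounded_object_detection(
            outputs, inputs.input_ids,
            box_threshold=0.35, text_threshold=0.25,
            target_sizes=[image.size[::-1]],
        )
        print(results[0]["boxes"], results[0]["labels"])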

  • @sanchaythalnerkar9736
    @sanchaythalnerkar9736 1 year ago +1

    Can you show actual code and a real-time comparison of these?

    • @Roboflow
      @Roboflow  1 year ago

      You mean an independent time benchmark comparing the speed of all of those models?

    • @sanchaythalnerkar9736
      @sanchaythalnerkar9736 1 year ago +1

      Yes, exactly, a side-by-side comparison @@Roboflow

    • @Roboflow
      @Roboflow  1 year ago

      @@sanchaythalnerkar9736 I'm not sure it can really be side by side. To truly measure model speed, we need to make sure there is no other heavy process running on the machine. But sure, we can try to make that happen. I'll add it to our TODO list.
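
      A minimal sketch of the kind of standalone timing run meant here (warm-up followed by averaged forward passes on an otherwise idle machine); the torchvision detector is just a stand-in for whichever model is being measured:

        import time
        import torch
        import torchvision

        # Any detector can be dropped in here; Faster R-CNN is only an example.
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None).eval()
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model.to(device)
        images = [torch.randn(3, 640, 640, device=device)]

        with torch.no_grad():
            for _ in range(10):                  # warm-up iterations
                model(images)
            if device == "cuda":
                torch.cuda.synchronize()         # make sure queued GPU work is done
            n, start = 50, time.perf_counter()
            for _ in range(n):
                model(images)
            if device == "cuda":
                torch.cuda.synchronize()
            elapsed = time.perf_counter() - start

        print(f"average latency: {1000 * elapsed / n:.1f} ms")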

  • @satyajitpanigrahy7742
    @satyajitpanigrahy7742 1 year ago

    Kindly update the ultralytics package for the YOLOv4 model.

    • @Roboflow
      @Roboflow  1 year ago

      Hi @satyajitpanigrahy7742 👋 Ultralytics is a separate team. Kindly submit a bug report in their repository: github.com/ultralytics/ultralytics

  • @appliedml8665
    @appliedml8665 1 year ago +1

    Gold-YOLO is now available.

    • @Roboflow
      @Roboflow  1 year ago

      Yeah, I know... This video was recorded before Gold-YOLO was released. I haven't had time to play with it yet. Have you?

  • @8eck
    @8eck 1 year ago

    Where is the DETA video? I couldn't find DETA with 100k stars... Could you please add a GitHub link here?

    • @Roboflow
      @Roboflow  1 year ago

      For now we only have DETR. You can find it here: ruclips.net/video/AM8D4j9KoaU/видео.html
      As for the star count, DETA is distributed via the transformers library, and that's what I used to measure community size.

  • @8eck
    @8eck 1 year ago +1

    I found only DETA with 198 stars, not 100k like in your table...

    • @Roboflow
      @Roboflow  1 year ago

      I responded to that question under your other comment :)

  • @8eck
    @8eck 1 year ago +1

    RT-DETR has 355 stars, not 20k+.

    • @Roboflow
      @Roboflow  1 year ago

      To be honest, no one uses the implementation from the original repository. RT-DETR is distributed via the PaddlePaddle package; that's why we used the 20k+ star count. I know it is not perfect... but like I said, I decided to use the top repo that makes the model accessible.

    • @8eck
      @8eck 1 year ago +1

      @@Roboflow Can you please drop some links? Thank you.

    • @Roboflow
      @Roboflow  1 year ago

      @@8eck Take a look here: github.com/PaddlePaddle/PaddleDetection/tree/develop/configs/rtdetr and here: huggingface.co/docs/transformers/main/en/model_doc/deta
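
      For illustration, a minimal sketch of loading DETA through the transformers library linked above; the checkpoint id is assumed from the Hugging Face hub and may change, and the image path is a placeholder:

        import torch
        from PIL import Image
        from transformers import AutoImageProcessor, AutoModelForObjectDetection

        checkpoint = "jozhang97/deta-swin-large"  # assumed hub id for DETA
        processor = AutoImageProcessor.from_pretrained(checkpoint)
        model = AutoModelForObjectDetection.from_pretrained(checkpoint)

        image = Image.open("street.jpg")          # placeholder image path
        inputs = processor(images=image, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)

        # Keep detections above a confidence threshold, rescaled to the image size.
        results = processor.post_process_object_detection(
            outputs, threshold=0.5, target_sizes=[image.size[::-1]]
        )[0]
        print(results["boxes"], results["labels"], results["scores"])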

  • @adurks4846
    @adurks4846 1 year ago +1

    Personally, I've found YOLOv8 to be disappointing in the real world. I work in aerial/satellite imaging, and YOLOv8 performs ~10% worse than Scaled-YOLOv4. Most of the others on that list perform similarly. Overall, it seems like once you leave the types of images/targets in the COCO dataset, the metrics mean less and less for what will do well on your project.

    • @Roboflow
      @Roboflow  1 year ago

      Absolutely agree! I even said that in the video. I'd love to have other metrics to compare models, not just mAP on COCO. The moment you start to fine-tune the model on your own dataset, that number means nothing. Do you care about speed when you process aerial/satellite imagery?

    • @adurks4846
      @adurks4846 1 year ago

      @@Roboflow We don't care that much about speed. However, we typically don't have much data, which means that the larger models seem to do worse.
      Do you guys have in-house metrics for some of these models using Roboflow 100?