Speed Estimation & Vehicle Tracking | Computer Vision | Open Source

  • Published: Dec 24, 2024

Comments • 218

  • @kemal_kilicaslan
    @kemal_kilicaslan 11 months ago +31

    As a mathematician, your analytical geometry skills are admirable. I've been following your work on image processing applications closely and find it crazy. Keep it up Piotr.

    • @Roboflow
      @Roboflow  11 months ago +7

      I plan to include more of those whiteboard explanations in future videos. I’m just a bit scared that some people will get bored of me talking and drawing and just skip to the next section.

    • @kemal_kilicaslan
      @kemal_kilicaslan 11 months ago +3

      @@Roboflow It is very important to know the theoretical part of the project, especially the theory behind the code. Those who skip to the next part can only advance one step at most; even if they move to the second step, they will not be successful. My personal opinion is to continue in the direction you have planned. Congratulations again.

    • @atomix_2402
      @atomix_2402 9 months ago +3

      @@Roboflow We need more of the whiteboard explanations, man, and possibly more detailed explanations, or you could suggest some prerequisites for understanding the concept. The ones who want to be successful would love to watch those.

    • @bigflakes6699
      @bigflakes6699 8 months ago

      @@Roboflow Hi, any ideas on how the coordinates of the region of interest were computed?

  • @mileseverett
    @mileseverett 11 months ago +9

    Great tutorial. Do you think you could make a video that covers implementing re-identification across multiple cameras? There is a real lack of tutorials on this topic, now that you have covered tracking so well.

    • @SkalskiP
      @SkalskiP 11 months ago +1

      Hi! It's Piotr from the video here. I'd love to make it. I just don't have data that I could use to make it :/

    • @mileseverett
      @mileseverett 11 months ago +1

      @@SkalskiP what kind of data do you need? I might be able to help.

    • @Roboflow
      @Roboflow  11 months ago +2

      Two or more videos looking at the same area from different perspectives at the same time, so we could use them as an example in the video.

    • @AlainPilon
      @AlainPilon 11 months ago

      @@Roboflow Should the camera be looking at the exact same area from different angles? Or could we have one camera watching one street corner and the other looking at the next intersection? I too would be interested in such a tutorial.

    • @JoshPeak
      @JoshPeak 11 months ago +1

      Absolutely crazy idea here… could you simulate re-identification with multiple cameras looking at a Hot Wheels or slot-car track? Like a scaled-down simulation?

  • @tobieabel7474
    @tobieabel7474 11 months ago +2

    Another great video Piotr! I am currently working on a project using Supervision to track the speed of hand movements as part of a hand gesture recognition system, and your tutorials are really timely. I'm detecting the hands, performing some minor perspective transformation as you do here, tracking their movements within certain zones, and calculating their speed over several frames to determine the specific gesture. One issue I'm noticing is that ByteTrack has a tendency to lose detections even within a small area, and I was wondering if you have any tips for improving tracking performance other than playing with the ByteTrack parameters?

    • @Roboflow
      @Roboflow  11 months ago +1

      ByteTrack uses IoU to match boxes between frames, so if your hand is moving fast you can lose tracking.
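
The matching criterion described in this reply can be illustrated with a toy IoU function (a sketch of the idea, not ByteTrack's actual code): once a fast-moving object jumps far enough that its boxes in consecutive frames no longer overlap, IoU drops to 0 and no match can be made.

```python
def iou(a, b):
    # a, b: axis-aligned boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# slow movement: boxes still overlap, so the tracker can match them
print(iou((0, 0, 100, 100), (20, 0, 120, 100)))   # ~0.67
# fast movement: no overlap at all, IoU is 0 and the track is lost
print(iou((0, 0, 100, 100), (150, 0, 250, 100)))  # 0.0
```

Running inference on more frames per second (or loosening the tracker's matching threshold) increases the chance that consecutive boxes still overlap.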

  • @Likith_Gannarapu
    @Likith_Gannarapu 5 months ago

    While using the tracker, I noticed that the tracker IDs are not assigned sequentially. Specifically, after tracker ID #6, the next assigned tracker ID was #8. Tracker ID #7 was skipped. This issue can be observed starting at timestamp 22:45 in the video.

  • @Bassel48
    @Bassel48 11 months ago +5

    Thanks for the video. It is not clear to me how you calculated points C and D outside the image boundaries. I understand the y-axis value, but how is the x value calculated?

    • @recon14192
      @recon14192 1 day ago

      He doesn't explain it at all, but I'm pretty sure he's using the fact that from our view it's a trapezoid; since we know 3 sides, we can calculate the 4th, which gives the 2 x coordinates.

  • @minhnguyenquocnhat3796
    @minhnguyenquocnhat3796 8 months ago +1

    Thank you so much for this tutorial. Your instruction is great.

    • @Roboflow
      @Roboflow  8 months ago

      Thanks a loooot!

  • @tylorbillings4065
    @tylorbillings4065 23 days ago

    This is super helpful and awesome. Thank you so much for taking the time!

  • @DavidAkinwande
    @DavidAkinwande 11 months ago +4

    Thank you for such free education! Please where did you learn supervision?
    Edit: I learnt that you're the creator of supervision

    • @mileseverett
      @mileseverett 11 months ago +2

      He created it

    • @DavidAkinwande
      @DavidAkinwande 11 months ago

      Oooooohhh! No wonder @@mileseverett

    • @SkalskiP
      @SkalskiP 11 months ago +2

      haha yup! I created it. Or I still create it every day. I hope you find it useful ;)

    • @DavidAkinwande
      @DavidAkinwande 11 months ago +1

      I am really grateful for your creation and videos. I use it where I work; it makes life so much easier @@SkalskiP

  • @유영재-c9c
    @유영재-c9c 11 months ago

    10:17 Here you find the coordinates for A. Is this making an assumption? Or did you find out about it through mouse events?

    • @유영재-c9c
      @유영재-c9c 11 months ago

      SOURCE = np.array([[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]]) What I'm curious about here is: why are the y-coordinates, 787 and 803, different? Shouldn't they be aligned?
      And I don't know how -550 was derived.

    • @Roboflow
      @Roboflow  11 months ago

      A and B are easy. You can get them through a mouse event, for example. You can also do it with this tool: roboflow.github.io/polygonzone

    • @Roboflow
      @Roboflow  11 months ago

      As for C and D: I made the assumption that their y coordinate is aligned with the bottom edge. Then I used points A and B and the known y to figure out the x coordinates.
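
The extrapolation described here can be sketched as extending a straight line to a chosen y. The helper below is illustrative; the two edge points passed to it are hypothetical, with only the frame-bottom y value (2159) taken from the video.

```python
def extend_to_y(p1, p2, y):
    # return the point on the line through p1 and p2 at the given y
    (x1, y1), (x2, y2) = p1, p2
    t = (y - y1) / (y2 - y1)
    return (round(x1 + t * (x2 - x1)), y)

# hypothetical points clicked along the left road edge,
# extended down to the bottom edge of the frame (y = 2159)
print(extend_to_y((1252, 787), (1100, 900), 2159))
```

Repeating the same extension for the right road edge yields the second bottom corner; negative or out-of-frame x values are fine, since the points only feed the perspective transform.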

  • @Ash2Tutorial
    @Ash2Tutorial 22 days ago

    I liked your tutorial on this topic. It's very informative and helpful. Could you tell me your computer specifications when you ran this code on your machine? I noticed it was very smooth.

  • @pingyang8963
    @pingyang8963 11 months ago +1

    Awesome presentation! Thanks for sharing. One question: since speed is detected, is there a way to get the distance to the camera instead of speed?

    • @Roboflow
      @Roboflow  11 months ago

      Well, we would need to know the distance from the camera to some reference point.

    • @pingyang8963
      @pingyang8963 11 months ago

      @@Roboflow For the reference point, would that be possible using 2 cameras (with a known distance between them), creating a fused map from the two cameras, and getting the distance and speed?

    • @李杰-u4e
      @李杰-u4e 5 months ago

      @@Roboflow Can you provide relevant examples?

  • @crazyKurious
    @crazyKurious 10 months ago +1

    Piotr, great video. Can you provide instructions on how to make it real-time?

    • @Roboflow
      @Roboflow  10 months ago

      Are there any specific problems you face when you try to run it in real time?

  • @theoldknowledge6778
      @theoldknowledge6778 9 months ago +1

    These application videos are amazing!!

    • @Roboflow
      @Roboflow  9 months ago

      Thanks a lot!

  • @NicholasRessi
    @NicholasRessi 9 months ago +1

    Amazing work! Does anyone know how to estimate/predict distance in a 2D image? I assume the 250 m length and 25 m width of the road were discovered by doing online research; I wonder if there is an algorithm or method that would allow one to estimate distance in a 2D image.

    • @Roboflow
      @Roboflow  9 months ago

      Do you mean without passing any information? Fully automatically?

    • @李杰-u4e
      @李杰-u4e 5 months ago

      @@Roboflow Fully automatic. That's what I thought. If that's the case, it's perfect

  • @lindseylombardi2910
    @lindseylombardi2910 6 months ago +1

    Where do I add the configuration for "vehicles.mp4" and "vehicles-result.mp4" in the ultralytics script? I see that the ultralytics example lists "--source_video_path" and "--target_video_path", but it does not specifically mention "vehicles.mp4" or "vehicles-result.mp4".

    • @Roboflow
      @Roboflow  6 months ago

      Take a look here: github.com/roboflow/supervision/tree/develop/examples/speed_estimation
      Example commands are in the README.
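
To make the wiring concrete: the file names are passed as the values of those flags; they do not appear in the script itself. A minimal argparse sketch (the flag names come from the comment above; everything else is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="speed estimation example")
parser.add_argument("--source_video_path", required=True)
parser.add_argument("--target_video_path", required=True)

# equivalent to running:
#   python ultralytics_example.py \
#     --source_video_path vehicles.mp4 --target_video_path vehicles-result.mp4
args = parser.parse_args([
    "--source_video_path", "vehicles.mp4",
    "--target_video_path", "vehicles-result.mp4",
])
print(args.source_video_path)  # vehicles.mp4
```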

  • @vitormatheus8112
    @vitormatheus8112 11 months ago +1

    This video is without a doubt one of the best I've seen, thank you very much.
    I would like to know if it is possible to calculate the distance of an object from the camera?

    • @Roboflow
      @Roboflow  11 months ago

      Thanks a lot! Such a big compliment. Unfortunately not; we would need some reference distance from the camera to some point.

  • @william-faria
    @william-faria 11 months ago +2

    That's great! Thank you, bro!

    • @Roboflow
      @Roboflow  11 months ago +2

      My pleasure!

  • @cappittall
    @cappittall 11 months ago +1

    Thanks Peter, that is a great tutorial. :)

    • @Roboflow
      @Roboflow  11 months ago

      Thanks a lot!

  • @luisescares
    @luisescares 11 months ago +1

    Congratulations on this video, greetings from Santiago!

    • @Roboflow
      @Roboflow  11 months ago

      Thanks a lot! Greetings from Poland!

  • @rluijk
    @rluijk 11 months ago +1

    Great! Thanks for your clear explanations, showing what is possible. Very inspiring. Subscribed so I hope to see more creative tracking concepts explained.

    • @Roboflow
      @Roboflow  11 months ago

      We will probably release a video on time in zone next :) You can keep track of what I’m doing here: twitter.com/skalskip92

    • @rluijk
      @rluijk 11 months ago

      I keep thinking about tracking ants, we might discover a lot of interesting things. @@Roboflow

  • @6Scarfy99
    @6Scarfy99 11 months ago +1

    One of the best channels... I love u piotr

    • @Roboflow
      @Roboflow  11 months ago

      Thanks a lot! Stay tuned for the next video. Time in zone is coming soon.

  • @TheAIJokes
    @TheAIJokes 11 months ago +1

    Hi sir, you are a wonderful instructor; I have watched almost all of your videos. Can you please show us a way to train a car number-plate detection model? That would be a great help. Also, I would like to know: if I fine-tune a YOLO model, will it forget all its previous training?

    • @Roboflow
      @Roboflow  11 months ago

      License plate OCR is on my TODO list. As for fine-tuning: if you start from a COCO-pretrained model and then fine-tune it on a dataset with custom classes, it will detect the custom classes. If you want to preserve the previous knowledge, you would need to train the model on a dataset that is a combination of your classes and the COCO classes.

    • @TheAIJokes
      @TheAIJokes 11 months ago

      @Roboflow thanks for your reply
      ... looking forward to it...hope you will make it soon

    • @李杰-u4e
      @李杰-u4e 5 months ago

      @@Roboflow ... looking forward to it...hope you will make it soon

  • @Santiagobgb18O
    @Santiagobgb18O 10 months ago

    Your explanations have been incredibly helpful. Thank you sir!
    I'm currently working on a project where I apply similar tools to estimate the velocity of tennis players. However, I've encountered a challenge: the players often have part of their bodies outside the designated court polygon, which complicates the tracking. Is it possible to define multiple polygons to capture the full range of their movements, or do you have any recommendations for this scenario?
    Thank you once again for your valuable contribution to the community!

  • @thomas-nk7kx
    @thomas-nk7kx 1 month ago

    What would your advice be if my video source is coming from a moving vehicle (dashcam), so there is also a relative velocity?

  • @JIACHENWONG
    @JIACHENWONG 8 months ago +1

    May I know which version of supervision I need to install in my PyCharm?

    • @Roboflow
      @Roboflow  8 months ago

      0.19.0 would be the best

  • @elhadjikarawthiam4595
    @elhadjikarawthiam4595 11 months ago +2

    Thank you very much for sharing; it's really interesting. I would like support on my topic: analyzing congestion, up to measuring the length of traffic jams.

  • @ceo-s
    @ceo-s 8 months ago +1

    Very cool video! Btw which drawing app do you use?

  • @jeffcampsall5435
    @jeffcampsall5435 6 months ago +1

    There needs to be a correction factor along the path… it's like drawing the globe on a flat piece of paper.
    If you watch cars driving away on the right side, their speed is 140 kph and "reduces" to 133 kph, which is very unlikely.
    I know the trapezoid can be limited to those vehicles closest to the camera, but I thought you might like to tweak your algorithm.
    👍

    • @Roboflow
      @Roboflow  6 months ago +1

      Sure 👍🏻 The whole algorithm is a bit of a simplification, as we only have 4 points. If the road is not perfectly flat and straight, some deviations may occur. Still, I think the complexity/accuracy tradeoff is okay.

    • @李杰-u4e
      @李杰-u4e 5 months ago

      @@Roboflow Can I add related functions?

  • @asilbekrahimjonov7475
    @asilbekrahimjonov7475 11 months ago +1

    To get higher-accuracy speed, can we take the distance using camera calibration parameters?

  • @elviskiilu3977
    @elviskiilu3977 11 months ago +1

    Hey, is it possible to integrate these models with a database, i.e. store the detected vehicle speed?

  • @alanszpetmanski2398
    @alanszpetmanski2398 1 month ago

    Great work, Piotr! :) How can I change the annotator color strategy? I know I need to use custom_color_lookup, but I'm stuck. Could you provide me with a simple example? I want to change the annotator's color based on the car's speed. Thanks a lot!

  • @유영재-c9c
    @유영재-c9c 11 months ago +1

    Is an RTSP source also supported through supervision? Or do you have a plan for it?

    • @Roboflow
      @Roboflow  11 months ago

      Not yet, but we have a plan to do it. You can combine supervision with OpenCV to do it even now.

  • @adarshraj3208
    @adarshraj3208 7 months ago +1

    Hey, I am facing an error at the "calculate_dynamic_line_thickness" part. I read in the documentation that it has been changed to "calculate_optimal_line_thickness", but even after doing so I am getting the same error. What should I do now?
    thickness = calculate_dynamic_line_thickness(
        resolution_wh=video_info.resolution_wh
    )

    • @BDJ64
      @BDJ64 3 months ago

      Use calculate_optimal_line_thickness, and also calculate_optimal_text_scale.

  • @onyekaokonji28
    @onyekaokonji28 11 months ago

    Great job as usual @Piotr. Is there a way to automate the generation of points A, B, C, D? I believe the current implementation requires one to use a mouse to hover over the 4 points to get their coordinates, which won't be feasible in production.

    • @Roboflow
      @Roboflow  11 months ago

      There is no way to reliably automate this. But you only need to do it once for each camera, so you can save the configuration in JSON and load it.
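
The save-once-per-camera idea might look like this (the file name and JSON layout are illustrative; the point values are those from the video):

```python
import json

# the four clicked points for one camera
config = {
    "camera_01": {
        "source": [[1252, 787], [2298, 803], [5039, 2159], [-550, 2159]]
    }
}

# done once, after clicking the points
with open("zones.json", "w") as f:
    json.dump(config, f, indent=2)

# later, at startup, load the polygon instead of re-clicking it
with open("zones.json") as f:
    source = json.load(f)["camera_01"]["source"]
print(source[0])  # [1252, 787]
```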

  • @akaashraj8796
    @akaashraj8796 10 months ago +1

    Is there a way to detect an object's speed while the camera that's capturing the video is in motion?

    • @Roboflow
      @Roboflow  10 months ago

      I’m afraid not.

  • @abdullaasad4992
    @abdullaasad4992 1 month ago

    Is there a way to store the velocities of each car in a file?
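
There is no built-in logging in the example, but one simple approach (illustrative, not from the video) is to append one row per tracker per frame to a CSV file:

```python
import csv

# hypothetical per-frame measurements: (frame index, tracker id, speed in km/h)
rows = [(1, 3, 92.4), (1, 5, 101.7), (2, 3, 93.1)]

with open("speeds.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["frame", "tracker_id", "speed_kmh"])
    writer.writerows(rows)

with open("speeds.csv") as f:
    print(f.read().splitlines()[1])  # 1,3,92.4
```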

  • @kirtankalaria7239
    @kirtankalaria7239 11 months ago +1

    There's some cool stuff I reckon you can do with the Deepsense 6G dataset.

  • @amiroacid
    @amiroacid 11 months ago +1

    Crazy how object detection is just getting better and better!

    • @Roboflow
      @Roboflow  11 months ago +3

      That’s right. I’m waiting for zero-shot detectors to be so good we will not need to train models anymore.

  • @g.s.3389
    @g.s.3389 11 months ago +2

    very well done!

  • @elianabboud8721
    @elianabboud8721 9 months ago +1

    Hello, I have run tracking and counting vehicles, as well as speed estimation, and both work, but I want code that combines the two. Do you have it?

    • @Roboflow
      @Roboflow  9 months ago

      We created a different tutorial where we show how to count objects crossing the line: ruclips.net/video/OS5qI9YBkfk/видео.htmlsi=O4f26Cs3KnGGFBMC. Here is the code: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8.ipynb.

    • @elianabboud8721
      @elianabboud8721 9 months ago

      I understand, but I mean the combination of counting objects crossing the line and speed estimation in one output?
      Best regards 😄

  • @PhạmNguyễnHoàngAnh-anhpnh
    @PhạmNguyễnHoàngAnh-anhpnh 11 months ago +1

    How does ViewTransformer work for an image with 1920x1080 resolution?
    I get "'NoneType' object has no attribute 'reshape'" with 1920x1080 resolution.

    • @Roboflow
      @Roboflow  11 months ago +1

      Could you create an issue and describe your problem here: github.com/roboflow/supervision/issues?

  • @jarradm7697
    @jarradm7697 1 month ago +1

    Can this be a camera in motion? I guess not, without stereo?

    • @Roboflow
      @Roboflow  1 month ago

      Nope. This approach applies only to static cameras.

  • @ObsidianMusic842004
    @ObsidianMusic842004 4 months ago

    Greetings.
    First of all, this is an excellent video, and I learned a lot from it.
    I just have one question: I'm confused about what a deque is and why we used it in our defaultdict.
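
To the deque question: `deque(maxlen=fps)` keeps a rolling window of roughly one second of positions per tracker, and `defaultdict` creates that window lazily the first time a tracker ID appears. A self-contained sketch of the idea (the tracker ID and the 0.5 m-per-frame motion are made up):

```python
from collections import defaultdict, deque

fps = 25
# one rolling window of road-frame y positions (metres) per tracker id
coordinates = defaultdict(lambda: deque(maxlen=fps))

for frame in range(50):
    y = frame * 0.5  # pretend the car advances 0.5 m per frame
    coordinates[7].append(y)  # old entries beyond maxlen are dropped

window = coordinates[7]
distance = abs(window[-1] - window[0])  # metres covered within the window
seconds = len(window) / fps             # time spanned by the window
print(distance / seconds * 3.6)         # speed in km/h
```

Averaging over the window instead of using two consecutive frames smooths out detection jitter, which is exactly why the deque is there.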

  • @cliqshorts
    @cliqshorts 11 months ago

    Nice. Could you please share a YouTube video link on how to run this notebook on AWS SageMaker Studio.

    • @Roboflow
      @Roboflow  11 months ago

      Did you face any issues trying to run it on AWS?

  • @RamonSmits
    @RamonSmits 3 months ago +1

    Cool video! In the Netherlands, speed cameras measure your average speed between the first and last time you are seen in the area. That should provide a much more accurate value, and it should be a very small adjustment to what you already made. It would be fun to have it running in urban areas.
    Maybe add another video that shows how to capture the license plate, or even detect the car model/type or just the car color, and log these to plot statistics on what colors/types are speeding or speeding excessively 😅😅😅

  • @ashiquep1407
    @ashiquep1407 4 months ago

    Good work from India

  • @Scott-lin
    @Scott-lin 10 months ago +1

    Hi, I used my own video to run the speed estimation open-source code, but I ran into a small problem. Could you help me?
    Issue:
    AttributeError: 'NoneType' object has no attribute 'reshape'

    • @Roboflow
      @Roboflow  10 months ago +1

      Could you create an issue here: github.com/roboflow/supervision/issues and give us a bit more detail?

    • @Scott-lin
      @Scott-lin 10 months ago

      @@Roboflow OK, thank you. I created the issue.

  • @recon14192
    @recon14192 1 day ago

    Can you explain how you got the x values, i.e. what formula or exact method? Other comments mention it's not clear, and none of your replies have provided a detailed explanation.

  • @lucasramirez320
    @lucasramirez320 3 months ago

    When running the code, the video with the annotated frames plays very slowly. I tried this on my 2 computers, one with AMD and one with NVIDIA, and both reproduce the video extremely slowly. Any suggestions?

    • @Roboflow
      @Roboflow  3 months ago

      Which script from the repo are you running?

    • @lucasramirez320
      @lucasramirez320 3 months ago

      @Roboflow I am running inference_example.py and ultralytics_example.py from the speed_estimation repo. I realized the video may be too slow because it is running on the CPU instead of the GPU. Is this correct?
      I am now on my way to downloading CUDA and its version of PyTorch.

  • @danialkhan2910
    @danialkhan2910 11 months ago +1

    Hi, I had a question! Firstly, amazing tutorial! It was a simple explanation of a really useful tool. I want to use this tool myself, so my question is: will I be able to run this on Windows, or is this specific to Linux? Thanks to anyone for the help!

    • @Roboflow
      @Roboflow  11 months ago

      I think we will release a Colab notebook, to help users like you.

    • @danialkhan2910
      @danialkhan2910 11 months ago

      @@Roboflow That would be great! Thanks!

  • @elbaz_afandy
    @elbaz_afandy 3 months ago

    I need to get the class ID for every object plus its tracking ID. Could you share code, please?

  • @JellosKanellos
    @JellosKanellos 11 months ago

    Thanks a lot for the awesome video Piotr! One thing I always wonder about applying yolov8 object detection to video is: it seems kind of naive to handle every successive frame as a separate image. What I mean by that is, can't we be more smart about taking information from the previous frame(s) into the inference of the current frame? For example: if there was a car detected somewhere in the camera image, it must be somewhere near that position in the next. What are your thoughts about that?

    • @Roboflow
      @Roboflow  11 months ago

      Hi! It depends what you do. There are some systems, like parking occupancy, where you can easily get away with running inference every 1 second or even less frequently, and just assume all cars are parked in the same places. Here the cars are moving, and that movement is particularly interesting for us. We are using ByteTrack. This tracker uses only box position and overlap to match objects. If you do not run inference sufficiently often, there will be no overlap between the frames, and you lose the track.

  • @adlernunez
    @adlernunez 7 months ago

    How do I run the whole code in VS Code?

  • @jpsst9
    @jpsst9 11 months ago

    At 11:41, for the target you say 0-24 and 0-249, so your target is now 24 m wide and 249 m long. Are you sure you need to subtract 1? Not 0-25 wide and 0-250 long?

    • @Roboflow
      @Roboflow  11 months ago

      No :) Let me explain. The target will end up as an image 25 x 250 pixels, and pixels are numbered from 0 to 24, so I still have 25 pixels.
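
In code terms, the target is addressed like an image whose pixels are numbered from 0. A pure-Python sketch mirroring the 25 x 250 target from the video:

```python
TARGET_WIDTH, TARGET_HEIGHT = 25, 250

# corners of the target rectangle, numbered like pixel indices
TARGET = [
    [0, 0],
    [TARGET_WIDTH - 1, 0],
    [TARGET_WIDTH - 1, TARGET_HEIGHT - 1],
    [0, TARGET_HEIGHT - 1],
]

print(TARGET)  # [[0, 0], [24, 0], [24, 249], [0, 249]]
# indices 0..24 still cover 25 columns
print(len(range(TARGET_WIDTH)))  # 25
```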

  • @blessingagyeikyem9849
    @blessingagyeikyem9849 11 months ago +2

    Supervision is super useful. I have been using it in my computer vision workflow, and I now prefer it over OpenCV. Keep up the good work, Piotr.

    • @Roboflow
      @Roboflow  11 months ago +2

      This is probably the biggest compliment I could get!

  • @11aniketkumar
    @11aniketkumar 6 months ago +1

    I keep VS Code on half the screen and the other half is for YouTube, but your code is not properly visible; it's too small to copy from the video. Also, I don't want GitHub links to supervision and inference, but a direct link to the script file that you used in this video.

    • @Roboflow
      @Roboflow  6 months ago

      github.com/roboflow/supervision/tree/develop/examples/speed_estimation

  • @jkjhkjhkjhkjpopoipofsi
    @jkjhkjhkjhkjpopoipofsi 10 months ago

    Is there a way to count the time an object spends in the zone?

  • @fredericocaixeta9015
    @fredericocaixeta9015 6 months ago

    Hello, Piotr Skalski! Hello everyone...
    I am diving a little into the code here... 😁
    Quick question: how do I add an image into a detection box from Supervision? Thanks

  • @XoyTech
    @XoyTech 11 months ago +2

    It would be a great help if you could publish a requirements.txt file with the versions of the libraries that you use in the examples, since newbies like me have a hard time finding the correct versions for everything to work, starting from the Python version and then all the other libraries. Thank you.

    • @Roboflow
      @Roboflow  11 months ago

      So you would like me to update this requirements.txt and include versions? github.com/roboflow/supervision/tree/develop/examples/speed_estimation

    • @iraadit
      @iraadit 11 months ago

      @@Roboflow Yes, it should always include versions, to be sure we are still able to execute the code later (when a new version is out that is maybe not compatible).
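
A pinned requirements.txt for the example might look like this. The supervision pin matches the 0.19.0 recommended elsewhere in this thread; the other entry is a placeholder to replace with whatever version you verified locally:

```
# pin exact, locally-tested versions
supervision==0.19.0
ultralytics==<your tested version>
```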

  • @李杰-u4e
    @李杰-u4e 5 months ago

    May I ask, if my area is an irregular shape, is it also supported? Thank you

    • @Roboflow
      @Roboflow  5 months ago

      Could you explain what you mean by irregular?

    • @李杰-u4e
      @李杰-u4e 5 months ago

      @@Roboflow It resembles an irregular shape composed of multiple points and is not a rectangle.

  • @abhinandang6675
    @abhinandang6675 9 months ago +1

    I have a question: will it work on a low-end device or PC in real time? Processing will take more time, and that added time means the calculated speed will be less than the actual speed. How do we tackle that? If you know, please share the solution. By the way, nice perspective calculation.

    • @Roboflow
      @Roboflow  9 months ago

      This is such a good question. I’m working on a new video covering calculating time. I will answer this question soon!

  • @SWARO5
    @SWARO5 8 months ago

    Great video... just a tiny issue: when I ran the code, the line annotator was not taking trucks into account. Can you help me with that?

    • @Roboflow
      @Roboflow  8 months ago

      Do you mean that the truck was not detected, or not counted?

    • @SWARO5
      @SWARO5 8 months ago

      Not counted

  • @ridwansatria-v5x
    @ridwansatria-v5x 1 month ago

    Cool, can you make one with plate detection?

  • @asamoahkofijoshua4581
    @asamoahkofijoshua4581 3 months ago

    I tried this with my fine-tuned model. It worked for a few seconds and then just stopped. It resumed around 2 minutes later and was detecting even vehicles not within the designated zones. The speed doesn't seem real to me, even though I had calibrated. Please help out.

  • @Studio-gs7ye
    @Studio-gs7ye 11 months ago +2

    That is a unique type of tutorial, unlike any I have seen so far; thanks for such good content.

    • @Roboflow
      @Roboflow  11 months ago +1

      We plan to make more of those longer videos this year. :)

    • @Miinuuuuu
      @Miinuuuuu 8 months ago

      Does he provide the complete project with code? Please tell me; I want to use it in my college project.

  • @SilenceOnPS4
    @SilenceOnPS4 9 months ago

    I am new to this; however, I am thinking of trialling the public plan, then purchasing the Starter subscription to start a side project. For this specific project, how much would it cost to keep it running 24 hours a day? Also, can you give me an estimated cost if this were scaled up to 1,000 cameras?
    I am only looking for an idea of the cost to run such a programme on a typical camera over a motorway (like the one in this example). I am assuming it would go through Roboflow, but I could be wrong. I am looking for the easiest option.
    Many thanks.

    • @Roboflow
      @Roboflow  9 months ago +1

      Easy is a bit relative depending on your skillset and hardware. Here are a few ways to think about it:
      You can deploy with the hosted API. This requires devices with internet connection. You'd then be able to choose at what rate you hit the API for predictions and that would impact pricing. 24/7 with 1 prediction per second is 86,400 API calls per day or ~32 million per year for each location. 1,000 cameras means ~32 billion per year. You could reduce the rate of predictions to bring down API calls but then you won't have a real-time system if that is what you need. Alternatively, you can deploy your models onto the edge devices using Roboflow Inference and do the same operation but use your own compute. In either scenario, this level of usage requires a conversation with our Sales team to offer you Enterprise pricing roboflow.com/sales

    • @SilenceOnPS4
      @SilenceOnPS4 9 months ago

      @@Roboflow Thank you for your prompt reply. I will get in touch shortly.

  • @ahmadmohammadi2396
    @ahmadmohammadi2396 9 months ago

    Simply excellent

    • @Roboflow
      @Roboflow  9 months ago

      Thanks a lot!

  • @ilamathimanivannan8315
    @ilamathimanivannan8315 7 months ago

    Can you please explain how you determined the coordinates of ABCD ([1252, 787], [2298, 803], [5039, 2159], [-550, 2159])?

    • @李杰-u4e
      @李杰-u4e 5 months ago

      I wonder, too. Maybe they were manually marked.

  • @DilipKumar-jm3ly
    @DilipKumar-jm3ly 7 months ago

    You are making videos on the latest technology in the CV field; they are interesting and informative. Please continue like that. Thank you!

  • @kimridaaa1298
    @kimridaaa1298 5 months ago

    Thank you, bro! Thank you so much, bro!

  • @m.hassanmaqsood6642
    @m.hassanmaqsood6642 5 months ago

    I am facing an issue when I try this notebook:
    AttributeError                Traceback (most recent call last)
    in ()
         10
         11 # annotators configuration
    ---> 12 thickness = sv.calculate_dynamic_line_thickness(
         13     resolution_wh=video_info.resolution_wh
         14 )
    AttributeError: module 'supervision' has no attribute 'calculate_dynamic_line_thickness'

  • @deaangeliakamil7453
    @deaangeliakamil7453 7 months ago

    Hello, I am facing some issues when using my own video. When no vehicle is shown in the video, the trace_annotator and label_annotator raise errors. For trace_annotator it says "IndexError: index 0 is out of bounds for axis 0 with size 0", and for label_annotator it says "ValueError: The number of labels provided (1) does not match the number of detections (3). Each detection should have a corresponding label. This discrepancy can occur if the labels and detections are not aligned or if an incorrect number of labels has been provided. Please ensure that the labels array has the same length as the Detections object." I hope you can help solve this error, thank you.

  • @the_vheed1319
    @the_vheed1319 10 months ago +3

    Thank you so much for this video. It greatly simplified the entire speed estimation process.

    • @Roboflow
      @Roboflow  10 months ago +2

      Thank you!

  • @mayurmali2715
    @mayurmali2715 10 months ago

    Guys, any ideas on what new features we can add to this?

  • @hammadyounas2688
    @hammadyounas2688 9 months ago

    I am facing an issue with perspective transformation for my video. Can you help me with that?

    • @Roboflow
      @Roboflow  9 months ago

      What’s the problem?

    • @hammadyounas2688
      @hammadyounas2688 9 months ago +1

      @@Roboflow The main issue is that my box is not correctly generated; I am facing an issue with these values: [1252, 787], [2298, 803], [5039, 2159], [-550, 2159].

    • @Roboflow
      @Roboflow  9 months ago +1

      @@hammadyounas2688 please, ask your question here: github.com/roboflow/supervision/discussions. We will try to help you.

    • @hammadyounas2688
      @hammadyounas2688 9 months ago

      @@Roboflow Okay.

  • @leoqi9374
    @leoqi9374 3 months ago

    What about a curved road?

  • @LukasSmith827
    @LukasSmith827 11 months ago +1

    Very nice

  • @Oliver_Lam
    @Oliver_Lam 11 months ago +1

    Thank you so much!

  • @rupeshrathod6588
    @rupeshrathod6588 11 months ago

    Roboflow has an issue at augmentation time: the annotations don't follow the augmentation, which is a big issue in the case of instance segmentation. I hope it will be resolved soon!!

  • @surajpatra6779
    @surajpatra6779 11 months ago +2

    Sir, please make a tutorial on how to deploy any kind of computer vision project for free.

    • @Roboflow
      @Roboflow  11 months ago

      Where would you like to deploy it?

    • @surajpatra6779
      @surajpatra6779 11 months ago

      @@Roboflow Sir, anywhere except paid cloud platforms like AWS, Heroku, etc.

  • @joelbhaskarnadar7391
    @joelbhaskarnadar7391 11 months ago +1

    Interesting 👍🏿

  • @alexanderfritsch6612
    @alexanderfritsch6612 8 months ago

    Good work! Keep it poppin' :)

  • @sanchaythalnerkar9736
    @sanchaythalnerkar9736 11 months ago +1

    I am planning to take a workshop on supervision in my college

    • @Roboflow
      @Roboflow  11 months ago

      Is there a workshop on supervision in your college?

  • @dudepowpow
    @dudepowpow 10 months ago

    Would this work on Raspberry Pi 5 taking in a live camera feed do you think?

    • @vlasov01
      @vlasov01 6 months ago

      I've used the YOLOv8n model on an RPi4. It can only process one frame in close to 2 seconds using one core. The RPi5 is faster. It depends on your target fps/precision requirements.

  • @thisistaha6366
    @thisistaha6366 7 months ago

    How can I watch this in real time? That is, how can I feed the image from a camera into this at the same time? PLEASE help me!
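The tutorial reads frames with a file-based generator; for live video you replace that with an OpenCV capture loop and feed each frame through the same detect-track-estimate pipeline. A minimal sketch (assumes OpenCV is installed; `stride` is an optional frame-skipping trick to keep latency down on slow hardware):

```python
def should_infer(frame_idx, stride):
    """Run the heavy model only on every `stride`-th frame."""
    return frame_idx % stride == 0

def run_live(source=0, stride=1):
    """Hypothetical live loop: `source` is a camera index or an RTSP/HTTP URL."""
    import cv2  # imported lazily so should_infer stays usable without OpenCV

    cap = cv2.VideoCapture(source)
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if should_infer(frame_idx, stride):
            # run detection + ByteTrack + perspective transform + speed here,
            # exactly as in the file-based version of the script
            pass
        frame_idx += 1
    cap.release()
```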

  • @patricksimo9045
    @patricksimo9045 11 months ago +3

    Thank you for your efforts. The video is perfect and very well explained. Great work !

    • @Roboflow
      @Roboflow  11 months ago +2

      Thank you! Awesome to hear people notice the effort.

  • @HS0
    @HS0 11 months ago +1

    Can we do this in real time?

  • @hamachoang5561
    @hamachoang5561 8 months ago

    "SupervisionWarnings: BoxAnnotator is deprecated: `BoxAnnotator` is deprecated and will be removed in `supervision-0.22.0`. Use `BoundingBoxAnnotator` and `LabelAnnotator` instead" I have installed CUDA and cuDNN, so why did this happen? Can you help me, please!!
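That message is only a deprecation warning from supervision itself, unrelated to CUDA or cuDNN, and the script still runs. To silence it on newer releases, build the two replacement annotators the warning names; this is a sketch based on the warning text, so check the exact constructor arguments against your installed version:

```python
def make_annotators(thickness=2, text_scale=0.5):
    """Build the post-deprecation replacements for the old combined BoxAnnotator."""
    import supervision as sv  # lazy import: only needed at annotation time

    boxes = sv.BoundingBoxAnnotator(thickness=thickness)
    labels = sv.LabelAnnotator(text_scale=text_scale)
    return boxes, labels
```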

  • @josephyucra4503
    @josephyucra4503 5 months ago

    Nice video. Will there be an update for Python 3.11.9? When I installed the requirements, it showed that only versions from 3.7 to 3.11 are accepted, and 3.11.9 is what I use in VS Code. Thanks, regards.

  • @HS0
    @HS0 11 months ago +1

    Can you publish the source code to implement this project in real time?

    • @Roboflow
      @Roboflow  11 months ago

      The source code is published on GitHub. The link is in the description of the video.

  • @mrmacman04
    @mrmacman04 8 months ago

    I followed this tutorial beginning to end on my laptop (Intel i9 Macbook Pro). It worked great, but was slow because it's not running on GPU. Instead of 'yolov8x-640' I used 'yolov8n-640' which ran faster, since the model is smaller. Is there any way to make these models run more efficiently on CPU?

    • @Roboflow
      @Roboflow  8 months ago

      It is possible to run faster on MacBooks, but with M1.

    • @mrmacman04
      @mrmacman04 8 months ago

      @@Roboflow I see. So on an Intel Mac, is there any option to speed up inference with OpenVINO? I imagine so, but would be good to see how to do it within a tutorial like this one.

    • @mrmacman04
      @mrmacman04 8 months ago

      @@Roboflow I just got an M3 MacBook pro. Seeing the same performance as I saw on the Intel Mac. I'm wondering if we only see good performance with these Roboflow tools (models, Inference pkg, Supervision pkg) when using GPUs?
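On CPU-only machines (Intel and Apple Silicon alike) the usual lever is to export the weights to an optimized runtime such as ONNX or OpenVINO and run inference through that instead of raw PyTorch; combined with the smaller `yolov8n` checkpoint and a lower input resolution this can help substantially. A sketch using the Ultralytics export API (the speedup is hardware-dependent, so measure it on your own machine):

```python
def export_for_cpu(weights="yolov8n.pt", fmt="openvino", imgsz=640):
    """Export YOLOv8 weights to a CPU-optimized runtime (e.g. 'openvino' or 'onnx')."""
    from ultralytics import YOLO  # lazy import: requires the ultralytics package

    model = YOLO(weights)
    # returns the path of the exported model, which YOLO() can load back directly
    return model.export(format=fmt, imgsz=imgsz)
```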

  • @GenieCivilNumerise
    @GenieCivilNumerise 7 months ago

    Thank you so much. Can you do the same application with YOLOv9 for me?

  • @tjoec90
    @tjoec90 11 months ago +1

    Amazing tutorial. Learnt something new today. Thanks a lot.

    • @Roboflow
      @Roboflow  11 months ago

      I absolutely love to hear that!

  • @ayman_sed_7
    @ayman_sed_7 3 months ago

    Where is the whole code, like in the video?

    • @Roboflow
      @Roboflow  3 months ago

      Everything is on GitHub.

  • @smccrode
    @smccrode 11 months ago +1

    This is amazing! Thank you! Been wanting to do this for years. Now I’m going to do it!

    • @Roboflow
      @Roboflow  11 months ago

      Glad you like it! Let me know how it goes!

  • @circulartext
    @circulartext 11 months ago

    Hey my brother, is there a way to set up your Python app on a Raspberry Pi?

    • @Roboflow
      @Roboflow  11 months ago

      Yup. But it will be slow… probably 1-5 fps.

    • @circulartext
      @circulartext 11 months ago

      @@Roboflow Do you think it would be good at identifying something from a good distance?

  • @hoangng16
    @hoangng16 11 months ago +1

    This is great; I've wanted to do this for a long time.

    • @Roboflow
      @Roboflow  11 months ago +1

      Now we can donut together haha

  • @afriquemodel2375
    @afriquemodel2375 11 months ago

    I tried to train a custom object detection model with a transformer in Google Colab, but when I used your tip it did not work.

    • @Roboflow
      @Roboflow  11 months ago

      Hi. I’m not really sure what you are talking about? Could you be more specific?

  • @matthiasjunker8685
    @matthiasjunker8685 11 months ago +1

    Cool video

    • @Roboflow
      @Roboflow  11 months ago

      Thanks a lot! I spent a lot of time making it.

  • @Jokopie-wv3zp
    @Jokopie-wv3zp 9 months ago

    Can anyone help me run this code :((( I don't know how to use PyCharm.