Your code throws a numpy error about float, int, and double.
I recommend using the updated version of the notebook: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb
thanks
@@Roboflow Does that notebook solve the error? Thank you!
Is there any other updated notebook? This one is also throwing some errors.
@@mariacaleropereira2967 The notebook works perfectly :)
26 minutes for this is not long at all. Thank you for what you do and please don't hesitate to make longer videos, however you see fit.
My pleasure!
It goes to show how streamlined this stuff has become. Try doing a PhD in this ten years ago and having to write your own code for everything AND the novel parts you're working on. Takes months and hours to explain. Now anyone can git clone and run complex models. What a world :)
@@katanshin Truly!
The best and most complete tutorial for implementing a YOLOv8-based object detection, tracking, and counting system. Love it, brother
That’s what I strived for! Great to hear you liked it so much 🔥
How can I count the bounding boxes for a set of images (not a video) in this case (using a pretrained YOLOv8 model with only 1 class)?
@@ashishreddy2634 Are you trying to detect a specific class?
Hello Piotr @roboflow, thank you for the video. I have trained my model on 3 different classes.Would it be possible to have the line zone annotator display the count of each class separately rather than the sum of detections of all classes? Can you please help with this?
Always the best content with very clear explanations... You are perfect, bro!
haha! cv bro!
Hi, I get an error in this part of the code: tracks = byte_tracker.update( output_results=detections2boxes(detections=detections), img_info=frame.shape, img_size=frame.shape ). The error is: AttributeError: module 'numpy' has no attribute 'float'. Can you help, please?
I have the same error, were you able to solve it?
@@manuelnavarrete4509 Yes. Before running the code, add this line: !pip install -U numpy==1.23.5. It will then ask you to restart the session; run the code again without reinstalling numpy and that's it.
np.float has been deprecated. One way to fix it is to change it to np.float64. Make the changes in the files yolox/tracker/matching.py and byte_tracker.py in the same directory.
Nice and simple explanation. I am a beginner and I am trying to start with something simpler, like object detection and counting in a picture. How would I go about this?
I think this video will be much more useful for you: ruclips.net/video/l_kf9CfZ_8M/видео.html
You deserve more subscribers and likes! Cool guy and straightforward 💛
I hope we will get 50k subs this year! 🤞🏻
@@Roboflow Guys show your love for this dedicated Gentleman by subscribing and liking his content.
Thank you so much. I have zero experience on this matter, but following each of your instructions I finished my project with my own video. Super!
Did you do it locally?
Thank you for the video.
However, I ran into an error when running the ByteTrack code regarding 'loguru', and no matter what, I can't solve it.
Did you use the latest version of our notebook? colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb
I really need help with one thing.
How can you show the specific number of cars and trucks that have gone in and out?
For example:
3 cars and 1 truck in and
5 cars and 1 truck out
We don't have a dedicated feature yet, but you can build a workaround. Create two separate line counters, filter detections by class to get car and truck detections, and trigger one line counter with the car detections and the other with the truck detections.
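A minimal sketch of that workaround, assuming a recent supervision release where the counter is called LineZone (LineCounter in the version used in the video), boolean indexing on sv.Detections, and the COCO class ids YOLOv8 uses (car = 2, truck = 7); the line coordinates are made up for illustration:

import supervision as sv

CAR_ID, TRUCK_ID = 2, 7  # COCO class ids for car and truck

line_start, line_end = sv.Point(50, 700), sv.Point(1230, 700)
car_counter = sv.LineZone(start=line_start, end=line_end)
truck_counter = sv.LineZone(start=line_start, end=line_end)

# inside the frame loop, after tracking:
car_counter.trigger(detections=detections[detections.class_id == CAR_ID])
truck_counter.trigger(detections=detections[detections.class_id == TRUCK_ID])
print(f"cars in/out: {car_counter.in_count}/{car_counter.out_count}, "
      f"trucks in/out: {truck_counter.in_count}/{truck_counter.out_count}")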
@@Roboflow I will try that, thank you very much!!!
Bro, I'm getting a problem whenever I'm installing supervision in Google Drive.
Please let me know how to solve this problem.
From 7:49, the notebook from the link in the description doesn't have those lines, so where can I find them to copy and paste? Thank you!
I just checked. The line definition is there.
@@Roboflow I'm sorry but I don't understand! Could you please reply with the link?
Is there any easy way to count objects on already-predicted images and print the results in the terminal? I'm having trouble finding a solution on the internet.
Bro, you deserve an Oscar.
Fantastic tutorial, playing around with plenty of the options here, thanks for the upload.
Hi, it is Peter from the video 👋 Thanks a lot! Let us know what other features could be useful ;)
Thank you for the amazing video! Is it possible to invoke yolo8 on every 4th frame (for example), instead of every single frame? And have some kind of other system follow the object in the other 3 frames (to save on resources).
Not to my knowledge. You either skip a frame completely or you don't; all of those trackers depend on the boxes generated by the model. That being said, you can try passing detections to the tracker only every 4th frame. It all depends on the input video, but it could still work.
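If you want to experiment with that, here is a rough sketch built on the notebook's own variables (model, generator, byte_tracker, detections2boxes, Detections); the in-between frames simply reuse the last set of tracks:

DETECT_EVERY_N = 4  # run YOLOv8 only on every 4th frame

tracks = []
for index, frame in enumerate(generator):
    if index % DETECT_EVERY_N == 0:
        results = model(frame)[0]
        detections = Detections(
            xyxy=results.boxes.xyxy.cpu().numpy(),
            confidence=results.boxes.conf.cpu().numpy(),
            class_id=results.boxes.cls.cpu().numpy().astype(int)
        )
        tracks = byte_tracker.update(
            output_results=detections2boxes(detections=detections),
            img_info=frame.shape,
            img_size=frame.shape
        )
    # on the three in-between frames, tracks still holds the previous boxes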
24:23 AttributeError: 'str' object has no attribute 'model' 😢
Did you use the latest version of our notebook? colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb
Hi, can I make a box instead of a line, referencing 17:40? I want to count an object if it stays in that box for some milliseconds.
@Roboflow may I know why I can't download or play the video? I managed to run the full code without errors, supervision version 0.18.0.
Hi, you mean you can't download the video from Colab? Could you be a bit more specific?
Thank you, nicely done. I was wondering: if we use the segmentation model, how can we annotate the segments with supervision?
Great question. We have support for segmentation on our roadmap, but it will take us a bit more time to get it into production.
TY for your great work on the supervision library. I have modified your line counting algorithm. When counting people from an indoor CCTV camera, lines stay too short to meet the counting conditions. First, I tried the center dot instead of the corners of the bounding box, but it became unstable, especially when a person passes through the door, because the center of the rectangle becomes unstable while the object slowly disappears. Finally, I drew a square at the center of the object. It fits my case and generates stable counts.
How about showing an example of how we can measure the dimensions of objects? It probably needs a reference object of known dimensions?
Hi! This is Piotr from the video. This is something that has been on my mind for a long time. And yes, having at least a reference object to calibrate measurements would be mandatory.
Our project is to detect and count objects in a captured photo. Can we follow this tutorial, or is there a more applicable tutorial we can follow?
Hey there, thanks for the amazing YOLOv8 videos. I ran the code for object detection and it worked fine. Then I tried to run it for instance segmentation. All steps are fine, but in the final step, when I run the code for inference with the custom model, the code runs without any issue but this message does not appear: Results saved to runs/segment/predict2. Do you know what the problem is?
Could you create an issue here: github.com/roboflow/notebooks/issues ?
@@Roboflow Hi, I found the error. In the code you should write save=True, but you forgot it, I guess. Thanks
@@lofihavensongs thanks a lot! Let me try to update that
Fun fact: tqdm is an Arabic word pronounced "Ta-qa-dom", which means progress
Hi it's Peter from the video! Wow! I didn't know that. Now you made me look and here is what I found: tqdm derives from the Arabic word taqaddum (تقدّم) which can mean “progress,” and is an abbreviation for “I love you so much” in Spanish (te quiero demasiado).
@@SkalskiP I didn't know about the Spanish abbreviation.
Nice informative tutorial, btw.
This is, in fact, fun. Thank you.
Difficult to install though, no module 'ultralytics' 🙄
Great video! However, I tried implementing it with more than one counter (one for each lane), but it seems that LineCounter is a global variable shared across all lanes. Is there a way to overcome this?
Thank you!
At 16:44 you pasted some code; where can I get it?
Could you please do a tutorial about using YOLOv8 in real time on a webcam, even the PC webcam?
Hi! Could you please add that idea here: github.com/roboflow/notebooks/discussions/categories/video-ideas?
YOLOv8 detection + tracking + counting on webcam?
@@neeraj.kumar.1 hi, I'll think about it. Next video coming soon :)
How do we count for each class?
Thank you so much for the video. What's the difference between this notebook and using "yolo track model=path/to/best.pt tracker="bytetrack.yaml""?
Hi! That video was actually recorded before the YOLOv8 team added tracking capability. But in short, you can use ByteTrack with any object detection model; if you use the Ultralytics implementation, you are bound to YOLOv8 only.
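For reference, the built-in Ultralytics route looks roughly like this (a sketch, assuming a recent ultralytics release; the video path is a placeholder):

from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# tracker accepts "bytetrack.yaml" or "botsort.yaml" in recent releases
results = model.track(source="your-video.mp4", tracker="bytetrack.yaml")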
Is there any other updated notebook? This one is also throwing some errors.
I want to use supervision for face detection and tracking with a Detectron2 model.
Does the same code work for crowd videos? I've been failing to do it.
Thanks.
It should. But I'd need to see the specific result to understand what's failing.
Thankyou so much. The explanation was in-depth.
My pleasure!
@@Roboflow by adjusting the resolution and having the perfect line counter position, your code is doing great in real time. 👍
What if I want to count objects of a specific class only? How can this be implemented? Let's say we want to count only the "car" class and not "truck" as well. Can this be done?
Sure! You just need to filter detections before passing them to the line counter. Add this line before triggering:
detections = detections[detections.class_id == YOUR_CLASS]
@@Roboflow well, I get an error: TypeError: 'Detections' object is not subscriptable
Maybe I should use this filtering before the line counter?
# class to filter
mask = np.array([class_id in CLASS_ID for class_id in detections.class_id], dtype=bool)
detections.filter(mask=mask, inplace=True)
Let's say instead of CLASS_ID I use [1], so I keep only class 1
Just wow!
Thank you for this great content.
show_frame_in_notebook is not working in Google Colab, so I am unable to see the frame
Could you create an issue here: github.com/roboflow/notebooks ? I will try to fix that as soon as possible.
I have this problem: AttributeError: type object 'Detections' has no attribute 'from_ultralytics'
How can I write the results of each count to Excel?
Thank you brothers, for your work!
What do I do if I want to show the vehicle counts based on their class, like car in: 1, bus in: 2, car out: 5, bus out: 6?
Thank you very much, really appreciated! I applied it to my custom video and it does not count correctly. I saw in your video that it also does not count correctly; how can we improve it?
How do I get the specific timestamp at which an object was first detected in the video?
We don’t have time analysis support yet in supervision :/
Piotr is a super-duper ultra YOLO guru :D
It's Peter from the video. I'm not sure if I'm a YOLO guru, but thanks a lot for this kind comment. I went through a bit of internet hate lately, so it is great to hear some positive feedback.
Is there a way to get rid of the OUT or IN so it's just one label on the video?
So only show counters?
@@Roboflow only to show "Out", or just counters
@@justin_richie it is not possible now but feel free to create feature request in supervision repo: github.com/roboflow/supervision/issues/new?assignees=&labels=enhancement&template=feature-request.yml
Thanks for the video. I noticed that even with a clear view of all the vehicles, you still lose track of the truck and it gets a new ID. Is there a way to limit the number of IDs the objects get so that this doesn't happen? For example, you only have 4 possible labels during the video and the algorithm has to select the most likely label when tracking.
It is possible to solve those issues, or at least make them less frequent, but potential solutions are usually tied strictly to the use case you are trying to solve. In our case, you can notice that those ID changes happen only when cars are still far away or when they are partially occluded by the large metal object hanging over the left lane. That's why I would propose to discard objects in the top half of the image and only take into account those in the bottom half, closer to the camera.
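A minimal sketch of that filtering, assuming sv.Detections with boxes in [x1, y1, x2, y2] format and boolean indexing; it keeps only detections whose box center lies in the bottom half of the frame:

frame_height = frame.shape[0]
# y-coordinate of each box center
center_y = (detections.xyxy[:, 1] + detections.xyxy[:, 3]) / 2
detections = detections[center_y > frame_height / 2]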
Thank you for the good video. How can I get the detections to come out in Korean?
How can I get the count of in and out vehicles in one variable only? I just want the whole count. Is it possible?
What do I do if I only want to detect and track the trucks in the video?
Hi! 👋You would need to filter out detections by class_id. First, you need to check what is the class_id that represents the truck. I checked that and looks like it is 7. Now you can do something like that: detections = detections[detections.class_id == 7]. :)
Thank you. Could you please explain how to count objects detected in images?
Thanks for the video, it has been quite useful! I want to export the Tracking data as a CSV file. Specifically, I want to run the MOT evaluation toolset in order to evaluate my own dataset. Thus, I was wondering how I could correctly export each objects detection, its bounding boxes, confidence and so on for each frame. Any help would be greatly appreciated :))
We will actually release a new video this week. It will be about detection time analysis, and in it we will show you how to save detections as CSV. Stay tuned.
Thank you very much :) You guys are really being helpful with your videos. @@Roboflow
Any news on the new video so far? I am really struggling to make sense of analyzing ByteTrack with the MOT toolset. The codebase that ByteTrack provides is just so faulty and has zero guidance. @@Roboflow
How can I edit the in and out line (name/label)? I want it to only detect "in".
And how do I adjust the script to immediately update the count when an object enters the scanning line, so it won't recount an object that has already been counted?
Thanks. Can you tell me which tracking algorithm works better, ByteTrack or DeepSORT?
Hi it's Peter from the video. I like ByteTrack a lot more.
Hi, I have a question: in this case you don't use the DeepSORT technique for tracking the cars, do you?
I use ByteTrack. DeepSORT is just another tracker that you can use.
Hello!
I have a question: how does the model interpret the "out" variable in the candy example? Can it tell whether the object is moving to the right or to the left, based on how the bounding box approaches the line?
And thank you so much for creating this content!
Brother, I watched your object detection for a custom dataset video; it's awesome. I trained with my own dataset and it works like magic. Now, if I want to calculate the time an object appears in a video, how can I do that? And is it possible to do the same for different objects and plot them as a graph with time on the y-axis and the type of object on the x-axis?
Hi. Thanks a lot. We are actually thinking about making a video like that. I hope we will be able to record it soon.
@@Roboflow 😍 Thanks brother! Waiting for that video... ⏳
I have problems with numpy in the tqdm part, and I already changed np.float to float and the problem still persists.
What camera did you use to see the chocolates going?
This is stock footage I downloaded from the internet :)
Is it possible to do this with an instance segmentation model?
How can I count cars with two diagonal lines instead of horizontal lines? Please teach me how to do this.
You just need to create two lines and change the coordinates of their start and end points :)
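A sketch of two diagonal counting lines, assuming supervision's Point/LineZone API (LineCounter in the version used in the video); the coordinates are made up for illustration:

import supervision as sv

line_left = sv.LineZone(start=sv.Point(100, 200), end=sv.Point(600, 900))
line_right = sv.LineZone(start=sv.Point(700, 900), end=sv.Point(1200, 200))

# trigger both with the same tracked detections on every frame
line_left.trigger(detections=detections)
line_right.trigger(detections=detections)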
How can I check its performance? We always need accuracy, precision, and other metrics to measure the performance of the model. Do I need to annotate first, then calculate the accuracy?
I want to count only the vehicles entering, not exiting. Is it possible?
Help me please! I have an error in the following code.
The problem is here:
tracks = byte_tracker.update(
output_results=detections2boxes(detections=detections),
img_info=frame.shape,
img_size=frame.shape
)
I don't know how I can fix it.
Hello Piotr @roboflow, I'm very thankful for this insightful video. I just wanted to know how you come up with the coordinates for the custom dataset; is there a method, or just intuition?
Not really sure what you mean. Could you elaborate on your question?
@@Roboflow What I meant is: you draw out polygons for the polygon zone or line zone. How do you come up with the exact numbers in the numpy array? You also showed a project for candy counting and tracking on a conveyor belt. I couldn't find your video, so I found a similar one on YouTube, made a dataset, and trained it, but after that I couldn't work out the coordinates for the "line" that a candy crosses to be counted in. So, to sum it up: how does one calculate the numpy array for the polygon zone?
Suuperb... What if I want to detect and track the faulty chocolates in that video and mark a chocolate as faulty until it leaves the frame? Any thoughts on this?
Do you have a model to detect those faults?
@@Roboflow No, currently I have a model to detect potatoes on a conveyor belt. For detecting defects I'm thinking of using OpenCV to detect color deviations.
My problem is that since potatoes keep rotating on the conveyor belt, I want to track a defective potato even as it keeps rolling.
Hey Peter!
Any thoughts on this? Also, I just saw your video on Grounding DINO and it looks interesting. What are your thoughts on using it to detect rotten/spoiled potatoes, as explained in the earlier comments?
@@snehitvaddi sorry I missed your comment. If you have images of rotten potatoes, you can try whether DINO detects them. Sounds like something that should work. The color-range approach is doable as well, just pretty hard to get the color ranges right, I think :/
Even when I upload the zip file, it shows no directory. What can I do?
Can I draw multiple lines to count objects at multiple locations?
Great video! How do I customize the counter? For example, position it in the corner of the screen, count cars, trucks, and motorcycles with their own counters? Thank you!
did you find a fix?
Appreciate the elaborate explanation. Can we tag each of those objects with a unique ID, like car1, car2, etc.?
Nice job! Love from China ❤
Hi, it is Peter from the video! Thanks a looot! Love from Poland.
Very nice explanation, bro. Is there any possibility to collaborate on supervision development?
I have one question: since we are trying to count the objects, and since the object IDs given by the tracker are unique, why can't we just count the last ID or count the number of distinct IDs?
How do you know how many of them traveled up and how many down?
Hello, can I use supervision to count objects with YOLOv5? I have an existing ONNX model.
Fantastic!! Would really like to know if this will work for live RTSP URLs (multiple different cameras) in real time
We would need to try it out, but I think it will :)
@@Roboflow let us know if you guys try it out. Enjoying the videos
@@anadianBaconator maybe we will manage to include it in one of our upcoming videos
@@Roboflow really appreciate it
@SkalskiP As always, amazing job! One problem I am facing: inside the *match_detections_with_tracks* function, when the object is not in the frame and the model returns an _empty list_, this line gives an error: *iou = box_iou_batch(tracks_boxes, detection_boxes)*
How can I solve it?
Hi it's Peter from the video. I just fixed that problem. Could you try the tutorial once again?
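For anyone still on an older copy of the notebook, the fix boils down to an early-exit guard before the IoU call. Roughly, using the notebook's helper names (tracks2boxes, box_iou_batch), a sketch of the patched helper, not guaranteed to match the notebook line for line:

import numpy as np

def match_detections_with_tracks(detections, tracks):
    # with no detections or no tracks there is nothing to match,
    # so skip box_iou_batch, which fails on empty arrays
    if len(detections) == 0 or len(tracks) == 0:
        return [None] * len(detections)
    tracks_boxes = tracks2boxes(tracks=tracks)
    iou = box_iou_batch(tracks_boxes, detections.xyxy)
    track2detection = np.argmax(iou, axis=1)
    tracker_ids = [None] * len(detections)
    for tracker_index, detection_index in enumerate(track2detection):
        if iou[tracker_index, detection_index] != 0:
            tracker_ids[detection_index] = tracks[tracker_index].track_id
    return tracker_ids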
I really enjoyed the last episodes, very well and comprehensibly explained! Thanks!
Would it be possible to make a video about rotated object detection in YOLOv8? Would be very useful.
Hi, it is Peter from the video! Thanks for the kind words. It means a lot to me. Is YOLOv8 capable of rotated object detection?
@@SkalskiP Hm, you are probably right, rotated detection doesn't exist yet.
Thought I had just overlooked it.
Thanks for the answer!
@@SkiLLFace360 no worries it is kind of my job to know it ;)
Very helpful video, but my code kept erroring on the line [ tracks = byte_tracker.update( ], saying "AttributeError: module 'numpy' has no attribute 'float'." Plus, when I used the Google Colab link in the description, I ran the ByteTrack cell and it hit the same error, even though I didn't change any code. I left it as-is and it kept giving the same error, so is it an update issue? Can you please try running the code in the Google Colab link you gave in the description? It's seriously not working even though I didn't change any code.
I recommend using this updated version of the notebook: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/how-to-track-and-count-vehicles-with-yolov8-and-supervison.ipynb
@Roboflow thx a lot, I'll be sure to try this tomorrow
@@Roboflow is there a lot of difference between the two methods?
Thank you for the video. It's really helpful. Is there any way to detect the timestamp in the video to capture at what time a vehicle crosses the count line? It would be a great help.
Thanks a lot. Is that a static file or a stream?
@@Roboflow Thank you for your reply. A stream: recorded traffic footage with a timestamp burned in when it was recorded. It's similar to the video used in your explanation.
Hello, how would I be able to print the values from the counter to a .txt file? I tried the following code but I am unable to obtain the count from the line_counter: with open("output.txt", "w") as file:
# Print the variable into the file
print(line_counter, file=file)
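For reference: line_counter is an object, so print(line_counter, file=file) only dumps its repr. A sketch of one way to do this, assuming supervision's counter exposes in_count and out_count attributes:

with open("output.txt", "w") as file:
    # write the counts themselves, not the object
    file.write(f"in: {line_counter.in_count}, out: {line_counter.out_count}\n")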
For classification into car, bus, truck, and motorcycle, which one is used: ByteTrack or supervision?
Additionally, is ByteTrack also used for counting along with tracking? Because supervision is used for annotations.
How do I access the model on Roboflow?
Very impressive video, highly recommended.
Is the code just related to one or two test cases/videos? Is it possible to do it for any video in general?
Oh! It should work for any video you want. I've already seen so many projects built on top of that demo code. Let me know if it works for your case too!
super nice video, but probably an update would be amazing since a lot has changed in the repository, right?
Hi! Thanks! It's very useful. Can it be applied on cellphones, like an Android or iOS app?
I tried to use another video with your code, but the for loop stops at 6% of the frames. It doesn't return an error, and it even runs the next cell properly. So what could be the problem?
SOURCE_VIDEO_PATH = "/content/highway.mp4"
video_info = VideoInfo.from_video_path(SOURCE_VIDEO_PATH)
byte_tracker = BYTETracker(BYTETrackerArgs())
TARGET_VIDEO_PATH = f"{HOME}/result.mp4"
generator = get_video_frames_generator(SOURCE_VIDEO_PATH, end=video_info.total_frames)
box_annotator = BoundingBoxAnnotator(thickness=4)
counts = []
ids = []
i = 0
print(video_info)
with VideoSink(TARGET_VIDEO_PATH, video_info) as sink:
for frame in tqdm(generator, total=video_info.total_frames):
print(frame)
i += 1
results = model(frame)[0]
detections = Detections(
xyxy=results.boxes.xyxy.cpu().numpy(),
confidence=results.boxes.conf.cpu().numpy(),
class_id=results.boxes.cls.cpu().numpy().astype(int)
)
tracks = byte_tracker.update(
output_results=detections2boxes(detections=detections),
img_info=frame.shape,
img_size=frame.shape
)
tracker_id = match_detections_with_tracks(detections=detections, tracks=tracks)
detections.tracker_id = np.array(tracker_id)
labels = [
f"#{tracker_id}"
for x
in detections
]
frame = box_annotator.annotate(scene=frame, detections=detections)
for x in tracker_id:
if not x in ids:
ids.append(x)
counts.append(len(ids))
sink.write_frame(frame)
print(i)
Hi, I have a question: what if I want to put the line vertically in the middle of the image? How can I do this? In your video you don't show how to obtain the values you used to trace the counting line.
You just need to know your frame dimensions and experiment a bit to find the right fit.
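A sketch of a vertical counting line down the middle of the frame, using the frame size from the notebook's VideoInfo (the counter class is LineCounter in the video's supervision version, LineZone in newer releases):

width, height = video_info.width, video_info.height
LINE_START = Point(width // 2, 0)
LINE_END = Point(width // 2, height)
line_counter = LineCounter(start=LINE_START, end=LINE_END)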
Thanks a lot, this comment helped me a lot @@Roboflow
Thank you for this video, it's very explanatory. However, the supervision library has been updated, so this code doesn't work anymore. I tried to get all those supervision utils from the documentation, with little success after a couple of hours. Could you please make a video dedicated to the supervision library alone: where to find those functions and classes, and what each one is used for? That would be very helpful. Thank you once again.
Take a look here: github.com/roboflow/notebooks/pull/190. It is a PR that updates our vehicle-counting notebook to supervision 0.13.0.
I have been working with ByteTrack for a bit now, but I have struggled with evaluating its tracking performance. Do you know if it is possible to check the tracking performance of individual objects using something like MOT metrics?
Yes, it is possible but you would need to have annotated data.
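If it helps, evaluation tools usually expect the MOTChallenge text format: one line per object per frame, with fields frame, id, bb_left, bb_top, width, height, conf, -1, -1, -1. A sketch of dumping tracked sv.Detections that way (assuming tracker_id has been set on the detections):

def write_mot_lines(file, frame_index, detections):
    # MOTChallenge format: frame, id, bb_left, bb_top, width, height, conf, -1, -1, -1
    for (x1, y1, x2, y2), confidence, tracker_id in zip(
        detections.xyxy, detections.confidence, detections.tracker_id
    ):
        file.write(
            f"{frame_index},{tracker_id},{x1:.2f},{y1:.2f},"
            f"{x2 - x1:.2f},{y2 - y1:.2f},{confidence:.2f},-1,-1,-1\n"
        )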
Thank you. This is nice. I have a requirement to create a vehicle detection model with good accuracy. Is it possible for you to create one for me and work on this?
Bro, can I do it with my real-time webcam? If yes, please help me
How can I draw multiple lines to count?
Which version of Python 3 did you use?
Google Colab is currently at Python 3.8.10
Could you help with what each of the BYTETrackerArgs is for? I mean track_thresh, track_buffer, and so on.
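For reference, the notebook's BYTETrackerArgs dataclass looks roughly like the sketch below; the comments summarize the parameter meanings from the ByteTrack paper/repo, so treat them as a best-effort summary rather than authoritative docs:

from dataclasses import dataclass

@dataclass(frozen=True)
class BYTETrackerArgs:
    track_thresh: float = 0.25        # confidence split between high- and low-score detections
    track_buffer: int = 30            # frames to keep a lost track alive before dropping it
    match_thresh: float = 0.8         # IoU threshold for associating detections with tracks
    aspect_ratio_thresh: float = 3.0  # discard boxes that are too wide/tall (likely noise)
    min_box_area: float = 1.0         # discard tiny boxes
    mot20: bool = False               # toggles MOT20-specific matching behaviour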
Is there any change I can make to detect the speed of the vehicles? I used two lines and took a random value for the distance between those two lines. Now the problem is that I am not able to understand how to use the 'time' library to extract the time for vehicles going up the lane and down the lane between the two lines. Can anyone please help me with that?
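One hedged sketch of the idea: instead of the 'time' library (wall-clock time measures your processing speed, not video time), derive timestamps from the frame index and the video's FPS, and record when each tracker_id crosses each line. The distance value here is an assumption you would need to measure yourself:

LINE_DISTANCE_M = 20.0  # assumed real-world distance between the two lines
fps = video_info.fps

first_crossing = {}  # tracker_id -> frame index at the first line

def handle_crossing(tracker_id, frame_index, at_first_line):
    if at_first_line:
        first_crossing[tracker_id] = frame_index
    elif tracker_id in first_crossing:
        elapsed_s = (frame_index - first_crossing[tracker_id]) / fps
        if elapsed_s > 0:
            speed_kmh = LINE_DISTANCE_M / elapsed_s * 3.6
            print(f"#{tracker_id}: {speed_kmh:.1f} km/h")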
I don't understand how you added numbers to the labels before the class name.
Now I see:
tracker_id = match_detections_with_tracks(detections=detections, tracks=tracks)
labels = [
f"#{tracker_id} {CLASS_NAMES_DICT[class_id]} {confidence:0.2f}"
for _, confidence, class_id, tracker_id
in detections
]
Hi! 👋Could you elaborate on the question? Which part is not clear?
I am interested in running this algorithm in PyCharm with the YOLO model that I built and trained myself, not running it from a notebook.
Is it possible to get a customized .py file?
Hi, does the tracking work if the object changes class over time? For example, for the football field dataset, if my labels were "player static" and "player running", would the tracking be able to follow the same player as it changes states? Thank you!
Can I deploy it to a Jetson Nano?