Another great tutorial. I'm glad you've covered the issues of latency and frames accumulating in the buffer causing crashes; I've been having trouble with that when trying to run CV applications for long periods.
I’m curious why so few tutorials cover this topic in depth
The system is amazing, thanks a lot for the tutorial :)
Thank you for the tutorial. Could you do one on shoplifting? Interesting use case.
Your channel calms me
Not really sure what you mean :/
@@Roboflow I think he is trying to say that your tutorials are chill af, and I can confirm it))))
It is my thesis
Thank you so much
My pleasure!
I am new, and understanding this logic and math is very difficult. I hope I will learn it as soon as possible.
Any specific questions you have?
Amazing video! Thanks!
@Roboflow, what was done in the video is great! I loved it!!! But it's a workaround; you took a video and turned it into an RTSP link. However, there are many challenges that occur when pulling footage from a stream.
I would love to see that part done as well. Let's say you get permission for a few cameras from an office/gym or elsewhere and do an example project. It would be fascinating to see the process of pulling footage from these cameras, training a model, thinking of a use case that solves a real-world problem, coding it, and finally giving the result to the client (showing us that the average time of a customer at checkout 6 is 50 sec on a given day).
Currently, most CV I see is done on 15-second videos from one camera, and there isn't much value in that. The real value is in solving real problems with multiple cameras and models/logic that work together.
I'm aware that what I'm talking about is a huge project and not an easy task, but if anyone can do it, it's @Roboflow and Piotr.
I believe that a tutorial/series like the one I'm imagining would open the door for millions of CV applications to be built in the future.
That's exactly what we're doing at the moment for a soccer stadium in the UK: taking multiple streams from the CCTV and measuring the average wait time at some turnstiles and food/drink stalls. If the proof of concept is successful, we will also use CV to monitor occupancy and highlight spare seats in the stadium on a floorplan. Roboflow and Supervision are really useful tools for us!
@@tobieabel7474 can you share your journey? i would love to read a blog or watch a video
Great to know that the time spent creating these tools and tutorials is not wasted! I would love to learn more if possible!
Today we are hosting a live community session. We will be talking mostly about real-time stream processing and time calculation. Would be awesome if you could join!
Maybe @@Roboflow can sponsor me to do a blog of the journey and put it on your channel! Sure, I'll be at the community session today.
amazing tutorial, thank you for your time!
Amazing tutorial, it helps us 👍
Great Video as always!
How did you create the intro explanation on white board with the text and diagrams?
Thank you for the great tutorial. I always enjoy your tutorials. If you have time, I wish you could make a tutorial on camera calibration 💙.
I can add it to our TODO list, but the list is really long… we already have ideas for at least 5 videos.
Superb sir
Thanks a lot!
Great video. What happens when frames are dropped?
That’s a very good question. It is guaranteed that you will not drop frames because of the InferencePipeline logic. You may still drop them because of the internet connection, but not because of InferencePipeline. We tested the logic on a Jetson Nano decoding a 4K stream, and it worked really reliably.
Where does the processed video get saved? Also, detection seems to run really slowly on an M1 Mac. Great tutorial!
In the repository you can find setup scripts that use Ultralytics. It uses PyTorch as the backend and can use the MPS device available on M1 Macs.
More content like this congrats!
We are not slowing down ;)
Hi, thanks for the really great and easy-to-understand tutorial. Here, you implemented a clock-based timer approach to calculate elapsed time from a real-time stream. I am assuming that a similar approach can be used to estimate the speed of an object (like a car) from a real-time stream as well. Is that correct? I would appreciate your valuable feedback.
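It can: once your tracker gives you entry and exit timestamps for a zone of known real-world length, speed is just distance over elapsed time. A minimal sketch (the zone length and timestamps below are made-up illustration values, not from the video):

```python
import time  # in a live pipeline, time.monotonic() is a good clock source

# Hypothetical example: a car crosses a zone of known real-world length.
ZONE_LENGTH_M = 25.0   # measured length of the zone, in meters

t_enter = 100.0        # clock reading when the tracked car entered the zone
t_exit = 101.8         # clock reading when it left

elapsed_s = t_exit - t_enter             # 1.8 s spent inside the zone
speed_mps = ZONE_LENGTH_M / elapsed_s    # meters per second
speed_kmh = speed_mps * 3.6              # convert to km/h

print(f"{speed_kmh:.1f} km/h")           # -> 50.0 km/h
```

In practice you would average over several zone crossings per track ID, since single-frame timestamps are noisy at typical stream frame rates.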
Thank you for this amazing tutorial, can I use these examples with other versions of YOLO algorithm like v7 or v6, or does this work on YOLOv8 only?
You can swap the model. No problem.
@@Roboflow OK, thank you. I thought this was exclusive to Ultralytics models only. I have already created custom YOLOv7 & v6 models, and I will use them with Supervision.
Can you run your YOLOv7 as a standalone model, or do you have to use the detect.py script?
@@Roboflow I have used detect.py, but now I am working on code to load my model and detect objects; in case I want to implement anything with my model, it will be easy for me to use my own code.
Can we proceed with vehicle speed estimation in real time? That is a whole project in itself.
You mean you want us to make a video about it? :)
How much will it cost for a single stream for a month, at maybe 2 fps, provided we have already trained the model with Roboflow?
Thank you for this amazing tutorial. I just have a problem: I'm working on a project that detects and tracks people, but when people change their position, the model records them as a new track. How can I improve this?
Which architecture is used internally in Roboflow 3.0 Object Detection (Fast)?
Can it run on a 4GB Jetson Nano?
Great video, but the processing time is quite slow even with Ultralytics :( Is there any way to speed up the processing without using a GPU? Thanks in advance!!! You guys are doing great work in a great direction.
What hardware do you have, and how fast would you like it to run?
@@Roboflow Processor: Intel(R) Core(TM) i5-4460 CPU @ 3.20GHz; RAM: 8.00 GB (7.87 GB usable). 24 fps (real-time speed) is enough for me.
If you don't have a GPU, perhaps you could use a smaller model? Alternatively, run object detection through OpenVINO, Intel's hardware model accelerator. We could talk about it in more depth during the live community session today. It would be so cool if you could join. ruclips.net/video/u7XUC-3TqY8/видео.html&ab_channel=Roboflow
@@Roboflow Sorry for not attending the live session; 10 PM is quite late for me because I am quite tired after a long working day :( Hey, I have a proposal: could you advise how to achieve this goal (funding, framework, algorithms, ...)? docs.google.com/document/d/1ldpQ6q3MpJmox-nOSj2SLq-9sKBMEau1JxHbQClmHMg/edit?usp=sharing . In the past, I used a traditional method: ruclips.net/video/N8i2gbh6RRI/видео.html but with the advances in AI, I think we can go further.
Is there a simple way to save class detections in real time to a continuously updating Excel file or something similar? I.e., a continuous record of timestamps, where each timestamp contains the object detections, the zone each was detected in, and/or the bounding box positions/confidence at that time, with this information updating an Excel file in real time?
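One lightweight route, not Excel-native but openable directly in Excel, is appending rows to a CSV file from the standard library. A sketch; the detection dictionaries and field names here are placeholders for whatever your pipeline actually emits:

```python
import csv
from datetime import datetime, timezone

def log_detections(path, detections):
    """Append one CSV row per detection: timestamp, class, zone, bbox, confidence."""
    ts = datetime.now(timezone.utc).isoformat()
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for det in detections:
            writer.writerow([ts, det["class"], det["zone"], *det["bbox"], det["conf"]])

# Call once per processed frame; rows accumulate as the stream runs.
log_detections("detections.csv", [
    {"class": "person", "zone": "checkout-6", "bbox": (120, 40, 210, 300), "conf": 0.91},
])
```

Opening the file in append mode per frame keeps the script crash-safe (already-written rows survive); for higher frame rates you would keep the file handle open and flush periodically instead.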
Could I use a locally trained dataset? I can't afford Roboflow premium, so I can't train my dataset any further with Roboflow.
Is there any way I can set my model project to run on my GPU? Currently it only runs on the processor.
Install inference-gpu package instead of inference. It will allow you to run faster on NVIDIA GPUs.
Is it possible to run it on an MPS device, such as an M3?
I included scripts that use Ultralytics to run the model. Can you use MPS to accelerate inference with this model on an M3?
@@Roboflow yes
Hello, is there any way for me to keep the frame quality clearer? I have tested it, and the quality of the streamed frames is too poor.
Stream quality depends on resolution and compression algorithms. Usually the stream source controls those parameters.
@@Roboflow Thanks for your response, let me try it
@Roboflow Wow, this is great content, so much exposure to unleash YOLO features. I'm working on a real-time project that takes in about 10 stream feeds and has 6 different YOLO models (YOLOv8) with different use cases. I have currently applied threading to the use cases, but I'm confused about how I can feed in 10 streams in parallel. I'm also looking for suggestions from an on-prem deployment point of view.
Today we are hosting a live community session. It would be so cool if you could join. ruclips.net/video/u7XUC-3TqY8/видео.html&ab_channel=Roboflow
I can't find the link for the code you wrote.
Is there a way to process a high-resolution stream in real time to detect/track small objects in the stream? I have tried SAHI, but it's not possible to run in real time if the images are 5000x576.
It will always be a tradeoff between speed and accuracy. You have two options: increase the inference resolution (the imgsz parameter in YOLOv8) or use SAHI. Both will increase accuracy but decrease speed.
@@Roboflow Yes, that's what I also experienced. I am trying to find a way to do parallel processing of small snippets and then merge the results. Let's see how far I can go.
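The merging step is the hard part, but the tile layout itself (essentially what SAHI computes internally) is easy to sketch: generate overlapping tile offsets, run the detector per tile, then add each tile's (x, y) offset back to its boxes before merging/NMS. The tile size and overlap below are arbitrary example choices:

```python
def tile_offsets(width, height, tile=640, overlap=0.2):
    """Top-left (x, y) offsets of overlapping tiles covering a frame."""
    step = int(tile * (1 - overlap))  # 512 px stride for 640 px tiles, 20% overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the last tile reaches the right/bottom edge of the frame.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# A 5000x576 frame: a single row of tiles (the 640 px tile is taller than
# the frame, so it is simply clipped vertically), ten columns across.
offsets = tile_offsets(5000, 576)
print(len(offsets), offsets[0], offsets[-1])  # -> 10 (0, 0) (4360, 0)
```

Since the tiles are independent, each one can be dispatched to a worker process or a separate CUDA stream, which is exactly the "parallel snippets, then merge" idea.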
I am using YOLOv8 from Ultralytics and ByteTrack from Supervision. I am doing exactly the same thing; the only difference is that I am reading 2 videos at the same time. YOLO detects humans, but the tracker doesn't update with those detections, and detections suddenly disappear. What could the problem be?
Hi, do you know how to generate the config.json file required by --zone_configuration_path "data/checkout/config.json"? Thanks.
hi!
Why can't I open it in Google Colab?
I am not able to add CUDA/GPU support to the code. I tried using model.to('cuda'), but I'm getting an error. Can someone help me with how to implement CUDA in the above code? Please.
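A frequent cause of that error is a CPU-only PyTorch build, in which case `model.to('cuda')` fails regardless of what the rest of the code does. A quick diagnostic sketch (the YOLO lines are commented out and hypothetical):

```python
import torch

# False here means PyTorch was installed without CUDA support, and
# model.to("cuda") will always fail; install a CUDA-enabled torch build
# matching your driver's CUDA version before changing anything else.
print(torch.__version__, torch.cuda.is_available())

# Fall back to CPU so the script still runs either way:
device = "cuda" if torch.cuda.is_available() else "cpu"
# model = YOLO("yolov8n.pt").to(device)          # hypothetical: move the model once
# results = model.predict(frame, device=device)  # or pass the device per call
```

If `torch.cuda.is_available()` is True and the error persists, the actual traceback (often an out-of-memory or driver mismatch message) is what to share when asking for help.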
Can anyone help me by clarifying whether I can do this with GDPR compliance?
I was working on a similar project under the GDPR regime. It all depends on your use case, but in general it is possible.
@@Roboflow Thank you for mentioning that it's possible. My use case is to track the number of people that went past the camera and their dwell time in front of the camera. I was thinking that blurring people's faces would make it GDPR compliant, with no collection of any other personally identifiable data. I wasn't able to find any good resource to back up this possibility. Thanks again.
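On the blurring idea: the pixel-level part is straightforward once you have face boxes from a detector. Below is a pure-NumPy pixelation sketch (the box coordinates are placeholders; whether blurring alone satisfies GDPR is a question for a lawyer, not for code):

```python
import numpy as np

def pixelate_region(frame, box, block=16):
    """Anonymize an (x1, y1, x2, y2) region in place by averaging block x block cells."""
    x1, y1, x2, y2 = box
    roi = frame[y1:y2, x1:x2]
    h, w = roi.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cell = roi[by:by + block, bx:bx + block]
            # Replace every pixel in the cell with the cell's per-channel mean.
            cell[...] = cell.mean(axis=(0, 1)).astype(frame.dtype)
    return frame

# Hypothetical face box on a random test frame:
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
pixelate_region(frame, (100, 50, 164, 114))
```

Pixelation (mosaic) is often preferred over Gaussian blur for anonymization because a mild blur can sometimes be partially reversed; either way, apply it before the frame is stored or transmitted.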
Is there a github repo with all of the code?
Yup, the link is in the YT description
@@Roboflow I could not find the link to this project's code in the description. Could you please help?
why didn't you do this for a live video stream??
I’m not really sure what you mean. I did?
@Roboflow, what was done in the video is great! I loved it!!! 😍 But it's a workaround; you took a video and turned it into an RTSP link. However, there are many challenges that occur when pulling footage from a stream.
I would love to see that part done as well. Let's say you get permission for a few cameras from an office/gym or elsewhere and do an example project. It would be fascinating to see the process of pulling footage from these cameras, training a model, thinking of a use case that solves a real-world problem, coding it, and finally giving the result to the client (showing us that the average time of a customer at checkout 6 is 50 sec on a given day).
Currently, most CV I see is done on 15-second videos from one camera, and there isn't much value in that. The real value is in solving real problems with multiple cameras and models/logic that work together.
I'm aware that what I'm talking about is a huge project and not an easy task, but if anyone can do it, it's @Roboflow and Piotr.
I believe that a tutorial/series like the one I'm imagining would open the door for millions of CV applications to be built in the future.
The biggest problem for me is the lack of video footage that I could use to make such a tutorial....
@@Roboflow What do you mean? Is it a copyright issue? Why is there a lack of video footage?
@@ItayHilel There is simply no data like this that I can use in tutorials. I'd need footage from multiple cameras in a store or gym, and people do not share that online :/
Very great tutorial 🤩 I have done this before with a @Luxonis OAK. I also experienced the same challenges. Regards!
Yes, the topic is much more complicated than it seems at first glance. I am very happy that you like the video.
Can you send the link here? I don't seem to find the full stream code in the description 😅
take a look here: github.com/roboflow/supervision/tree/develop/examples/time_in_zone
It doesn't work.