Great video!! How do I get the code to download? My vision is not good enough to read the code from the video.
Sorry, we'll zoom in bigger next time! The code to run your own workflow can be found by clicking the "Deploy" tab on the top-right of the Workflow editor.
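For reference, the code the Deploy tab generates looks roughly like this (the API key, workspace, and workflow ID below are placeholders, and the video source can be an RTSP URL or a webcam index):

```python
# Rough sketch of the Deploy-tab snippet -- fill in your own
# API key, workspace name, workflow ID, and video source.
from inference import InferencePipeline

def my_sink(result, video_frame):
    # result holds your workflow's outputs for this frame
    print(result)

pipeline = InferencePipeline.init_with_workflow(
    api_key="YOUR_API_KEY",
    workspace_name="your-workspace",
    workflow_id="your-workflow-id",
    video_reference="rtsp://your-stream-url",  # or 0 for a local webcam
    on_prediction=my_sink,
)
pipeline.start()
pipeline.join()
```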
informative .. thanks
Heyy! It's an amazing video. I have some questions on training a custom class: if I only want it to predict a cell phone while it's in a hand, how do I do that? (Currently working as an AI project engineer)!!
Use the Detections Filter block: inference.roboflow.com/workflows/blocks/detections_filter/
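If you'd rather filter in your own code after the workflow returns, a rough sketch with the supervision package looks like this (the result shape and the class name are assumptions; adapt them to your workflow's output):

```python
# Hedged sketch: keep only detections whose class name is in an allow-list.
# This is a client-side alternative to the Detections Filter block.
import numpy as np
import supervision as sv

def keep_classes(result, class_names):
    # result is assumed to be the prediction payload of one workflow step
    detections = sv.Detections.from_inference(result)
    mask = np.isin(detections.data["class_name"], class_names)
    return detections[mask]

# e.g. filtered = keep_classes(result, ["cell phone"])
```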
You can train a custom model on Roboflow to do that! I'd recommend labeling ~100 images of a cell_phone_hand class, and then several more of a not_cell_phone_hand class so it's able to distinguish between the two. You'll want the bounding boxes to capture both the hand and cell phone so the model has that context when making predictions.
@ Thank you so much 🩵!! I will definitely try that!!!
This was amazing, thank you.
Amazing!
Hi, is it possible to use a webcam instead of the RTSP cam you are using? If yes, what can I use as the video reference instead of your URL?
If I want to add more than one zone to count the time in zone, how can I do that?
I've been looking at the OAK-D Lite and Luxonis. Can I deploy this kind of workflow from Roboflow on a standalone Pi 4 with an OAK-D Lite? I'm not a programmer, so this is a bit of a leap for me.
Yeah, definitely. The compute will run on the Pi rather than the OAK unless you write a custom workflow block that uses our integration with them: docs.luxonis.com/software/ai-inference/integrations/roboflow/
This shouldn't be too hard to do -- if there's enough interest we can make an example. You could use this example of getting model predictions from an outside package as a template: github.com/roboflow/inference/blob/db94afd5bc46653810386cd7a7f90c2f9c6217ba/inference/core/workflows/core_steps/models/foundation/ultralytics/v1.py
Hi. What program do you use for recording? Thanks in advance.
This is Loom, but we sometimes also use screen.studio
Thanks 👍👍
I tried this tutorial, but Python returns this error: StepExecutionError(inference.core.workflows.errors.StepExecutionError: Error during execution of step: $steps.cars_2. Details: Model type not supported: ('object-detection', 'yolo-world'). Is a paid account necessary to try this tutorial? Thanks for your response.
Hi Aland, what inference version are you on and what workflow blocks are you using?
@EmilyGavrilenko Inference: __version__ = "0.24.0", with the YOLO-World Model block
Could you post a link to your Workflow? This looks to me like you're using the Object Detection Block & trying to specify YOLO-World as the model. There's a separate YOLO-World block to use.
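For reference, a minimal workflow definition using the dedicated YOLO-World block looks roughly like this (the exact block type strings and field names vary between inference versions, so check them against the block docs):

```python
# Sketch of a workflow specification using the YOLO-World block rather
# than the generic Object Detection block. Field names are from memory;
# verify against the Workflows block documentation for your version.
workflow_specification = {
    "version": "1.0",
    "inputs": [{"type": "WorkflowImage", "name": "image"}],
    "steps": [
        {
            "type": "YoloWorldModel",         # dedicated YOLO-World block
            "name": "cars",
            "images": "$inputs.image",
            "class_names": ["car", "truck"],  # open-vocabulary prompts
        }
    ],
    "outputs": [
        {
            "type": "JsonField",
            "name": "predictions",
            "selector": "$steps.cars.predictions",
        }
    ],
}
```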