Is there a way to do this directly with model.predict(source=video.mp4)? or do I have to get each frame -> process with sahi -> send each slice to a model.predict(source=slice.jpeg) ?
You can use `model.predict(source="video.mp4")` directly with Ultralytics YOLO for video inference. It will handle the video frames automatically, processing each one without requiring manual slicing or frame extraction. To efficiently manage large videos, consider using `stream=True` in the `predict()` call for memory optimization, as detailed here: Inference Sources Guide docs.ultralytics.com/modes/predict/.
If your use case involves slicing frames for higher precision (e.g., using SAHI), you would need to extract frames, process them with SAHI, and then run `model.predict()` on each slice.
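For reference, here is a minimal sketch of streamed video inference (the `yolov8n.pt` weights and local `video.mp4` are placeholders, assuming any Ultralytics detection checkpoint):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any Ultralytics detection weights

# stream=True returns a generator, so frames are processed lazily
# instead of accumulating all results in memory
for result in model.predict(source="video.mp4", stream=True):
    boxes = result.boxes  # detections for this frame
    print(len(boxes), "objects detected")
```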
What augmentation params are most suitable for training if you want to do SAHI?
When training a model for SAHI (Slicing Aided Hyper Inference), choosing augmentation parameters that enhance generalization for small objects and varied image scales is critical. Key augmentation settings to focus on include:
1. `scale`: Set this to simulate objects appearing at different distances. This helps the model detect objects at varying scales within slices. Details docs.ultralytics.com/modes/train/.
2. `translate`: Helps the model handle partially visible objects in slices by translating images horizontally and vertically.
3. `mosaic`: Highly effective for complex scenes as it combines multiple images, which is crucial for small object detection in SAHI.
4. `degrees` and `shear`: These improve the model's ability to detect objects in different orientations and angles, beneficial for diverse slice perspectives.
For a full list of augmentation parameters and their effects, check the training guide here docs.ultralytics.com/modes/train/. Experimentation is key to finding the best combination for your dataset!
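As a rough starting point, here is how those parameters might be passed to `model.train()` (the values below are illustrative, not tuned recommendations, and `coco8.yaml` is just a sample dataset config):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.train(
    data="coco8.yaml",  # replace with your dataset config
    epochs=100,
    imgsz=640,
    scale=0.5,      # random scaling range, simulates varying object distance
    translate=0.1,  # horizontal/vertical shift, mimics partially visible objects
    mosaic=1.0,     # probability of mosaic augmentation
    degrees=10.0,   # random rotation range in degrees
    shear=2.0,      # shear angle range
)
```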
I remember it was not working with some versions of YOLO. Is it fixed?
I think it was when I installed the latest version of SAHI.
Hi there! 😊 Thanks for your comment. To help you better, could you please specify which versions of YOLO and SAHI you were using when you encountered the issue? Also, make sure you're using the latest versions of `torch` and `ultralytics`. You can find more details in our documentation docs.ultralytics.com. If you still face issues, feel free to share more details! 🚀
Hello, thanks for the video! Does it work with YOLOv8 segmentation?
Yes, SAHI works with YOLOv8 segmentation. You can find more details in the SAHI Tiled Inference Guide docs.ultralytics.com/guides/sahi-tiled-inference/. 😊
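A minimal sketch of how this typically looks with SAHI's `AutoDetectionModel` (the `yolov8n-seg.pt` checkpoint and slice sizes below are illustrative assumptions):
```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",
    model_path="yolov8n-seg.pt",  # YOLOv8 segmentation weights
    confidence_threshold=0.3,
    device="cpu",  # or "cuda:0"
)

result = get_sliced_prediction(
    "path/to/your/image.jpeg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)
result.export_visuals(export_dir="output/")
```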
is it good for pupil detection?
Absolutely! YOLOv8 can be adapted for pupil detection with the right dataset and training. For more details on training custom models, check out our guide: docs.ultralytics.com/guides/model-training-tips/. If you have any specific questions, feel free to ask! 😊
`from sahi.predict import predict`
To use SAHI with YOLOv8-OBB, you can use the `get_sliced_prediction` function, which supports oriented bounding boxes. Here's a quick example:
```python
from sahi.predict import get_sliced_prediction
result = get_sliced_prediction(
    "path/to/your/image.jpeg",
    detection_model,
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
    perform_obb=True,  # enable OBB
)
```
For more details, check out our guide on SAHI tiled inference: docs.ultralytics.com/guides/sahi-tiled-inference/
@@Ultralytics The parameter `perform_obb` is not recognized in `get_sliced_prediction`:
```
result = get_sliced_prediction(
         ^^^^^^^^^^^^^^^^^^^^^^
TypeError: get_sliced_prediction() got an unexpected keyword argument 'perform_obb'
```
I have sahi 0.11.18.
It looks like the `perform_obb` parameter isn't recognized in your current SAHI version. Please ensure you have the latest versions of both `ultralytics` and `sahi`. You can update them using:
```bash
pip install -U ultralytics sahi
```
If the issue persists, please provide more details about the error or the specific use case. For further guidance, refer to our SAHI tiled inference documentation: docs.ultralytics.com/guides/sahi-tiled-inference/
@@Ultralytics I upgraded to latest ultralytics and sahi, but still getting the same error. Here are the versions I have:
sahi 0.11.18
ultralytics 8.2.75
Thanks for the details! It seems like the `perform_obb` parameter might not be supported in the current version of SAHI. Instead, you can manually handle the OBB predictions by processing the slices and then applying the OBB logic.
Here's a workaround:
1. Perform sliced inference without the `perform_obb` parameter.
2. Post-process the results to handle OBB.
For detailed steps, please refer to our SAHI tiled inference guide: docs.ultralytics.com/guides/sahi-tiled-inference/
If you continue to face issues, please share more specifics about your use case, and we'll do our best to assist you!
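As a rough illustration of that workaround, here is a sketch only: it assumes an Ultralytics OBB checkpoint such as `yolov8n-obb.pt`, uses SAHI purely for slicing, and omits the cross-slice duplicate merging (e.g. rotated NMS) you would want in practice:
```python
from sahi.slicing import slice_image
from ultralytics import YOLO

model = YOLO("yolov8n-obb.pt")  # Ultralytics OBB weights (assumed checkpoint)

slices = slice_image(
    image="path/to/your/image.jpeg",
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

all_boxes = []
for img, (x0, y0) in zip(slices.images, slices.starting_pixels):
    result = model.predict(img, verbose=False)[0]
    # obb.xywhr rows are (center x, center y, width, height, rotation in radians)
    for cx, cy, w, h, r in result.obb.xywhr.tolist():
        all_boxes.append((cx + x0, cy + y0, w, h, r))  # shift to full-image coordinates

# all_boxes now holds oriented boxes in full-image coordinates;
# merging duplicates across overlapping slices is still needed
```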
Can you perform this with YOLOv9?
Absolutely! YOLOv9 is designed for high-performance object detection, offering significant improvements in efficiency and accuracy. You can train, validate, predict, and export YOLOv9 models using both Python and CLI commands. For more details, check out the YOLOv9 documentation docs.ultralytics.com/models/yolov9/. 🚀
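For example, a minimal prediction sketch with the Ultralytics API (assuming the pretrained `yolov9c.pt` checkpoint and a placeholder image path):
```python
from ultralytics import YOLO

model = YOLO("yolov9c.pt")  # pretrained YOLOv9-C weights
results = model.predict(source="path/to/image.jpg")
results[0].show()  # visualize detections
```
The same model name works from the CLI as well, e.g. `yolo predict model=yolov9c.pt source=path/to/image.jpg`.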
@@Ultralytics can you make a video about it
Thanks for the suggestion! While we can't take specific requests for video content, we appreciate your feedback and will consider it for future content. Stay tuned to our channel for updates! 😊