Thanks for uploading this video
Thanks for the video. Really thorough explanation!
At 15:10, the referee collides with a player and falls right on the beat. I loved that too much.
Thank you, congratulations
Thanks a lot!
Very good, well done. Your videos are really informative. The video on YOLOv5 for classification is also good. Your videos help me solve my issues. Thanks
I'm so happy to hear that!
Very good explanation. One doubt on real-time detection: what is the maximum distance at which YOLOv9 can detect an object? Can it detect at 30 or 40 feet? Could you please provide these details?
Can you upload a video on what you typically do to increase the accuracy of predictions, such as hyperparameter tuning or working on the dataset? Are there any indicators in the results that could tell us what we can do to improve model accuracy or precision?
Thank you very much for the amazing video. Highly appreciated!
Great video. I did train my dataset; everything worked fine except the val losses were stuck at zero during the 100 training epochs. Any suggestion to solve this problem?
It does not need to get to zero. It needs to go as low as possible.
@Roboflow Yes, you are right, but I don't know why all the val losses are stuck at zero during the training step.
Hello, there is a tutorial on the Ultralytics page on how to use Yolov9. You can import the Ultralytics package and start training a model with just a few lines of code (like Yolov8). What is the difference between the approach in this video and using the Ultralytics package? What are the advantages and disadvantages? Thanks for your help.
Spiderman meme was a nice detail
I’m glad you like my sense of humor 🤣
Appreciated
Pleasure!
Hello and thank you for your video! I managed to train my model, but I can't find where the .pt or /content files are saved. Could you tell me, please?
Great video! What about the players' ID? Is there any re-id algorithm built inside YOLOv9? Is it possible to export the coordinates of each player?
Nope. YOLOv9 only produces a detection box + class.
Awesome video!!
Can polygons be used for this model to detect objects?
Good question! YOLOv9 for now only supports detection.
@Roboflow So you can use polygons to highlight objects of interest when creating the masks (in train, validation, and test data) for YOLOv9 to detect objects?
Can you make a video on adding an attention layer to YOLOv9?
To be honest, I don't think we plan any YOLOv9 vids in the near future.
Pardon me for a novice question - how can we prepare a dataset with bounding boxes? I have 3000 leaf images and I want to prepare a dataset where the leaf is identified. Any reference would be of great help.
You can do it via the Roboflow platform. Try it out ;)
Log in to Roboflow with your email and you will see Upload your dataset -> assign images to annotate. You can annotate with a box or an ellipse.
Annotate > generate a dataset version > export the dataset > select the format you want. Either download it or copy the code snippet (see the sketch below).
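For the "copy the snippet" step, here is a minimal sketch of what the Roboflow download snippet usually looks like; the API key, workspace, project name, version number, and export format string below are placeholders and will differ for your project:

```
# Hedged sketch: downloading an annotated dataset exported from Roboflow.
# "your-workspace", "leaf-detection", version 1 and the API key are placeholders.
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("leaf-detection")
dataset = project.version(1).download("yolov9")  # writes images + YOLO-format labels locally

print(dataset.location)  # this path is what train.py later receives via {dataset.location}/data.yaml
```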
Can we save the prediction video with Roboflow from an RTSP server?
That is a nice video. Thanks
Can you make an object detection project with a Streamlit web application?
I could try. I've never used it myself, haha.
Can you guide me on how to improve P, R, and mAP when I keep getting stuck at around 90%? I am training a YOLOv9 model on retail-domain data: about 1,700 images and about 2,000 labels per object. Especially regarding labeling conditions: can I label just a part of the object, or rely on symmetry?
I have been training YOLO models (currently YOLOv9) using the same training script that used to generate train_batch.jpg and similar image files in the runs/train/exp directory. These images provided visualizations of the training batches and were extremely helpful for debugging. However, after recent updates or changes, train_batch.jpg and the similar image files are no longer being generated, even though I'm using the same script and hyperparameters. How can I re-enable the generation of train_batch.jpg or an equivalent batch visualization during training?
I tried re-running the same old code.
😍😍😍😍
Is there any good way to auto-label? Or is manual labeling the only realistic method?
Take a look at this video: ruclips.net/video/oEQYStnF2l8/видео.htmlsi=AzUpP3L-jYgPZFre
Is it possible to use it for human action detection, like "grabbing the glass"?
If you have the right dataset, I think so.
On the image size: can the training use --img-size 810 or 832?
good vid at 1.75x speed
Hahaha, nice ;) I was not aware I talk so slowly in English.
Hhhh, agree. Also, 2x is very good and understandable.
What happens if the process crashes or errors out during training? Can you restart at the previous epoch or continue training from the point of the crash?
You should be able to restart the training by pointing the weights parameter to the latest checkpoint file.
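A rough sketch of what resuming could look like in the Colab notebook from the video, assuming the interrupted run wrote its checkpoints to runs/train/exp/weights/ with the usual YOLOv5-style last.pt name (your run directory may differ, and some forks also expose a --resume flag; check `python train.py --help` first):

```
%cd {HOME}/yolov9
# Re-run training, but start from the last saved checkpoint instead of the pretrained weights.
# runs/train/exp/weights/last.pt is an assumed path; point it at your actual run directory.
!python train.py \
--batch 16 --epochs 50 --img 640 --device 0 --min-items 0 --close-mosaic 15 \
--data {dataset.location}/data.yaml \
--weights runs/train/exp/weights/last.pt \
--cfg models/detect/gelan-c.yaml \
--hyp hyp.scratch-high.yaml
```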
Hey, what if I am not able to find an appropriate dataset on Roboflow or anywhere else? How should I fine-tune then? I want to detect cricket players in a frame but couldn't find a dataset anywhere. Annotating 4000+ images will take a lot of time.
When we train it on our custom data, is it still pretrained on COCO?
Thank you for the informative video. I have a query related to my ongoing mobile app project, which involves capturing images with the camera, performing immediate cropping, and subsequently analyzing the colors using a dedicated algorithm. Can YOLOv9 facilitate real-time cropping of the captured images and allow seamless integration with code for color identification?
What are your cropping criteria?
@Roboflow I need to crop a photo of a cuvette, isolating only the liquid part. From there, I'll extract its color and integrate it into a pre-existing formula.
@karimfallakha Looks like a box is not enough. You need segmentation. Am I correct?
@Roboflow That's the thing, I'm lost and not sure how to proceed; this is why I need your help. I just need to find a way to crop the liquid from the cuvette so I can analyze it later on (I might not have been very clear at first).
In this video you mention that too many training epochs can cause over-fitting, etc., but in YOLOv8 there is protection against this with 'EarlyStopping': for example, I can set up 1000 epochs and the training will stop after, say, 573 epochs, keep the best results, and save that model as best.pt. Is this same feature in YOLOv9? Many thanks.
yes
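For reference, a hedged sketch of how that typically looks with YOLOv5-style training scripts, assuming YOLOv9's train.py kept the --patience flag (worth confirming with `python train.py --help` on your checkout); best.pt holds the best-performing weights regardless of where training stops:

```
# Assumed flag: --patience N stops training if validation fitness has not improved for N epochs.
!python train.py \
--batch 16 --epochs 1000 --img 640 --device 0 --patience 100 \
--data {dataset.location}/data.yaml \
--weights {HOME}/weights/gelan-c.pt \
--cfg models/detect/gelan-c.yaml \
--hyp hyp.scratch-high.yaml
```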
Hello sir, how can I create weight files and cfg files for a dataset using YOLOv9?
Is it possible to do this training offline, just using Python?
I'm new to this world... and I don't know Colab or any notebook technology. I just want to test a bit on my local PC. Do you have some kind of repo for offline mode? I mean no online training!
How can I make it a real-time detector, please?
When I get to training my custom dataset I get IndexError: list index out of range. What should I do?
How many classes are there in your project?
Indexes go from 0 to n-1, where n is the number of classes.
@ajarivas72 I have 4 classes. I tried changing the nc to 3 and got the same error.
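To illustrate the point about indices, here is a small, hypothetical sanity-check script (not from the video) that scans YOLO-format label files and reports any class id outside the valid range; with 4 classes, nc should stay 4 and the only valid ids are 0, 1, 2 and 3:

```
# Hypothetical helper: verify every class id in YOLO label files is in range [0, nc).
# Out-of-range ids (e.g. id 3 after lowering nc to 3) are a common cause of IndexError during training.
# LABEL_DIR and NC are placeholders; adjust them to your dataset.
from pathlib import Path

LABEL_DIR = Path("dataset/train/labels")
NC = 4

for label_file in LABEL_DIR.glob("*.txt"):
    for line_no, line in enumerate(label_file.read_text().splitlines(), start=1):
        if not line.strip():
            continue  # empty lines / empty label files are fine (negative images)
        class_id = int(float(line.split()[0]))
        if not 0 <= class_id < NC:
            print(f"{label_file}:{line_no} has out-of-range class id {class_id}")
```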
Did you use your own dataset or a dataset from Roboflow? Did you change anything apart from the dataset? Model architecture? Parameter values? Anything?
@Roboflow I used my own custom dataset built with Roboflow. No, I did not change anything, and I used the same dataset to run YOLOv5 code and it worked perfectly fine.
Same
Hello! Thank you for the video. I trained on my images with YOLOv9, processed the test data, and saved the weights. My final goal is to run detection on video, but the following error keeps occurring while loading the weights. Is there a way to load the YOLOv9 model? I used torch.
Traceback (most recent call last):
File "/content/yolov9/detect.py", line 232, in
main(opt)
File "/content/yolov9/detect.py", line 227, in main
run(**vars(opt))
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/yolov9/detect.py", line 68, in run
model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
File "/content/yolov9/models/common.py", line 684, in __init__
model = attempt_load(weights if isinstance(weights, list) else w, device=device, inplace=True, fuse=fuse)
File "/content/yolov9/models/experimental.py", line 76, in attempt_load
ckpt = (ckpt.get('ema') or ckpt['model']).to(device).float() # FP32 model
KeyError: 'model'
Could you share the command that you used?
Thanks for the tutorial!!
When using Roboflow and the yolov9-c or yolov9-e model, I get an AttributeError: 'list' object has no attribute 'view' in the loss_tal.py file at lines 168-169:
File "/content/yolov9/utils/loss_tal.py", line 168, in
pred_distri, pred_scores = torch.cat([xi.view(feats[0].shape[0], self.no, -1) for xi in feats], 2).split()
How can this issue be solved?
Did you resolve the error? If yes, how, please?
Is it free for commercial use?
It is GPL 3.0, so I'm afraid not. Take a look here: ruclips.net/video/dL9B9VUHkgQ/видео.htmlsi=BrjmBB-fKo7R5EtE I listed a few models with permissive licenses there. Not to advertise or anything, but you can also train YOLO-NAS on Roboflow and use it commercially. Even on the free tier.
Supposing I want the predictions in a .txt file, how can I do that?
Your intro video shows specific player numbers, but I can't see that in the video (just generic players with inference confidence). Is it possible to use YOLOv9 for specific player tracking (like the splash image of this video), or is there a way ByteTrack or another plug-in can work with YOLOv9 to allow instance tracking?
You would need to use a tracker on top of detection.
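A rough sketch of that idea using the supervision library's ByteTrack on top of per-frame YOLOv9 detections; get_yolov9_detections below is a hypothetical stand-in for however you run the model on a single frame:

```
# Hedged sketch: instance tracking by running ByteTrack over YOLOv9 detections.
# get_yolov9_detections() is a placeholder returning (N, 4) xyxy boxes,
# (N,) confidences and (N,) class ids as numpy arrays for one frame.
import numpy as np
import supervision as sv

tracker = sv.ByteTrack()

def track_frame(frame: np.ndarray) -> sv.Detections:
    xyxy, confidence, class_id = get_yolov9_detections(frame)  # placeholder inference call
    detections = sv.Detections(xyxy=xyxy, confidence=confidence, class_id=class_id)
    # ByteTrack assigns a persistent tracker_id to each detection across frames
    return tracker.update_with_detections(detections)
```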
what if I want to use my own images and labels?
Waiting for the waiting-time tracking tutorial video.
It will be out today! Sorry I made you all wait for it soo long :/
Can YOLOv9 also do object segmentation?
Nope. Unfortunately (at least for now) it only supports object detection.
@Roboflow Thanks. What do you suggest for real-time detection and segmentation at a work station (detecting tools and humans)? YOLACT maybe?
I am currently working on a project that involves tracking a specific person in a CCTV video and performing video summarization based on that person's movements. I plan to use YOLOv8 for object detection and either Deep SORT or Strong SORT for tracking.
My main question is whether it is possible to track a specific person in the video using their assigned ID. For example, if I want to track the person with ID 2 in the video, can I do so without having pre-annotations or IDs for that person? Essentially, I would input a random video, obtain IDs for each person in the video, and then specify the ID of the person I want to track.
I would greatly appreciate it if you could provide me with a detailed explanation or, if possible, create a video on this topic. Your insights would be invaluable to me and others working on similar projects.
Can you please make a video on object detection and then video summarization on the detected object
Thank you for your time and consideration. I look forward to your response.
Is that person coming in and out of the frame? Or is it visible all the time?
@Roboflow Let us consider both cases, but mostly the person coming in and out of the frame.
And can you please do a video on object detection and video summarization on that detected object? Thanks for your response.
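On the "track the person with ID 2" idea, a small hypothetical follow-on to the tracking sketch above: once a tracker has assigned tracker_id values, each frame's detections can be filtered down to the one id you care about (the id itself is only known at runtime, after inspecting the tracker's output):

```
# Hypothetical sketch: keep only the tracked detection with a chosen tracker_id.
import supervision as sv

TARGET_ID = 2  # example id, picked after inspecting the tracker output

def keep_target(tracked: sv.Detections) -> sv.Detections:
    # `tracked` is the output of tracker.update_with_detections(...) and carries tracker_id
    return tracked[tracked.tracker_id == TARGET_ID]
```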
After training my model on one class, I observed good performance on the validation and test datasets. However, upon inspecting the confusion matrix generated during training, I observed a high rate of false positives with the background class.
```
                       Actual class
                  |  Car  | Background |
Predicted  Car    |  1.0  |    1.0     |
class      Bkgnd  |       |            |
```
In the dataset, I provided an empty label.txt file when no object was in the image.
Can you help me to understand why?
How many images did you have in your dataset?
@Roboflow I have a total of 2837 images with the object and 2776 without an object. Total images: 5613.
train: 4490
test: 560
val: 560
How do I change the optimizer to Adam?
🥳🥳👍
Where is the code snippet to handle the video at 14:45?
How do you load a video clip in the script?
How do you upload to Colab or how do you run YOLOv9 inference on video?
Hi sir, as requested last time... could you please create a video on licence plate detection and extraction? Please make a video on this topic.
It is on my TODO list ;)
It really is. We just have so many ideas.
I want to detect the vehicle count at a traffic signal for a single lane only, using a CCTV camera. Can you give me an idea for how to do this, please?
Is the camera static or is it moving?
@Roboflow Static.
@Roboflow I want to make the project for traffic, so I need to detect the vehicle count at the signal with the help of a CCTV camera.
Can we use polygonal annotations for v9?
Good question. At least for now, no.
@Roboflow Do you think polygonal annotations are better than rectangles for the model to learn from the data?
@cagataydemirbas7259 As far as I know, YOLOv9 only looks at boxes during training. YOLOv8, on the other hand, can use polygons to make the detection model better.
@Roboflow Thanks a lot
Custom Training: IndexError: list index out of range
Can you give me a bit more detail? Did you use my code? What did you change?
@Roboflow I changed nothing!
The command line arguments are the same.
Even the dataset is exported using Roboflow.
0% 0/33 [00:00
@Roboflow I did not change anything.
I used the same command-line argument to train.
Even the dataset was exported from Roboflow.
Here's a traceback:
Traceback (most recent call last):
File "/yolov9/train.py", line 634, in
main(opt)
File "/yolov9/train.py", line 528, in main
train(opt.hyp, opt, device, callbacks)
File "/yolov9/train.py", line 277, in train
for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
File "/opt/conda/lib/python3.10/site-packages/tqdm/std.py", line 1178, in __iter__
for obj in iterable:
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 631, in __next__
data = self._next_data()
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1346, in _next_data
return self._process_data(data)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1372, in _process_data
data.reraise()
File "/opt/conda/lib/python3.10/site-packages/torch/_utils.py", line 722, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/opt/conda/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/yolov9/utils/dataloaders.py", line 656, in __getitem__
img, labels = self.load_mosaic(index)
File "/yolov9/utils/dataloaders.py", line 791, in load_mosaic
img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste'])
File "/yolov9/utils/augmentations.py", line 248, in copy_paste
l, box, s = labels[j], boxes[j], segments[j]
IndexError: list index out of range
File "/content/yolov9/train.py", line 634, in
main(opt)
File "/content/yolov9/train.py", line 528, in main
train(opt.hyp, opt, device, callbacks)
File "/content/yolov9/train.py", line 277, in train
for i, (imgs, targets, paths, _) in pbar: # batch -------------------------------------------------------------
File "/usr/local/lib/python3.10/dist-packages/tqdm/std.py", line 1181, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/usr/local/lib/python3.10/dist-packages/torch/_utils.py", line 694, in reraise
raise exception
IndexError: Caught IndexError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.10/dist-packages/torch/utils/data/_utils/fetch.py", line 51, in
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/content/yolov9/utils/dataloaders.py", line 656, in __getitem__
img, labels = self.load_mosaic(index)
File "/content/yolov9/utils/dataloaders.py", line 791, in load_mosaic
img4, labels4, segments4 = copy_paste(img4, labels4, segments4, p=self.hyp['copy_paste'])
File "/content/yolov9/utils/augmentations.py", line 248, in copy_paste
l, box, s = labels[j], boxes[j], segments[j]
IndexError: list index out of range
good ddd
7:33 16:51
Hello, thank you for your video. It's really helpful. But when running this command:
%cd {HOME}/yolov9
!python train.py \
--batch 16 --epochs 50 --img 640 --device 0 --min-items 0 --close-mosaic 15 \
--data {dataset.location}/data.yaml \
--weights {HOME}/weights/gelan-c.pt \
--cfg models/detect/gelan-c.yaml \
--hyp hyp.scratch-high.yaml
I am getting the following error:
Epoch GPU_mem box_loss cls_loss dfl_loss Instances Size
0% 0/243 [00:00
Same here. Did you solve it?
Hopefully this doesn't go the way of yolov7.
Can you be a bit more specific about what you mean?
@Roboflow I think it is safe to say that shortly after its release, YOLOv7 was abandoned. This can be seen from the activity in its repo.
Sir, I am not able to execute the YOLOv9 model. I am getting the error: File "/content/yolov9/yolov9/yolov9/utils/general.py", line 537, in check_dataset
raise Exception('Dataset not found ❌')