Thank you so much. Whenever a new YOLO model is released, I always wait for your videos for a clearer understanding.
Glad to hear that! Keep Watching 🙂
Is YOLO-NAS commercial use friendly?
Great content ma'am ..just love the way you explain 🎉🎉 Thank you .
Glad it was helpful!
@@CodeWithAarohi Hi... I am a beginner in ML and got so many errors in training. Is it possible for you to send me the trained model, I mean the weights?
Thanks a lot mam, the way you explain is very helpful & easy to understand. Your work is inspiring.
You are welcome 🙏 Glad my videos are helpful!
@@CodeWithAarohi Hi mam, which YOLO model is better for small object detection (like grains)? Right now I am using YOLOv7; can I switch to v8 or NAS?
Great content. Keep it up!🔝💯 Greetings from Guatemala 🇬🇹
Thank you 🙂
Your content is wonderful. Thank you for sharing with the community!
Glad you liked it!
Thank you very much for your work!
My pleasure!
Hi... I am a beginner in ML and got so many errors in training. Is it possible for you to send me the trained model, I mean the weights?
Great and easy understanding tutorial video!
Glad it was helpful!
Good explanation, I'm learning from Colombia
Glad it was helpful!
AWESOME video!!! But how do we evaluate the mAP@0.5:0.95 metric?
Thank you for an excellent explanation.👏
Glad it was helpful 🙂
How can I prepare dataset_params if I have a dataset structured as follows: Vid1/images and labels, Vid2/images and labels, and so on up to Vid100? The dataset consists of multiple videos, with each video stored in its own folder.
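One possible way to handle that layout (not covered in the video, just a sketch using only the standard library): since the yolo-format dataloaders take a single images/labels directory per split, you could merge the per-video folders into one combined folder first. The paths and the train/val split are placeholders here.

import shutil
from pathlib import Path

src_root = Path("dataset")                      # contains Vid1 ... Vid100
dst_images = Path("combined/train/images")
dst_labels = Path("combined/train/labels")
dst_images.mkdir(parents=True, exist_ok=True)
dst_labels.mkdir(parents=True, exist_ok=True)

for vid_dir in sorted(src_root.glob("Vid*")):
    for img in (vid_dir / "images").iterdir():
        # prefix with the video name so files from different videos don't collide
        shutil.copy(img, dst_images / f"{vid_dir.name}_{img.name}")
    for lbl in (vid_dir / "labels").iterdir():
        shutil.copy(lbl, dst_labels / f"{vid_dir.name}_{lbl.name}")

After merging (and setting some videos aside for validation), dataset_params can point at the combined images and labels directories as usual.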
Great video. Thank you so much. How can I use it on a smartphone? Is it possible to convert it to TFLite?
My Colab keeps crashing when I try to execute that training block; it throws the error "Your Colab session crashed for an unknown reason". My train images are 36, test images are 12, and val images are 9, and I have kept the batch size at just 2. I have tried the minimum possible data and batch size because everywhere I searched about this issue, people say it is about memory limitations, but this is such a small amount of data that it should not cause memory issues.
Love your explanation, thanks...
Glad it was helpful!
Amazing! Perfect! 👏👏
Glad you like it!
Great video, thanks for sharing.
Can we get the results here like in YOLOv8?
Thanks for the great content. I have a question: how can we generate the plots like the ones we get by default in YOLOv8?
You can use the following commands after the training:
%load_ext tensorboard
%tensorboard --logdir {CHECKPOINT_DIR}/{EXPERIMENT_NAME}
Thanks for the great explanation! I'm having this error after launching the prediction: "Input type (unsigned char) and bias type (struct c10::Half) should be the same"
I am not sure why you are getting this error, but as per the message you need to either change the bias type to match the input type or convert the input type to match the bias type.
Hi, I'm getting the same error, did you manage to solve it?
Great! How do we resume the training process if it pauses?
Set parameter "resume" with the same experiment_name in trainer initialization
train_params = {
"resume": True,
...
}
trainer.train(model=model,
training_params=train_params,
train_loader=train_data,
valid_loader=val_data)
Thank you for sharing !
I am currently facing the following issue:
super-gradients 3.1.0 requires pyparsing==2.4.5, but roboflow requires pyparsing==2.4.7.
Pyparsing version is incompatible.
do I need to resolve this issue?
If you are not using a dataset directly from Roboflow, then uninstall roboflow and run your YOLO-NAS model. And if you want to use both modules, then try to upgrade super-gradients to a version that is compatible with pyparsing 2.4.7, or downgrade roboflow to a version that is compatible with pyparsing 2.4.5.
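A rough sketch of those two options as Colab/notebook cells (my own suggestion, version behaviour not verified here):

# Option 1: you only need super-gradients, so drop roboflow
!pip uninstall -y roboflow

# Option 2: you need both - try a newer super-gradients release first,
# since it may accept pyparsing 2.4.7, then confirm what actually got installed
!pip install --upgrade super-gradients
!pip show super-gradients roboflow pyparsing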
Do we also have to annotate the test data here?
Hello, thank you so much for explaining everything. I have a question: after training on custom data, the checkpoints folder is saving both the best and latest ckpt.pth. I want to save only the best checkpoint. How can I do that?
You can check their source code, find where they save the best and latest checkpoints, and make modifications there to suit your requirements.
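If editing the library feels too invasive, a simpler workaround (my own sketch, not an official option) is to delete the extra files after training. It assumes the default super-gradients file names such as ckpt_best.pth and ckpt_latest.pth inside the checkpoints/experiment folder; check the actual names in your run folder first.

import os

run_dir = os.path.join("checkpoints", "my_experiment")    # placeholder paths
for name in os.listdir(run_dir):
    if name.endswith(".pth") and name != "ckpt_best.pth":
        os.remove(os.path.join(run_dir, name))             # keep only the best checkpoint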
I tried to run the notebook, and at the training step I got the "'Trainer' object has no attribute 'train_loader'" error.
Thats a very nice video ma'am
Thank you!
I already installed the super-gradients library in my Jupyter notebook, but it still shows "No module found" even after restarting the kernel.
Try to activate the environment again and then check the version of super-gradients.
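A quick way to confirm the notebook kernel is actually using the environment you installed into (a generic check, nothing specific to this repo):

import sys
print(sys.executable)                               # interpreter the kernel is running on
!{sys.executable} -m pip show super-gradients       # should print the installed version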
Ma'am, can you please help me with this issue?
When running this command "trainer.train(model=model,
training_params=train_params,
train_loader=train_data,
valid_loader=val_data)"
I'm getting the following error : 'Trainer' object has no attribute 'train_loader'
Please check your code against the reference here: github.com/AarohiSingla/YOLO-NAS/blob/main/YOLONAS_Custom_dataset.ipynb
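For reference, the usual super-gradients 3.x call sequence looks roughly like this (a sketch from the public API; the experiment name and checkpoint folder are placeholders, and model, train_params, train_data and val_data come from the earlier notebook cells):

from super_gradients.training import Trainer

trainer = Trainer(experiment_name="yolonas_custom",   # placeholder experiment name
                  ckpt_root_dir="checkpoints")        # placeholder checkpoint folder

trainer.train(model=model,
              training_params=train_params,
              train_loader=train_data,
              valid_loader=val_data)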
@@CodeWithAarohi I've directly run your Jupyter notebook itself, but I'm still facing the issue.
@@CricRohirat Same issue with me, please let me know if you get the answer.
@@CodeWithAarohi Even I'm facing the same issue.
I'm facing issues while installing super-gradients, kindly explain.
"Thanks a lot, ma'am. Does the YOLO NAS model outperform two-shot detector models like Mask R-CNN based on accuracy?"
Yes, Yes
Why am I getting a Microsoft C++ Build Tools error (which I have updated) when installing super-gradients?
Ensure that you have the necessary build tools installed, repair the installation if needed, check for additional dependencies, and verify that your Python environment matches the architecture of the build tools.
Can the combination of YOLOv8 pose and MediaPipe estimate falls or multi-person activities?
I will try
@CodeWithAarohi Hi mam, which YOLO model is better for small object detection (like grains)? Right now I am using YOLOv7; can I switch to v8 or NAS?
Switch to NAS, it is good for small objects.
@@infocus2160 Thank you so much.
Can you stream video and perform real-time detection in future videos? I'm interested in knowing the latency when conducting real-time detection.
Sure
you make very cool videos!
Thank you!
Very nice video
Thank you!
Thank you
Glad the video is helpful!
I have 1000 test images, but when using your code it only runs through 100 images. Can you help me solve this problem? I want it to run through all 1000 photos.
Thanks a lot , good job.
Welcome 🙂
Thank you for sharing mam
Welcome
Great news! Thanks for your video. Is it possible to track objects natively with YOLO-NAS?
Yes, it is possible to track objects natively with YOLO-NAS, but it depends on the specific implementation and configuration. Object tracking is a different task than object detection, and while YOLO-NAS was designed primarily for object detection, it can be used for object tracking with additional modifications and techniques.
One common approach to object tracking with YOLO-NAS is to use a combination of object detection and object tracking algorithms. For example, you can use YOLO-NAS for object detection in the first frame of a video or sequence, and then use an object tracking algorithm, such as Kalman filters or particle filters, to track the detected objects in subsequent frames.
Another approach is to modify the YOLO-NAS architecture to include object tracking capabilities directly within the neural network. This can involve incorporating additional layers or modules that enable the network to track objects across frames, as well as modifying the loss function to include object tracking objectives in addition to object detection objectives.
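To make the first approach concrete, here is a minimal detect-then-track sketch that uses a plain IoU matcher instead of a Kalman or particle filter. It is my own illustration, not an official YOLO-NAS tracking API; boxes are assumed to be in xyxy format.

def iou(a, b):
    # IoU of two boxes given as [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

class SimpleIouTracker:
    # assigns persistent IDs by greedily matching this frame's boxes to last frame's
    def __init__(self, iou_threshold=0.3):
        self.iou_threshold = iou_threshold
        self.tracks = {}      # track_id -> last known box
        self.next_id = 0

    def update(self, boxes):
        assigned = {}
        unmatched = dict(self.tracks)
        for box in boxes:
            best_id, best_iou = None, self.iou_threshold
            for tid, prev in unmatched.items():
                score = iou(box, prev)
                if score > best_iou:
                    best_id, best_iou = tid, score
            if best_id is None:          # no good match, start a new track
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched.pop(best_id)
            assigned[best_id] = box
        self.tracks = assigned
        return assigned                  # track_id -> box for this frame

# Per frame you would feed it the YOLO-NAS detections, e.g.:
#   det = model.predict(frame)._images_prediction_lst[0].prediction
#   ids = tracker.update(det.bboxes_xyxy.tolist())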
@@CodeWithAarohi Great video. Well explained. Are you able to do a tutorial to incorporate object tracking ?
Hi Aarohi, I am getting this error when I am trying to train: AttributeError: 'Trainer' object has no attribute 'train_loader'
Can I also get the depth value of an object with the YOLO-NAS model?
No, it is designed for object detection in 2D images and focuses on accurately localizing and classifying objects within an image. But you can combine object detection models with depth estimation techniques. For example, you can use a separate depth estimation algorithm or model alongside YOLO to infer the depth of detected objects in a scene. By fusing the results from both models, you can obtain both the 2D bounding box and an estimate of the object's depth.
Thanks, I needed fall detection.
Glad it helped!
Hey there, my local system has 128GB RAM and an RTX 3090 24GB card. I don't know why, but the kernel dies as soon as the training starts. Are the system requirements not sufficient, or is it a problem with the code?
Can you help out?
The system requirements are not a problem here. Check if you have the right version of CUDA and whether PyTorch is compiled with CUDA support. Also try to reduce the batch size and then test. Or, if the images you are using are high resolution, try with lower-resolution images.
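A few standard PyTorch calls make this quick to diagnose (plain torch checks, nothing specific to YOLO-NAS):

import torch

print(torch.__version__)              # PyTorch build
print(torch.version.cuda)             # CUDA version PyTorch was compiled against (None on CPU-only builds)
print(torch.cuda.is_available())      # True if the GPU is actually visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))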
How to train this on 2 GPUs?
setup_device(num_gpus=2)   # set num_gpus to the number of GPUs you want to use
@@CodeWithAarohi thank you I will try on Monday.
Your content is awesome. But in case I annotate my images in Roboflow, how do I use them to train my dataset?
The process is the same.
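For example, after exporting from Roboflow in YOLO format, the dataset wiring looks roughly like this (a sketch following the standard super-gradients yolo-format dataloaders; the folder names, class list and batch size are placeholders for your own export):

from super_gradients.training.dataloaders.dataloaders import (
    coco_detection_yolo_format_train, coco_detection_yolo_format_val)

dataset_params = {
    "data_dir": "my_roboflow_export",       # placeholder root folder of the export
    "train_images_dir": "train/images",
    "train_labels_dir": "train/labels",
    "val_images_dir": "valid/images",
    "val_labels_dir": "valid/labels",
    "classes": ["fall", "no_fall"],         # placeholder class names
}

train_data = coco_detection_yolo_format_train(
    dataset_params={"data_dir": dataset_params["data_dir"],
                    "images_dir": dataset_params["train_images_dir"],
                    "labels_dir": dataset_params["train_labels_dir"],
                    "classes": dataset_params["classes"]},
    dataloader_params={"batch_size": 8, "num_workers": 2})

val_data = coco_detection_yolo_format_val(
    dataset_params={"data_dir": dataset_params["data_dir"],
                    "images_dir": dataset_params["val_images_dir"],
                    "labels_dir": dataset_params["val_labels_dir"],
                    "classes": dataset_params["classes"]},
    dataloader_params={"batch_size": 8, "num_workers": 2})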
Thanks, but how do I get my confusion matrix?
Hi Mam,
Can you please try YOLOX for a custom dataset... in a Colab environment?
Will try
😮
Unable to import super_gradients
ImportError: cannot import name 'is_directory' from 'PIL._util' (/usr/local/lib/python3.10/dist-packages/PIL/_util.py)
Hello,
I want to use a dataset that was annotated with an instance segmentation technique for YOLO-NAS. Is it possible? Thanks in advance.
Hi... I am a beginner in ML and got so many errors in training. Is it possible for you to send me the trained model, I mean the weights?
Thank you so much 😍😍 This is a very good video. But I have a problem when trying to run your code: I tried to train a model with a large number of epochs (50), but due to the limited GPU usage time I could not finish training. Can you help me solve that problem?
Reduce batch size
@@CodeWithAarohi i’ll try
Hi Aarohi, do you have a video series on Vision Transformers? Your videos are good, though.
No, Not yet! Working on it.
Hello, can you share the full version of the fall detection dataset?
universe.roboflow.com/sabya/fall_det-q1dp5
Hello, nice video there. May I know how to build my own dataset for training YOLO-NAS?
Suppose you have a dataset in YOLOv5, v7, or v8 format. Just follow the steps from the 11:30 timestamp.
Hi! Do you plan to do a video about how to use the GPU properly on Windows? I have tried with TensorFlow and it is just really difficult to make it work, at least for me.
In your video it just works
You need to install CUDA and cuDNN on your Windows machine as per the TensorFlow version you are using. You can check this link to see which CUDA and cuDNN versions each TensorFlow release supports; check under the GPU section: www.tensorflow.org/install/source
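Once CUDA and cuDNN are in place, a quick sanity check that TensorFlow actually sees the GPU (standard TensorFlow calls):

import tensorflow as tf

print(tf.__version__)
print(tf.config.list_physical_devices("GPU"))   # should list at least one GPU device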
@@CodeWithAarohi Does this work on macOS?
Ma'am, can you please send the Jupyter notebook for fall detection? We are doing it as our college project and it would be very helpful to us, please.
github.com/AarohiSingla/YOLO-NAS
@@CodeWithAarohi thank you so much ma'am
Is it applicable for classification?
YOLO-NAS is an object detection model.
How to do segmentation on a custom dataset?
Still exploring...
@@CodeWithAarohi Ok . Thank you
Thank You, madam, for your video. I learn a lot from you. How can we get the details of the objects detected by the model prediction, like class, bounding boxes, and other details?
Thank You
predictions = model.predict(image, conf=confidence)._images_prediction_lst
._images_prediction_lst will provide you with all the details.
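A small sketch of unpacking those details, based on the fields exposed by the super-gradients 3.x prediction objects (field names may differ slightly between versions, so verify against your install; the image path and confidence value are placeholders):

result = model.predict("test.jpg", conf=0.35)._images_prediction_lst[0]

class_names = result.class_names              # list of class name strings
boxes = result.prediction.bboxes_xyxy         # (N, 4) array of x1, y1, x2, y2
scores = result.prediction.confidence         # (N,) confidence per detection
labels = result.prediction.labels             # (N,) class indices

for box, score, label in zip(boxes, scores, labels):
    print(class_names[int(label)], float(score), box.tolist())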
@@CodeWithAarohi Thank you, mam, for your response. Can we use any other pre-trained model instead of COCO? If there is documentation for all of this, please provide the link.
Thank you so much, madam
I'm getting the below error:
AttributeError: 'Trainer' object has no attribute 'train_loader'
in the below code:
trainer.train(model=model,
training_params=train_params,
train_loader=train_data,
valid_loader=val_data)
Any reason?
github.com/AarohiSingla/YOLO-NAS/issues/2
Error in the installation of super-gradients. Ma'am, please help.
What is the error?
Mam, I am unable to install super_gradients because it shows "Building wheel for pycocotools (pyproject.toml) did not run successfully". Please, can anyone help? 🙏🙏
Check if you have Cython installed. Try this: pip install pycocotools-windows
Hello mam.. nice video.
I'm interested to know how to evaluate the performance of the YOLO-NAS model in terms of mAP, accuracy, latency, and other performance metrics required for object detection.
Can you please provide code for the same?
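For the mAP part, one option is the trainer.test() path with the detection metrics that ship with super-gradients; the rough sketch below follows the public fine-tuning examples (class and argument names should be double-checked against your installed version, and trainer, best_model, test_data and dataset_params are assumed to exist from the training notebook):

from super_gradients.training.metrics import DetectionMetrics_050_095
from super_gradients.training.models.detection_models.pp_yolo_e import PPYoloEPostPredictionCallback

metrics = trainer.test(
    model=best_model,
    test_loader=test_data,
    test_metrics_list=DetectionMetrics_050_095(
        score_thres=0.1,
        top_k_predictions=300,
        num_cls=len(dataset_params["classes"]),
        normalize_targets=True,
        post_prediction_callback=PPYoloEPostPredictionCallback(
            score_threshold=0.01, nms_top_k=1000,
            max_predictions=300, nms_threshold=0.7)))
print(metrics)     # includes mAP@0.50:0.95; latency would need separate timing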
Hi... I am a beginner in ML and got so many errors in training. Is it possible for you to send me the trained model, I mean the weights?
Hello mam... nice project. Can I get a research paper on this project?
deci.ai/blog/yolo-nas-object-detection-foundation-model/
Hello mam, which Python version is this? 3.10.x?
Python 3.10
Ma'am, can you put up the link to the dataset you used to detect falls?
I took it from roboflow
@@CodeWithAarohi Thank You Ma'am
Great content. Can you share the dataset?
You can take this dataset from Roboflow.
Great Aarohi, Thanks
I'm having an issue while working with a custom dataset: in the line train_data.dataset.plot() I get a SystemError ("new style getargs format but argument is not a tuple") message. (I'm using an RTX 4090.)
There is a problem with the arguments passed to the plot() function in the train_data.dataset object. This error is typically caused by a mismatch between the expected input format and the actual argument provided. Your GPU is not affecting the code.