Join My AI Career Program
www.nicolai-nielsen.com/aicareer
Enroll in My School and Technical Courses
www.nicos-school.com
How do we make a detection prediction class? 12:07
@@faisalhazry I show that in another video on my channel with a YOLOv8 class
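For anyone looking for a starting point before watching that one, here is a minimal sketch (not the exact code from the video) of wrapping prediction in a small class with the ultralytics package; the "best.pt" path is just a placeholder for your own trained weights:

from ultralytics import YOLO

class Detector:
    def __init__(self, weights="best.pt"):
        # load the trained weights into a YOLO model
        self.model = YOLO(weights)

    def predict(self, source, conf=0.5):
        # run inference and return the list of Results objects
        return self.model.predict(source=source, conf=conf)

detector = Detector()
results = detector.predict("image.jpg")
print(results[0].boxes.xyxy)  # detected boxes as (x1, y1, x2, y2)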
Sir, please make a project tutorial like this, but with a custom classification dataset on Roboflow.
One mistake or update: in the YOLO training line, data={dataset.location} won't work, so try
source="dataset location"
Thanks, more people should see this!
This works:
data = "dataset location"
thanks for that
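In case it helps someone, this is roughly what the corrected training cell looks like with a hard-coded path instead of the variable; "/content/your-dataset" is a placeholder for whatever folder the Roboflow download actually created:

!yolo task=detect mode=train model=yolov8m.pt data="/content/your-dataset/data.yaml" epochs=30 imgsz=640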
Bro, please consider showing only one window of yourself, because a lot of information is getting hidden.
You are always doing the best, congratulations!!
Thanks a lot!
Thank you so much, your video was short and simple and it helped me out a lot.
Glad it helped!
Interesting! Can you make a tutorial on trajectory prediction with OpenCV and/or other computer vision libraries?
Hi, you mention around 14:26 that you exported or downloaded the model so that you could use it with the webcam, but you don't describe that process at all. How do you get the .pt file with your custom-trained model from Google Colab?
also still trying to figure out what happened there... solution?
I have the same question.
@@sevenplyy I FOUND IT
@@Luca-yt2bg NICE! please tell me
@@Luca-yt2bg how?
This is really amazing. I really love your video. Kindly make a video on stitching multiple videos together and making it live.
It seems it does not like the reflectivity of the std cup at the end. Makes sense, a tricky case really, even for deep-net models. Thanks for the video, very cool nonetheless. I especially like that you show stuff which does not work perfectly on the webcam; that is interesting and genuine.
Yeah, for sure. Also, the std cup was underrepresented in the dataset, though. I could have tuned some confidence score thresholds or added a tracker. It will make some false predictions when moving the camera around and when it runs at 75+ fps.
Hello sir, I followed your tutorial and it worked out great! Do you have a video on how to extract the coordinates of the bounding box so I can determine the centroid of the boxes?
Thanks a lot for watching! Yeah I have a video where we implement it in a custom class
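Until you find that video, here is a rough sketch (assuming the ultralytics package and a placeholder best.pt) of reading the box coordinates and computing the centroids from the results object:

from ultralytics import YOLO

model = YOLO("best.pt")  # placeholder path to your trained weights
results = model.predict("image.jpg")

for box in results[0].boxes:
    x1, y1, x2, y2 = box.xyxy[0].tolist()  # corner coordinates of one detection
    cx, cy = (x1 + x2) / 2, (y1 + y2) / 2  # centroid of the box
    print(cx, cy)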
Great video! Just wondering how to verify the evaluation results (mAP, precision, and recall) in Roboflow against the results in Google Colab?
Thanks for the video! Will you be making a video on training a custom dataset for segmentation on Yolo v8?
I already did! Check out the second latest video
@@NicolaiAI I saw it, thank you! What I mean is using your own dataset containing your own images. Do you maybe have a suitable tool that I can use to draw the bounding polygons for segmentation? I have used LabelImg already, but it obviously is not going to cut it for instance segmentation datasets.
He has already made it!!! last video
@@erikpetrosyan9662 that's my bad, I clearly didn't pay enough attention. Thanks
Thanks for the vid! Can you make a video on how we can test our model on a custom dataset (like a bunch of images on my local PC)?
In the first video I have about YOLOv8, I show how to use the command line for testing the YOLOv8 models. I show it with the webcam as the source, but you can just throw in the path to the images or an image folder instead. Thanks for watching!
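In other words, something along these lines should work from the command line; the model and source paths are placeholders for your own weights and image folder:

yolo task=detect mode=predict model=runs/detect/train/weights/best.pt source=path/to/your/images save=True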
Any idea on how to deploy this on a cloud server? How would you connect the cameras?
Hi. I was actually looking for the deployment process and skipped to that chapter in your video, but it looks like only inference is being shown. Where and how is it being deployed? Using Kubernetes? TF Serving?
Hello, I love your videos and was wondering if you could make a video that covers the training process and data preprocessing in depth (such as data augmentation, hyperparameter tuning, evolve, data visualization, etc.).
Hi thanks a lot for watching! I already have videos going over all those things in my deep learning playlist
Data preprocessing is the most asked-about thing in this field; I experienced this as an intern. I would request that you take the data, normalize it, and convert it to an np array, or do some other preprocessing. Kindly cover it at a granular level.
Your videos are great. I have been trying to count people in full frames using YOLOv8 with no success. What is the code to do so?
Thanks for the good video. I will follow it. Can I get the coordinates of the detected objects in real time? I would like to use them as input for another program.
Thanks for the video. How can I tackle the task of counting objects in a video with a moving camera? Specifically sea animals in a big pool, so I need to scan the whole pool and accumulate the numbers.
Thanks for watching! My tracking course with Yolov8 can exactly be used for that
@@NicolaiAI t
Does this course have support chat or something?
Where can I find the complete code for using the trained model for prediction? It's not completely visible in the tutorial video.
At 5:58, the "!yolo task=detect mode=train model=yolov8m.pt data={dataset.location}/data.yaml epochs=30 imgsz=640" command does not appear in the notebook I copied. Should I add it manually? The 2 commands afterwards (starting with Image(filename)) are also missing from the notebook. If I run the command (!yolo task=detect..) it fails. The error is:
Traceback (most recent call last):
  File "/usr/local/bin/yolo", line 8, in <module>
    sys.exit(entrypoint())
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/cfg/__init__.py", line 423, in entrypoint
    getattr(model, mode)(**overrides)  # default args from model
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/model.py", line 372, in train
    self.trainer = trainer(overrides=overrides, _callbacks=self.callbacks)
  File "/usr/local/lib/python3.10/dist-packages/ultralytics/engine/trainer.py", line 127, in __init__
    raise RuntimeError(emojis(f"Dataset '{clean_url(self.args.data)}' error ❌ {e}")) from e
RuntimeError: Dataset '/content/Loco1-1/data.yaml' error ❌
Dataset '/content/Loco1-1/data.yaml' images not found ⚠, missing path '/content/Loco1-1/Loco1-1/valid/images'
Note dataset download directory is '/content/datasets'. You can update this in '/root/.config/Ultralytics/settings.yaml'
I have the same problem here.
There is a duplicate 'Loco1-1' here: '/content/Loco1-1/Loco1-1/valid/images'. You can delete the 'Loco1-1' from the train and val paths in data.yaml.
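So, concretely, the data.yaml that ships with the download can be edited to something like this (exact folder names depend on your export; this matches the UPDATE comment further down):

train: ../train/images
val: ../valid/images
test: ../test/images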
Hello, were you able to fix it?
@@houdabekkourialami3581 No
Superbly informative! 🔥 May I ask for the Colab link? 🙏
Thanks a lot! All the code is available on my GitHub
Can you provide the Colab notebook?
Thanks for this. However, the last part of the video isn't really "deploying" the model anywhere - it is about using it locally.
Yes, you can deploy models locally.
Thanks a lot. In your code, line 5, how did you export and get the .pt file? I exported from Roboflow as YOLOv8 but I don't get any .pt file.
thanks again :)
You need to train the model as I do in this video. I then show how to download the model.
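For anyone stuck at that step, a small sketch of pulling the trained weights out of Colab; note that the run folder name can vary (train, train2, ...), so check the training output:

from google.colab import files
# the best checkpoint is saved under runs/detect/<run name>/weights/
files.download("/content/runs/detect/train/weights/best.pt")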
Hello! I'm having trouble while trying to fix the epochs; I get an error that there are no images in the dataset, although they do in fact exist in all of the train, validation, and test folders. How can I fix it?
There are some problems with the data.yaml file. Replace the paths with the correct ones in there.
Thank you so much. However, I have another problem. When I run the VS Code script, I do get the results running in the terminal, but the webcam isn't displayed. My webcam is activated and it does show up when I try functions like cv2.imshow().
Never mind, it works fine now. I had to update my environment.
I get errors when getting to the Install YOLOv8 step: ValueError: Invalid 'mode='. Valid modes are ('train', 'val', 'predict', 'export', 'track', 'benchmark').
Awesome videos
Thanks a lot! Really appreciate the support
How did you get the results to save to 'runs/detect/predict/'? I have been struggling with this feat every time.
It should do that automatically if you run the predict command.
you have to add the argument save=True
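For example, either of these should write the annotated results to runs/detect/predict/; the model and source paths are placeholders:

!yolo task=detect mode=predict model=best.pt source=image.jpg save=True

or, from Python:

from ultralytics import YOLO
model = YOLO("best.pt")
model.predict("image.jpg", save=True)  # saved under runs/detect/predict/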
@NicolaiAI Thanks for the video! Can I see the Python code that you used at 14:25?
When I do the predictions this way, my GPU memory fills up quite fast. Is there a way to prevent this?
thank you very much
Thanks for watching! Glad that it could help you.
I have trained the way you do in the video but do not see how to export. Can you help?
Where can I find the Colab scripts used in this YouTube video? Thanks.
Thanks for watching! I think I deleted it when I updated the description. It is now added again: colab.research.google.com/drive/1q0GqWbYjACs9uVTjhsQfVQkfF0hhLHsq?usp=sharing
@@NicolaiAI Thank you.
Can you please do a tutorial video on detecting potholes and cracks using a Raspberry Pi?
May I ask, sir, what code editor are you using?
Hey, can you tell me if there is anything else after the detection import in line 2 of the code?
Hi, what webcam do you use in your projects?
I’m just using a cheap budget webcam
**UPDATE**
!yolo checks
Change the content in data.yaml to:
train: ../train/images
val: ../valid/images
Image(filename=f'/content/runs/detect/train7/confusion_matrix.png', width=600)
you hero!
thanks bruh...
An actual hero
I have some problems with my virtual environment. I installed all the required packages, and when I try to import YOLO from ultralytics, for example from a Jupyter notebook in VS Code, the import gets stuck and looks like it will never finish. Maybe some of you have had a similar problem or know what is wrong with my environment?
Hi, very good work, but how can I convert the output into audio form so that it can be used by a blind person?
Try using the pyttsx3 library.
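A minimal sketch of that idea, speaking the detected class names with pyttsx3 (the best.pt weights and image path are placeholders):

from ultralytics import YOLO
import pyttsx3

model = YOLO("best.pt")      # placeholder: your trained weights
engine = pyttsx3.init()      # offline text-to-speech engine

results = model.predict("image.jpg")
for box in results[0].boxes:
    label = model.names[int(box.cls)]  # class index -> class name
    engine.say(label)
engine.runAndWait()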
I'm constantly getting a "FileNotFoundError" when training. Please, what can I do?
Hey, can you provide a script on how to deploy a YOLOv8 model for an Intel RealSense camera? Thanks in advance! Btw, your tutorials are very helpful. Keep up the work!!
Thanks a lot man! I already have a video where we implement a custom class with the YOLO model and run inference on a webcam; then you will just have to swap the camera.
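For reference, a rough sketch of that setup (not the exact code from the other video) using OpenCV plus the ultralytics package; the capture index 0 is what you would swap for a RealSense or other camera, and best.pt is a placeholder:

import cv2
from ultralytics import YOLO

model = YOLO("best.pt")    # placeholder: your trained weights
cap = cv2.VideoCapture(0)  # 0 = default webcam; swap this for another camera

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.predict(frame, verbose=False)
    annotated = results[0].plot()  # draw detections on the frame
    cv2.imshow("YOLOv8", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()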
My problem is the session stopping in Kaggle or Google Colab, and my dataset is 4005 images. Can you help me please? Because I have my graduation thesis and I don't have much time.
How can I export the model and get the path to it? 14:30
Hi, I was struggling with the Python script to use it, so thank you very much for uploading this tutorial! 😄
But I still have one question: how can we deploy it using Flask or some other framework?
Hi, could you post the link to the Python script for using the trained model for prediction? The complete code is not visible in the video.
How can I tune hyperparameters for YOLOv8?
How do I deploy it to mobile devices using Flutter?
Permission to learn from and download the dataset on Roboflow, sir. Thank you.
Tell me how to deploy a YOLOv8 model on a webcam in Colab.
Sir, if I place an RTSP link instead of webcam source 0, it freezes. Any solution for this, sir? Thank you.
Hello brother, I want the output results in JSON format. Do help me.
If you watch my tracking video, you can see how to extract the results.
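If it helps, one hedged way to dump the detections to JSON without any extra tooling (best.pt and the field names are my own placeholders):

import json
from ultralytics import YOLO

model = YOLO("best.pt")
results = model.predict("image.jpg")

detections = []
for box in results[0].boxes:
    detections.append({
        "class": model.names[int(box.cls)],  # class name
        "confidence": float(box.conf),       # detection confidence
        "bbox_xyxy": box.xyxy[0].tolist(),   # box corners
    })
print(json.dumps(detections, indent=2))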
@@NicolaiAI Thanks for the reply brother
@@joart333 of course!
Hi, where is the code at 14:25?
I am using a GTX 1650, so why does my detection always use the CPU (Intel Core i5 10th gen, 16 GB RAM)? How can I optimize?
Anybody can explain?
I'm getting an error while training the model.
@NicolaiAI Sir, please make a video on how many samples or images of an object are needed for training and then implementing, and on epochs, batch size, and confidence value to achieve better results. #nicolainielsen #nicolaiaI
Thanks for the tip, will put that on my list of next videos to create. :-)
@@NicolaiAI thanks bro keep it up
@muhammadasil9374 thanks bro!👊
For users who are unable to run the training process: be sure to change the data.yaml from the files downloaded in Colab. Go into the file, down to the bottom lines, and change the file directory to ../xxxx/xxxx
Thanks!
Can you show me the steps on how to do this, please?
How is that?