Thank you for uploading this :). I have started researching this and it is such a great intro!
Very cool and interesting video, Piotr... awesome!
Thanks a lot! 🙏
really good stuff 🙂
Thanks a lot! Dzięki ;)
Thank you for the content! I really enjoyed it. Could you please show how to get the mask array of a specific segment (object)?
Hi 👋🏻 It’s Peter from the video. I plan to revisit segmentation, as we plan to support it in Supervision. But I’m not 100% sure when that will happen. But I’ll keep your request in mind.
Dear Peter,
Could you please share a link to the Coral Segmentation 4 dataset used in this tutorial?
@SkalskiP I am stuck on a step, so I need your help.
FileNotFoundError:
Dataset '/content/ice-lolly-2/data.yaml' not found ⚠, missing paths ['/content/ice-lolly-2/ice-lolly-2/valid/images']
The code and the path are the same, but I am unable to get any output:
%cd {HOME}
!yolo task=segment mode=train model=yolov8m-seg.pt data={dataset.location}/data.yaml epochs=25 imgsz=640
The path in my drive:
# /content/ice-lolly-2/data.yaml (the path is the same)
Please guide me.
Open data.yaml
and change
from:
test: ../test/images
train: {datasetName}/train/images
val: {datasetName}/valid/images
to:
test: ../test/images
train: /content/{datasetName}/train/images
val: /content/{datasetName}/valid/images
@@dserenini Thank you so much buddy love you.
@@dserenini thank you so much, why didn't he specify that in the video?
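For anyone who would rather patch this from code than edit the file by hand, here is a minimal Python sketch of the same fix; it assumes the dataset was downloaded to /content/ice-lolly-2, which is only an example path:
import yaml

dataset_dir = "/content/ice-lolly-2"  # hypothetical dataset folder, use your own
yaml_path = f"{dataset_dir}/data.yaml"

with open(yaml_path) as f:
    data = yaml.safe_load(f)

# point train/val at absolute paths instead of the relative ones in the export
data["train"] = f"{dataset_dir}/train/images"
data["val"] = f"{dataset_dir}/valid/images"

with open(yaml_path, "w") as f:
    yaml.safe_dump(data, f)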
Thank you, but I would like to know if there is a method that can help count objects in images.
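A rough sketch of how one might count detections per image with the ultralytics Python API; the weights file name best.pt is just an example:
from ultralytics import YOLO

model = YOLO("best.pt")          # your trained weights (example name)
results = model("image.jpg")     # run inference on a single image

# each result holds one Boxes object; its length is the number of detected objects
print("objects detected:", len(results[0].boxes))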
Thanks for the video. How can I display only the mask and confidence in the inference output, without the bounding box?
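One way is to skip the built-in plotting and draw from the raw prediction with OpenCV. A minimal sketch, assuming the ultralytics Python API and example file names; note the masks may come back at the model's input resolution, so the resize is approximate:
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("yolov8m-seg.pt")               # or your own trained weights
image = cv2.imread("image.jpg")
result = model(image)[0]

if result.masks is not None:
    masks = result.masks.data.cpu().numpy()  # (num_instances, H, W), values 0/1
    confs = result.boxes.conf.cpu().numpy()
    for mask, conf in zip(masks, confs):
        mask = cv2.resize(mask, (image.shape[1], image.shape[0]))
        overlay = np.zeros_like(image)
        overlay[mask > 0.5] = (0, 255, 0)    # one arbitrary colour for every instance here
        image = cv2.addWeighted(image, 1.0, overlay, 0.4, 0)
        ys, xs = np.where(mask > 0.5)
        cv2.putText(image, f"{conf:.2f}", (int(xs.min()), int(ys.min())),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

cv2.imwrite("masks_only.jpg", image)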
Any idea about this YOLOv8 error?
model.export(format='saved_model', imgsz=640)
TensorFlow SavedModel: export failure ❌ 35.2s: SavedModel file does not exist at: train\weights\last_saved_model\{saved_model.pbtxt|saved_model.pb}
The same thing is happening with TFLite.
Hey, I wanted to know whether I can use the YOLOv8 model together with the list of objects it can detect (stored in a file, say objects.txt) for custom instance segmentation, instead of fine-tuning the model on a custom dataset?
How do you integrate this into an Android project, for example food object detection + instance segmentation on an image, particularly when working with an ImageView component?
Hey! That's really great. I have a question: if I want to extract the segmented masks from the predictions, is there any way to do it?
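A minimal sketch of pulling the masks out of a prediction with the ultralytics Python API; the weights path below is only an example:
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # example path to trained weights
result = model("image.jpg")[0]

if result.masks is not None:
    binary_masks = result.masks.data.cpu().numpy()  # (num_instances, H, W) arrays of 0/1
    polygons = result.masks.xy                      # per-instance polygon coordinates in pixels
    classes = result.boxes.cls.cpu().numpy()        # class index for each instance
    print(binary_masks.shape, len(polygons), classes)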
Thank you for the content. I love it
Hi. After following the instance segmentation tutorial with YOLOv8, I found that the train mode output contained one confusion matrix. My question is: does the CM belong to the box or the mask? Thank you
Very helpful video. Thank you for your work.
I am just wondering: after testing, you will have a set of images containing bounding boxes and instance segmentations.
Can we turn those predictions into binary masks? From there we could compare the masks from the model against our ground truth.
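One common way to make that comparison is an IoU between two binary masks. A small sketch, assuming you already have the predicted mask and the ground-truth mask as same-sized NumPy arrays (the names are hypothetical):
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    # both masks are 0/1 (or boolean) arrays of the same H x W shape
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return intersection / union if union > 0 else 0.0

# pred_mask could come from result.masks.data (resized to the image size),
# gt_mask from your own annotation export.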
Thank you for the content!! You saved me ;)
Is there anything I can do if the training stops, so that I don't have to start again?
Good work by you, as always. It would be more helpful if you could also add Python code, as this only covers the YOLOv8 CLI commands.
Hi it is Peter from the video. So showing how to process YOLOv8 Instance Segmentation model output in Python?
@@SkalskiP yes.
@@ganeshjoshi4426 cool idea 💡 I’m not sure we will do it on the Roboflow channel, but something like that could for sure happen on my private channel or on Twitch
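In the meantime, a minimal sketch of the same training and prediction steps through the ultralytics Python API instead of the CLI; the dataset and image paths are only examples mirroring the notebook:
from ultralytics import YOLO

# roughly equivalent to: yolo task=segment mode=train model=yolov8m-seg.pt data=.../data.yaml epochs=25 imgsz=640
model = YOLO("yolov8m-seg.pt")
model.train(data="/content/your-dataset/data.yaml", epochs=25, imgsz=640)

# roughly equivalent to: yolo task=segment mode=predict ... save=True
results = model("/content/sample.jpg", save=True)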
Is it really instance segmentation, since all instances of the same class are assigned the SAME COLOUR mask? A same-colour mask suggests semantic, not instance, segmentation.
Hi, did you get an answer? I also want to work on the instance segmentation model.
Hey, it is not instance segmentation, it's semantic segmentation. All objects of the same class should have unique IDs. Please update.
Can you please explain how to check the accuracy of the model I trained using YOLOv8?
Also explain the precision-recall curve, F1 score, and confusion matrix. I need this for my project immediately, within 2 days, and I can't find any help regarding that.
How can I determine which pixel belongs to which class? I tried looking in the label logs, but I'm not sure how to decode that into a map from pixel coordinates to the identified class.
Could you create a thread here: github.com/roboflow/notebooks/discussions/categories/q-a I’ll try to help you out.
@@Roboflow I did. You should also include the response in the docs, as this would be very useful information.
@@semperzero you mean the YOLOv8 docs? I’m only here to show you how to use the model :) It’s not ours. I guess you’d need to reach out to the Ultralytics team :)
@@Roboflow Okay. Thought you guys were working together or something. Waiting for your response on GitHub, and thanks a lot.
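For reference, a rough sketch of building a per-pixel class map from a prediction with the ultralytics Python API (-1 marks background pixels; weights path and image name are examples):
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("best.pt")                       # example path to trained segmentation weights
image = cv2.imread("image.jpg")
result = model(image)[0]

# -1 = background, otherwise the class index of the instance covering that pixel
class_map = np.full(image.shape[:2], -1, dtype=np.int32)

if result.masks is not None:
    masks = result.masks.data.cpu().numpy()
    classes = result.boxes.cls.cpu().numpy().astype(int)
    for mask, cls in zip(masks, classes):
        mask = cv2.resize(mask, (image.shape[1], image.shape[0]))  # masks may be at model resolution
        class_map[mask > 0.5] = cls

print(np.unique(class_map))                   # which classes appear in the image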
Hello, do I need the object detection weights for the rectangles and the segmentation weights for the segmented area inside the rectangles, or is a segmentation custom dataset superior to an object detection dataset, with the rectangle also appearing? So do I have to label my custom dataset in the future only for segmentation? Is the object detection dataset obsolete?
Hi 👋🏻 If you do segmentation you get object detection for free.
@@Roboflow ohhh :) yes, that's sad because I already finished my detection labeling.... and now I have to do the segmentation :) oh noooooo! ok.....
@@MadeYourVideo oooh :/ I’m sorry to hear that.
@@Roboflow maybe the recognition algorithms work better than the segmentation algorithms? Is there a possibility of combining them?
Hi, I'm kinda new here and am trying to use YOLOv8 as well. May I know what platform he is using, and why he uses that platform instead of VS Code or other code editors? Thank you!
Google Colab. To train models like this you need a GPU. On Google Colab you can get access to GPU-accelerated machines for free. It's a good place to start your AI journey.
Can you make a video on person re-identification?
What is the use case of instance segmentation? When is it used rather than object detection?
Great question. You would pick segmentation over regular detection when precision is key. The most typical use cases would be processing medical images or passing detections to a robotic arm 🦾. Every millimeter counts.
Is it possible to use the YOLOv8 seg model only for specific classes? (e.g. only segment people in the image)
Thank you for the video!
Hi, yes it is possible :) Just use the classes argument. Take a look here: github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml#L65
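For example, to keep only the person class (index 0 in the COCO-pretrained model), something along these lines should work:
from ultralytics import YOLO

# CLI equivalent: yolo task=segment mode=predict model=yolov8m-seg.pt source=image.jpg classes=0
model = YOLO("yolov8m-seg.pt")
results = model("image.jpg", classes=[0])  # 0 = person in the COCO class list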
Thank you for the content, but I have a question. Is it possible to combine the YOLO-NAS model with the segmentation part of YOLOv8-seg into a YOLO-NAS-seg model?
Thank you.
Unfortunately nope. But you can combo YOLO NAS and Segment Anything Model ;)
Please clarify something for me.
There is a way to train with the Roboflow web interface (where there are some paid and free plans), which means that I actually train without using my local GPU.
On the other hand, there is a way to train via the Roboflow notebook, as presented in the video.
What I do not understand is: when I train via the notebook, is it the same as in the web interface? Particularly:
does it also train on Roboflow's GPU (and not mine), and am I being charged for this training? (I mean, if my plan includes 10 trainings, will this training be taken from that very pool?)
When you train in the notebook on Google Colab, you are using Google's GPUs. Colab allows you to use an NVIDIA T4 for free, with some limitations.
Please, do I understand the following correctly:
1) I have purchased a paid Roboflow account (for example, the one that includes 10 trainings per month). Do I understand correctly that I can use such trainings ONLY via the Roboflow web interface?
2) If I train via the notebook, I can use both my local GPU and the Google Colab GPU, right?
3) If I train via the web interface, I see it is somewhat limited in how detailed the results are. For example, I trained YOLOv8 instance segmentation and could not find any way to get AP per class or a confusion matrix. Do I understand correctly that such information will only be available when training via the notebook?
Great video. Is it possible to determine, for two overlapping objects, which one is behind the other?
Thank you for the video, but can this YOLOv8 segmentation model be exported to TF Lite? And if so, how can it be tested or deployed on Android?
Did you try to call model.export(format="tflite") on the model that you trained? That should be the way to export TF Lite, at least according to the documentation.
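A minimal sketch of that export through the Python API, assuming a trained weights file (the path is an example):
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # example path to trained weights
model.export(format="tflite", imgsz=640)            # should produce a .tflite file next to the weights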
I get this error:
Sizes of tensors must match except in dimension 1. Expected size 134 but got size 0 for tensor number 1 in the list.
I tried image sizes like 1072, 1088, and 1024, but that did not fix it.
Can I get some help?
I resized the images with Roboflow.
I’m happy to help. Could you create an issue here: github.com/roboflow/notebooks/issues? Describe your problem and provide a link to your dataset and your version of the Google Colab notebook.
@@Roboflow I've created the new issue.
@@aminmemar416 Thanks a lot. Could you provide a link to your dataset?
Tell us, what is your name now? 😅
Thanks for sharing!!! One thing that I don't like about Roboflow is that it has too many steps to try out something new. I know in a big project you need good organisation, but sometimes you just need to try stuff out and only get organised afterwards, if it works as expected.
hahaha, my name is a corporate secret :)
That's really cool feedback; one of my responsibilities is to make that process as smooth as possible. Any examples that you could share?
@@SkalskiP Hi Peter . Thanks for the tutorials :)
@@cappittall My pleasure!
Amazing as always
but can we upload this instance segmentation model to Roboflow, like with object detection?
Hi it is Peter from the video! Not yet... But we are working on it! Stay tuned. I plan to produce much more content around deployment functionalities.
Thanks for sharing. I need the ID numbers as well. How will I get those?
Hi! I tried the notebook used in this tutorial on a custom dataset, but whenever I start training I get a runtime error: RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 834 but got size 0 for tensor number 1 in the list.
I did as you showed in the video.
Could you create an issue here: github.com/roboflow/notebooks/issues? Make sure to provide as many details as possible. I’ll try to help you.
Have you found a solution? Because I have a similar error.
@@filippoossuzio1346 you should train on an instance segmentation dataset, not an object detection dataset
@@zy.r.4323 It is an instance segmentation dataset. I labelled it using Roboflow, and from the start I set instance segmentation as the task.
@filippoossuzio1346 I only have an object detection dataset in Roboflow. What can I do now? Should I start from scratch (annotating for instance segmentation in Roboflow), or can we convert this dataset?
Nice video, thanks for the content. Would it be possible to get the coordinates of the segmented instance and use them to calculate an area? (assuming I could estimate the length/height of the real object)
Hi 👋Yes. I'm actually thinking about making a video about it. Would you like to see that?
@@SkalskiP Hi! Yes, please :)
@@SkalskiP yes man plz
What's stopping you?
I was trying the same, and the problem now is that my results aren't getting saved in runs/segment/.
Can you help me out with this?
Hi it's Peter from the video. 👋Try adding `save=True` to the command ;)
@@SkalskiP Fantastic, worked like a charm. Thanks man
Also, while using the YOLOv8 instance segmentation code, after the line of code below:
%cd {HOME}
!yolo task=segment mode=train model=yolov8x-seg.pt data={dataset.location}/data.yaml epochs=100 imgsz=640
I am getting an error: the datasets and runs/segment/ directories are created but are not detected.
FileNotFoundError
Also, another question:
How can we get the bounding box coordinates and convert them from tensors to NumPy?
I want to find the area covered by the masks or the bounding boxes.
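A rough sketch of converting the boxes to NumPy and computing pixel areas for both boxes and masks, assuming the ultralytics Python API (the weights path is an example; mask pixel counts may be at the model's input resolution):
import numpy as np
from ultralytics import YOLO

model = YOLO("runs/segment/train/weights/best.pt")  # example path to trained weights
result = model("image.jpg")[0]

# bounding boxes: tensor -> numpy, one (x1, y1, x2, y2) row per detection
boxes = result.boxes.xyxy.cpu().numpy()
box_areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])

# mask areas as pixel counts
if result.masks is not None:
    mask_areas = result.masks.data.cpu().numpy().sum(axis=(1, 2))
    print(box_areas, mask_areas)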
@SkalskiP
Hi, do you have a model supporting both segmentation and pose?
YOLOv7 and YOLOv8 both support pose estimation and instance segmentation.
@@Roboflow I mean, do we have to use 2 models like yolov8l-pose and yolov8l-seg one after another, or can we somehow use/train a single model with a single config file/dataset?
Thanks
@@darkjudic ah, so you would like to get pose and segmentation results from a single model at once? Sorry... no model comes to my mind.
Is there any way to try and test our trained model on a video file?
You should be able to pass that video file as a source in the CLI command.
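Something along these lines, with the trained weights and video file paths as examples:
%cd {HOME}
!yolo task=segment mode=predict model={HOME}/runs/segment/train/weights/best.pt source=path/to/video.mp4 save=True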
Can we segment only people using the pre-trained model?
Yes, we can. You can filter detections by class using the classes argument: github.com/ultralytics/ultralytics/blob/main/ultralytics/yolo/cfg/default.yaml#L65
How do I perform classification on a custom dataset in YOLOv8?
We have an example notebook for you to start with: colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/train-yolov8-classification-on-custom-dataset.ipynb
@@Roboflow I see this notebook, but whenever I try to execute the classification 'training' part, I get an error with data={dataset.location}. I don't understand what 'dataset.location' means!
But in the detection and segmentation 'training' parts I use data=data.yaml and this works fine.
Please solve this issue, I will be grateful to you.
@@sayedhasan5997 Do you get that error with our notebook, without any custom changes?
@@Roboflow I get that error in my notebook. I follow your notebook code but run it on my own data.
In the 'custom training' option I don't understand "data=dataset.location"; what does it mean for classification?
In the detection and segmentation 'custom training' I use data='my own yaml file', but if I use that for the classification 'custom training' I get that error.
@@sayedhasan5997 I’ll try to take a look. But it would be awesome if you could create a new issue here: github.com/roboflow/notebooks/issues
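For context, dataset.location in those notebooks is simply the local folder path returned when the dataset is downloaded with the Roboflow Python package, roughly like this (workspace/project names and the version number are placeholders):
from roboflow import Roboflow

rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("your-project")
dataset = project.version(1).download("yolov8")

print(dataset.location)  # the local folder that data={dataset.location} points to in the training command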
Hi, your API key was visible at 7:27 =/
Great catch, thanks for noticing. As a note for others, when the API key is visible in a video or example that is shared with other people, click into your Workspace Settings -> "Roboflow API" -> "Revoke API Key" and finally -> "Generate New API Key"
@@Roboflow Hope not too many people noticed / used it :)
Hi, it's Peter from the video :) Thank you very much! I already regenerated the key and blurred the fragment where it was visible.
That's what happens when you cut the video at 3-4 AM :)
I love you man
You are great
Great!
This is semantic segmentation, not instance segmentation.
❤