This is by far the best tool I’ve used. It’s much easier and more flexible than other online options. The interface is intuitive, and integrates smoothly with my YOLOv8 training workflows. Highly recommend for anyone in computer vision! A few very minor improvements: (1) add automated labelling (e.g. DINO+SAM, or a custom YOLO that was pretrained on the dataset), (2) add a Save As... option for the projects, (3) editing the polygon of SAM-labelled instances (for minor corrections).
Thank you very much.
1. Right now we have SAM semi-auto labeling. I intend to add others; thanks for your suggestions.
2. Already added the Save As option; I need to push the updated code today or tomorrow.
3. Polygons can be edited; just double click on an annotation to get into edit mode. I have also implemented a merge annotation feature, which can be useful when SAM gets it wrong and you need to manually add part of the object and merge the annotations.
Excellent tool. I am really surprised how well this works. I know there are lots of people without much coding experience who would benefit from your tool.
I think it would also be very beneficial for a lot of people like me who also want to build GUI applications like yours. Could you make step-by-step tutorials on how to package the application and push it to PyPI, and a bit about making the GUI application?
Thank you for your great tool.
Using your tool for my first image segmentation project. It is working great and just what I needed. I'm using SAM to annotate many small objects. Sometimes, it's difficult to select bigger objects, but when I switch between SAM models, it works most of the time.
Look how coolly this guy dropped one of the best annotation tools 🔥. #legend
This is wonderful. This video has helped a lot! thank you!!
I DIDN'T FIND A FREEWARE WHICH WAS USER FRIENDLY WITH A GOOD UI. I WAS PLANNING TO DESIGN ONE EXACTLY LIKE THIS AND GIVE IT FREE TO THE COMMUNITY IN A YEAR OR SO.
YOU JUST DID IT
I have been using the tool since yesterday and it is awesome, thank you! Really better than other open source tools.
I have a suggestion: like the merge tool, I would enjoy having a minus operation over areas; it could be really helpful.
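For anyone curious how such a "minus" operation could work alongside the existing merge: a hypothetical sketch using shapely (a library the project appears to ship; the coordinates here are toy values, not the tool's actual API):

```python
from shapely.geometry import Polygon

# Two overlapping annotation outlines (toy coordinates).
a = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])  # 4x4 square, area 16
b = Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])  # 4x4 square, area 16, overlap 4

merged = a.union(b)           # what a "merge" feature does
subtracted = a.difference(b)  # the requested "minus" operation

print(merged.area)      # 28.0 (16 + 16 - 4 overlap)
print(subtracted.area)  # 12.0 (16 - 4 overlap)
```

The exterior coordinates of the result (`subtracted.exterior.coords`) could then be written back as the edited annotation polygon.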
Hi Sreeni, such a great tool! I’ll test it myself soon because I need to tag a bunch of images for a college project and I don’t have any team to rely on. I was thinking about using SAM or some similar model to help me with bounding boxes, but you’ve just made it even easier for all of us, thanks a lot!
By the way, the only issue I can see in your demo is with bounding box editing: once you create a new annotation, the software treats it like a polygon, as you showed, so if one needs to make micro adjustments to width/height, it’s not a bounding box anymore, right? I think for that use case, it would be far more useful to keep editing it as a rectangle, without deforming it.
Hope it’s helpful feedback for you; not intending at all to minimize or criticize the humongous work you’ve made!! 🙏🏻
Feedback: use a desaturated whitish-green background and a darkened green font color. It's a common trick to enhance readability.
Add a confirmation when deleting classes.
Allow the user to choose a YOLO model to assist with labelling, in the same way as SAM.
Otherwise interesting stuff :)
Thanks for your feedback. When you were referring to the darkened green font color, I assume you mean for the slices. I fixed that part and also added a confirmation for deleting a class. It used to be there, but somehow that functionality was lost. I will release the updated version after a few tests.
Using an alternate, customized model for annotation assist is on my wish list. I personally like Mask R-CNN (Detectron2).
This tool looks great. Thanks for sharing!
Thanks for watching! I hope you will find it to be useful.
Great piece of work! Thanks for sharing this!
It's just a sick job. Thanks for the time we're going to save!!! Thank you!!!
This is an awesome job. I am going to use it right away.
Have fun! :)
Bless your wonderful work Sir, Thank you kindly.
You are very welcome
Thanks for sharing this tool.
You're welcome!
Very nice! Thanks for the release and the video.
The auto segmentation is a great tool. Auto labeling using one's own object detection model would be a great addition.
Have you considered supporting ONNX and OpenVINO? They both provide an increase in inference speed over the PyTorch model.
Auto-labeling using one's own trained model is something on my wish list. In fact, I had it for a couple of versions and had to remove it as it wasn't working well on Linux or Mac. Thanks for the suggestion; it reaffirms my wishes :)
I am relying on Ultralytics for SAM, which uses PyTorch, hence the need for it.
This is great tool. Thank you
You're welcome!
Thank you, you always help with my projects, sir. I have tried your project, and it's great. Here’s my input:
1. Add an edit annotation feature. When using SAM-assisted annotation, sometimes the annotated area is not quite accurate, so a function for editing annotations is needed.
2. Add CTRL+Z for undo.
3. Add the ability to hold the mouse wheel to drag the image.
🫡
Thank you.
1. Annotations can be edited; just double click on an annotation to edit it. I am also in the process of adding a merge annotation feature, so you can manually draw around any mistakes by SAM and merge both annotations.
2. Noted.
3. You can zoom and pan the image using the mouse: just hold the Ctrl key down and use the wheel to zoom in and out, or click and move the mouse to pan.
@@DigitalSreeni Okay, thank you for the explanation on point 1. Is it possible for us to adjust the background opacity?
AMAZING work! I have a question about the interactive polygons/rectangles: which library did you use to get the polygons and the anchor points? Was it pure PyQt or a specific library for shapes? I was digging in your project and saw the shapely library, but I'm not sure that's the one. I'm really struggling to get these interactive shapes in my project using PyQt native tools. Thanks a lot!
Thank you so much for sharing. Could you show how to install SAM? The SAM 2 options, such as SAM 2 Tiny, do not work on my desktop.
This is sooooooo coooollllllll 😍🤩😍
Thanks
Awesome tool! Have you considered adding "auto" bbox detection using a model such as Grounding DINO?
Excellent tool. Just one question: why do TIFF files turn black and white when loaded?
Thank you for this tutorial!
I am new to this topic and am wondering why my images are completely black when I use the ‘export labelled images’ function. Only when I open them with ImageJ do I see the mask. I only need black and white images for my application so that the mask can be recognised. What is the best export format to achieve my desired result with binary masks?
Thanks!
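A likely explanation for the "black" exports (assuming the tool writes class-index masks, which is common): each pixel stores a small label value such as 0 or 1, and both look black in an ordinary 8-bit viewer, while ImageJ auto-stretches the contrast so the mask becomes visible. A minimal NumPy sketch for converting such a mask to a true black-and-white binary image:

```python
import numpy as np

# Toy class-index mask: 0 = background, 1 = labelled object.
# Viewed directly, both 0 and 1 are nearly black in 8-bit grayscale.
mask = np.array([[0, 0, 1],
                 [0, 1, 1],
                 [1, 1, 1]], dtype=np.uint8)

# Map any labelled pixel to white (255) to get a binary mask.
binary = np.where(mask > 0, 255, 0).astype(np.uint8)
print(binary.max(), binary.min())  # 255 0
```

The `binary` array can then be saved with any image library (e.g. Pillow's `Image.fromarray(binary).save("mask.png")`).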
With SAM 2 Small, dragging a marquee over my particular class, cracks in concrete, wouldn't work; like lightning bolts, they often run diagonally and branch. Might some added code let me use the polygon tool for this? Beautiful work, BTW!
Thank you. Saved for later. Could you please make a video on YOLO11?
I can create a SAM annotation but I cannot save it. Any tips? Is this a malfunction on my side? And if I want to annotate more than, let's say, one cell at a time with SAM, is this supported? So far I could only annotate one single cell each time. Thank you for this tool! You have developed a lot from when you started!! Edit: I had to press Enter to save the annotation with SAM.
Can you add a multi-output / multi-label / multi-class classification option as well?
Hello Sreeni, could you teach me how to install SAM in DigitalSreeni?
Hey, how can I augment images with YOLOv8 annotations so I don't need to annotate the augmented images?
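One common approach is to transform the labels together with the image rather than re-annotating. YOLO-format boxes are normalised (`class x_center y_center width height`), so simple geometric augmentations have closed-form label updates. A hedged pure-Python sketch for a horizontal flip (the function name is illustrative, not part of any library):

```python
def flip_yolo_labels_horizontal(lines):
    """Mirror YOLO-format label lines for a horizontally flipped image.

    Each line is "class x_center y_center width height", with all values
    normalised to [0, 1]; only x_center changes: x -> 1 - x.
    """
    flipped = []
    for line in lines:
        cls, x, y, w, h = line.split()
        x = 1.0 - float(x)
        flipped.append(f"{cls} {x:.6f} {y} {w} {h}")
    return flipped

labels = ["0 0.250000 0.40 0.10 0.20"]
print(flip_yolo_labels_horizontal(labels))  # ['0 0.750000 0.40 0.10 0.20']
```

For rotations, crops, and photometric transforms, augmentation libraries such as Albumentations can update bounding boxes automatically when told the labels are in YOLO format.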
Great tool. Can I use this with .bmp images?
Well, I tried it and it worked perfectly.
Yes, of course
Can we get JSON format annotations from it?
Yes.