As always, top quality content!🎖
Great video. Love the art submissions this week also! I'm sure I can find a few uses for SAM2 after this :D
Why not just use the GroundingDinoSAMSegment node? Seems like a lot less hassle than using Florence. 🤷🏼
He's just using it that way to show you. You need this when you're doing detailed automation: combined with specific prompt words, you can automate things, depending on what you want.
Thanks, I've learned something new again. 🤗
Most Gulf customer care doesn't care about their customers. Been there, man.
You could mention Kijai, who made these nodes… and this workflow ^^
He is linked twice in my Video Info. Did you not see that?
@@OlivioSarikas A link is nice, but a mention is nice too.
Florence 2 has the capability to 'select' items from an image (generating a mask) using the referring_expression_segmentation task... Is segment-anything-2 better at creating masks?
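For context, a minimal sketch of how that task can be called through Hugging Face transformers, following Microsoft's Florence-2 model card (the exact output keys are an assumption worth double-checking):

```python
# Minimal sketch: Florence-2 referring-expression segmentation via transformers.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("input.png").convert("RGB")
task = "<REFERRING_EXPRESSION_SEGMENTATION>"
prompt = task + "the red car"  # the referring expression to segment

inputs = processor(text=prompt, images=image, return_tensors="pt")
with torch.no_grad():
    ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
    )
text = processor.batch_decode(ids, skip_special_tokens=False)[0]
result = processor.post_process_generation(text, task=task, image_size=image.size)
# Expected shape (assumption): {"<REFERRING_EXPRESSION_SEGMENTATION>":
#   {"polygons": [...], "labels": [...]}} -- rasterize the polygons into a mask.
print(result)
```

Florence-2 returns polygons rather than pixel masks, which is one reason people pair its grounding with SAM2: SAM2 takes points or boxes as prompts and tends to produce cleaner pixel-level masks.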
Hey Olivio, it seems like your Discord invite expired. Could you perhaps create a new invite?
Message received ✈✈✈ Thanks for the great video 🤟
Would this work with Kawaii-kolors model?
I am trying to follow along with what you are creating, but I keep hitting a wall trying to figure out where you're getting the nodes you're using, especially "Seg 2 Prompt", "KSampler Prompt" and "Denoise Ksampler". I have installed the bmad nodes as well as the mixlab-nodes and even checked the GitHub documentation of these nodes; for example, on the mixlab-nodes GitHub there is no reference to anything called Denoise Ksampler. I want to pull my hair out; I don't understand why I can't find it. I've spent the last 2 hours trying to figure it out. Why does this always happen to me when I try to follow an image of nodes or a video like yours, where I'd have to pay just to get the workflow? Is it that y'all are using some alternative version of these nodes, or are you just relabeling the node titles to make it even harder for us to recreate it? Someone out there who is doing these things and sees this, please inform me!
He converted the label widget to an input; denoise uses a float32. Also, some nodes don't appear and are left with a '?', but there are grouped nodes in this workflow.
Pay up, totally worth it. Just became a golden supporter.
Could you please share with us which NVIDIA card you have in your computer? Thank you!
I produce many images on Midjourney, but around 10% come with deformities: blurry faces, too many fingers, etc. Is it possible to use Florence 2 and SAM to identify those images and remove them from my files?
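For the blurry ones, probably yes, and even a simple sharpness score gets you part of the way. A rough sketch (the folder names and the threshold are made-up values to tune on your own data):

```python
# Rough sketch: flag blurry images using the variance-of-Laplacian score.
# SHARPNESS_THRESHOLD and the folder names are illustrative assumptions.
import shutil
from pathlib import Path
import cv2

SHARPNESS_THRESHOLD = 100.0  # below this, treat the image as blurry

def sharpness(path: Path) -> float:
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    # Low Laplacian variance means few strong edges, i.e. likely blur.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

src, rejected = Path("midjourney_out"), Path("rejected")
rejected.mkdir(exist_ok=True)
for img in sorted(src.glob("*.png")):
    if sharpness(img) < SHARPNESS_THRESHOLD:
        shutil.move(str(img), str(rejected / img.name))
```

Extra fingers are much harder: Florence-2 detection plus a hand-pose model would get closer, but expect false positives either way.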
I am really confused about the concepts of IPAdapter, ControlNet, and Segment Anything 😢😢😢 Can someone explain them to me, please…
IPAdapter is an image prompt: you give it image(s) and it's used to generate similar images. ControlNet forces the generation to have certain features: you give it an image and ask the generated image to have the same depth, person pose, contours, etc. Segment Anything is for finding separate parts of an image.
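To make the difference concrete, a hedged sketch with diffusers (the checkpoints named below are common public ones; verify they match your diffusers version):

```python
# Sketch: IP-Adapter acts as an "image prompt", ControlNet constrains structure.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
# IP-Adapter: the reference image steers what the result should look like.
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models",
                     weight_name="ip-adapter_sd15.bin")

edges = load_image("canny_edges.png")    # ControlNet input: contours to keep
style_ref = load_image("reference.png")  # IP-Adapter input: look to imitate

image = pipe(
    "a portrait photo",
    image=edges,                 # generation must follow these contours
    ip_adapter_image=style_ref,  # generation should resemble this image
).images[0]
image.save("out.png")
```

Segment Anything sits outside the generation itself: it only produces masks, which you then hand to an inpainting step.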
Can it also add hair? For example, if I have a bald person, can I add an afro to him? Can it do that??? Sorry, newbie here.
Maybe, if you grow the mask by a lot. But if you want to inpaint new things, a mask drawn with a brush seems more useful. In that case, instead of Telea, also use a different node that does a low-quality inpaint in the area, so there is already something there.
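A minimal sketch of the "grow the mask, then pre-fill" idea with OpenCV (kernel size, iterations, and radius are guesses to tune per resolution):

```python
# Sketch: dilate (grow) a mask, then do a cheap Telea pre-fill so the
# diffusion inpaint already has "something there" to refine.
import cv2
import numpy as np

image = cv2.imread("input.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)  # white = area to replace

# Grow the mask so new content (e.g. added hair) has room past the object edge.
kernel = np.ones((25, 25), np.uint8)
grown = cv2.dilate(mask, kernel, iterations=2)

# Low-quality fill as a starting point for the real inpainting pass.
prefilled = cv2.inpaint(image, grown, inpaintRadius=5, flags=cv2.INPAINT_TELEA)
cv2.imwrite("grown_mask.png", grown)
cv2.imwrite("prefilled.png", prefilled)
```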
The "coordinates" in the Seg2Segmentation node are managed as widget, when I try to convert them to input there's no option to do so in the contextual menu and so I can't connect to the Point Editor node.
Did someone had the same issue? Is there a solution? Thanks.
Just a thing to be aware of: at about 1:49 you say you use the package "comfyui-tensorop", but isn't that the name of the conflicting package? I am pretty sure you use the Florence2 package from the same user who also made the KJNodes and the point editor?
Cool video 👍😊
Never annoy a YouTuber... now the airline can go bankrupt 😅🎉
I am working with a quite outdated RTX 2060 SUPER, but I don't see any reason why people are still using 1.5 models. Even my 2060 does quite well with SDXL, and FLUX also works perfectly with it.
Moreover, I'm using a mobile 1050 with 4 GB VRAM and an SDXL-based model in low-VRAM mode. I know, a 768x1152 image in 6 minutes is a little too slow, but it works nevertheless.
Oh no, I hope it wasn't a scammer. Kit Boga did a video about reserving a place at a hotel through a third party who claims they work for the hotel: they claim the place you want is fully booked and try to book you into another place at an increased cost that they pocket. The hotel they claim is booked is not actually booked. Doesn't sound like the same thing, but mentioning it anyway.
No, it's the actual airline. The screenshot is from the support chat on their own homepage.
@@OlivioSarikas Sorry you're going through that. My mate has travelled with them a few times (for work) and always has issues with their customer services. They have a terrible reputation in that department according to him.
Just imagine it was YOU who, by some mistake, had to pay them 500 EUR and not the other way around. I bet they would have called you every 5 minutes...
👋
interesting
They need the money to pad Manchester City's revenue and clean their books for the upcoming FFP case they have going on.
I don't have to pay for this, but please pay for it.
Whenever I see this messy ComfyUI spaghetti workflow GUI I think three things:
FIRST: this can't be the future of AI image generation without a boatload of additional functions for debugging and for automatically arranging the nodes for better navigation.
SECOND: since there is no magic here, no complex parallel processing, not even sophisticated if-then-else logic, and everything runs in a pre-defined sequence, why is it so difficult to visualize this sequence? Comfy workflows are a CLASSIC example of a tool that works best for those who created the workflow themselves, but not for those who are supposed to re-use a workflow that somebody else designed.
And THIRD: Comfy should have made at least a pseudo-code level of the nodes available in the first place (each node is, in the end, a function call with parameters). For many experienced developers this would be a very welcome compromise between wading through Python code directly and permanently zooming in and out of the node view like a maniac, as everybody without a cinema-sized screen must do at the moment.
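To illustrate that pseudo-code level, a purely hypothetical sketch (this is not an existing Comfy feature, and every function name below is invented) of a basic workflow rendered as function calls:

```python
# Hypothetical pseudo-code view of a simple ComfyUI workflow.
# Every function name here is invented for illustration; this is not a real API.
model, clip, vae = load_checkpoint("sd_xl_base_1.0.safetensors")
positive = clip_text_encode(clip, "a portrait of a woman, afro hairstyle")
negative = clip_text_encode(clip, "blurry, deformed")
latent = empty_latent_image(width=1024, height=1024)
latent = ksampler(model, positive, negative, latent,
                  steps=25, cfg=7.0, denoise=1.0)
image = vae_decode(vae, latent)
save_image(image, prefix="output")
```

Read top to bottom, the data flow is obvious in a way the node graph only reveals after a lot of panning.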
It's a developer tool, not an end user tool.
And this discussion is getting old and stale.
@@PeterLunk
You are barking up the wrong tree. I am talking especially from the perspective of developers, or at least users who have some experience with GUI and coding tools. How anybody can misinterpret my comment this way escapes me. Read it again! To quote myself: "For many experienced developers this would be a very welcome compromise..."
Hello Etihad, that's a shame.
This is complicated. A spaghetti mess.
Not sure why we're supposed to care about Etihad or your personal experience with them. Random. What did you have for dinner yesterday? Maybe we'll make it for our dinners.
You need to take this video down, buddy... The Points Editor is experimental and is going to break a lot of people's setups.
Unnecessarily complicated for what it does. Horrible spaghetti workflow which requires a lot of zooming in and out. And on top of that you want people to pay for it! "Any intelligent fool can make things bigger and more complex... It takes a touch of genius --- and a lot of courage to move in the opposite direction."