With all your great videos, ComfyUI is becoming a one stop shop for all things graphic... 🙂 Thanks for another great video.
I always feel inspired and joyful when I watch your videos! Your passion for teaching shines through, and it makes learning so much fun.
Thank you so much ☺️ and thanks for the node ☺️
I've always found creating seamless patterns and textures to be tricky, but your step-by-step guide really simplifies the process.
Thank you.
Thanks ivo ☺️
Thanks, Pixaroma! I had no idea making seamless patterns with Stable Diffusion was this easy. Shoutout to channel legends 🎉
Thank you ☺️
Great tutorial! The Seamless Tiling Node sounds like a game-changer for creating those perfect repeatable patterns. Can’t wait to try it out on some 3D models and textures. Thanks for breaking it down so clearly!
Thank you for giving creative ideas 👍👍👍
Thank you for all your tutorials full of information that help us to better understand this type of applications. Greetings from Maevertigo :P
You are welcome ☺️ greetings 😁
Cool. Thanks for the tips🎉
Amazing tutorial as always! Thank you
you are welcome 🙂
Another gem, so informative as usual!
Glad you enjoyed it☺️
Thank you for sharing this useful work flow.
This video is fantastic
Great, looking forward to creating some patterns of my own
You're the best! Can't wait for a tutorial by you on creating video with Comfy, is it in your plans?
I haven't found a good video model yet; all the free ones have very low resolution and a lot of errors, so they're not really usable. I was hoping the creators of Flux would release a video model. Until then I am using Kling AI for image to video
Thank you dude! This is the one feature I used in Midjourney that I wished was available for free.
Yeah, it's quite useful, hope they make it work with Flux also ☺️
Great tutorial, I am always learning new things and you make it so simple. Can you make one on ReActor Face Swap? It's one that I have trouble getting installed.
I have been avoiding that for two reasons. First, it needs InsightFace, which gives a lot of errors on different systems, and I am a designer and can't help people with all those code errors; it's really sensitive about the dependencies it needs. Second, InsightFace is only for personal use, so no commercial work, and both ReActor and FaceID use it from what I know
This is awesome.
Please can you make a tutorial on conditioning concat, conditioning average and timestamps?
I will look into it, and if I do something similar in the future I can add it to a video. Thanks for the suggestion
Thanks!
Any way to limit the tiling to x axis?
*Nevermind, was easier than I thought lol. Thanks for the tutorial!
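For anyone else wondering about x-axis-only tiling: the idea is easy to sanity-check outside ComfyUI. A minimal numpy sketch (the function names here are my own, just for illustration) that tiles a texture horizontally and checks whether the wrap-around edge jump is no bigger than the jumps already inside the image:

```python
import numpy as np

def tile_x_only(img, reps=3):
    """Repeat the texture horizontally only (no vertical tiling)."""
    return np.concatenate([img] * reps, axis=1)

def wrap_jump_x(img):
    """Compare the wrap-around jump (last column -> first column)
    with the largest jump between neighboring interior columns.
    A texture reads as seamless on the x axis when the wrap jump
    is no larger than the jumps already inside the image."""
    interior = np.abs(np.diff(img, axis=1)).max()
    wrap = np.abs(img[:, 0] - img[:, -1]).max()
    return wrap, interior

# Tiny demo: a cosine is periodic, so it wraps cleanly on x.
h, w = 4, 8
row = np.cos(2 * np.pi * np.arange(w) / w)
img = np.tile(row, (h, 1))
wrap, interior = wrap_jump_x(img)
```

The same check transposed gives the y-axis version.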
Heyo. Do you have an idea how to get more distance between motifs in seamless patterns? I can create great motifs, but it's almost always way too crowded, so the final outcomes are mostly too busy for the eye. I tried prompts like "evenly spread, much distance, much space in between" etc., but they have no effect.
Btw, great channel and very amazing tips. Coming from a pre-AI Photoshop world, stuff like Stable Diffusion shocks me almost every new day. 👍
I don't know, maybe you can take one pattern, crop it, and try again with image to image; maybe you can make something work that way. Not sure, it's hard to control AI
Thank you for sharing. I'm looking forward to the Flux inpaint tutorial, and I'd also like to know how to choose the inpaint masked content like in Forge. Thank you again
I will do an inpaint video, I just need a bit more research to test different options
I have seen a few videos on ComfyUI for low VRAM (6-8 GB). Do you plan on doing a ComfyUI series for low VRAM devices?
This one, for example, works with low VRAM; tested on a 6 GB RTX 2060. Usually for low VRAM you just use smaller models, and sometimes it helps to use a tiled VAE decode/encode. But running Flux and complex workflows will not work too well; it's like having a tiny car with a small engine and wanting to compete in car racing 😁 I usually include an SDXL workflow on Discord
Thank you very much. From the beginning of AI-based text-to-image models, people struggled to figure out how to generate seamless patterns, though it could be done somehow; it was really challenging. Now it is easier with lots of nodes. But the main problem is not generating a tileable texture, it's upscaling it, because upscaling easily breaks the matching edges. For example, if you want to cover a bed sheet, 1024x1024 will never be enough. Either you convert your generation to vector (good if possible, but still challenging because the content may have too many details), or you upscale it at least 4 times. And when you upscale, the tiled VAE decoder, or the normal one, will most likely break the seams, because of the nature of upscaling. I hope this issue gets resolved with some intelligent nodes or a new approach.
I am sure it will get better and better in the future ☺️
@@pixaroma for sure :-) Just a trick for anyone who needs upscaled textures: upscale to the desired size with your technique of choice. In a Photoshop-like application, OFFSET the image on both the X and Y axis by 50%. You will get a (+) shaped cross where the seam connections are slightly visible. Go back to your workspace and inpaint that (+) area with low denoise values like 0.25-0.30 (you don't have to use a prompt). The connection seams will be gone. Thank me later! :-)
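That 50% offset is just a wrap-around roll, which is easy to reproduce in code. A minimal numpy sketch (the function name is my own, for illustration only):

```python
import numpy as np

def offset_half(img):
    """50% wrap-around offset on both axes (like Photoshop's
    Offset filter with 'Wrap Around'): the tile seams move to
    the image center as a (+) shaped cross, ready to be
    inpainted with a low denoise (e.g. 0.25-0.30)."""
    h, w = img.shape[:2]
    return np.roll(img, shift=(h // 2, w // 2), axis=(0, 1))

demo = np.arange(16).reshape(4, 4)
shifted = offset_half(demo)
# For even sizes, offsetting twice restores the original
# alignment, so you can shift back after inpainting the cross.
restored = offset_half(shifted)
```

For a color image of shape (H, W, 3) the same call works unchanged, since the roll only touches the first two axes.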
I tried so many times to join your Discord, but the invite link is always invalid!! You are doing great work!!
Use this link; it's also in the channel's links. First it will ask if you want to leave YouTube, since Discord is an external site, so click "go to site", then accept the invite: discord.com/invite/gggpkVgBf3
Hi!! I just started following you; very nice video tutorials, thank you for them. One question: can you make a workflow for "image to seamless" textures? I've been trying to do it for a while and I have one, but it's very complex and a mess haha. I hope you can make a simpler one. Thanks once more
It kind of works, but it doesn't keep the image similar; it depends on the noise. If I want the edges to be seamless it needs to change the original image, and if it's too similar it will not be seamless
@@pixaroma yes, this one does not work for "texture to seamless". I mean, if you have any non-seamless texture (e.g. a random texture from the internet), first you need to normalize it to get rid of any brightness inconsistencies. In Photoshop you do that with: 1) an "Average Blur", 2) the same texture layered over that on "Linear Light" at 50% opacity, 3) a high-pass filter. Then offset and paint over the joints. So in Comfy, at this point it's an offset node with padding, that padding is fed to an inpainting workflow as a mask, and then filled with a KSampler. I know that doesn't sound very difficult, but it is haha. It would also be nice to have a PBR output with the main maps (normal, roughness, and height); there are a few models that do PBR output... but you know, it's just a fake
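The mask part of that offset-and-inpaint step can be prototyped in a few lines. A minimal numpy sketch, assuming the seams sit in a centered (+) shaped cross after a 50% wrap offset (the helper name is my own, not from any specific node):

```python
import numpy as np

def seam_cross_mask(h, w, band=32):
    """White (+) shaped mask over the seams revealed by a 50%
    wrap offset: one horizontal and one vertical band, each
    `band` pixels wide, crossing at the image center. Feed this
    to the inpainting step as the mask."""
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[h // 2 - band // 2 : h // 2 + band // 2, :] = 255
    mask[:, w // 2 - band // 2 : w // 2 + band // 2] = 255
    return mask

m = seam_cross_mask(512, 512, band=48)
```

A wider band gives the sampler more room to blend, at the cost of changing more of the original texture.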
Great job! Is there a way to convert an existing image to create a seamless texture instead of txt2img?
It's hard to keep it similar and seamless. You can use image to image, but you need a high denoise to blend the edges, and with a high denoise the image will be more different
Any way to do a horizontal or vertical half-drop repeat?
Check minute 9 to repeat only on the x axis
@@pixaroma Thank you, cannot find this info anywhere else. The best!
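For reference, the half-drop layout itself is easy to prototype outside ComfyUI once you have a tileable texture. A minimal numpy sketch, assuming a single 2D tile array (the helper name is my own, for illustration):

```python
import numpy as np

def half_drop(tile, cols=4, rows=2):
    """Classic textile half-drop layout: every other column of
    tiles is shifted vertically by half a tile height, with
    wrap-around. Transpose before and after for a horizontal
    (brick) half-drop instead."""
    h = tile.shape[0]
    columns = []
    for c in range(cols):
        col = np.concatenate([tile] * rows, axis=0)
        if c % 2 == 1:  # odd columns drop by half a tile
            col = np.roll(col, h // 2, axis=0)
        columns.append(col)
    return np.concatenate(columns, axis=1)

demo = np.arange(16).reshape(4, 4)
out = half_drop(demo, cols=2, rows=2)
```

For the drop to look seamless, the tile's top and bottom edges still need to wrap, since the shifted columns join mid-tile.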
Which Juggernaut are you using for this? X? or doing an X to v9 pipeline? tried XL?
Version 10, the X, but it should work with any
Is it possible to make this with image to image?
I tried, but because normal images are not repeatable it doesn't change only the edges; it changes the whole image in order to make it repeatable. So with a higher denoise the image is more seamless but more different; with a low denoise it will be more similar, but the part where it joins will be more visible
@@pixaroma And what about just masking a certain area, so that we can use a certain character in the seamless pattern?
I didn't try. I usually just use Illustrator or Photoshop to create seamless patterns from existing designs, and I use AI to generate them from scratch
Can we use this node for Flux? Any alternative?
It doesn't work with Flux, and I haven't found an alternative yet
ITools Grid Filler doesn't work with batch generation from file... or am I doing something wrong? :((( It saves only one image after this node. If the Save Image node comes after Circular VAE Decode, it's OK.
Maybe it's a bug, but the node creator is not on Discord anymore to ask, so I'm not sure
I get somewhat blurry and imperfect edges; the general shape matches. I'm using Flux. Is there a reason you can think of? Thanks!
The node does not work with Flux unfortunately, only with SDXL. Hope they update the node in the future to support Flux
@@pixaroma Thank you, I hope so too, I'll focus on other options.
Life saver, thanks :))
Can you do a tutorial where you show how to change a product background while keeping the product's details the same? I mean preserve details, for example with Segment Anything + ControlNet or something like that. Bro, that would definitely change my life if you do that, and I think many people are interested in it. Thanks in advance, +1 sub anyway ))
I am working more with ComfyUI these days, so I will probably do one for that.
@@pixaroma there are many workflows on YouTube and I have seen all of them, but when they change the background they also change the main product, and that's a problem for product photographers. Good luck bro, I appreciate it, thank you
@@QuickQuizQQ my old approach was this, I have to see if i can find something new ruclips.net/video/6SWXNpKaxys/видео.html
@@pixaroma that's really good at preserving details, but is it possible to get a workflow for ComfyUI? Thank you bro for your feedback
@@QuickQuizQQ I will see what I can do in future episodes; the next episode is about LTX-Video
What if I want to apply these patterns and tileable textures to a wall or floor? How can I do it?
It's not that easy. I saw someone training a LoRA with images of floors and then inpainting the floor area of the image with that LoRA; not perfect, but it might work in a few tries
@@pixaroma Do you have any workflow? Or any link for reference?
I don't have one; I only inpainted some faces with a LoRA, at the end of episode 24
@@pixaroma Okay Thanks