A great explainer, I was just trying to figure out upscaling from people’s shared workflows. Setting up my own teaches me better. This is a great series overall, thanks Scott. It arrived just as I got into using image AIs locally. I’m the proud owner of a very bored-looking robot shopping at Walgreens. I really hope that becomes the “teapot” of image AI. 😀
6:04 Using the image's width & height ×4 for the desired size gives the same value as typing 4 into upscale_factor. A Primitive node set to 4 would be identical, so why add so many nodes for the same value?
Heyho, I recently found your video series and they are awesome for beginners! Even though things tend to get dated really quickly when it comes to ML utilities, which is one reason for my question: what is the difference, or even the advantage, of this method compared to the iterative upscaler that you showed a few videos ago? Second question: what's the ComfyUI setting/plugin that shows where a node is "coming from"? You have those outlines with the title of the custom node package around your nodes, that's what I mean ^^ Oh, and third question: do you know of a more "comfortable" / less "noodly" way to create regional prompts? I found two promising custom nodes, though the "comfortable" one (Davemane42's Visual Area Conditioning / Latent Composition), which even shows the regions, is now quite dated (~8 months without updates), and the other one (laksjdjf's attention-couple-ComfyUI) needs many more nodes and connections as well as manual calculations for each region in question.
Thanks for all the tuts, I have used all your SDXL vids to understand and create my workflow. I have heard him say his workflows are or will be available; does anyone know where that might be? Thanks
@@sedetweiler I mean generating one specific character in different scenes, different clothes, different places. For example, in the first image James is a scientist doing research in a lab. Now I want to generate the same character in the next image, e.g. James driving a car. It's just an example to explain my point.
Sorry for the noob question, but why do we need the operation nodes and the calculator nodes if Ultimate SD Upscale has the "upscale by" option? I don't get why we need the binary operation nodes and the resolution calculator node to tell it "times 4" if in Ultimate SD Upscale I can just set "upscale by 4". What am I missing?
Hey! Do you know why all these upscalers are in .pth and not in safetensors? Same on Hugging Face. I'm concerned I might be putting my system at risk, since I don't know how to check for malware within pickles.
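For anyone else worried about this: what makes pickles risky is that loading one can execute code referenced by its GLOBAL-family opcodes, and the stdlib `pickletools` module can scan a pickle stream for those without ever loading it. A minimal sketch (the function name is my own, and a real `.pth` is usually a zip archive whose `data.pkl` member you would extract first):

```python
import pickle
import pickletools

def find_global_opcodes(data: bytes) -> list:
    """Scan a pickle stream for opcodes that reference importable callables.

    Loading a pickle can invoke whatever these opcodes point at, which is
    the malware risk; pure weight data (dicts, lists, numbers) needs none
    of them. This only inspects the byte stream, it never unpickles.
    """
    hits = []
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in ("GLOBAL", "STACK_GLOBAL", "INST", "OBJ", "REDUCE"):
            hits.append((pos, opcode.name, arg))
    return hits

# Plain data pickles with no GLOBAL-family opcodes...
assert find_global_opcodes(pickle.dumps({"w": [0.1, 0.2]})) == []
# ...but pickling any callable by reference requires one.
assert find_global_opcodes(pickle.dumps(print)) != []
```

In practice the simpler mitigation, if you load `.pth` files yourself, is PyTorch's `torch.load(path, weights_only=True)`, which refuses arbitrary objects; safetensors avoids the issue entirely.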
Is there a way to save the upscaling node setup and then import just that setup into an existing node flow? Say I have a complex node setup and I want to add the upscale to it without re-creating the nodes. Can I do that?
This actually got answered a few videos on: highlight multiple nodes, right-click away from the highlighted nodes, and they can be saved as a template. Really handy. (Replying to my own comment in case others have the same question.)
@sedetweiler Ah, the same as DIY: just jam it all together and hope it stays up, lol. Got a few saved nodes now, and tons of custom ones (pipes from Impact being my favorite). Playing with the nodes is almost as much fun as the results.
Thank you! This is an excellent video and it works great for me with my 1024 x 1024 images. I tried to upscale an SD 1.5, 512 x 512 input, but it did not work for that: it tried to generate a grid of 8 x 8, which would take all day and does not seem right. Do you know how to set it to upscale a 512 x 512 input image? Maybe it is better to just resize the image. Edit: I did get pretty good results from resizing the image, though I can see the lines at the tile borders in the skin tone.
Is there a good way to get rid of the visible tiling of the upscaled result, apart from rendering it all at once which my GPU is not capable of? It depends on the image, but sometimes it's clearly visible where the borders of the tiles are...
You can try working with the tile mitigation options at the bottom of the node control. You can also try upscaling in steps rather than one giant leap.
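Upscaling in steps, as suggested here, just means splitting the total factor into equal multiplicative steps, i.e. taking the n-th root. A quick sketch (helper name is mine, not from any node):

```python
def stepwise_factors(total_factor: float, steps: int) -> list:
    """Split one big upscale into equal multiplicative passes.

    A 4x upscale in two passes is 2x then 2x, since the per-step
    factor is the steps-th root of the total factor.
    """
    per_step = total_factor ** (1.0 / steps)
    return [per_step] * steps

factors = stepwise_factors(4.0, 2)   # two passes of 2.0x each
assert abs(factors[0] - 2.0) < 1e-9
```

Each smaller pass gives the sampler less room to hallucinate per tile, which is one reason stepped upscales often show fewer seams.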
wooow.. can't believe my custom node 1:05 (Recommended Resolution Calculator) is shown in my favorite tutorial channel, did not expect that!
It's a fantastic node! Thank you for creating it! So many others out there, but with bizarre settings and this one just makes sense to me. Wonderful work!
@@sedetweiler thank you for the kind words. I've now opened a GitHub issue on the ComfyMath repo to let them know there are a lot of other resolutions supported by SDXL, per its paper. Hopefully they will implement them as well.
It's a great custom node for someone like me who's bad at math.
Nice work!
Awesome! Thank you for making this node. Continued success!!
Scott, THANK. YOU. None of the other tutorials I watched for upscaling explains how things actually work in such detail. I really appreciate it. I aim to learn as I go along, and your videos are always super helpful.
Glad it was helpful!
Two things that were really important to me in this video to use the Ultimate SD Upscaler correctly in my workflow:
(1) Use a tile size of 1024 for SDXL. Duh, in retrospect, but I hadn't thought of that.
(2) Use the empty positive & negative prompts.
The first time I used the upscaler, I ended up with horrific 512 tiling results (seams, non-flowing contents per tile, even w/ low denoise) along with extra stuff (little people) showing up. Both (1) and (2) fixed that for me. When I ran into the problem, this was the first place I turned to. Thank you, Scott. 🙌
glad it worked for you! the denoising is probably the main reason you had all of those tiles. You can use prompts, but if your denoise is low you won't see them do much.
An update on the situation will be interesting and welcome actually :)
Brilliantly explained and very educational, thank you! In my case the problem is that I’m almost exclusively working with face-trained LoRAs, and the more an image is upscaled, the more subtle likeness is lost. I tried throwing a face-trained LoRA into the mix here but was unable to make it change anything for the better. Possibly due to a lack of knowledge on my part.
Been doing all your tutorials for Comfy, they are REALLY GOOD, thank you!
Also I LOLed twice here- 'Wow that took forever... well, Learn to paint!' then '384 seconds, find something else to do... like record a video'
lolz🤣
I follow the steps and learn from each video, but tomorrow when I open Comfy again I am blank, and have to go back through your videos to really digest each step.
Best upscale tutorial I have ever seen for ComfyUI. Well done.
After using Forge I gave it a shot, since I was kinda interested. It's pretty neat. Using Forge gave me the advantage of knowing what the settings mean, but it was certainly an adjustment. It was a great vid, easy to understand. Works great, thank you very much.
That's a good explanation!
On my 4090 runs so fast!
Thanks!
I have tried to follow other tutorials online. By far, you are the best one! ♥
Top video ... I have a 3090 Ti, and you are right, it takes quite some time to render the upscaled image, but it is worth waiting for.
You're 100% right at 4:36, I had no idea. That's a handy little tip!
This is awesome... not so much for the upscaler but learning how to use ComfyUI too :D
Thank you
It worked perfectly, thank you for the step by step guide.
I'm really impressed with the results I get! Thank you so much for this tutorial workflow!
You're very welcome!
Thank you for these awesome videos- I now have a control net render set up for my smaller previews- followed by an amazing upscale setup.
Fantastic!
This is terrific, thanks! I'd been trying some other recommended ways to upscale, but they always resulted in weird, unsatisfying results. This works marvelously! I see that it's added some very minor detailing along the way, but without heavily altering the image as it upscales. Color me happy :D Loving these vids, Scott, and can't thank you enough.
Hi Scott, thanks for putting this video out. I just started using ComfyUI, coming from A1111, and videos like this have helped out tremendously. I will be checking out more of your vids. Thank you!
Hmmm - I tried this ... but I'm still seeing a lot of tiling artifacts - no matter which tile elimination scheme I pick. Is there a way to hook this up so that it is used in the initial creation of an image? Some way of using it so that if you generate a smaller image you like, you can reload that generation data and regenerate from scratch at the much higher resolution - to avoid the tiling issues? Or is the generation process just inherently "dirty"?
I appreciate your putting these videos up :) - Thank You :)
5:10 When I drag the upscale_model over, there are no models for me to choose from?
Sometimes Remacri gives oversharpened edges and kills small details; then I find that BSRGAN 2x can actually work quite well for 2-3x upscaling, with denoise at 0.15 - 0.2 and steps at 8 - 16. Especially for detailed face portraits.
Great explanation on the tiling. This is the first workflow I tried, but I was still catching up on which models to use for upscaling, so I wasn't getting good results with the model I had -- I don't remember what it was, but I kept getting tiling seams, or leathery skin/freckling. You mentioned you wouldn't need to worry about seams with this one; where can I find something on the seam settings? I can only guess at what the settings are doing... lol
Thank you Scott, great video as always. For me, upscaling/recreating an existing image with different denoising values, different tools, and so on is more exciting than creating a new image.
I agree! It is always fun to see where you can take an image. Cheers!
Amazing! Now, if you could also explain how to insert Face Detailer into this design?
I must say, I didn't understand anything, but it works.
Beautifully laid out content. Thank you!
Thank you very much for a clear and concise tutorial.
Thank you so much for the step by step! I got great results!
Comparing SDXL upscaling to StableSR quickly here: StableSR "appears" sharper, but SDXL is a bit more "logical". It depends on the image really.
The cool thing about diffusion upscaling is the creation of details. SwinIR works great for very small images, and ESRGAN models have their strengths and weaknesses.
Too many options!! But I'm not complaining.
Looking forward to a StableSR using the SDXL model.
Great point! I love mixing them as well, again depending on the need. Cheers!
Thank you, detailed and explained but concise, FANTASTIC tutorial, you have a new subscriber.
THANK YOU 👏👏👏👏👏👏👏👏👏👏👌👌
really high quality tutorial, logic and clear!
Where do I find the Nearest SDXL Resolution node? I can't locate it in the manager. In fact, I can't seem to find any custom nodes that take an image input and have outputs for height/width.
Name of the node is "ComfyMath", you can search it in the manager.
The tiling is really prominent with this method.. any ideas how to fix it?
There are some tiling settings at the bottom of that control. I would enable it and perhaps set it to handle the edges.
Wow, thank you so much for sharing this. Can you please explain more about upscaler models, maybe in a separate video, or here if it's a short answer? Thank you again.
I don't know if this has changed, or if I did something wrong, but the upscaled photo has a bunch of obvious grids in it, and I have no clue how to fix it
Denoise is set too high.
Thank you, I got that node a few days ago and had no idea how to wire it up, now I do!
Also, 300 seconds is not a long time at all when you have a 4GB GPU...I have patience. 😁
Glad it helped! It's a fantastic solution. Cheers!
Link please 😊
At 5:21, I don't see any models in the Load Upscale Model node...? I went to the site for upscale models, downloaded one into models/checkpoints, and restarted, but nothing shows up.
I see, so you have to download the models and put them in the "upscale models" folder
I don't get the option to convert on any nodes. This seems like an easy fix, but I cannot find anything.
Thank you so much for this tutorial!
Can this upscale by numbers with decimals in them? I want to upscale my images by 1.5, to save time and put less stress on my gpu.
thank you very much for this, much appreciated.
I didn't understand the point of the Nearest SDXL Resolution and Primitive nodes, multiplied and then divided again to get the same 4x at the end.
upscale_by will get the same float 4.0 anyway, or did I get something wrong?
You're doing it separately for the width and the height. It lets you drag and drop any image (not just square images where width = height) and it'll just work. If you use only one int binary for both the width and the height, you'll convert any image that's not square into a square.
@@TheNewmanIII I'm sorry, but it doesn't appear to be the case.
I just tested it without this part, and it works with images in any aspect ratio; it doesn't create squares.
How could a single float represent different ratios for X and Y?
The initial math doesn't seem to make any difference.
You are also assuming I always want 4x, which isn't the case. Sometimes I just 1.5x or 1.7x.
@@sedetweiler No, I'm not assuming you always want to use 4x. You can use any ratio without the math part. Just type the ratio into the upscale_by field.
Yeah, this confuses me too. I just set the ratio I want directly in the ultimate node and it works as expected with any image.
Very interesting, I'd never heard of ComfyUI, but I have been looking for good upscaling apps. I've never used any app that uses nodes, but I'm interested to learn.
Insightful. So it is possible to use tiles with SDXL. With SD1.5 I was able to create very wide images using ControlNet Tile and Area Composition / MultiDiffusion. Can we do the same thing with SDXL? Have a great day ahead.
Yes you can. Give it a whirl!
Hi Scott. Great tutorial and a really nice break down. Would it be possible to get the workflow please?
For some reason, after setting the whole thing up, my workflow has defaulted.
Thanks.
Hi there! Workflows are in the community area here on youtube for channel sponsors. There is quite the pile of them in there now!
Tbh, I didn't get this video the first time I watched it. My first thought was: "The upscaler already says 'upscale by' and you can just plug in the number irrespective of the resolution of the input image. So why do these calculations?" And so I went and read more about the node, 'SDXL Resolution Calculator'. So for any beginner who comes along later and gets confused: it "calculates and automatically sets the recommended initial latent size for SDXL image generation and its upscale factor based on the desired final resolution output. According to the SDXL paper (page 17), it's advised to avoid arbitrary resolutions and stick to the initial resolutions SDXL was trained on. Basically, you type in your desired FINAL resolution, and it gives you: a) the resolution you should use as the initial input resolution, per SDXL's recommendations, and b) how much upscaling is needed to reach that final resolution."
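The calculator's logic can be sketched in a few lines. The bucket list below is a partial set of the training resolutions commonly quoted from the SDXL paper (the real node supports more entries), and the function name is mine, not the node's code:

```python
# A partial set of SDXL training resolutions from the paper's appendix;
# the actual calculator node covers more of them than listed here.
SDXL_BUCKETS = [
    (1024, 1024), (1152, 896), (896, 1152),
    (1216, 832), (832, 1216), (1344, 768),
    (768, 1344), (1536, 640), (640, 1536),
]

def nearest_sdxl(target_w: int, target_h: int):
    """Pick the trained resolution whose aspect ratio is closest to the
    target, plus the factor needed to upscale its width to the target."""
    ratio = target_w / target_h
    base = min(SDXL_BUCKETS, key=lambda wh: abs(wh[0] / wh[1] - ratio))
    return base, target_w / base[0]

base, factor = nearest_sdxl(1600, 1200)   # the 4:3 example from this thread
assert base == (1152, 896)                # generate here, then upscale
```

So for a 1600x1200 target you would generate at 1152x896 (the nearest trained 4:3-ish bucket) and upscale by roughly 1.39, rather than generating at an arbitrary resolution SDXL never saw in training.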
Would it be possible to explain the function of "reverse upscale for 4x/2x" in a bit more detail? I got what they are for in theory, but can't figure out how to set that up properly. Thanks either way ^-^
Sure. For example, you want a final size of 1600x1200 but only have a 4x upscaler you want to use. You can use the reverse upscale 4x link, and it will pass 0.447 to the upscaler node, since the actual SDXL dimensions would be 1152x896. This happens because we are trying to get to a specific size and only have one option.
Thanks for the lesson! I thought it was sensational! I learned a lot today!
I am new to ComfyUI. I am getting an error when I try to install UltimateSDUpscale: "Cmd('git') failed due to: exit code(1)". I would appreciate your help, thanks!
I am going to have to watch this a few more times to get my head around it. I came here trying to figure out how to make high-res wall papers in 16:9 format.
where do you find the upscale models?
I'm unable to download the upscale model you used (the link has been moved, btw). pCloud keeps telling me I'm unable to download because of high traffic (pushing me to get premium), and Google Drive tells me the requested URL was not found... gonna have to download a few different options to try out, but yeah, thought you might want to know.
If the checkpoint model needs a specific CFG, steps, sampler, and scheduler on a KSampler, do the same rules apply to the Ultimate SD Upscale node?
(DreamShaperXL Lightning, for example, has these specific rules.)
Please do a tutorial on doing it with ControlNet Tile. People say it's better, but I have no idea why; they never explain why it's better to combine them both.
Is there a trick to preventing the tiles from appearing in the final image?
The link for upscaler models changed and now I can’t find them.
Well done, sir... good explanations. I'm grateful for this... thx
Thank you so much for all your informative videos. They have helped us a lot.
I have a special request.
Could you please create a workflow that can:
(1) generate an image of a person using a Flux model,
(2) then send that image through correction of all deformities, such as bad hands and bad eyes,
(3) after this, send the image through face enhancement,
(4) then add more detail to the skin and hair, for realistic skin and a more natural-looking human being, not a typical AI-generated image,
(5) and finally, upscale the processed image.
All of the above would be done in a single workflow. We could also do batch processing in it.
Also, we could add functionality for providing multiple images of a single character, including the whole body, to create a consistent character.
And this would be done without a LoRA, just using multiple images instead of a single image.
Thank you.
Does this workflow let you choose 2x as the size even though I might choose a 4x upscale model, like you mentioned? Or do I have to modify it?
Could you make a tutorial about ComfyUI outpainting, or filling out images?
I saw some outpainting tutorials for A1111 but not for ComfyUI.
Thank you.
Yup! Those are coming soon as well. Cheers, and thanks for taking the time to leave a comment!
I get some hallucination when upscaling images. I don't find this upscaler usable. :(
I was the 1000th like!!!
Congrats!
Very interesting and helpful, thank you!
Hi Scott, will you please make a video on OpenPose being applied to a 2D character (sketch), so that an animator could easily animate their own character accordingly? There are a huge number of people waiting for this video. Thank you ....... 🙏
what do I need to change if i want a 2x upscale?
Thanks for the tutorial! Is this still the most up to date and efficient way to upscale loaded images? Thanks
I have bad tiling, what can I do?
Is this method good when you want to upscale a photo of a character to get more detail in the skin that is realistic?
I'm sorry I couldn't find the upscaler and the link didn't work.
Please can somebody tell me how I can download an upscaler for this workflow? Thank you 😁.
Wait, why didn't you just wire a primitive number to the upscale_by input on the upscaler node? It's virtually the same thing without using three more nodes, am I wrong?
Sometimes I might need a 1.5 or 1.7x so this just keeps it safe.
You are right. I said the same thing but he seems to not understand.
I ran this upscaler but it's losing the details.
Does anyone know why the UltimateSDUpscaler keeps returning "import failed"?
I've tried to install it via the Manager and via git clone. Neither works.
I also dragged in a workflow that includes it so ComfyUI would install it as a missing node. Same error: "import failed."
Any clues?
I've looked around to see if anyone's talking about it having an issue, but I haven't found anything so far.
Did you post the question on the developer's git?
@@sedetweiler
No, actually I resolved that by uninstalling ReActor.
Although I'm having the same problem with ipadapter_plus, which I've mentioned to you elsewhere.
I thought I had resolved that too; however, although I've managed to make it install, it's still returning errors.
So once I've undone the tangled knot I've gotten into trying to solve the problem, I'll probably have to post to the developer about that one!
Is there any way to use some kind of regional prompt control with tiling upscales in ComfyUI?
Yup! Working on that one soon!
Hi, thank you for the video. I'm a super noob in AI. I installed it to upscale some images I have. Let me ask: is there any way to do a batch of images instead of one by one? Thank you!
I have a question if anyone can help me. I want to upscale a 1920x1088 image; what size tiles should I put in Ultimate SD Upscale? I tried 512x512 and 1024x1024, but after it's done you can see the tiles in the image, visible like a checkerboard. I also mixed in the ControlNet Tile model but get the same result. :( Can anyone help?
Any idea what these errors might mean?
Error occurred when executing UltimateSDUpscale:
Given groups=1, weight of size [64, 3, 3, 3], expected input[1, 4, 512, 512] to have 3 channels, but got 4 channels instead
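In case it helps: that error usually means the input image has four channels (RGBA, i.e. a PNG with an alpha channel) while the model expects three (RGB). Not certain that's your case, but it's the common cause. A minimal sketch of the idea using PIL; the generated image here is just a stand-in for your actual file:

```python
from PIL import Image

# Stand-in for a PNG that carries an alpha channel (the likely culprit)
rgba = Image.new("RGBA", (64, 64), (255, 0, 0, 128))
print(rgba.getbands())     # ('R', 'G', 'B', 'A') -- four channels

rgb = rgba.convert("RGB")  # drop the alpha channel
print(rgb.getbands())      # ('R', 'G', 'B') -- what the model expects
```

Converting the source image to RGB like this before feeding it to the Load Image node is worth trying.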
Is there a way to set Load Image to watch a hot folder? Also, is there a way to get rid of the lines that form, like the one in your image about 1/4 of the way down, where the tiles don't seam together properly? I get that a lot with this workflow.
A great explainer, I was just trying to figure out upscaling from people’s shared workflows. Setting up my own teaches me better.
This is a great series overall, thanks Scott. It arrived just as I got into using image AIs locally. I’m the proud owner of a very bored looking robot, shopping at Wallgreens. I really hope that becomes the “teapot” of image AI. 😀
Aww, thank you! Glad you got it all setup and you are keeping the robot entertained! Cheers!
6:04 Using the image's width & height x4 for the desired size is the same as typing 4 into upscale_factor. Why add so many nodes for the same value? A Primitive node set to 4 would do the same thing, so why do it that way?
Heyho,
I recently found your video series and they are awesome for beginners! Even though things tend to get dated really quick when it comes to ML utilities, which is one reason for my question:
What is the difference or even the advantage of this method compared to the iterative upscaler that you showed a few videos ago?
Second question: what's the ComfyUI setting or plugin that shows where a node is "coming from"? You have those outlines with the title of the custom node package around your nodes; that's what I mean ^^
Oh, and a third question: do you know of a more "comfortable" / less "noodly" way to create regional prompts? I found two promising custom nodes, though the "comfortable" one (Davemane42's Visual Area Conditioning / Latent Composition), which even shows the regions, is now quite dated (~8 months without updates), and the other one (laksjdjf's attention-couple-ComfyUI) needs many more nodes and connections as well as manual calculations for each region in question.
I have a little problem where the SDXL model hallucinates faces and hands where there was just other indistinct detail. Is there a way to mitigate that?
Hmm, I am not sure I have seen that before.
upscaler link is out of date :[
Where do I get the upscale models?
Yeah I would like to know that as well, I can't find that model on the comfyui mod manager
upscale.wiki/wiki/Model_Database
good point. I added the link to the description as well. Cheers!
Thanks for the extremely quick reply@@sedetweiler
thanks!
Thanks for all the tuts; I have used all your SDXL vids to understand and create my workflow. I have heard him say his workflows are or will be available. Does anyone know where that might be? Thanks
They are posted on YouTube as community posts for those who are sponsors of the channel.
Sir, I appreciate your work. Kindly create a video about how to get consistent images in ComfyUI? It will help a lot.
Thanks
Consistent in what way?
@@sedetweiler I mean generating one specific character in different scenes, different clothes, different places.
For example, in the first image James is a scientist doing research in a lab. Now I want to generate the same character in the next image, for example James driving a car.
It's just an example to explain my point.
Thanks, Topaz is sooo much faster, but this adds a little more detail.
Yes, true. I also use Topaz, but this has a more interesting result.
Does this method use tile overlap to avoid seams? If so, could that be used to adapt between the input resolution and SDXL's resolution?
Yes, those values are exposed at the bottom of the node. I know 1024 square is a solid tile size for upscaling with SDXL.
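For anyone curious what those overlap/seam options are doing conceptually: neighboring tiles share an overlap region that gets feathered from one tile into the next, so no hard border survives. A toy NumPy sketch of that idea; this is purely illustrative, not the node's actual implementation:

```python
import numpy as np

def blend_horizontal(left, right, overlap):
    """Join two tiles that share `overlap` columns, feathering the seam."""
    # Linear ramp 1 -> 0 across the overlap: left fades out, right fades in
    ramp = np.linspace(1.0, 0.0, overlap)[None, :]
    mixed = left[:, -overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
    return np.concatenate([left[:, :-overlap], mixed, right[:, overlap:]], axis=1)

a = np.zeros((4, 8))  # dark tile
b = np.ones((4, 8))   # bright tile
out = blend_horizontal(a, b, overlap=4)
print(out.shape)      # (4, 12): 8 + 8 minus the 4 shared columns
```

Instead of a hard 0-to-1 jump at the border, the overlap region ramps smoothly between the two tiles, which is why raising the overlap tends to hide seams.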
I'm getting "import failed" on ComfyMath for some reason, any ideas? Any other tools to use instead for the math portion?
Hi, I have a question about the math option you got under add node. If I don't have that option how do I fix it? Thank you😀
Just be sure you have the latest version. There's a release about 2x a day now for the core, and the other custom nodes update about that often too.
Great video. Would doing a little vision analysis on the original image to provide guidance to the upscaler be beneficial?
Possibly. It really depends on your denoise level. Thank you for taking the time to leave a comment as well!
Sorry for the noob question, but why do we need the operation nodes and the calculator nodes if Ultimate SD Upscale has the "upscale by" option? I don't get why we need the binary operation nodes and the resolution calculator node to tell it "times 4" when in Ultimate SD Upscale I can just set "upscale by 4". What am I missing?
I had the same question!
Hey! Do you know why all these upscalers are in .pth and not in safetensors? Same on Hugging Face. I'm concerned I might be putting my system at risk since I don't know how to check for malware within pickles.
Most of them are just quite old.
Is there a way to save the upscaling node setup and then import just that setup into an existing node flow?
Say I have a complex node setup and I want to add the upscale to it without re-creating the nodes; can I do that?
This actually got answered a few videos on:
Highlight multiple nodes.
Right-click away from the highlighted nodes and they can be saved as a template.
Really handy.
(Replying to my own comment in case others have the same question.)
Thank you. Yes, it is a battle to really show how all of these can be used, but feel free to jam them all together into a masterpiece! :-)
@sedetweiler Ah, the same as DIY: just jam it all together and hope it stays up, lol.
Got a few saved nodes now, and tons of custom ones (pipes from Impact being my favorite). Playing with the nodes is almost as much fun as the results.
Thank you! This is an excellent video and it works great for me with my 1024 x 1024 images. I tried to upscale an SD 1.5, 512 x 512 input, but it did not work for that. It tried to generate a grid of 8 x 8 tiles, which would take all day and does not seem right. Do you know how to set it to upscale a 512 x 512 input image? Maybe it is better to just resize the image. Edit: I did get pretty good results from resizing the image, though I can see the lines at the tile borders in the skin tone.
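The tile count follows from simple arithmetic: the tile size divides the upscaled output, not the input, so small tiles on a big upscale multiply quickly. A quick sketch; the 256 tile value below is just my guess at a setting that would produce an 8 x 8 grid from a 4x upscale of 512:

```python
import math

def tile_grid(width, height, factor, tile):
    """Rough tile grid for a tiled upscaler: the *output* image is
    covered tile by tile, so small tiles on big outputs explode."""
    out_w, out_h = width * factor, height * factor
    return math.ceil(out_w / tile), math.ceil(out_h / tile)

print(tile_grid(512, 512, 4, 256))     # (8, 8)  -- 64 diffusion passes
print(tile_grid(512, 512, 4, 512))     # (4, 4)  -- bigger tile, far fewer
print(tile_grid(1024, 1024, 4, 1024))  # (4, 4)
```

So for small SD 1.5 inputs, either raise the tile size or pre-resize the image, which matches what the resize experiment above found.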
I was just wondering... what kind of graphics card do you have? Generations on your side are way faster than over here...
3090, but I also speed up my video when it's boring
Wow, this is awesome! Is it correct to think that we can actually combine this with the iterative upscaler workflow if we are totally insane?
Great. I just couldn't use the "4x_foolhardy_Remacri" upscaler model; how can I install it? Thanks in advance!
Got it ^^ it goes into the upscale models folder...
Any chance you can give us a link to the workflow we can drag and drop? 😅
Those are available in posts on YouTube for sponsors of the channel.
@@sedetweiler mmm no nononono those are available to people to dl without bribing you.
Is there a good way to get rid of the visible tiling of the upscaled result, apart from rendering it all at once which my GPU is not capable of? It depends on the image, but sometimes it's clearly visible where the borders of the tiles are...
You can try working with the tile mitigation options at the bottom of the node control. You can also try upscaling in steps rather than one giant leap.
@@sedetweiler Thanks, the tile mitigation helped a lot 👍
Awesome!