How do you disable use_tiled_vae?
EDIT: It turns out you need to install BlenderNeko: Tiled Sampling for ComfyUI; that missing dependency is the reason behind this issue.
Oh, good find! I will pin this comment.
@@sedetweiler Where do I put the BlenderNeko: Tiled Sampling in my ComfyUI directory?
All of those go under custom_nodes. You can use the Manager to install it, which makes life easier.
Thank you! @@sedetweiler
Btw, great video! Thank you for such an informative tutorial. @@sedetweiler
You are SIMPLY THE BEST !!! fluent, effortless, snappy, concise, to the point, crystal clear,... you name it, man you are a Godsend !!!!! ⭐🌟 and love the recap at the end of the video, excellent !!!🤩🌟
Wow, thank you! Glad you enjoyed it!
This channel is the only one I have all notifications on. It's also the only channel I don't fast-forward :) I enjoy every moment of the videos.
Wow, thanks! You made my day! Cheers!
Hope you get to find more time for more videos.
Yup! More are on the way soon!
I really like how you step through your tutes, step by step and clear as a bell!
Thank you!
@@sedetweilercan you please upload workflow file?
You are a hero! Every time I watch one of your videos, I learn stuff I would never have guessed. The Impact Pack is a huge go-to for me; I also LOVE the Efficiency nodes.
Thanks for watching!
Very nice and slow, showing everything how it fits together, really liked watching this.
Awesome, thank you!
Great tutorial! Thanks for taking the time to clear these things up. I have to mention that I happened to be watching your tutorial right before my daily workout routine, which added a whole new unexpected layer of entertainment when mixing academia with athleticism . Thank you again for sharing your knowledge!
Great to hear! I have a few more coming that will be mind blowing as well.
Excellent tutorial Scott, thank you. I've had the Impact Nodes installed for a week or so but it's really hard to find tutorials on their various functions. I've learned a lot from this video. Please add more Impact Nodes tutorials when you get the chance.
Thank you! Yes, there are going to be a lot more coming as I think this is a wonderful pack of custom nodes.
The Impact Pack creator has a YouTube channel where they're uploading examples and, eventually, tutorials.
The channel is 'Dr Lt Data', I believe.
@@DurzoBlunts Yes, I've seen them. They are all silent movies of someone who knows what they are doing but can't communicate it very well to the rest of us.
Yes, I found him a few days ago and it helped with some of the new stuff. I have been using it for a few weeks now, but hopefully my videos will get his pack more noticed.
Amazing tutorial, Scott. Thank you very much! I'm learning Stable Diffusion and ComfyUI, and this class helped me a lot with upscalers. I hope everyone realized that, in addition to adding the purple hair, we can also remove detail with a negative prompt.
Glad you enjoyed it!
The best videos on Comfy! Love it, thank you very much!
Glad you like them!
I like that the pipe takes inputs rather than just loading the model. I've gotten great results using a different CLIP from a different model.
Things certainly escalated this video. Thank you so much; I could not have understood it without you.
The best ComfyUI tutorial I've come across. Thank you so much mate!
Glad it helped!
This was just such an awesome video. It really shows the power of ComfyUI. Please bring in more videos like these; they are very rare on YouTube. Only a few people are actually uploading ComfyUI videos.
I will keep them coming!
Excellent video! I need to get in the habit of using the pipes more often. Also, I had no clue about the iterative upscalers, nor have I really been able to figure out hooks before now. This has helped me a bunch. :)
For those with low VRAM, this node eats up VRAM! A usable alternative is the Ultimate SD Upscaler custom node, which is not as VRAM-hungry. This iterative approach limits me to about 7 steps and a 1.5x upscale, whereas I can do 2.25x or even 2.5x with the Ultimate SD Upscaler.
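For anyone wondering why the tiled approach is lighter on VRAM: it re-diffuses one crop at a time instead of the whole frame at once. Here is a rough conceptual sketch of that tiling loop in Python (an illustration of the idea only, not the actual Ultimate SD Upscaler code; `refine` stands in for a per-tile img2img pass, and real implementations also overlap tiles to hide seams):

```python
# Conceptual sketch: upscale first, then re-diffuse the image one tile at a time
# so only a tile-sized region ever needs to be processed at once.
from PIL import Image

def tiled_refine(image: Image.Image, tile: int = 512, refine=lambda crop: crop) -> Image.Image:
    out = image.copy()
    w, h = image.size
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = (x, y, min(x + tile, w), min(y + tile, h))
            crop = image.crop(box)
            out.paste(refine(crop), box)  # refine() would be an img2img pass per tile
    return out

# usage sketch: big = small.resize((w * 2, h * 2)); result = tiled_refine(big, tile=512, refine=my_img2img)
```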
That is actually the other method I use for upscaling, but I wanted to cover this one as well since there are other strategies I show in here that are not exactly related to the upscaler but are helpful to know overall. Cheers! That video is also coming soon!
@@sedetweiler I completely agree with your documenting everything and making sure the viewer knows how it works. You're doing a great job for newcomers to SD node-based generation.
Thank you!
Darn young'uns 'n' their wayfoo models
Kids these days. Geez. ;-)
this is awesome, i love that you walk through the workflow nodes to explain what is happening.
Holy shit this man is a master of ComfyUI. I feel like 'master of ComfyUI' could be a full college course.
Hehe, thank you, sir!
Just wow! Thx for introducing these cool nodes!
Glad you like them!
Thank you so much for this tutorial. Now I am learning how to use it. Well, I could have just downloaded the workflow and been done with it, but then I wouldn't have learned anything beyond knowing how to use it.
Thanks for sharing; there is a huge need to explain custom nodes! Sometimes they don't even have the "automatic" input node to choose from, so it's quite difficult to understand their usage (not speaking about the Impact Pack here).
Regarding the topic of the video, I've been experimenting with different upscale methods and nodes, this one included. The outcome, in my opinion, is that Ultimate SD upscale with controlnet tile is the best method (like it is in A1111) :D
Yup! I agree, and that video is probably next. However, I wanted to cover some of the concepts in here that might be useful when dealing with the node driven process. It isn't the best upscaler by far, but it does give us another tool in our pocket. Cheers!
Didn't know this technique thanks !
It changes the base image quite a lot, though, compared to a traditional tiled upscale.
I did have my noise pretty high, so I could have controlled that. However, I always like to see what details it adds, so I sort of enjoy this process of exploration.
I can use SDXL with my 6GB graphics card in ComfyUI! Isn't it amazing?
I have a 4gb laptop that can also run it... Slow for sure, but the fact it works is pretty amazing! Cheers!
after i hooked up my own upscale models WHEWWWW this is insane
Woot!
@@sedetweiler It's very funky to edit the image with sharpening and higher contrast to crisp it up before the upscaling, which usually blands them out.
extremely useful things to learn from this video!
Glad to hear that!
Just what I needed, Thank you!
Glad it helped!
Sorry, I'm a noob, but is there a way to use this to upscale already-existing images?
Everything I've tried either gives me errors or takes ages with no change to the original image; hell, most of the time it even makes things worse.
I learned so many things in this video I didn't know before :D Not just upscaling, but also the copy-a-node and Ctrl + arrow ↑ trick. Do you have a video for all these little QOL combinations?
First of all, cool new approach learned here. Furthermore, I picked up some minor but handy tricks, like Shift+Clone for keeping connections, or that you can set the line direction on reroute nodes.
I've been using an iterative upscale method where I basically do what A1111 does with img2img, and I'm getting good results. Rather than upscaling the latent, I upscale the image and then re-encode it to latent between every step. As you mentioned, upscaling latent images does weird things. The first 2x upscale step uses 0.40 denoise, while the second uses 0.20. Impact nodes do seem useful; I've been looking for a way to concat prompts.
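For anyone curious what that loop looks like outside the node graph, here is a minimal Python sketch using the Hugging Face diffusers img2img pipeline (an illustration of the idea described above, not the ComfyUI graph itself; the checkpoint name, prompt, and scale/denoise values are just placeholders):

```python
# Sketch of iterative pixel-space upscaling with an img2img pass between resizes.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman, purple hair, highly detailed"
image = Image.open("base_512.png").convert("RGB")

# Each pass: resize in pixel space, then re-diffuse with a shrinking denoise strength.
for scale, strength in [(2.0, 0.40), (2.0, 0.20)]:
    w, h = image.size
    image = image.resize((int(w * scale), int(h * scale)), Image.LANCZOS)
    image = pipe(prompt=prompt, image=image, strength=strength,
                 guidance_scale=7.0, num_inference_steps=30).images[0]

image.save("upscaled.png")
```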
I think there are so many ways to bend this, and I love that you are coming at it from another angle but finding some of the same things. Keep the ideas coming!
How do you do it with every step?
Great work Scott. Do you have a workflow for taking just an image that's already generated and then upscaling it? Thanks.
Yes, I have done that often (even in the live stream today). I am not sure I have a video on that specifically, but I do it all the time.
great video - very concise explanation and easy to follow
So, with some SDXL models I had this working pretty well; with other models, however, it seems to fade to a gray and hazy look. Do you have any idea why this might be happening? I've tried adjusting the CFG, but that doesn't seem to have much effect other than making it faded and fried.
Also, did you mean to title the video iterative or infinite?
I'm just shocked that after you corrected the starting denoise to 0.3, the change to the image is almost like editing the image by prompt. This is going to change the world for a lot of people.
Average is pretty straightforward (it literally just takes the matrix and averages all the numbers), but what is the difference between concat and combine? And how do they interact with controlnets placed before them? I've gotten strange results with either, depending on which connection I add things to. I have yet to see any documentation that really clarifies the difference. My understanding is that combine is basically the :: operator in Midjourney, which makes me wonder what concat does. It can't be adding words to the end of the prompt, because it's post-encode. It probably appends the matrix to the end of the previous one, but what does that actually do in terms of how it's processed?
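For intuition only, here is a rough torch sketch of one way to picture the three operations acting on CLIP embedding tensors. This is a conceptual illustration, not ComfyUI's actual implementation; the shapes and especially the "combine" behavior are assumptions made for the sake of the example:

```python
# Conceptual sketch of average / concat / combine on conditioning tensors.
import torch

cond_a = torch.randn(1, 77, 768)  # embedding for prompt A (batch, tokens, channels)
cond_b = torch.randn(1, 77, 768)  # embedding for prompt B

# Average: blend the two embeddings element-wise into a single conditioning.
avg = 0.5 * cond_a + 0.5 * cond_b            # shape (1, 77, 768)

# Concat: append B's tokens after A's, producing one longer conditioning
# that cross-attention sees as a single, longer "prompt".
concat = torch.cat([cond_a, cond_b], dim=1)  # shape (1, 154, 768)

# Combine (assumption): keep both conditionings separate and let the sampler
# evaluate each, merging the resulting predictions rather than the embeddings.
combined = [cond_a, cond_b]                  # two entries handed to the sampler
```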
Nice Workflow. So, basically, this is the HiRes Fix in Automatic1111, but more advanced and customizable.
It's great fun. Using it with an image works pretty well when it's an AI image, not so well on real photos.
Hi. Awesome tutorial, but in my case something goes wrong. If I put the denoise in PixelKSampleUpscalerProviderPipe under 1.0, e.g. 0.3 like you, I get low-quality output. When I leave it at 1.0, everything is super crisp but a lot different from the first generated image. Do you have any clue how to make this work?
You sure it was the denoise and not another setting? I did that in the video and caught myself later.
@@sedetweiler I'm afraid it is the denoise...
That's cool, I didn't know you could do stuff like this to have it choose a random one:
{sunrise|sunset|raining|morning|night|foggy|snowing}
Yup! Lots of other prompt tricks in there as well.
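If you want to see what that {a|b|c} syntax boils down to, here is a tiny Python sketch of a wildcard expander (just an illustration of the idea; the actual node may parse the syntax differently):

```python
# Minimal wildcard expander: replaces every {a|b|c} group with a random option.
import random
import re

def expand_wildcards(prompt, seed=None):
    rng = random.Random(seed)
    # Replace each {...|...} group with one randomly chosen alternative.
    return re.sub(r"\{([^{}]+)\}",
                  lambda m: rng.choice(m.group(1).split("|")),
                  prompt)

print(expand_wildcards("a city street at {sunrise|sunset|night}, {raining|foggy|snowing}", seed=42))
```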
Very well explained, I like it.
Question: do you think it's possible to preserve the pink t-shirt when you change the hair color?
I wonder if there is a way to preserve an element's color (I tried cutoff, but the result wasn't perfect).
There are a lot of ways, but the graph would get complicated. However, I think we are going there soon as a lot of the basics are covered now.
It does make me laugh when your OCD is triggered....you set mine off as well!!
Fantastic demo, thank you!
Glad you liked it!
Could you explain why you have set such a high CFG in the HookProvider and a low CFG in the UpscalerProvider? The default values are the other way round. I can't believe how those values worked fine in your case because they failed miserably in my workflow.
Thank you, this was super helpful tutorial ✌🏻
I highly appreciate these Comfy walk-through videos, Scott. Great content! I wonder if the Midjourney engine (model?) will ever be accessible in Stable Diffusion; I find it better than SDXL for the time being! Keep up the great work!
No, unfortunately their business model means they cannot release their model the way stability.ai can. However, there are some models that are close, and you can also use Midjourney images you have downloaded in your pipelines to do alterations.
I love your content
But I can't really find this model anywhere, any chance to provide a link to the checkpoint?
Thank you
Hi Scott, thanks for your great videos! Keep 'em coming. One question, though: what is the main advantage of using this upscale process? Is it quality, or is it speed? I'm not sure I understood, after watching the video, why I should use this. Thanks.
This was just an example of an upscaler workflow, and there are many. I did this one first to show some of the more interesting aspects you can use, like late prompt injection, provider nodes, and other little things. It probably isn't the best upscaler, but it is much better than the default node. Another one is coming soon that is my favorite, but it isn't anywhere near as interesting to set up.
Thanks for sharing your process. It's great to see innovation. I'm not sure ComfyUI is the best choice for this workflow, though; each step is getting less detailed overall. It might be better to have more control over what happens between each step (à la Auto1111).
You can do that on here as well, I just wanted to show one method. Another method video is coming soon that you might prefer, or even mix them together!
is there a similar workflow for Automatic1111?
I've downloaded some 'monster workflows' from some very clever users, but I can't see much value in them compared to your lovely simple workflows... I'm not sure Comfy needs to be as complicated as some graphs make it. Your vids are so accessible, keep 'em coming... a nice simple inpainting one would be good if you are in need of suggestions...😉
Glad you like them! Inpainting is coming soon! I am actually doing that live on Discord today at the official Stability.ai Thursday broadcast.
So many nuggets here. 🙂
Thank you!
Where can I get this workflow?
Your videos are really helpful. Thanks for making them. After I ran the process, I received a third image without purple hair. The upscaler I input to PixelKSampleUpscalerProvider, 4x_NMKD-Superscale-SP_178000_G.pth, gave me a result without purple hair that is also blurry. I may try to track down the upscaler you used. Anyway, how do I re-run the whole process? If I click Queue Prompt in an attempt to redo the whole process, it does nothing.
Whenever I paste with the Shift key, it actually doubles the pasted object. Edited: checked on my second device with a different OS, same problem.
Amazing tutorial, thanks! Question: do you think it can work off of an input image rather than a prompt?
I would like to use this method to add details to already existing non-ai picture. Is it possible?
Could you give the .json settings for this case?
Hello, thx so much for your tutorial. I'm just wondering if i can add a 4th upscaler, or even a 5th ? But i can't figure out how to get further the 3th. Do you have any tips please ? Thx again scott
For me, under use_tiled_vae there is a tile_size; I don't know what to put in there, and as a result my second image is completely zoomed in and you can't see the actual character. Can someone help?
You can use a pixel upscale model in latent space? How is that possible?
How do you deal with that color bleeding? The purple is not only on the hair, but also on a lot of other things.
On my PixelKSampleUpscalerProviderPipe there is a boolean option, use_tiled_vae. How do I check this?
Just click it and it will enable.
Update your ComfyUI to latest version.
See the pinned comment. You are probably missing the tiling node like I was.
Nice tutorial, but how do I upscale a pre-existing image that isn't AI-generated?
Just use the image loader and VAE Encode it to a latent, and keep the workflow the same. There is nothing special about an AI image compared to any other; getting it into the workflow using the loader is the only extra step. Cheers!
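If you are curious what that Load Image -> VAE Encode step amounts to under the hood, here is a rough diffusers-based Python sketch (illustrative only; ComfyUI handles all of this inside its own nodes, and the VAE repo name and image path are just examples):

```python
# Sketch: encode an existing photo into a latent, the rough equivalent of
# Load Image -> VAE Encode in the graph.
import torch
import numpy as np
from diffusers import AutoencoderKL
from PIL import Image

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").to("cuda")

img = Image.open("my_photo.png").convert("RGB").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # scale pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0).to("cuda")               # (1, 3, H, W)

with torch.no_grad():
    latent = vae.encode(x).latent_dist.sample() * vae.config.scaling_factor

print(latent.shape)  # (1, 4, 64, 64) for a 512x512 input
```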
@@sedetweiler Thanks, but I'm getting
"Error occurred when executing PixelTiledKSampleUpscalerProviderPipe:
object of type 'NoneType' has no len()"
regardless of which upscaler provider I'm using.
thank you so much! you guys are amazing.
My pleasure!
Thanks for these, just what I was looking for ... could you share the json for the final flow?
It is in the posts for the page and is visible for channel sponsors.
Hey Scott...
Do you have a video about the best practices to manage all the workflows config files?
Because I always find myself with a proper node workflow, but I test so many new things that it gets messy; then I end up with 10 versions of something and always start over in the end, lol. I'm sure there are ways to stay organized.
I tend to keep my favorite ones in a folder on my desktop. I also put the one from today in the Posts area of the channel for Sponsors, and I would probably rename that one to something like "Upscaler Base" and remove a few of the testing nodes. I do have another video coming soon that might actually help you a ton in this area, so perhaps I will push that one up near the top. Cheers!
@@sedetweiler Thanks for your reply!
I just recreated what you did in the video and tested with the upscale model "Siax". Well, it's pretty interesting: since the upscale model is super sharp, the 3x added some type of grain to the final image :D
Thanks for these videos btw, they are great.
I would keep playing with the noise, sampler, scheduler, and all that until you get something you love. It can change a ton by just tweaking values.
"Pretty simple graph"… I’m like 😵🍝
Wow I'ma try this soon 🎉😍, thx 🙏
Hope you enjoy
I was hoping you would get to the point where you change the starting image to an upload (or whatever you call that dialog). I have my own pictures I took as a kid and would love to see how it would try to upscale them. Could you show us how to add your own image at the beginning to upscale?
Yes, I can do this in a video. I have done things like that in live-streams but not in an official video yet.
@@sedetweiler I would love to see it and learn it. ComfyUI is still so confusing to me; I feel like I'm just learning it with trial and error and a little search-and-find. The thing is, with all the fresh generative AI technology it changes so fast, and some of the tutorials I'm finding are out of date. I'll subscribe and hope to see you live soon! You know what I'm gonna ask you!
How do you show the steps going through the upscaler? Is that a setting in the Manager or something else?
yes, you can enable TAESD slow previews and they will show up.
Darn, my PixelKSampleUpscalerProviderPipe has another pin called use_tiled_vae above the 'basic pipe' pin; not sure where I went wrong there. Anyone know where I should plug this in? Just saw the pinned comment about BlenderNeko, will give that a go. Updated Comfy, updated Impact, restarted Comfy, removed the node, added the node, same issue. Hmm.
I found the issue to be the SDXL VAE that I had fed in at the beginning. I just connected the VAE from the Load Checkpoint instead and the problem was gone!
See the pinned comment. There is a component it needed but it wasn't documented.
Hey Scott, another good tutorial. I did a test at the mid-point of this tut, and my first upscale from 704 x 448 to 1880 x 1200 (close enough to 1920 x 1080 to work with) took 19 minutes! Apples to oranges, but using ControlNet Tile in A1111 took 2.5 minutes. I'm working on a series of Deforum animations that means upscaling over 40,000 frames. I turned to this tut in the hope that Comfy would come to the 2.5-minute rescue. Any chance you've got a trick in your pocket for us animators? Because this won't cut it. (Oh, and I haven't seen a Load Batch node. Is there one?)
I am sure once we have controlnet we can get the times closer together.
Crazy
Is it possible to load a stack of images, like a batch task, into a ComfyUI workflow to change a sequence of images in this way? Thx in advance.
Thanks for the clear explanations.
I tried to insert a controlnet (the last canny SDXL 256) into the conditioning pipeline, but image generation fails after the first sampler. It seems that this workflow is not compatible with Controlnet. Is there a solution to avoid this?
Hmmm, it should work. I will have to give it a try.
Thanks for these fantastic videos! They've been incredibly helpful.
Can I use this workflow/ComfyUI to generate videos too ? Be it from a single frame of an existing video or from the latent noise.
Where is the workflow download?
Pretty new to ComfyUI and working through the tutorials right now.
One question I have is: In the "KSampler (pipe)", is the VAE output channel the same VAE that also is passed out in the BASIC_PIPE output channel, or is it a VAE modified by the KSampler Node?
It's the same all the way across the grid. Typically we don't mess with the VAE; sometimes we will use another one, but we would probably specify that and it would be obvious. No steps should be hidden.
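As a mental model only (this is how one might picture it, not necessarily the Impact Pack's exact internals, and the field order is an assumption), a basic_pipe is essentially one bundle of references that gets passed along and handed back out unchanged:

```python
# Conceptual sketch: a "basic pipe" as a simple bundle the nodes pass along.
from typing import Any, NamedTuple

class BasicPipe(NamedTuple):
    model: Any
    clip: Any
    vae: Any
    positive: Any  # conditioning
    negative: Any  # conditioning

def ksampler_pipe(pipe: BasicPipe, latent):
    # The sampler uses the bundled model and conditionings, but it hands the SAME
    # pipe (and therefore the same VAE) back out on its outputs.
    # latent_out = sample(pipe.model, pipe.positive, pipe.negative, latent)
    latent_out = latent  # placeholder for the actual sampling step
    return pipe, latent_out, pipe.vae  # the VAE output == the VAE that went in
```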
Thanks. I kind of suspected that, but was a little confused by the uppercase/lowercase naming convention, and trying to understand it.
Do you know the rationale? The input side is mostly lowercase, while the output side is a little more mixed, usually uppercase but sometimes lowercase.
Perhaps I'm just overthinking it, being a software developer by trade. 🙂
Is this workflow still valid? Compared to Automatic1111's img2img tab upscaling with just Euler a and a denoise of 0.2-0.3, I'm clearly losing detailed lines when working on anime pictures, even if I go up really slowly in 5 steps.
Is it outdated by now? Because the workflow setup does not work. I installed all the required stuff, but it keeps throwing me this error:
When loading the graph, the following node types were not found: MilehighStyler
And there doesn't seem to be any fix to this as of 12/9/2023.
Worked fine for me today.
Hi Scott, would you say that the iterative scaling possible by ComfyUI is now part of "best practices" for upscaling (SD1.5 and SDXL) ?
I sure think so. It takes any image and adds those details everyone seems to want.
It would be very useful if you could share any of those images with us so we can obtain the blueprint; that makes it much simpler to put into practice, since the Manager lets us install the missing nodes. The video is very good and the process is clear. I had been testing several upscalers (x4, x8) and was able to generate photos of up to 16000x16000, 500 MB each. The good thing about this technique is that better details can be applied as the image is enlarged.
It does not work for me, even with BlenderNeko; there is this "use_tiled_vae" option and I don't know what to do with it...
Hi Scott, I've got a use case question. I need to upscale an image that I generated in SDXL using my own trained LoRA for a highly detailed photorealistic portrait of a person. I am noticing that the skin, which is what I mostly want latent-generated, is very good with my LoRA. Is there a way to inject those LoRA weights through the model in the UpscalerProvider pipe? The only problem is I generated the image in Auto1111, so the weight interpretation is a little different. But in principle, would you think there is a workflow to enhance via iterative upscaling while piping a LoRA into it, maybe using weight blocks? Any thoughts? Would love to get this right.
I might have to give you an example for this, but you can easily do it. Don't get into the mindset that you can only use one checkpoint. You can always load others and use them in different places in the workflows as long as they are compatible.
OK, I will pursue this further. I managed to finish my job with very high resolution, but I was not able to easily control that LoRA skin the way I wanted. I got stuck testing without seeing anything really effective come from it. Anyway, a small use case, nothing to make a big fuss about. Thanks.
Great video! I'm slowly learning ComfyUI, coming from Automatic1111 (for the easy use of SDXL with my 6GB GPU). One thing I'd like to ask: what would be the method equivalent to the upscaling you get in the "Extras" tab in Automatic1111? Whenever I try to upscale to something bigger than 2048x2048 I get VRAM issues (while in A1111 I can go to 4x that value in the Extras tab). Any help will be appreciated!
Yes, there are methods for that and I will be covering another upscale method soon.
Awesome video. Can you do a video where we can do image to image masking and inpainting with just prompts and nodes (no manual masking). Is that possible? Similar to stability AI API
Yes I can. I like workflows that don't tend to make assumptions on locations of things.
@@sedetweiler can you do a video on it plz🙏
Hi Scott, I'm following along with your tutorial, but the PixelKSampleUpscalerProviderPipe node is asking for a use_tiled_vae. This doesn't show on your version of the node. What to do?
See the pinned comment. You are probably missing a component. Cheers!
Hello, and thx for this share!
For some reason, at each Iterative Upscale node my generation becomes brighter and brighter. Do you have an idea, please?
Great tutorial, thanks!
Glad you enjoyed it!
I tried this, but for some reason the PixelKSampleUpscalerProviderPipe has a tile_size even when use_tiled_vae is disabled, and it returns nothing useful. Did they make a mistake with an update of the custom node, or what am I missing?
This is a bit dated, so things might have changed. All of these nodes get updated several times a day, so be 100% sure both Comfy and all of the custom nodes are updated.
Thank you so much ❤❤
Great, if a bit overwhelming, tutorial! One thing is different for me, in that the PixelKSampleUpscalerProviderPipe has an input called 'use_tiled_vae' that's required in order for it to work. I couldn't find a simple BOOLEAN node, so I had to kludge together a few other nodes to create a FALSE for that input. Any idea why the difference, and maybe an easier way to input a BOOL?
I think you might have the wrong node, as some are quite similar in name.
Update your ComfyUI to latest version.
You might also use the manager and install BlenderNeko: Tiled Sampling
I know it's late, but this may be useful if someone else has the same issue: if you right-click on the node, you should get the option to convert any input to a widget, which puts it into the properties list. In this case it would add the input as a switch that is disabled by default, but you can enable it in the properties section.
It seems like way too many steps for something that should be simple. Any way to use img2img like in 1111 to make this easier/faster? I've stayed away from ComfyUI since, to me, it complicates everything instead of making it easier on the person.
The goal of comfy is to really let you modify and understand the process. It won't be for everyone. Some people just want to drive a car, while others like to get in there and understand how it works and change it to perhaps make something better. It's not going to be easier, but it will actually teach you how it works. So, if you want to understand the process, stick with it. But, if you just want to make pretty pictures fast, this probably isn't going to be your thing. Either one is a good choice.
Why is comfyui so slow on my computer?
A bit late to the show, but... the Impact nodes do seem to install, but when I do Add Node, they are not in the dropdown list after several restarts of ComfyUI (I successfully installed the custom nodes from the other video tutorials). Anybody have any idea what's going on? How can I debug the installation?
This happens via the Manager and via git clone....
I suggest you look at what is displayed in the console when ComfyUI starts up. It displays a message for each of the custom_nodes packages and will certainly throw some error message if something is wrong.
Is it just me, or is SDXL worse at hands? Any way to solve this? Would you mind doing a video?
They are better by far than 1.5, but still not perfect. However, they are always getting better!
Is there a way to save gen parameters into the filename? Thanks.
Not without installing some custom nodes for text manipulation; then it is not difficult, as the text from the node itself can be a parameter we can concat with other variables.
If I knew which Python function generates the seed, I could do something with it @@sedetweiler
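Outside of the node graph, the idea is really just string formatting. A rough Python sketch of baking a few parameters (including a freshly generated seed) into a filename might look like this; the seed here comes from Python's own RNG, not ComfyUI's internal seed logic, and the parameter names are only examples:

```python
# Sketch: build a filename that carries the generation parameters.
import random
from datetime import datetime

seed = random.randint(0, 2**32 - 1)
params = {"seed": seed, "steps": 30, "cfg": 7.0, "sampler": "euler_a", "denoise": 0.3}

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
filename = f"{stamp}_" + "_".join(f"{k}-{v}" for k, v in params.items()) + ".png"
print(filename)  # e.g. 20231209-142355_seed-123456_steps-30_cfg-7.0_sampler-euler_a_denoise-0.3.png
```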
Thank you but it's too much for my PC. can you teach Inpainting like you did img2img in ComfyUI ? changing the final image's face or cloths or background etc...
It works if you have 3gb of video ram. That isn't much!
@@sedetweiler Oh yeah, I tried it. The first phase was 25 sec, the second one took 40 sec with the CPU fan going berserk :D, and the third one was faster. I use HitPaw for fixing faces and upscaling; it's not worth it in Comfy, and it's not doable in the WebUI at all with my GTX 970 (it gives a memory error in the middle of upscaling).
Hey! Can you provide me the soapmix model, if you still have it? I can't find it anywhere on Civitai.
It isn't that great. I don't even have it any longer. Sorry.
It would be a great help if you could provide links to the models, as I think many of us here try to duplicate what you have, and at least for me it's a bit difficult to see the model names in the nodes, since the text is quite small.
This should work with any model, that part really isn't that important.