Click the link below to stay updated with the latest tutorials about AI 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
I spent a whole day optimizing the ultimate SD upscale for 1024 images, and achieved wonderful results, but your technique is way better and smarter, good job and thanks for sharing!
Thank you so much mate, that was an uplifting comment ✨💚😉
After the image is upscaled you can resize it smaller by 3 (using any photo editor) then upscale it again by 3. It's even sharper and more detailed.
Very good idea 🤩💡 thanks for sharing
Thank you very much.
Thank you for this! Fantastic video!! Keep killing it!!
Fantastic! I ran an SD1.5 image through this and also through SUPIR using SDXL. The SUPIR version works, but is slow and (as designed) changes some of the details. This is blazing fast and retains all of the details of the original, making it an outstanding tool for upscaling images with no real quality loss. Great job! Liked and subscribed.
Thank you bro, you're welcome. The SUPIR method is also good; in fact, both are useful. It depends on the image we want to upscale. I will make a video and explain it.
Thank you for this! The simple solutions are usually the best.
you're welcome bro 😉✨
I'm sorry to say - because this is a cool idea - that the Ultimate Upscale with 1 step does exactly the same as "Upscale with Model".
In other words, the quality of the image is exactly the same as if I upscaled with SIAX and then rescaled the image back from 4x to 1/2 or 3/4 - giving 2x or 3x - without the USDU step.
What does work well for me is adding the good old FaceDetailer after the upscale - since the face suffers most from upscaling with models.
@@AvnerSenderowicz no, it's not the same; you can test it
@@Jockerai I have tested, that's what I am saying, maybe I missed something, that's possible - but for me they come out the same.
Incredibly and awesome job! Thank you for your work! 🖖
@@design3d225 thank you mate ✨😉
Very good! I subscribed immediately!
thanks bro. my pleasure mate✨💚
NICE video man
Works beautifully.
❤🙏
This is BRILLIANT !!
@@yotraxx thank you ✨
This was awesome. Thank you! Got a sub from me.
@@TheBlackbeansoup you're very welcome my friend ✨.
What if I say it has a V2 which is more detailed and has more improvements? :)
Check it out: ruclips.net/video/T4SNWa-izxM/видео.htmlsi=uLGC6Ftccw84zJ6P
Thanks!! It works and it is much faster!! from 15 minutes down to 2 and a half for me
@@ImAlecPonce I'm happy to hear that. Thanks for sharing this
Why did you take the photo into Photoshop, when in ComfyUI you can load the photo comparison node so you can easily slide left and right to compare the photos?
@@jamesmichaelcabrera9613 thanks for telling me that, I didn't know. I'm proud of subscribers like you ✨
@@Jockerai CR Simple Image Compare
@@mattm7319 is that the name of the node?
@@Jockerai And rgthree's image comparer, the most used one.
@@jamesmichaelcabrera9613 because he’s a beginner thinking he can make some easy money here on YT.
Interesting method.
I checked this method with my picture (and also made some other experiments). I upscaled x4 from a 1-megapixel 2:3 image, using only the positive prompt from the original generation (I left your negative "blurry" but maybe I should remove it), the original LoRAs used in that generation, the same seed, and euler+beta (which I used for the first image). I feel the effect is too much sharpening, and many parts of the image have artifacts forming something like a small grid. Maybe playing with some parameters would help, but I got a better result with just SD Ultimate Upscale and ClearRealityV1 (soft or sharp version doesn't matter much). Of course it can depend on the picture, etc.; I don't want to say it is always worse. I also tried a 2-step approach: first upscaling with "scale image to total pixels" (or just 2x) with Lanczos, and then SD Ultimate Upscale x4 for an even bigger picture. But in my case the best approach was just a straight upscale, as I loved the original image and every reproduction only worsened that first beautiful image, and I wanted to end up with a very similar picture, just upscaled (otherwise this 2-part upscaling would be cool too).
PS: Also, it is one step, but with a bigger picture it took me almost 20 minutes. I did not time the SD Ultimate methods, as they split the picture into parts, but they felt much faster (I have only a 3080 10GB and 32GB of RAM, and I'm using the Q4_K_M FLUX model + Q8 for CLIP without memory optimizations).
thank you very much for your work (sorry for my English, my native language is French). I have a problem! With the compare node (rgthree), switching from image A to image B doesn't work.
In the PROPERTIES compare_mode, there is only CLICK, and mouse drag is not recognized.
Which parameter can I use to make drag work on the [IMAGE (compare)] node?
Are there any pip installs to do after the git clone? (nothing says so on the rgthree-comfy GitHub page...)
Thank you for sharing your knowledge
you're welcome bro. stay tuned for more
LOL, the VRAM used should be huge! Also, you defeat the goal of this SD Upscale node, which is just tiling the upscale... you can do the same with an Upscale by Model then a normal KSampler with your settings...
@@zephilde it seems you didn't even test what you are suggesting. The output has a huge difference in quality and details, especially for skin texture, because with your method there is no checkpoint model, just an upscale model, and it is not powerful enough. If you want, I can send you the result of your method and mine. Huge difference.
Deserved a subscriber
It's my pleasure mate ✨😍
You can convert the tile size x & y by getting the image dimensions and then multiplying them by the number of times you wish to upscale, so you don't have to manually type it in. I thought this was common knowledge, as I've been upscaling this way for quite a while.
Edit: to clarify. Create some math nodes. In the nodes you put (a * b / 2 + 32) * c, where a = the input from the scale choice (2x, 3x, 4x, etc.), c = the same exact thing, and b = the height or width from a "get image size" node. Duplicate the math node and do the same thing, only for height. The "Int" output from both math nodes goes to tile_height & tile_width after you convert the widgets to inputs. So my file size of 1344x768 was first converted to 1376x800 and then multiplied by c (x2) for 2752x1600. Why? Someone told me the file size behaved better with the models; I never really understood it myself. I guess you could just use a * b, which would simply multiply the raw file size by X, where X is the multiplication (upscale) value you chose. See if it is faster or less prone to problems? Maybe I'll do that later.
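In Python terms, that math-node arithmetic is just the following (a minimal sketch, assuming a and c are both the chosen upscale factor, as described above):

def tile_size(dim: int, scale: int) -> int:
    # (a * b / 2 + 32) * c from the comment, with a == c == scale
    # and b == the width or height from a "get image size" node.
    return int((scale * dim / 2 + 32) * scale)

w, h = 1344, 768
print(tile_size(w, 2), tile_size(h, 2))  # 2752 1600, matching the example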
I remade/modified the workflow to get this feature. It works perfectly locally, but I can't get it to work on my TensorArt Comfy with the exact same node setup; I always get this error:
"...
type: Multiply Integer (WLSH)
multiplier: field value must gte: 1
number: field value must gte: 1
"
I don't know why... any pointers?
Thank you so much sir !
you're welcome bro💚
Thank you very much! Great explanation!
@@DanielSchweinert you're welcome bro ❤️
It works! Wonderful find, easy and quality way to upscale. Thanks for sharing!
you're welcome my friend✨😉
Brilliant clear video, worked well and surprisingly quick (RTX 3060 12GB VRAM) - is there a way to make the scaling numbers 'auto' by feeding them from the original image? Also, some sort of batching process so that you could scan a directory of images and process them whilst you sleep :) - many thx
thank you so much. I haven't found a way that's fully automated like that yet, but I actually needed something similar to what you're asking for. What I did was this: I loaded the first image and manually set the scale, then did the same for the second image, and so on for the rest. All of them were queued up in the process.
woooow , this is fantastic, thank you thank you ...
@@angelotsk3173 you're welcome ✨
Subscribe to my channel for support if you want ❤️
I always use these UpscaleSD settings and never had any seams so far:
tile size: 1024x1024
mask blur: 16
tile padding: 32
seam fix denoise: 1.0
seam fix width: 64
seam fix mask blur: 16
seam fix padding: 64
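As a plain Python record for anyone scripting this (key names are descriptive, not necessarily the node's exact widget names):

# The commenter's Ultimate SD Upscale settings; adjust to taste.
usdu_settings = {
    "tile_width": 1024,
    "tile_height": 1024,
    "mask_blur": 16,
    "tile_padding": 32,
    "seam_fix_denoise": 1.0,
    "seam_fix_width": 64,
    "seam_fix_mask_blur": 16,
    "seam_fix_padding": 64,
}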
You're awesome! Thank you.🔥
kind of you❤
Great video! Thanks.
@@baheth3elmy16 thank you mate❤️
When using your workflow, all that happens for me is that the photo gets grainy. It looks like it was scanned from a magazine or printed on a color printer. There is no rendering of the skin texture, eyelashes, or wrinkles (like in your video), only incomprehensible noise (which can be added in Photoshop in 1 click).
I use the Flux model at 23GB / 4x_NMKD-Siax_200k / RTX 3090 / 64GB RAM
What could be the problem? (There are no errors)
The noise added to the image is because of the upscale model; you can change it to UltraSharp or something like that. But in Photoshop it's not as simple as you explained; I've been a Photoshop teacher for 10 years. This method upscales the image and, yes, doesn't re-create it. If you want recreation, you need another way of upscaling.
this is so silly because IT WORKS ! -
yes it is !
nice one!
@@andreh4859 thank you ✨
AMAZING THX!
you're welcome my friend😉✨
After two days without a solution to this problem,
I discovered your video.
Thank you. ❤❤
wow, I'm so pleased to hear that, thank you for sharing ❤😉 Subscribe to my channel if you like
amazing
@@motherindiafathersurf1568 thank you
Thanks. Finally a working upscaler for Flux. It's surprisingly difficult to find a tutorial about this. Is there a way to first generate the preview image with this workflow and then upscale it if I like it?
Thank you so much. I was testing many methods and finally found this one very useful. Yes, you can: just make a Preview Image node, link it to the upscaling node's output, and unlink the Save Image node. When you're happy with the result, you can right-click on the preview image and save it, or open it in a new tab and save it with your browser.
@@Jockerai But wouldn't that still upscale it? If the upscaler node is still there. I mean like generating the image without upscale because it's much faster.
@@Latetzkii I got it now. You can bypass (disable) the upscale node, upscale model, and load image nodes. When you are happy with the result, you can upscale it. You can open 2 tabs in your browser, one for generating and one for upscaling, to save more time.
I also tried to build such a workflow inside one window, but every time I wanted to trigger only the upscaler it had to start the generating process over. It seems it is difficult to run the queue just from the desired generated image and then only apply the upscaler. Separating the two workflows is the only solution I have currently, but I am still learning.
Instead of doing the manual calculation to determine the desired upscale size, couldn't you just use a node to measure the original size and a node to scale that number by the desired ratio?
I don't understand the purpose of using Ultimate SD Upscaler like this. Can't you just resize 4x and then move it to another KSampler with 1 step and 0.2 denoise?
@@p_p I tested it and it didn't work; you can test it yourself. From my tutorial you've got the main point, which is matching the canvas size with the final image dimensions; the rest is up to you.
wow! amazing results.
What's your PC specs?
@@naeemulhoque1777 thank you ✨
I have a 4090 GPU and 32GB RAM
@@Jockerai I used to have the same specs as you, but loading Flux dev ate up all 32GB of RAM and ComfyUI just crashed (not all the time, probably 4 out of 10 times). I was forced to upgrade to 64. Do you have a similar issue?
@@azzordiggle5143 upgraded to 64GB just today 😄, with 32GB it's okay too but it needs a bit more time to render
super!!!!!
@@akhavo thank you ✨
Hi, I have followed your exact method but upscaling a 1024x1024 image 4 times. 4070 Ti 12GB + 64GB RAM. However, ComfyUI shows 100%, but the terminal window shows 0%. I have all the models, VAE, upscalers, etc. Any ideas?
me too bro
Your system is crashing; try a smaller upscale like 2x, then upscale the new image. If that doesn't work, I don't have a clue.
Do you have any suggestions on how to apply this method to video? It would be amazing! I've experimented with Topaz, but it doesn't come close to this level of quality.
@@juanselobo unfortunately no🙏🏻
@@Jockerai thanks for answering tho! awesome videos BTW!
Thank you mate ✨
whenever I upscale, the colors change a little bit, like there are fewer dark tones and it looks washed out; any solution?
First time I hear that. What upscale model are you using?
Thank you brother for your info. Can you make a video on animations using the Flux model?
@@dishacollegeroadnashik5574 you're welcome. Maybe in the future bro 😉❤️
Does this work in Shakker AI ComfyUI mode???
I keep telling people on reddit this is the best way to upscale and people go nuts and tell me I am wrong without ever trying it lolll they are missing out
@@CrunchyBagpipe thank you bro they will know it🤌🏻😎
wow, that worked for me, thank you so much! How did you get this solution? 😲
you're welcome. I tested many ways; this one is the best for now and very useful 😉
I am getting no images produced or black renderings with no image. Just a black box.
Please watch version 2 of this tutorial; it can be more useful for you, with no issues: ruclips.net/video/T4SNWa-izxM/видео.html
However, this version is also great.
thanks for sharing... I'm eager to try this one but I can't access those 3 links =(
@@arcastillo101 I double checked and the links are working. Just click on them and you will have access to them.
I have a question. What's the difference between this and regular model scaling? From my own testing, the results seem pretty similar.
what do you mean regular models? regular models of what my friend?
Hmm. The normal upscaler already makes my ComfyUI quit (running out of RAM/VRAM, I don't know). If I set the parameters like yours, it's guaranteed to fail, right? My VRAM is only 8GB, RAM 32GB.
@@daryladhityahenry I don't think so, just check it. However, I'm recording a new video for running Flux locally even with 4GB VRAM. It's huge, don't miss it, stay tuned.
@@Jockerai Okay. Thanks. Of course will try it ( hopefully it worked ).
Nicee
@@rolarocka thanks ✨
The image tile size was set too big; I reduced it to 1024... now it's working
@@vincdivine If you don't have any problems with this canvas size, it's ok. But many users had problems with that
Is there a workflow that just does standalone upscales for any random image we load?
@@FrankEBailey this workflow can: load the image and upscale. Disable the rest of the workflow except the upscale nodes
I see in the comments that this works well for everyone, but for me there is no change in the output. I have everything working and installed, but the outcome remains the same quality. I tried different inputs; all give the same result. The image remains blurry.
The weird thing is, when I use your original image of the woman it works, but when I use mine in the exact same proportions, it doesn't... (besides a slight change in brightness)
@@idoshor4470 if your image is too blurry it's not gonna work for you, because this method doesn't re-create any area, it just upscales it. Try another image that has at least a little detail in it
You put the link for the 4x-NMKD-siax200k model, but not the one for the UltimateSDUpscale. I don't know where to find it or which folder to put it in. Thus I just get an error when I try to use your workflow.
It would be like putting a link to electricity. Or the internet🤷♂
@@user-rm2ot3hw8e the Ultimate Upscaler can be installed through the Manager, under Install Custom Nodes. Search for it and you will find it. Install and restart ComfyUI.
For the model, after downloading, put it in the main ComfyUI directory in the models > upscale_models folder, click Refresh in ComfyUI, and you're done.
morel morel morel
@@JayNL sure😃
Thank you I really liked your comment✨
I downloaded UltimateSDUpscaler but I still get the node-not-found error. And when I look at the Manager, I get an import-fail error. Please tell me what I should do 🙏
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'ae.sft' not in ['diffusion_pytorch_model.safetensors', 'taesd', 'taesdxl', 'taesd3', 'taef1']
UpscaleModelLoader:
- Value not in list: model_name: '4x_NMKD-Siax_200k.pth' not in []
I get an error like this, what should I do? I fixed the download error I mentioned in the other comment by deleting ultimatesdupscaler and reinstalling it, but I don't know what to do in this case. pls help me 😊
@@volkanelbir7474 just select ae.safetensors in the VAE node; if you can't, make sure you downloaded the VAE file and put it in the vae folder
@@Jockerai Thanks, actually I solved those problems after I posted the comment, but I encountered another problem and I really don't know what to do about it. It gives an error like this: ''UltimateSDUpscale Allocation on device'' and I cannot perform the operation. I could only do it twice, and I didn't get the result I wanted either; I guess it can't handle high-resolution pictures. The previous 2 runs I made took an extremely long time, maybe because my computer isn't powerful enough, I don't know, but it took an hour and a half to upscale 2 images 2x. I think I'm doing something wrong. I'm sorry if I'm keeping you busy
My computer specifications: RTX 3050 4GB and 16GB RAM. So it's not strong enough 😁
@@volkanelbir7474 yes, it's probably because of your system and low GPU. You can use cloud GPUs, they are wonderful, or use GGUF Q2 in your ComfyUI
Good find! But it introduces a lot of artifacts in uniform areas. It looks like it is over-sharpening the noise or something. After seeing it on my images, I can actually see the same artifacts on the skin in your video. I mistook them for additional detail at first.
Hopefully someone finds an even better way of doing it, that adds more detail and less artifacts! Great first video!
actually I didn't see any artifacts on the skin, even with extreme zooming. You can change your upscale model and test again.
Isn't this just the Siax ESRGAN model?
@@daniel99497 what do you mean?
Doesn't work with my pictures.
Makes it a little sharper and with a little more details.
But when i zoom in, the eyes are deformed.
The original picture doesn't have this.
My pics are half or 3/4 body shots and not portraits.
Maybe this is the problem.
@@DakrWingDuck yes maybe👌🏻👌🏻
I use Gigapixel
Looks good, but the links in the description are cut short so I can't follow them.
@@Hellbiker3000 because it is my first video RUclips doesn't allow me to share links; I'll send you the links here
I shortened the links; you can now copy them. Please check the description again
Kinda works in FORGE, but img2img in the new FORGE works only in "Resize and fill", and you can't use a multiplier in Ultimate SD Upscale (2048 max for tile). But I've tested with the original image resolution as the tile size, and there are no seams visible even with no seam fix enabled. Ultimate SD Upscale: Scale from image size = 3 / Tile width and Tile height = image size, change Linear to None. flux1-dev-bnb-nf4-v2 gives almost identical results to the video. A 12GB 3060 is more than enough for that model (a 12GB 3060 is excellent and cheap for FLUX or SDXL, even with 3 ControlNets), but the new Forge can handle it even with 6GB of VRAM.
Maybe you could make a video about the huge new FORGE update?
Folks, I have this error message
---
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'ae.sft' not in ['taesd', 'taesdxl', 'taesd3']
DualCLIPLoader:
- Value not in list: clip_name2: 't5xxl_fp16.safetensors' not in []
- Value not in list: clip_name1: 'clip_l.safetensors' not in []
UNETLoader:
- Value not in list: unet_name: 'flux1-dev.safetensors' not in []
LoadImage:
- Custom validation failed for node: image - Invalid image file: Original.png
---
also this ERROR
---
Prompt outputs failed validation
UNETLoader:
- Required input is missing: unet_name
DualCLIPLoader:
- Required input is missing: clip_name2
- Required input is missing: clip_name1
---
Any ideas what I missed?
Make sure to select all models in all nodes that need a model, and also make sure all model files are in their proper directories. Your issue will be solved.
Hi, when I enter ComfyUI it tells me this:
Warning: Missing Node Types
When loading the graph, the following node types were not found:
UltimateSDUpscale
No selected item
Nodes that have failed to load will show as red on the graph.
And when I click to upscale it tells me this:
Cannot execute because a node is missing the class_type property.: Node ID '#39'
Can you help me?
@@josephrguezglez8670 English please my friend
@@Jockerai Hello, when I enter comfyui it tells me this Warning: Missing Node Types When loading the graph, the following node types were not found: UltimateSDUpscale No selected item Nodes that have failed to load will show as red on the graph.
and when I click to scale it tells me this Cannot execute because a node is missing the class_type property.: Node ID '#39' can you help me
How can I fix the missing node? ultimatesdupscale
@@josephrguezglez8670 watch this video I covered that for you :
ruclips.net/video/rqP7t4P8yWA/видео.html
@@josephrguezglez8670 watch this video in my channel: ruclips.net/video/rqP7t4P8yWA/видео.html
I know I'm being pedantic but an X3 upscale actually makes the image 9 times bigger.
Apologies 😂
yes, you are right, but the height is 3 times bigger, same as the width; that was the point 😉
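Both are right: the factor applies per axis, so the pixel count grows with its square. A quick check in Python:

w, h, scale = 1024, 1024, 3
print(((w * scale) * (h * scale)) // (w * h))  # 9: a 3x upscale has 9x the pixels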
This method is not working well for an AMD GPU; it takes 4 mins, but with an Nvidia GPU it takes only 80 seconds. Not sure why. Also, the driver times out and freezes on an AMD GPU!
@@arunhk3 maybe with different AMD GPUs you can achieve better results. But Nvidia GPUs are definitely better for AI image and video generation.
What is hard for me to understand is that Flux is capable of 2-megapixel generation natively. As a one-piece upscale of the image (3 times the original), the total even exceeds 9 megapixels. How come the model is capable of iterating over 9 megapixels in 1 step?
Flux actually divides the original image into several sections and performs the upscaling on each section before combining them at the end. However, in this video I mentioned that I adjusted the canvas size to the final size, which is three times the original image. This adjustment helps connect the pieces together without creating any destroyed edges.
Or it may be that some models like Flux use specific architectures, such as GANs (Generative Adversarial Networks) or diffusion models, which are capable of generating high-resolution images and then intelligently upscaling them. These models are able to preserve the details and quality of the image during the upscaling process.
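Strictly speaking, the tiling described here is done by the Ultimate SD Upscale node rather than by Flux itself. A minimal Python sketch of the general idea, without the padding and seam blending the real node adds:

from PIL import Image

def upscale_tiled(img, upscale_fn, tile=1024, scale=3):
    # Split the image into tiles, upscale each with upscale_fn
    # (any callable returning a tile scaled by `scale`), then stitch.
    w, h = img.size
    out = Image.new("RGB", (w * scale, h * scale))
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            box = (x, y, min(x + tile, w), min(y + tile, h))
            out.paste(upscale_fn(img.crop(box)), (x * scale, y * scale))
    return out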
Thank you. Interesting idea... However, getting an error :(

File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\repositories\ultimate_sd_upscale\scripts\ultimate-upscale.py", line 180, in linear_process
    processed = processing.process_images(p)
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\modules\processing.py", line 165, in process_images
    positive_cropped = crop_cond(p.positive, crop_region, p.init_size, init_image.size, tile_size)
File "E:\ComfyUI\ComfyUI\custom_nodes\ComfyUI_UltimateSDUpscale\utils.py", line 452, in crop_cond
    for emb, x in cond:
ValueError: too many values to unpack (expected 2)

This is as soon as it is about to save the image... Thanks
@@MannyGonzalez try this: remove the upscale node and the save node, create them again, and test again
How is it with real photos?
@@Andrey.Alexandru it depends; sometimes you will get great quality if the image is not too blurry. This method doesn't re-create the original image like the hi-res fix in Stable Diffusion. If you want recreation, you need more steps and the default tile height and tile width.
Why does my RTX 4070 Ti freeze and get stuck with this workflow?
@@karatrash21 you have to look at the cmd window, check what happened there, and tell me
Try to use flux1-dev-bnb-nf4-v2 model - or new GGUF flux.
@@Gmlt3000 I tried this solution with both GGUF and NF4, but it's always too slow and doesn't upscale
You have to set the "model weight" to fp8, or it defaults to fp16, which saturates the VRAM and stalls the generation
Based on the tests I've conducted, this method does not yield a different outcome than simply upscaling using the 4x_NMKD-Siax-200K model. I suggest doing some comparisons. Has anyone achieved a different result than mine?
Yup, that's almost the same as doing an upscale by model, I've tried. And if you want the best model to do so, it's 4xNomos8kHAT-L. I've tried them all (around 200)
Nice, but super slow method.
@@DanielScofano if you are trying to upscale big images, yes, but normal images take 1 min for a 2x upscale with a 3060 12GB
I can't open the links.
I updated the links. Please download again, it works.
@@Jockerai thank u. but my Mac can't handle it.
@@yancysnookie yes, maybe the Mac's GPU can't handle it, unfortunately 😕
why all the purple nodes??
@@billbuyers8683 this is a workflow for generating an image and upscaling it. When you don't want to generate and just want to upscale an image, you need to bypass the other nodes (which makes them purple)
I have this error when I launch Queue Prompt:
Value not in list: unet_name: 'flux1-dev.safetensors' not in ['flux1-dev.sft', 'flux1-schnell.sft']
Do you know what it could be? How do I fix this?
Thanks!
Probably your UNet files flux1-dev or schnell don't have the proper format. Make sure they have .safetensors at the end of their names, or if you didn't download them, tell me and I'll send you the links
@@Jockerai Thanks for your answer! I only have flux1-dev.sft and flux1-schnell.sft in the list of models. Can you send me this model or a link?
@@PeacefulVibes24.7 just rename the last part: "flux1-dev.sft" to "flux1-dev.safetensors"; you don't need to download it again.
The file is located in your main ComfyUI directory, in the models/unet folder
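If you'd rather do the rename from a script, a minimal sketch (the path assumes the default ComfyUI layout):

import os
os.rename("models/unet/flux1-dev.sft", "models/unet/flux1-dev.safetensors")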
@@Jockerai It all works. Thank you so much!
i'm getting heaps of artifacts 🤔
It's taking ages here, don't know why
@@TheUntouchedd maybe your system isn't good enough, or your original image is too big, or you're upscaling more than 2 or 3 times, which takes more time
Meh. The settings are awful for this type of image. And the fact that you use a full Flux dev model for such low quality is frustrating. The plastic look is achievable in trimmed SD.
This method is bad. I've tested your exact method vs. using the upscale model on its own, and the results are almost the same. I say almost because the 1 step and 0.2 denoise actually blur the image extremely slightly, making the details less harsh, which ends up looking better. I also tested with just a KSampler with the same parameters and got exactly the same results as Ultimate SD Upscale. Ultimate SD Upscale is meant to split the image into tiles to reduce VRAM usage, running the KSampler process on each tile, so by making the tile size the same as the image you end up ruining its entire purpose.
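To put numbers on that last point, the tile count is just ceil(w/tile_w) * ceil(h/tile_h):

import math

def tile_count(w, h, tile_w, tile_h):
    # How many tiles Ultimate SD Upscale-style tiling would process.
    return math.ceil(w / tile_w) * math.ceil(h / tile_h)

print(tile_count(3072, 3072, 1024, 1024))  # 9 tiles, each small enough for VRAM
print(tile_count(3072, 3072, 3072, 3072))  # 1 tile: the whole image in one pass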
@@Hycil2023 the image in this video shows everything about this method. But I have to say this method has a new version, V2, which I'm going to share this week.
@Jockerai I tested everything in ComfyUI before making my comment, just to make sure. Model upscalers often have a certain look to them, like unreasonably sharp, quirky details, which you can see when you zoomed into her face to view the skin pores. Honestly, model upscalers on their own are much better than people think.
You can get faster results in seconds with free online resizers. This is too slow, and that's on a 12GB graphics card with 32GB of RAM.
yea, it's sharper, but still plastic and fake AF.
I need an AI artist who knows LoRA training. I will pay. Is anyone here able to help me?
@@digitaldepot23 You can message me on Telegram; I can connect you with my students to help with creating LoRAs. Btw, I have a video on my channel about that.
You can do the same with better results… and specifically FASTER with the free software Upscayl. It also uses the same models or upscale models of your choice.
There are more free online tools, but using them is not our goal, because the free version of any online software is not stable. Running locally is the safest option.
@@Jockerai BTW, upscayl runs locally, very easy to install.
@@Jockerai Upscayl runs locally. You download and install it, along with the same exact upscale models you use in Forge, A1111, or ComfyUI. You simply want to continue BSing people.
I tried it and it's really bad; I did many tries but always got poor results. The Freepik Magnific upscale is better.
@@baag78 yes, Freepik is also a good choice and much better than Flux because it’s faster.
Still plastic faces. This example sucks. 4x-NMKD-siax-200k is absolute trash!
Would say it works like magic.
got prompt
loaded completely 10207.096051406861 4777.53759765625 True
Canva size: 1536x1536
Image size: 512x512
Scale factor: 3
Upscaling iteration 1 with scale factor 3
Tile size: 1536x1536
Tiles amount: 1
Grid: 1x1
Redraw enabled: True
Seams fix mode: NONE
100%|████████████████████| 20/20 [00:44
@@onezen yeah it works really great