SUPIR: Best Stable Diffusion Super Resolution Upscaler + full workflow.
- Published: 31 May 2024
- Install and build a workflow for SUPIR, the HOT new Stable Diffusion super-res upscaler that destroys every other upscaler (again). Or does it? / discord
🤗Help keep this channel brutally honest: ko-fi.com/stephantual
Workflow: flowt.ai/community/supir-v2-p...
Do not buy workflows: it's a grift that hurts creators like myself (and your wallet)
▬ LINKS REFERENCED ▬▬▬▬▬▬▬▬▬▬▬▬
Discord and help: tinyurl.com/URSIUM
Kijai's repo: github.com/kijai/ComfyUI-SUPI...
Installing xformers: pip install -U xformers --no-dependencies (within your python env, evidently)
Downloading the supir models: huggingface.co/camenduru/SUPI...
▬ TIMESTAMPS ▬▬▬▬▬▬▬▬▬▬▬▬
00:00 - Introduction
00:22 - Cloning SUPIR wrapper nodes
00:47 - Installing xFormers
02:04 - Downloading SUPIR models
02:47 - Building the workflow
03:11 - Adding moondream LLAVA model
04:01 - Going through SUPIR settings and recommendations
06:42 - Finalizing the workflow
07:11 - First results
08:10 - Improving the workflow and lowering vRam requirements
09:23 - Testing all the things
09:50 - Improving results
10:23 - More results
10:53 - Comments vs Magnific
11:30 - Conclusion
▬ SOCIALS/CONTACT/HIRE ▬▬▬▬▬▬▬▬▬▬▬▬
Discord: / discord
All Socials: linktr.ee/stephantual
Hire Actual Aliens: www.ursium.ai/
Update: 3/29/24 (yes it moves fast) - it's been updated again - well, about 10+ times actually. Because VRAM is still a concern, I've made it available as a one-click app at tinyurl.com/supirv2, and updated the downloadable workflow to reflect the addition of better Lightning support as well as LoRAs. Cheers! 👽
Not really useful as an upscaler if with 24GB it can't even do 4K. More like a restorer really. I'll pass for now.
@Homopolitan_ai Yeah, it's super violent resource-wise. 👽
Can it be run on a 12GB VRAM card? Even with limited output scaling?
@moebiusSurfing With teeny-tiny images, yeah, I think you'll be good. It's really hard to gauge though, because Comfy doesn't always allocate VRAM in expected ways.
Yup @Homopolitan_ai
Love your presentation style, thanks
Forgive me if I'm too over-excited but I love every node in this workflow, all a bonus. The switches, DeJPG Downsize, Query, text watermark, and of course the SUPIR Upscaler, all elegantly in one. Sadly I work mostly with low-quality source images almost exclusively. But I'm now getting the best results on the poorest quality source images so far that over the years of trying "this and that". This has made my day! Thank you.
Great to hear! You're going to like the next video - i've managed to upscale a tiny , TINY video from my first digicam (176 px wide), using ADLCM+SD15+the new modelscope nodes.
Thank you for your tutorial! It's really interesting and useful
Insane results!
Good video, good speed. Thanks!
I like that transformer kid in the corner great video
Thanks again, great content
Thanks a lot!
Hey Stephan, I'm stumped. I keep getting the error (When loading the graph, the following node types were not found: SUPIR_Upscale....) when loading your workflow, as well as a few other similar workflows around SUPIR. I've installed the requirements.txt file into the SUPIR custom node folder and nothing... can you offer any advice?
You mean the node doesn't appear when you have installed it? Make sure you have git pulled the node into the right directory, I've done that too :) It should automatically pick it up - but only after a Comfy restart of course :) 👽
Same @stephantual
I think you need to run "python_embeded\python.exe -m pip install -r ComfyUI\custom_nodes\ComfyUI-SUPIR\requirements.txt" in your main folder if you're using the portable version.
Hello Stephan! Thx for sharing your knowledge. I want to ask: does this work with Lightning models?
That's a good question; sadly the answer is not yet, probably some incompatibility with the internal ControlNet. I tried, and while it didn't crash, the image came back totally cooked no matter what settings I used :)
WOW Stephan, this is awesome! Thanks for the hard work! May I ask a simple question regarding outpainting? Do you know if there is any way to do outpainting using "inpaint and LaMa" like we did in the A1111 webui? All I can find is a preprocessor called Lama Preprocessor for Comfy, but that node can only repaint the whole picture and changes the final output's color for some weird reason. Is there a better workflow to achieve outpainting that doesn't change the color and keeps the consistency? Thanks in advance dude! You are a legend!
It's definitely possible, but I don't think youtube comments would let me type enough words - it's quite involved but not that hard. Hop on the discord! 👽
Is xFormers necessary to use? It keeps breaking my ComfyUI installation.
Great video, thanks for the effort to research & put it together. A question, please. My installation won't load the SUPIR custom node, getting " No module named 'omegaconf' ". Any idea where this came from for you?
Pytorch is probably out of date - try updating the whole of comfy + dependencies.
ty
My ComfyUI shell shows: "Using pytorch cross attention" - no idea how to switch it to xformers... Any help, pretty please?
mmm, 0:51 is how you install xformers - but in the latest SUPIR update (yes, it's been updated twice since the vid), xformers are no longer required. They're nice to have though :) 👽
It's amazing! Any idea on how to process a folder of images in batch? I want to use it to improve the images of a dataset for DreamBooth.
It will work in batch but you have to be SUPER careful with the RAM. There's a batch setting in the new update of the node, and some advice by Kijai on the Discord 👽 Thanks for watching! And it's a good point you make.
You can use the Load Image From Path node, there's also VHS_LoadImages node but this requires all images to be the same resolution.
@stephantual @svenvarg6913 Thanks for answering, both of you. I tried the batch setting and it worked! It's going to improve my datasets for sure; even with the default prompt it works amazingly well.
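The batch advice above can be sketched in plain Python: gather image paths from a dataset folder and split them into small chunks before feeding them through (a minimal sketch; folder names and batch sizes are placeholders, not part of the SUPIR node API):

```python
from pathlib import Path

# File extensions to treat as images (adjust to your dataset).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def list_images(folder):
    """Return image paths in `folder`, sorted for reproducible ordering."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.suffix.lower() in IMAGE_EXTS)

def batches(items, size):
    """Yield successive chunks of at most `size` items.
    Small batches keep VRAM usage in check, per the comment above."""
    for i in range(0, len(items), size):
        yield items[i:i + size]
```

Keeping batches small (even size 1) is the safe default given how unpredictably Comfy allocates VRAM.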
In the [Load Upscale Model] node, it shows that I am missing the [1xDeJPG_OmniSR.pth] model file. Can you tell me where to download it? Thank you very much.
You can use any upscaler/artifact remover here, just grab it from openmodeldb.info/. The one you're looking for (which doesn't upscale but removes compression artifacts) is at models/1x-DeJPG-OmniSR
Hey, I'm getting this error from the interrogator: "ImportError: cannot import name 'ToImage' from 'torchvision.transforms.v2'"
Any idea how to fix please?
Really hard to tell without knowing the environment, but that sounds like your PyTorch is old, or torchvision is old, or both. It can be tricky as some nodes have different dependencies; the best bet is to run update_comfyui_and_python_dependencies.bat (assuming you're on Windows) in this case. Good luck!
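Before running a full update, it can be worth checking which versions you actually have. A minimal sketch using only the standard library (the helper names are mine, and any specific minimum version you compare against is an assumption - check the torchvision changelog for when `ToImage` landed):

```python
from importlib.metadata import version, PackageNotFoundError

def version_tuple(v):
    """Parse a version string like '2.1.0+cu121' into (2, 1, 0),
    ignoring local suffixes and non-numeric tails."""
    core = v.split("+")[0]
    parts = []
    for piece in core.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def meets_minimum(package, minimum):
    """True if `package` is installed at version >= `minimum`."""
    try:
        installed = version(package)
    except PackageNotFoundError:
        return False
    return version_tuple(installed) >= version_tuple(minimum)
```

For example, `meets_minimum("torchvision", "0.16")` would tell you whether an update is even needed before you touch the environment.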
I loved the music you played at min 9:25, and the woman was a little more Asian and came out a little Latin😀
Haha yeah the real version of her wasn't very happy with the video 😅👽
How do you show CPU, GPU, RAM and VRAM like your ComfyUI?
Error occurred when executing SUPIR_Upscale:
'MemoryEfficientAttnBlock' object has no attribute 'group_norm'
😥
Join us at tinyurl.com/ursium, or if you don't have Discord, post the workflow + full trace on the SUPIR GitHub issues page 👽
Anytime I try to install SUPIR in comfyui I just get the same error, (import failed.)
Same
Hello there! 🛸 It's extremely difficult for me to give technical support over YouTube, but if you join the Discord I would be more than happy to have a look! Usually this is due to either a configuration error (directories) or it could be linked to PyTorch. Try installing the requirements using pip install -r requirements.txt within the SUPIR custom nodes folder. 👽
Yes, even an RTX 3060 Ti 8GB can handle this workflow for SUPIR. Thank you, you and Kijai.
Wasn't 12GB the minimum VRAM needed to use this? Maybe in your case it's sharing with system memory, so it would be extremely slow?
--lowvram?
@highcollector This workflow uses a 4x upscale, but I use 2x, and I use the WD14 tagger and a custom prompt instead. 14 minutes for a 720x898 px source image.
@highcollector I think SUPIR is good for upscaling small images, but for medium ones it's not quite as good if I compare it with StableSR upscale.
Thanks for confirming that; short of switching GPUs for testing, I have no way to check 😂👽👍
Do you think I can use it for making animation?
I'll be honest - I've seen ONE person use it for that, as a test, on an A6000, with a very short vid. It's a better strategy to use AnimateDiff + IPAdapter + controlGIF CN, then pass the lot to Ultimate SD Upscaler (the CN will maintain structure, the adapter the style) - I've seen this done on Zeroscope content with great results! Jump on my Discord, I'll be posting workflows demoing that in the next 24h (it's bed time here :) Cheers! 👽
Great job, thx for this video. I have an error message: omegaconf missing (requirements.txt line 8).
If anyone knows what to do, thanks in advance.
Otherwise, I will delete the files before starting again.
Sure thing! Try pip install -r requirements.txt from within the SUPIR custom node folder. It's very hard for me to type on YouTube, especially while I'm on mobile, but feel free to join our Discord where there is a good community of people that can help! tinyurl.com/URSIUM. Cheers! I'd love for you to sort it out because it's a great tool! 👽👽👽
What exactly does "this item is non-commercial, so please be very careful and considerate" mean?
It means that you cannot use this on commercial projects, because the license says so :( Sadly, it's becoming increasingly common with projects, but it's not the fault of the node developers - it's the models themselves that are under certain licensing terms. As for 'considerate', it means "don't abuse the system just because no one will go and check" :) 👽
@@stephantual this is actually an interesting topic for discussion where exactly the boundary between commercial and non-commercial use is. For example, I have a side project where I need to make X portraits of people instead of a photo shoot or other people's photos from google images and the project is 100% public good kind. But let's imagine that my supporters gave me 1 million dollars through patreon or some crowdfunding platform. Would that be a commercial use?
Keep getting this error, did everything like you. Not sure what "usually" goes in the KSampler.
Error occurred when executing SUPIR_Upscale:
Sizes of tensors must match except in dimension 1. Expected size 376 but got size 375 for tensor number 1 in the list.
Also, I'm getting more out of the interrogator with this prompt: describe the image as if I were blind, tell me everything about it, explain it in great detail, my life depends on this
SUPIR got updated 7 times since this video. I'd recommend giving it another try, lowering the steps, and if you are 100% sure you found a bug, posting in the GitHub help section: github.com/kijai/ComfyUI-SUPIR/issues
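Tensor-size mismatches like the one above often come from input dimensions that don't divide cleanly through a model's downsampling stages. A common workaround (a hedged sketch of the general technique, not SUPIR's own preprocessing) is to round the image dimensions up to a safe multiple before upscaling; the multiple of 64 here is an assumption, adjust it to the model:

```python
def round_to_multiple(value, multiple=64, up=True):
    """Round a dimension to the nearest multiple (up by default),
    so repeated halving in the encoder never produces off-by-one sizes."""
    if up:
        return ((value + multiple - 1) // multiple) * multiple
    return (value // multiple) * multiple

def safe_size(width, height, multiple=64):
    """Return (w, h) rounded up so both sides divide cleanly."""
    return (round_to_multiple(width, multiple),
            round_to_multiple(height, multiple))
```

Resizing (or padding) the source to `safe_size(w, h)` before the SUPIR node is one way to sidestep "expected size 376 but got size 375"-style errors.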
How do I uninstall Comfy? I guess I don't have xformers.
You mean uninstalling xformers? pip uninstall plus the package name, from within your environment. To completely uninstall Comfy, just wipe the folder clean if you're using portable (it's self-contained). But 99% of the time it's not needed, as you can recover just about anything using the terminal. Plus, it's sad if you remove Comfy 😢
@stephantual I mean if I have some problems with Comfy, I will uninstall it to reinstall again...
SUPIR Upscaler not showing up. Noob here, what am I missing?
If you mean 'it's not on manager' - it might not be like most things I demo here (I haven't checked - I don't use manager). In my video there's an explanation on how to install it via github. If you need more help don't hesitate to join the discord at tinyurl.com/URSIUM 👽
The ComfyUI node should not exist, because it is a license violation in my opinion (GPL). SUPIR is non-commercial only, so it is not really useful without buying a proper license from them.
4:10 no safetensors models? Crazy in 2024.
That's a fair comment. Two points:
a) most if not all of the ComfyUI nodes are 'wrappers' for algos developed by third parties, as I'm sure you know. If they don't provide the safetensors, they can't be included.
b) I agree safety is a huge concern at this point. It would be trivial to write or modify a node that embeds entire payloads, and if you look at the comments or the reddit, you'll see a lot of people don't watch the videos, they just install the workflow like it was an app, which is a disaster waiting to happen.
I'm doing my best to educate, and I hope that as a community we're going to bring up the security standards higher.
Bruh! Shouldn't have spent hours getting yesterday's workflow LMAO
Yeah, things move FAST. The switcharoo happened to me mid-recording when Kijai announced the new one, so I started from scratch haha :) It's okay, it's part of the fun! 👽
Thank you! I can't seem to get the node working. I guess the problem is "No module named 'omegaconf'". I have spent 2 hours trying to install omegafuckinconf with no luck. However, you are a good human and your tutorials are gold.
Sorry, that's probably my bad. The node was updated again this morning, and it looks like the requirements are, well, required now :) - just run pip install -r requirements.txt inside the node's folder (and if you're on portable, you might need to do something like going to python_embeded, then python.exe -m pip install -r ..\ComfyUI\custom_nodes\ComfyUI-SUPIR\requirements.txt). I hope this helps! 👽
@stephantual Thank you, I get a syntax error in python.exe. I tried (-m pip install -r G:\comfy 2\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-SUPIR\requirements.txt). Ignoring me would make your life so much better :D
What's your conclusion and answer to "the HOT new Stable Diffusion super-res upscaler that destroys every other upscaler (again). Or does it?"? As always, thank you for sharing this walkthrough.
The title hints at the answer - there's no such thing, despite what people would like to say. It depends on the job at hand. I feel CCSR is tighter, more akin to a pixel upscaler; it's also slow. I'm using SIAX models or Real-ESRGAN or foolhardy depending on the need, when I need to 'go fast' or have an intermediary step to complete with something like Zeroscope. For photo upscales, I'm a sucker for 1:1 matches, so I'm using Topaz. For AI-generated video upscales, something like a chain of AD LCM + IPAdapter + Ultimate Upscale. Really depends on the job!
It does it. Trust me.
@stephantual Thanks for your answer.
Non-commercial is annoying. I'd like to use this as a last pass on my freelance work.
Yeah I agree. It's a hot button topic right now for sure. Cascade is non-com, now this, tons of things are following suit. Very sad imho :( 👽
@stephantual Hey, btw, do you do consulting? Would be interested in having your input on a Comfy workflow I'm working on.
Okay, with 16GB RAM it just crashes without saying anything :(
Try lowering the size of the pixels going in by using a downscale first, start at 512x512 and work your way up. Kijai might have reverted a PR.
I tried even a 256x256 image, but it crashes while loading models; probably 16GB is just not enough to fit both the SDXL and SUPIR models @stephantual
Yes, that's likely. Are you using tiled VAEs? It can help. Also you can reduce the size of the encoder tiles. It may introduce lines, but it does make the VRAM usage more reasonable, a bit like what you encounter with Ultimate SD Upscaler. @user-cz3io5tg5l
@@stephantual yea I did... gonna do a bit of upgrading soon
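The downscale-first advice in this thread can also be done outside ComfyUI before the images ever reach the workflow. A minimal sketch with Pillow (the function name and the 512 px target are my choices, not anything from the SUPIR node), shrinking the longest side while keeping aspect ratio:

```python
from PIL import Image

def downscale_longest_side(img, target=512):
    """Shrink so the longest side equals `target`, keeping aspect ratio.
    Images already at or below `target` are returned unchanged."""
    w, h = img.size
    longest = max(w, h)
    if longest <= target:
        return img
    scale = target / longest
    # LANCZOS gives good quality for downscaling.
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)
```

Starting at 512 px and working upward, as suggested above, is a cheap way to find the largest input your VRAM can take.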
This will not work on Mac, I guess (no xformers).
It's getting updated as we speak and Kijai said he would provide an option. Keep an eye on the repo 👽
how to run run_nvidia_gpu.bat?
I get this error when using SUPIR with moondream, please help me solve it, thank you
Error occurred when executing Moondream Interrogator:
You have to trust remote code to use this node!
File "G:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "G:\ComfyUI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Hangover-Moondream\ho_moondream.py", line 74, in interrogate
raise ValueError("You have to trust remote code to use this node!")