Your content is not only informative but entertaining as well. You are the best Olivio!
Congratulations on 200K subscribers! Well deserved, keep it goin!
Thank you :)
UPDATE: THE SCRIPT I SHOW IS NOT THE HIGHRES NODE!!! This is the pack with the Highres Node: github.com/jags111/efficiency-nodes-comfyui
DOWNLOAD my Workflows: drive.google.com/file/d/1GNDM5N26j-Ngkj7K6agp4iefdEvygKft/view?usp=sharing
#### Links from my Video ####
Highres Fix Node: gist.github.com/laksjdjf/487a28ceda7f0853094933d2e138e3c6/#file-kohya_hiresfix-py
ComfyUI Workflows: comfyworkflows.com/
Reddit Post: www.reddit.com/r/StableDiffusion/comments/1805v7j/after_many_allnighters_i_made_a_way_to_run/
👋
Many thanks :)
So, what's the kohya_hiresfix doing then, exactly? ^^
I guess I am confused now. Are you saying don't follow your instructions in the first part of the video, but follow the install instructions on the updated link? hmmmm
you should pin this message in the comments
You don't need to copy the Kohya Highres fix script, it's already built into Comfy. Also a little advice: running Facefix as the last step kinda makes it look like it's pasted onto the image. So I always run the Facefix before the last upscaling pass; that makes it blend better with the rest of the image.
cool.
Do you think you will edit the workflow to a better setup?
This is AMAZING, you are the best ComfyUI teacher on all of YouTube. Easy to understand + a workflow included. Thank you 🍻🍕
I can see why you have already passed 200k Olivio. Great video. Love the enthusiasm (and the shirts)!!
Wow...just when I think I've got a comfyui workflow that gets amazing results, You come out with something that literally blows it away in comparison. Great video and thank you for your contribution to the community!!!!
Olivio, you convinced me to install ComfyUI - thank you for sharing
Great video, but you've mixed a couple of things up. The hiresfix node that you installed in the beginning is not the highresfix script node. And they do 2 separate things.
Are you sure? Because Grockster sent me a screenshot and that link to the script I show in the video, and that was exactly that node. Plus it's the only node I have that says "highres fix".
@@OlivioSarikas there's one that came with the Comfy update called Deep Shrink, and there's the hires fix like in the video. They do different things, but I think the one in the video is better.
@@OlivioSarikas , it's called "Hires" and has a title "Apply Kohya's HiresFix"
Where do you get it then? Please.
@@KINGLIFERISM It's called "hires" in the search function after you add the code at the start of the video. You add it to the model like a LoRA or IPA adapter.
Hi Olivio. I'm wondering why you're using the complete LCM model as a LoRA? There are two LCM LoRAs for SD1.5 and SDXL, of 130 MB and 380 MB respectively, which produce exactly the same results. Also, keep in mind that you need to use LCM as the sampler and sgm_uniform as the scheduler; the CFG should be between 1 and 2 (around 1.5 to 1.8 works better), and the steps should be between 2 and 8. Greetings, and keep up these excellent videos.
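For anyone who wants those numbers in one place, here is a minimal sketch of the sampler settings described in the comment above, written as a plain Python dict using the field names of ComfyUI's KSampler node. The concrete values are just the ranges mentioned in the comment, not settings confirmed in the video.

```python
# Hypothetical sketch of the LCM settings mentioned above, expressed as the
# inputs of a ComfyUI KSampler node. Values are taken from the comment's
# suggested ranges, not from the video's workflow.
lcm_sampler_settings = {
    "sampler_name": "lcm",       # use the LCM sampler instead of euler/dpm++
    "scheduler": "sgm_uniform",  # the scheduler that pairs with LCM
    "cfg": 1.5,                  # keep CFG between 1 and 2 (1.5-1.8 tends to work)
    "steps": 6,                  # anywhere from 2 to 8 steps
    "denoise": 1.0,              # full denoise for a plain first pass
}

if __name__ == "__main__":
    for name, value in lcm_sampler_settings.items():
        print(f"{name}: {value}")
```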
Why does my text box node show as missing? I have downloaded the Chibi node as it says.
this seems so powerful for a newbie like me. Awesome, thanks for sharing :D
Tremendous video. Clear, concise, helpful and entertaining. Would be great to see more videos on rendering multiple, different figures in a single image. Most seem to show just a single figure. Also don't forget A1111 :) Keep up the great work.
Thx for the video. But I don't quite understand why you'd run a second KSampler with drastically different parameters, because when I used your double-KSampler method, the second output image changed so much from the first one that I don't think the added details are worth it (I think this could be caused by my choice of checkpoint, sampler, or denoise strength. If that's the case please let me know). Why wouldn't you use the HiRes-fix script on the first KSampler? Or, maybe in your approach, consistency is not the top priority? Please kindly enlighten me.
don't forget A1111! thanks for the video
Bro, nice makeup you're wearing for the media presentation. (Or is that a filter?)
Dear Olivio, I couldn’t find the URL of the highres fix py script you used at the start. Can you add that into the description as well? Thank you so much for the brilliant work you have done, my friend ❤
He didn't use the Script he's installing at the start. He's using the Highres Node from Jags Efficiency Nodes.
@@denacejones2401 Thank you Denace for the headsup. Got confused for a second there :)
My grandma couldn’t find this video.
Finally you are at the ComfyUI level I have been waiting for soooooo long ⏳
Many thanks, Olivio!
Is it just me or do the filters on the comfyworkflows website not work? I filtered for SD1.5 and it's showing loads of SDXL etc.
Nice video, but I didn't get how downloading a file into the ComfyUI folder actually has an effect. Could someone explain? :) Kind regards
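The short answer to that question: on startup ComfyUI imports every Python file it finds in its custom_nodes folder and registers whatever that file lists in NODE_CLASS_MAPPINGS, which is why simply saving the script there makes a new node appear in the search. Below is a minimal, hypothetical sketch of what such a file looks like; the class and node names are made up for illustration and this is not the kohya_hiresfix script itself.

```python
# Minimal sketch of a ComfyUI custom node file. Saved into
# ComfyUI/custom_nodes/, the class below would show up in the node search.
# Names are hypothetical; this is NOT the kohya_hiresfix script.

class ExamplePassThrough:
    @classmethod
    def INPUT_TYPES(cls):
        # Declares the sockets the node exposes in the graph editor.
        return {"required": {"model": ("MODEL",)}}

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "apply"          # name of the method ComfyUI calls
    CATEGORY = "examples"

    def apply(self, model):
        # A real node (like the hires-fix script) would patch the model here;
        # this sketch just passes it through unchanged.
        return (model,)

# ComfyUI reads these dicts to know which classes to register as nodes
# and what display names to show in the search.
NODE_CLASS_MAPPINGS = {"ExamplePassThrough": ExamplePassThrough}
NODE_DISPLAY_NAME_MAPPINGS = {"ExamplePassThrough": "Example Pass-Through"}
```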
What is the model you use for the AnimateDiff loader, Olivio? I keep getting this error: Prompt outputs failed validation
ADE_AnimateDiffLoaderWithContext:
- Required input is missing: model_name
Do we still need the script since the node is also in the efficiency pack?
You don't do videos about Automatic1111 anymore? :(
Thanks!! My Windows didn't change it to .py directly, but it let me save it as such from Notepad++.
I've tried this like 9x and I can never get the highresfix node to work. I have double-checked the code and the file type. Anyone else having similar issues?
Can someone share what the correct file name is? There is what he says in the video, what he has in his own directory, and what the code is labeled under.
what is the difference between this and normal high res?
I did something wrong: my highres-fix stays red when I start the image! I don't know what I did wrong, any idea? I put in the script but I am not sure how you should write it; in yours it is highres-fix and you show highresfix, but neither works for me.
So are you done with A1111?
Error occurred when executing UltimateSDUpscale:
CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Could you do a tutorial on how to combine this with a hand fix?
Getting this error, anyone else - "Error occurred when executing KSampler (Efficient):
name 'calc_cond_batch_original_tiled_diffusion_a0820916' is not defined"
Came for the Affinity. Stayed for the AI. Always loved your channel. I have played with clipdrop and want to download my version of SD. Do you have a "starting with SD" video?
Instructions unclear, grandma became a top level hacker and now she laughs at my coding attempts
What I found was that simply changing the CFG in the 1st KSampler improves the image dramatically - I use 1.5 instead of 4 with LCM - and then denoise at around 0.78. You can almost skip the next step.
Thank you! 🐐
Nice vid. I've been comparing against using the advanced KSampler nodes and doing part of the rendering on the first and then the latent upscale on the 2nd. The results are somewhat comparable with hires fix being a bit faster.
It will come in really useful for mixing & matching SDXL/SD1.5 or different models though - a nice addition to the ComfyUI arsenal. Thanks!
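For anyone curious what that advanced-KSampler comparison setup looks like, here is a rough sketch of one way to split the steps, written as plain Python data using the field names of ComfyUI's KSamplerAdvanced node. The split point and step counts are illustrative assumptions, not values taken from the video or the comment.

```python
# Hypothetical sketch of the two-pass approach described above: render part of
# the steps at base resolution, upscale the latent in between, then finish the
# remaining steps on a second advanced sampler. All numbers are illustrative.
TOTAL_STEPS = 20
SWITCH_AT = 12  # assumed point where the first pass hands over to the second

first_pass = {
    "class_type": "KSamplerAdvanced",
    "add_noise": "enable",
    "start_at_step": 0,
    "end_at_step": SWITCH_AT,
    "return_with_leftover_noise": "enable",   # keep noise so pass two can continue
}

# ...a LatentUpscale node (e.g. 1.5x) would sit between the two passes...

second_pass = {
    "class_type": "KSamplerAdvanced",
    "add_noise": "disable",                   # reuse the leftover noise from pass one
    "start_at_step": SWITCH_AT,
    "end_at_step": TOTAL_STEPS,
    "return_with_leftover_noise": "disable",  # finish denoising at the higher resolution
}

if __name__ == "__main__":
    print(first_pass)
    print(second_pass)
```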
you're loving dreamshaper I see :)
He missed this comment :P Love your work man. Thanks.
Anyone else's canvas tab stop working after importing the workflow?!
How would I use this workflow without an image input? Help!
Disconnect the image input and use the "Empty Latent Image" node instead. In that case use a denoise of 1 in the KSampler.
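A minimal sketch of what that swap amounts to, using the input names of ComfyUI's Empty Latent Image and KSampler nodes as plain Python dicts. The 512x512 resolution and the placeholder wiring are assumptions for illustration only, not values from the workflow.

```python
# Hypothetical sketch of the txt2img variant suggested above: an Empty Latent
# Image node feeds the KSampler and denoise goes up to 1.0, since there is no
# input image whose content needs to be preserved. Values are illustrative.
empty_latent = {
    "class_type": "EmptyLatentImage",
    "inputs": {"width": 512, "height": 512, "batch_size": 1},
}

ksampler_inputs = {
    "denoise": 1.0,  # full denoise: nothing from an input image to keep
    "latent_image": "<wire this to the EmptyLatentImage output>",
    # ...the rest of the sampler settings stay as in the original workflow
}

if __name__ == "__main__":
    print(empty_latent)
    print(ksampler_inputs)
```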
Thank You!!
Can I do it with RunPod???
What's the point of downloading that script? Doesn't the HighRes-Fix Script in the Efficiency Node Pack have that already? Is that a better one? If so, how do I connect that script to the "HighRes-Fix Script" node?
Could you please fix the link for the script in the description? Just give us a link to the correct text that we need to copy paste. The "fixed" link you put in the description doesn't lead directly to the text. Or at least tell us what to click on at that point.
Great Job!
Anyone have a workflow to inpaint only the masked area in ComfyUI? Please share it with me, thanks.
Seems I have no choice but to switch to ComfyUI now :P
You still have the choice, it all depends if you need these features with Comfy. I'm not swapping although I do have a Comfy install. As nice as this is, I don't need it.
Which of the images at the start is 'best' depends solely on what you want from the image. None of them is 'better' than the other.
do a video for Fooocus
I have a 3060 laptop with 6 GB VRAM so I'm always üüüüüüüüüüüüüüü :)
Time to change the ending screen to a hot neko woman... so many more clicks.
😍😍😍😍😍😍😍😍😍
Have you toned your beard? :)