This is very helpful and well organized. Thanks for your wonderful video!
Going through these one by one. Thank you. Very clear and concise directions and explanations.
Excellent! Clarified a number of things for me. Plus, I've heard we get better final results by doing an upscale in latent first, but I couldn't figure out how, and I couldn't find that random video I watched that stated and showed this. So this quick tutorial is very helpful!
Dear Olivio, thanks for such enlightening videos! Greetings from Thailand. (I'll keep watching until the last one!)
(Upscale 5:44)
I really appreciate your lessons. Super job, great explanations. Thank you
Dude you are awesome. Thank you for sharing all this.
Thank you. Very clear and concise directions and explanations.
Great lessons. I'm following along everything you do.
Man, I haven't finished the tutorial yet but this tutorial is bonkers! Great job, thanks a lot!
These tutorials are excellent! Thank you Olivio!
your enthusiasm is contagious!
Thank you, this has been so helpful. ComfyUI is very user-friendly once you get past the learning curve (which you have shown me), and you have absolutely made that curve so much shorter. On to L4!!!
This helped me so much, thank you!!!!!
The step to do another i2i was key compared to other tutorials. Thanks!
Man... I code, but I love a node-based workflow sometimes.
This, Unreal Engine Blueprints, DaVinci Resolve. Chef's kiss 👌
Thank you very much for this amazing lesson!
Let's go, another masterpiece! Greetings from Greece!
5:33 Nowadays, use a Preview Bridge node to interrupt at the preview, then re-queue to finish if you like it. Or use an Interrupt node.
Thanks for your lesson, it helps a lot!
Many thanks for the explanation!
THAT'S PRICELESS!
This series is great Olivio!
How do I transfer an image from the first workflow to the second? When I bypass the first one, I have an image that I want to upscale, but there is no way to do that...
Hi Olivio, thank you for that insightful video about latent upscaling. Did you know you can just edit the "run_nvidia_gpu.bat" file with "--preview-method auto" on the end? This will show a preview in the KSampler. Happy creating.
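In case anyone wants to try this: here is roughly what the edited run_nvidia_gpu.bat looks like in the standard portable build. The python line may differ in your install; the only addition is the flag at the end.

```bat
rem run_nvidia_gpu.bat from the ComfyUI portable build, with the preview flag appended.
rem Only --preview-method auto is new; the rest is whatever your file already contains.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --preview-method auto
pause
```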
This comment is more productive than the last 10 videos I have watched.
Thank you, Olivio, for your tremendous work! Do you know a way to manage regions of the image with different denoise values? The latent upscale changes some details of the image, and I would like to keep specific regions unchanged. Thank you very much!
Great tutorial. I'm looking forward to getting to grips with Comfy using this.
Though I'm afraid the method gave some ungodly results, haha.
"Upscale latent" before "VAE decode" is much better than upscale the output image with more detail. This lesson integrates Upscale and KSampler, one more step for beginner.
You are a legend! Thank you so much.
Thank you very much sir.
Is there a function that pauses the generation while you inspect the lower-res image, see if you like it, then click "proceed" to upscale it?
My question is: wouldn't it be equivalent or better to put more noise and more steps (to be equivalent in GPU time) into the Ultimate Upscaler?
What is the ultimate upscaler you talk about at the end of the video?
The node is called "Ultimate Upscale".
@@OlivioSarikas Oh strange. I could not find it. Thank you
The upscaled images still had wonky eyes and mistakes in the clothing. I tried a different checkpoint and the upscaled image came out blurry.
Yoooo, here before the video is even out. Now that's early access!
Thank you so much, maestro! I got hooked on ComfyUI thanks to you and now I can't switch away from it. Excellent tutorials, although they're certainly not for beginners.
Still, it's very useful and you can do everything with it.
Where in your workflow do I pick the upscaler, AnimeSharp for example? I can't find it anywhere.
Why upscale the rendered latent image, instead of using a bigger empty latent image at the start of the workflow?
Because a) the model might not be able to handle that high a resolution, and b) if you upscale an image that already exists, the AI can work with and improve the existing details, rather than creating a new image with fewer details. The 1.5 models were made for low-res images; only XL and later models can handle higher resolutions for the initial text-to-image render.
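To put rough numbers on that, here is a tiny illustrative Python sketch. The only assumptions are the ~512 px training size of SD 1.5 and the 1/8 latent-to-pixel scale of Stable Diffusion; the 1.5x factor and 0.5 denoise are typical values to tune, not fixed rules.

```python
# Stable Diffusion latents are 1/8 of the pixel resolution.
LATENT_SCALE = 8

def latent_size(width_px: int, height_px: int) -> tuple[int, int]:
    """Pixel resolution -> latent resolution."""
    return width_px // LATENT_SCALE, height_px // LATENT_SCALE

# Pass 1: render at the model's native size with full denoise,
# so SD 1.5 composes the image at a resolution it was trained on.
base_w, base_h = 512, 512
print("pass 1 latent:", latent_size(base_w, base_h))  # (64, 64)

# Pass 2: upscale the latent 1.5x and re-sample at ~0.5 denoise,
# so the model refines existing detail instead of inventing a
# brand-new 768 px layout it was never trained to compose.
factor = 1.5
hi_w, hi_h = int(base_w * factor), int(base_h * factor)
print("pass 2 latent:", latent_size(hi_w, hi_h))      # (96, 96)
```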
Hi there! Can that upscale be used on an existing image?
I don't think so. Actually, this lesson doesn't really provide enough information to upscale an image; in the end you end up with nothing.
For example, I follow the tutorial and make an image, but then I can't keep it; I need to make another image and upscale it... What is the point of this?
I wanna be comfy. Should I learn this UI?
I have problems with faces: mouths without teeth, horrible eyes, etc. Any tutorial, my man?
I'm interested in how this compares to doing the intermediary upscale step in pixel space rather than in latent space. And if doing it in latent space is consistently better, then I'm interested in understanding why, as I thought latent upscaling was inferior to using ESRGANs in pixel space.
Latent upscaling IS inferior, no question about it.
Please, someone tell me how to increase the size of the blue area in ComfyUI, the blue lines, so that my workflows can be saved the next time I open ComfyUI. That's what it's for, I assume? :)
This ComfyUI series is so great!!!! 💯👌
Why don't I have the "TEXT TO IMAGE" bar at the top of my ComfyUI workspace?
Right-click and select "Add Group" to create an empty panel to group your nodes.
MASTER
Do you prefer Comfy because it's customizable and for speed? I've been using A1111 for about a year and am happy to have learned it. Is it bad that it's not developed anymore? I think the UI is simpler and less cluttered for my eyes, so I'd rather not change, or is it necessary? I've been following your channel and all the A1111 tutorials, so I'm a bit sad and wondering why there are no more on it. But I also totally understand. Thanks 🙏
Why so few likes? Thank you, Olivio
Because no one cares; it's YouTube (duh), so it's basically pointless. What even is the value if you can't even dislike, which would at least tell you whether you're dealing with BS content or not? Even if there were just a couple of thousand likes on a million-view video, you would know what's up.
You are a genius in the AI creative world. Thanks for your lesson.
I have a query: can we create AI images with multiple characters interacting, where the characters are talking to each other in a background environment? I want to know how that's made.
Thank you.
It IS called a regional prompt in ComfyUI.
There is an extension in A1111 that can give you a great deal of "what you want" in each and every chunk (or mask) you make.
As I understand it, this is not an example of how to upscale an already existing image, but how to upscale images on the go, and doing it this way causes a lot of issues, especially when cancelling an image in progress; it can glitch the GUI.
As a person who came here for upscaling, I want to see how to upscale an image that I already created... this is not really helpful.
The real ComfyUI is free. You don't need OpenArt for it.
🌿
👋
A1111
No thanks
Thumbs up for an A1111-only channel.
Why should I use an inferior interface? Comfy is faster, uses less VRAM, can do way more...
Thumbs UP for a ComfyUI-only channel.
@@mongini1 Boooo!! A1111 doesn't have a spiderweb-like interface, plus low VRAM doesn't bother me.
@@PixelsVerwisselaar If they learned how to get A1111 going, they're willing to learn. Otherwise they wouldn't generate anything. People who don't want to learn use Midjourney ;)
@PixelsVerwisselaar Is that so bad, though? For the longest time, A1111 was what we had, and that was the interface used to make LOADS of neat and even provocative stuff.
Not all tools are meant to be used by everyone. Nothing wrong with that.
What are you talking about, artistic expression? You are telling a computer what to do; how is that artistic expression? The closest thing you get to being artistic or expressing yourself is the prompt you write, bypassing the years and decades that actual artists spent practicing and practicing. Look, I have no problem with image generation, even though it's putting artists out of work, because that's life: technology comes along and up-ends industries. But I do have a problem with people writing prompts and calling themselves artists, forgetting that they didn't actually create that image, or video, or story. And TBH it's kind of a slap in the face to the people who put in the work and the time, and the sweat, and sometimes even blood and tears, to become masters of their craft. So please, don't talk as though you're artists.
Don't worry, bro… some form of social IP built from AI + blockchain (or similar tech) will make it so artists will be able to get paid when AI uses their art to generate images and video.
Human expression is art, whether the tool you use is a paintbrush or a computer. Imagination is still needed, and the end result can still be felt by the viewer.
Virtue signal somewhere else.
"my workshop" you barely know how this software works and it shows.
"Launch workflow" doesn't really work; it asks me to join the Discord, which I did. Still nothing.
Thanks, Olivioooooooo
I appreciate what you have been doing for a long time, but this just seems overly complicated compared to just using a better program like Leonardo AI. These images look so fake and seem useless for any commercial output. Leonardo's Photoreal is so much better, and they now have real-time output so you can see changes instantly.
It would be useful if there was a less potent AI that had more control over pose and output, but I just don't see it in all your videos.
Did you get paid to say that? There's a lot of that going around over the last few months, I've noticed, especially from NAI v3; all it does is make sure I'll never even consider using it.
Smells fishy; reminds me of Shilli Vanilli...
In Leonardo, is it possible to take a reference picture and, based on it, make your own in a similar style (IPAdapter), put it in the necessary pose and location, apply 2 different models in one generation (for example, a photorealistic object in a drawn world), etc.? Maybe I'm wasting my time learning Comfy when there is already such a convenient and simple interface 🤔
Dude, absolutely everything you've said is nonsense. You can control absolutely everything in ComfyUI. Stick with Leonardo; ComfyUI is clearly too complicated for you.