#### Links from the Video ####
GET my WORKFLOW here: www.patreon.com/posts/super-flux-turbo-115107809
Flux Turbo Lora: huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha
The images look stunning, but I cannot see myself re-connecting the wires endlessly for each image.
That "preview latent chooser" does the trick!
To process images one after the other, convert the batch input to a list first. There is a Convert Batch to List node that does this.
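As a rough illustration of the difference (plain Python only, not actual ComfyUI node code), a batch travels through the graph as one unit, while a list feeds each element through the downstream nodes one at a time:

```python
# Sketch (illustration only): batch vs. list processing.
# In ComfyUI a batch is held in memory all at once; after a batch-to-list
# conversion, each image goes through the downstream nodes separately.

def node(images):
    """Stand-in for any downstream node (e.g. an upscaler)."""
    return [f"processed({img})" for img in images]

batch = ["img0", "img1", "img2", "img3"]

# Batch: a single call, peak memory covers all four images at once.
out_batch = node(batch)

# List: four separate calls, peak memory covers one image at a time.
out_list = []
for img in batch:               # after a "batch to list" conversion
    out_list.extend(node([img]))

assert out_batch == out_list    # identical results, different memory profile
```

Same output either way; the list route just trades throughput for lower peak memory.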
workflow = freaky hard !
can you make a video on how to animate in flux Forge
i will check it out
For the number of images I know 2 methods (probably not what you mean, but that is what I get out of the question). One is the batch number; the other is in the manager with the queue options, which is the one I use. If you press "Extra options" you can set the number of images it makes (I usually do 4); then when I click "Queue Prompt" it will make 4 different images one by one, with the same prompt. This is how I understood the question, but I figured an expert like you would already know this, so I might have understood you wrong. Then if you press "View Queue" you see the number of images that are still in the queue. I usually do this overnight: run like 25 different prompts, giving me 100 images, of which I can choose the best out of 4. So every morning I wake up to see 100 new images, sometimes with very bad results of course, because I am asleep and it is running on its own, but usually with 100 very surprising new images.
where do you get the Preview Chooser ?
Thank You!!!
Great video! I had no idea you could use the triple clip loader for flux. I've been using the double this whole time.
This reminds me that in the breaks from Monster Hunter this weekend, I was going to build Flux txt2img and upscalers in Comfy.
I finally built some very nice SDXL and SD15 workflows but need to do this on Flux and potentially SD3+
Why doesn't my "KSampler Advanced" have any connection named "noise_seed"? 🤔
Right click on the KSampler Advanced node, Convert Widget to Input, convert noise_seed to input. (But you can just use KSampler Advanced as is; it is already in there.)
@@2008spoonman Thank you very much!
I really like your video, but how can we achieve a workflow that allows us to compose consistent characters in different environments for different shots while working on 3D animation episodes? Or do you have a video explaining this? Thanks for your informative info, and if we can set up a work meeting that would be great; maybe you could cooperate on our project?
Hi Olivio, where do you get the Seed Generator node? I only see "Advanced Sequence Seed Generator" in the Custom Nodes Manager
changing the blur option in ultimateupscaler would help a lot with the pattern problem ;)
thx mate you're awesome
the blur option? isn't that for the edges between the individual tiles? because the pattern is within the image and created by flux, not by the tiling. but i will have a look
@@OlivioSarikas The problem of an image full of patterns and horrible quality is that people are using a normal KSampler (Advanced), which contains a field called STEPS set to 20 by default (the field above the CFG field). You resolve this by setting the STEPS field to 8 in the first KSampler and 12 in the last KSampler...😉
Thank you! Very useful workflow!
hi Olivio, what's the idea behind the 1600x1600 setting in the ModelSamplingFlux node ??
Silly question, but isn't that first and second KSampler basically... SplitSigmas? Just 12 steps split at 8, with the alternative guidance and model on the low sigmas?
and a different seed. i feel like it brings better details. especially for example when the head is smaller in the image
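For anyone curious, the equivalence the comment above describes can be sketched in plain Python. This is only a conceptual illustration with a made-up linear sigma ramp, not Flux's real noise schedule or ComfyUI's exact SplitSigmas code:

```python
# Sketch: splitting a 12-step sigma schedule at step 8, which is conceptually
# what two chained KSampler Advanced nodes (or SplitSigmas) do.
# The linear ramp below is an assumption for illustration only.

def make_sigmas(steps, sigma_max=1.0, sigma_min=0.0):
    """Linearly spaced sigmas from sigma_max down to sigma_min (steps+1 values)."""
    return [sigma_max - (sigma_max - sigma_min) * i / steps for i in range(steps + 1)]

sigmas = make_sigmas(12)           # 13 boundary values define 12 steps
split = 8

high_sigmas = sigmas[: split + 1]  # first pass: steps 0..7 (high noise)
low_sigmas = sigmas[split:]        # second pass: steps 8..11 (low noise)

# The two passes share the boundary sigma, so together they cover the
# exact same 12-step schedule as a single run.
assert high_sigmas[-1] == low_sigmas[0]
assert (len(high_sigmas) - 1) + (len(low_sigmas) - 1) == 12
```

The difference in the workflow is that the second pass swaps in a different guidance value, model patch, and seed for the low-sigma steps, which a single sampler could not do.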
Which FLUX Turbo Lora is this? Cititai now has at least 3! Thank you.
FLUX.1-Turbo-Alpha
@@OlivioSarikas Thank you for the quick reply. There are 3 uploads on Civitai with that name. iuliastarcean536, maitruclam and EKKIVOK are the uploaders. The downloads appear to be the same size - are they the same file?! Did you get yours from Civitai or Huggingface? Thank you.
@@OlivioSarikas But which one? There are different on Civitai. I found 4 different Flux.1 Turbo Alpha.
@@varyonalquar2977 oh, actually it is this one: huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha
So what's this thing with Flux generating images with those strange horizontal lines running across the entire image? They appear a lot clearer in the darker regions, but they're present all over the generated image. I started to encounter this issue a few weeks ago, if I'm being honest. I first believed changing the samplers would make a difference, but it still remains the same. Also, this happens not just with the Super Flux workflow but with pretty much any workflow we use. Is this some type of nerf or a glitch in Flux?
i think it's a problem with how flux is trained. it's not refined enough. it's just a base model and the community needs to figure out how to make it good
@OlivioSarikas surprisingly enough, the super turbo generations look fine so far.
@@OlivioSarikas it seems to be the upscaling that does it and will depend on the upscaling model used
Before my morning cup of coffee I hate everyone. After my morning cup of coffee I feel much better about hating everyone. ☕☕☕☕
😂
100%
My second KSampler is producing over-contrasted, burnt images (on realistic images). If I lower the 7.5 to 5 it's better, but less detail is added. Do you have any recommendations for what to play with to add the extra detail but control how burnt the second pass gets? Do I play only with the guidance, or with the model sampling parameters as well?
are you using a different seed for it?
@ yes a separate random seed like. I think u suggested.
Thanks Olivio... abundant details, nice speed... still the lack of FLUX DOF. Anti Blur is no help here either.
yes, flux loves to put DOF into everything for some reason
@@OlivioSarikas Makes sense when the focus is on a subject in the foreground. Scenery is fine, for example.
Interesting. I was also playing around with your workflow and combining it with SD Ultimate Upscale. But I followed another video where the upscale node uses only 1 step and a single upscale tile (set width and height to the upscaled resolution) with Flux, which works pretty nicely. I also encountered the problem with the pattern, especially in shadows. Maybe it's a lighting thing?
It's related to how the model works, and you can make it appear all over the picture very easily by upscaling to 2k or more. They even appear as squares depending on your parameters. Getting really sharp, detailed pictures through an upscaler is especially difficult; with a lot of steps it's hard to prevent those patterns from showing up. I'd say above 12 they start to come out, first in dark areas, then above 30 they start to be all over the picture.
So rendering sharp/detailed images relies a lot more on the upscaler model instead of Flux itself. Finding the correct middle ground between the Flux pass parameters and the upscaler model, for no patterns but a sharp/detailed picture, is different for almost each output style/LoRA/subject type.
Thanks. Is there a specific reason that guidance is skipped for the ultimate SD upscaling, i.e. the third pass.
not really. it works pretty well without it. but i will test different values and add them in a future update
@@OlivioSarikas ok thanks!
Thank you. For the 2nd FluxGuidance I prefer a higher value. For my portraits with a character LoRA, I get much better results with a value of 9.0.
I experimented with higher values, but for me it would make the face features too strong.
@@OlivioSarikas crazy, exactly the opposite for me, must be due to my Lora, which I trained myself.
@@Radarhacke yes, of course. different models, different settings
regarding your questions ... Don't know about the 1st question but for the 2nd one can't you just increase the queue by the number of images you want in the sequence?
Hi Olivio, thank you for another great video! Can you please share why you chose to use a triple loader when you don't seem to utilize the clip_g in your workflow?
it gives better quality. but how would you use the clip_g? can you message me on discord?
I think I know how to render image after image, like you said.
How to use in Forge?
Which model do I need for this workflow? And how much VRAM do I need? Is 10GB VRAM enough to run this fast?
Can you give a link? Because it is confusing with all these different versions which often seem to have the same names but are different. I don't get it.
i'm using the base flux model by black forest labs that you can see in the video
I enjoy the silly ending animation with the music.
Most people's videos with an outro like that I skip, but I always look forward to yours.
amazing grumpycorn as well as ur workflow!!!
Which model should I use with GTX 1070 8GB and 16GB Ram?
thanks ❤
Try testing various models and see what your system can handle. If you can't run the standard DEV model I would try some of the GGUF options, preferably Q6 and higher. Also what works for me is to utilize Force/Set CLIP Device node in my workflows. I set it to "cpu" which helps in reducing the GPU load at the cost of somewhat longer loading time.
@petrspaceman I have the Q4 and Q5 models, but it takes 6 or 7 minutes and it's slow. Can you send me your workflow?
I thought dev can't be used for commercial purposes. Is that not the case now?
Anyone else getting this error?
mat1 and mat2 shapes cannot be multiplied (2x2048 and 768x3072)
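That error means two matrices with incompatible inner dimensions were multiplied; in ComfyUI it often points to a text encoder/CLIP that doesn't match the model (though the exact cause depends on the workflow). A minimal plain-Python sketch of the rule, not taken from any real ComfyUI code:

```python
# Sketch: why "mat1 and mat2 shapes cannot be multiplied (2x2048 and 768x3072)".
# Matrix multiplication (m x k) @ (k2 x n) is only defined when k == k2.
# Here 2048 != 768, which usually means an embedding size the model
# doesn't expect, e.g. the wrong CLIP model connected for this checkpoint.

def can_matmul(shape_a, shape_b):
    """Return True if a matrix of shape_a can be multiplied by one of shape_b."""
    return shape_a[1] == shape_b[0]

print(can_matmul((2, 2048), (768, 3072)))   # False -> the error above
print(can_matmul((2, 768), (768, 3072)))    # True  -> shapes compatible
```

So the first thing worth checking is whether the clip loader is pointed at the text encoders the checkpoint actually expects.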
Oh, strange, that is what I suggested to you 10 times in one of your livestreams while you were ignoring me.
That guidance is too strong for realistic images. For illustrations it's fine, but for realism it's too much.
Can't you just increase the queue count, to batch generate multiple images one by one?
the problem is, if you use the image choose node it will pause the workflow until you choose images to progress
@@OlivioSarikas Yes, that's true, you will have to choose the image to proceed, which obviously is not compatible with batch generation.
And is it normal that the sunlight constantly blinds our eyes and constantly shines into the viewer's face?
The problem of an image full of patterns and horrible quality is that people are using a normal KSampler (Advanced), which contains a field called STEPS set to 20 by default (the field above the CFG field). You resolve this by setting the STEPS field to 8 in the first KSampler and 12 in the last KSampler...😉
And I prefer the MaraScottUpscalerRefinerNode V3; it is faster.
Exactly like Olivio does in his workflow. 😊
Faster, more detail, but how is the censorship?
it's flux and as such not good at nsfw content ;)
Your channel is truly wonderful! Do you know of any methods or ways to create 3D models of humans?
I only see those lines and grids with flux when I am using Loras now that I also use a custom Scheduler Sigma.
Even at 4K I am not impressed with the quality, but it could be how YouTube compresses things.
@@krakenunbound I don’t believe it is