SUPER FLUX Turbo - Better, Faster, More Details!

  • Published: 14 Dec 2024

Comments • 89

  • @OlivioSarikas
    @OlivioSarikas a month ago +2

    #### Links from the Video ####
    GET my WORKFLOW here: www.patreon.com/posts/super-flux-turbo-115107809
    Flux Turbo Lora: huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha

  • @kalevikivimaa7551
    @kalevikivimaa7551 a month ago +2

    The images look stunning, but I can't see myself reconnecting the wires endlessly for each image.

  • @IsraelDeLamo-tf3wk
    @IsraelDeLamo-tf3wk a month ago +1

    That "preview latent chooser" does the trick!

  • @Elias-nj6gi
    @Elias-nj6gi a month ago

    To process images one after the other, convert the batch input to a list first. There is a Batch to List conversion node that does this.

  • @angelalmalaq
    @angelalmalaq a month ago

    workflow = freaky hard !

  • @Gli7chSec
    @Gli7chSec a month ago +3

    can you make a video on how to animate in flux Forge

  • @sergetheijspartner2005
    @sergetheijspartner2005 a month ago

    For the number of images, I know two methods (probably not what you mean, but that's what I get out of the question). One is the batch number; the other is the queue options in the manager, which is the one I use: if you press "Extra options" you can set the number of images it makes (I usually do 4), and then when I click "Queue Prompt" it will make 4 different images one by one, with the same prompt. This is how I understood the question, but I figured an expert like you would already know this, so I might have misunderstood you. Then if you press "View Queue" you see the number of images still in the queue. I usually do this overnight: run about 25 different prompts, giving me 100 images, from which I can choose the best out of each 4. So every morning I wake up to 100 new images. Sometimes very bad results, of course, because I am asleep and it is running on its own, but usually 100 very surprising new images.
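The queue-based approach described above can also be driven programmatically: a local ComfyUI server accepts API-format workflow JSON via an HTTP POST to its /prompt endpoint. A minimal sketch, assuming the default 127.0.0.1:8188 address and a hypothetical sampler node ID ("3") that you would replace with the one from your own exported workflow:

```python
import json
import random
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address (assumption)

def make_payloads(workflow: dict, count: int, seed_node: str = "3") -> list[dict]:
    """Return `count` copies of the workflow, each with a fresh random seed.

    `seed_node` is the ID of the sampler node in the exported API-format
    workflow JSON -- a hypothetical example; check your own export.
    """
    payloads = []
    for _ in range(count):
        wf = json.loads(json.dumps(workflow))  # deep copy via JSON round-trip
        wf[seed_node]["inputs"]["noise_seed"] = random.randint(0, 2**32 - 1)
        payloads.append({"prompt": wf})
    return payloads

def queue_all(payloads: list[dict]) -> None:
    """POST each payload to the ComfyUI queue, one generation per request."""
    for p in payloads:
        req = urllib.request.Request(
            COMFY_URL,
            data=json.dumps(p).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

# Example: build 4 seed variations of a toy workflow fragment.
wf = {"3": {"inputs": {"noise_seed": 0, "steps": 12}}}
batch = make_payloads(wf, 4)
```

Each queued prompt then runs to completion on its own, which is the same "set it and forget it" overnight pattern described in the comment, just without clicking "Queue Prompt" 25 times.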

  • @maayaa77
    @maayaa77 12 days ago

    Where do you get the Preview Chooser?

  • @AI.Absurdity
    @AI.Absurdity a month ago

    Thank You!!!

  • @tripleheadedmonkey6613
    @tripleheadedmonkey6613 a month ago

    Great video! I had no idea you could use the triple clip loader for flux. I've been using the double this whole time.

  • @DronesClubMember13
    @DronesClubMember13 a month ago

    This reminds me that in the breaks from Monster Hunter this weekend, I was going to build Flux txt2img and upscaler workflows in Comfy.
    I finally built some very nice SDXL and SD1.5 workflows, but I need to do this for Flux and potentially SD3+.

  • @armauploads1034
    @armauploads1034 a month ago +1

    Why doesn't my "KSampler Advanced" have any connection named "noise_seed"? 🤔

    • @2008spoonman
      @2008spoonman a month ago +1

      Right-click on the KSampler Advanced node, Convert Widget to Input, convert noise_seed to input. (But you can just use KSampler Advanced as-is; it is already in there.)

    • @armauploads1034
      @armauploads1034 a month ago

      @@2008spoonman Thank you very much!

  • @Ozstudiosio
    @Ozstudiosio 27 days ago

    I really like your video, but how can we achieve a workflow that lets us compose consistent characters across different environments for different shots when working on 3D animated episodes? Or do you have a video explaining this? Really, thanks for your informative content, and if we could set up a working meeting, that would be great; maybe you could cooperate on our project?

  • @nickd9274
    @nickd9274 a month ago

    Hi Olivio, where do you get the Seed Generator node? I only see "Advanced Sequence Seed Generator" in the Custom Nodes Manager

  • @amirshahidi8180
    @amirshahidi8180 a month ago

    Changing the blur option in the Ultimate Upscaler would help a lot with the pattern problem ;)
    Thx mate, you're awesome

    • @OlivioSarikas
      @OlivioSarikas a month ago

      The blur option? Isn't that for the edges between the individual tiles? Because the pattern is within the image and created by Flux, not by the tiling. But I will have a look.

    • @polloloco6353
      @polloloco6353 a month ago

      @@OlivioSarikas The problem of an image full of patterns and horrible quality comes from people using a normal KSampler (Advanced), which contains a field called STEPS set to 20 by default (the field above the CFG field). You resolve this by setting the STEPS field to 8 in the first KSampler and 12 in the last KSampler... 😉

  • @CalladsEssence
    @CalladsEssence a month ago

    Thank you! Very useful workflow!

  • @2008spoonman
    @2008spoonman a month ago

    Hi Olivio, what's the idea behind the 1600x1600 setting in the ModelSamplingFlux node?

  • @cchance
    @cchance a month ago +1

    Silly question, but isn't that first and second KSampler basically... SplitSigmas? Just 12 steps split at 8, with the alternative guidance and model on the low sigmas?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      And a different seed. I feel like it brings better details, especially, for example, when the head is smaller in the image.
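For readers unfamiliar with the split being discussed: two chained KSampler Advanced nodes can cover one sampling schedule by splitting the step range, with the second node continuing from the first node's leftover noise. A minimal sketch of the widget values involved (plain Python dicts standing in for the node settings; the specific numbers are illustrative, not a claim about the exact workflow):

```python
# Two KSampler Advanced passes covering one 12-step schedule, split at step 8.
TOTAL_STEPS = 12
SPLIT_AT = 8

first_pass = {
    "add_noise": "enable",                   # fresh noise only in the first pass
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": SPLIT_AT,
    "return_with_leftover_noise": "enable",  # hand the unfinished latent to pass 2
}

second_pass = {
    "add_noise": "disable",                  # continue from the leftover noise
    "steps": TOTAL_STEPS,
    "start_at_step": SPLIT_AT,
    "end_at_step": TOTAL_STEPS,
    "return_with_leftover_noise": "disable", # fully denoise on the final pass
}

# The two ranges must tile the schedule exactly, or steps get skipped/repeated.
assert first_pass["end_at_step"] == second_pass["start_at_step"]
```

This mirrors what SplitSigmas achieves; the extra freedom of the two-node version is that the second pass can use a different model, guidance value, or seed.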

  • @grahamastor4194
    @grahamastor4194 a month ago +1

    Which FLUX Turbo LoRA is this? Civitai now has at least 3! Thank you.

    • @OlivioSarikas
      @OlivioSarikas a month ago +1

      FLUX.1-Turbo-Alpha

    • @grahamastor4194
      @grahamastor4194 a month ago

      @@OlivioSarikas Thank you for the quick reply. There are 3 uploads on Civitai with that name. iuliastarcean536, maitruclam and EKKIVOK are the uploaders. The downloads appear to be the same size - are they the same file?! Did you get yours from Civitai or Huggingface? Thank you.

    • @varyonalquar2977
      @varyonalquar2977 a month ago +1

      @@OlivioSarikas But which one? There are different ones on Civitai. I found 4 different "Flux.1 Turbo Alpha" uploads.

    • @OlivioSarikas
      @OlivioSarikas a month ago +1

      @@varyonalquar2977 oh, actually it is this one: huggingface.co/alimama-creative/FLUX.1-Turbo-Alpha

  • @lucifer9814
    @lucifer9814 a month ago +1

    So what's this thing with Flux generating images with strange horizontal lines running across the entire image? They appear a lot more clearly in the darker regions, but they're present all over the generated image. I've been encountering this issue for a few weeks now, if I'm being honest. I first believed changing the samplers would make a difference, but it remains the same. This also happens not just with the Super Flux workflow but with pretty much any workflow we use. Is this some type of nerf or a glitch in Flux?

    • @OlivioSarikas
      @OlivioSarikas a month ago +2

      I think it's a problem with how Flux is trained; it's not refined enough. It's just a base model, and the community needs to figure out how to make it good.

    • @lucifer9814
      @lucifer9814 a month ago

      @OlivioSarikas Surprisingly enough, the Super Turbo generations look fine so far.

    • @livinagoodlife
      @livinagoodlife a month ago

      @@OlivioSarikas It seems to be the upscaling that does it, and it will depend on the upscaling model used.

  • @cowlevelcrypto2346
    @cowlevelcrypto2346 a month ago +10

    Before my morning cup of coffee I hate everyone. After my morning cup of coffee I feel much better about hating everyone. ☕☕☕☕

  • @dkamhaji
    @dkamhaji a month ago

    My second KSampler is producing over-contrasted, burnt images (on realistic images). If I lower the 7.5 to 5 it's better, but less detail is added. Do you have any recommendations for adding the extra detail while controlling how burnt the second pass gets? Do I play only with the guidance, or with the model sampling parameters as well?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      Are you using a different seed for it?

    • @dkamhaji
      @dkamhaji a month ago

      @ Yes, a separate random seed, like I think you suggested.

  • @Popow2
    @Popow2 a month ago

    Thanks Olivio... abundant details, nice speed... but still the lack of control over FLUX DOF. Anti-blur is no help here either.

    • @OlivioSarikas
      @OlivioSarikas a month ago

      Yes, Flux loves to put DOF into everything for some reason.

    • @livinagoodlife
      @livinagoodlife a month ago

      @@OlivioSarikas It makes sense when the focus is on a subject in the foreground. Scenery is fine, for example.

  • @andreh4859
    @andreh4859 a month ago

    Interesting. I was also playing around with your workflow, combining it with SD Ultimate Upscale. But I followed another video where the upscale node uses only 1 step and a single upscale tile (set width and height to the upscaled resolution) with Flux, which works pretty nicely. I also encountered the problem with the pattern, especially in shadows. Maybe it's a lighting thing?

    • @mistraelify
      @mistraelify a month ago +2

      It's related to how the model works, and you can make it appear all over the picture very easily by upscaling to 2K or more. The artifacts even appear as squares, depending on your parameters. Getting really sharp, detailed pictures through the upscaler is especially difficult, because a lot of steps make those patterns show up. I'd say above 12 steps they first come out in dark areas; above 30 they start to be all over the picture.
      So rendering sharp, detailed images relies a lot more on the upscaler model than on Flux itself. Finding the right middle ground between the Flux pass parameters and the upscaler model, so that there are no patterns but the picture stays sharp and detailed, is different for almost every output style, LoRA, and subject type.

  • @devnull_
    @devnull_ a month ago

    Thanks. Is there a specific reason that guidance is skipped for the Ultimate SD upscaling, i.e. the third pass?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      Not really. It works pretty well without it, but I will test different values and add them in a future update.

    • @devnull_
      @devnull_ a month ago

      @@OlivioSarikas ok thanks!

  • @Radarhacke
    @Radarhacke a month ago

    Thank you. For the 2nd FluxGuidance I prefer a higher value. For my portraits with a character LoRA, I get much better results with a value of 9.0.

    • @OlivioSarikas
      @OlivioSarikas a month ago

      I experimented with higher values, but for me they would make the facial features too strong.

    • @Radarhacke
      @Radarhacke a month ago

      @@OlivioSarikas Crazy, it's exactly the opposite for me; it must be due to my LoRA, which I trained myself.

    • @OlivioSarikas
      @OlivioSarikas a month ago

      @@Radarhacke Yes, of course. Different models, different settings.

  • @michaelbayes802
    @michaelbayes802 a month ago

    Regarding your questions... I don't know about the 1st one, but for the 2nd: can't you just increase the queue by the number of images you want in the sequence?

  • @petrspaceman
    @petrspaceman a month ago

    Hi Olivio, thank you for another great video! Can you please share why you chose to use a triple loader when you don't seem to utilize the clip_g in your workflow?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      It gives better quality. But how would you use the clip_g? Can you message me on Discord?

  • @ADAM-t8h8f
    @ADAM-t8h8f a month ago

    I think I know how to render image after image, like you said.

  • @jasonstetsonofficial
    @jasonstetsonofficial a month ago +5

    How to use in Forge?

  • @varyonalquar2977
    @varyonalquar2977 a month ago

    Which model do I need for this workflow? And how much VRAM do I need? Is 10GB of VRAM enough to run this fast?
    Can you give a link? It's confusing with all these different versions, which often seem to have the same names but are different. I don't get it.

    • @OlivioSarikas
      @OlivioSarikas a month ago

      I'm using the base Flux model by Black Forest Labs that you can see in the video.

  • @FusionDeveloper
    @FusionDeveloper a month ago +2

    I enjoy the silly ending animation with the music.
    Most people's videos with an outro like that I skip, but I always look forward to yours.

  • @NotThatOlivia
    @NotThatOlivia a month ago

    amazing grumpycorn as well as ur workflow!!!

  • @Vnull-x2z
    @Vnull-x2z a month ago

    Which model should I use with a GTX 1070 8GB and 16GB of RAM?
    Thanks ❤

    • @petrspaceman
      @petrspaceman a month ago +1

      Try testing various models and see what your system can handle. If you can't run the standard DEV model I would try some of the GGUF options, preferably Q6 and higher. Also what works for me is to utilize Force/Set CLIP Device node in my workflows. I set it to "cpu" which helps in reducing the GPU load at the cost of somewhat longer loading time.

    • @Vnull-x2z
      @Vnull-x2z a month ago

      @petrspaceman I have Q4 and Q5 models, but it takes 6 or 7 minutes and it's slow. Can you send me your workflow?

  • @adisatrio3871
    @adisatrio3871 a month ago

    I thought Dev couldn't be used for commercial purposes. Has that changed?

  • @nickd9274
    @nickd9274 a month ago

    Anyone else getting this error?
    mat1 and mat2 shapes cannot be multiplied (2x2048 and 768x3072)

  • @antiplouc
    @antiplouc a month ago +2

    Oh, strange, that is what I suggested to you 10 times in one of your live streams while you were ignoring me.

  • @DaniDani-zb4wd
    @DaniDani-zb4wd a month ago +2

    That guidance is too strong for realistic images. For illustrations it's fine, but for realism it's too much.

  • @ImShubhamY
    @ImShubhamY a month ago

    Can't you just increase the queue count to batch-generate multiple images one by one?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      The problem is, if you use the image chooser node, it will pause the workflow until you choose images to progress.

    • @ImShubhamY
      @ImShubhamY a month ago

      @@OlivioSarikas Yes, that's true; you have to choose an image to proceed, which obviously is not compatible with batch generation.

  • @diez19781
    @diez19781 a month ago

    And the fact that sunlight constantly blinds our eyes and shines straight into the viewer's face: is that normal?

  • @polloloco6353
    @polloloco6353 a month ago

    The problem of an image full of patterns and horrible quality comes from people using a normal KSampler (Advanced), which contains a field called STEPS set to 20 by default (the field above the CFG field). You resolve this by setting the STEPS field to 8 in the first KSampler and 12 in the last KSampler... 😉

    • @polloloco6353
      @polloloco6353 a month ago

      And I prefer the MaraScottUpscalerRefinerNode V3; it's faster.

    • @2008spoonman
      @2008spoonman a month ago

      Exactly like Olivio does in his workflow. 😊

  • @dwainmorris7854
    @dwainmorris7854 a month ago

    Faster, more detail, but how is the censorship?

    • @OlivioSarikas
      @OlivioSarikas a month ago

      It's Flux, and as such not good at NSFW content ;)

  • @whiletrue1-wb6xf
    @whiletrue1-wb6xf a month ago

    Your channel is truly wonderful! Do you know of any methods or tools for creating 3D models of humans?

  • @jibcot8541
    @jibcot8541 a month ago

    I only see those lines and grids with Flux when I'm using LoRAs, now that I also use custom scheduler sigmas.

  • @krakenunbound
    @krakenunbound a month ago +1

    Even at 4K I'm not impressed with the quality, but it could be how YouTube compresses things.

    • @ryan18462
      @ryan18462 a month ago

      @@krakenunbound I don't believe it is.
