Flux.1 vs AuraFlow 0.2 - Is Flux The Best EVER?! Free & Local in ComfyUI + Upscaling

  • Published: 8 Sep 2024

Comments • 181

  • @TheDocPixel
    @TheDocPixel 1 month ago +64

    Let’s not forget that Black Forest was the original team that Stable Diffusion and Midjourney were built on top of. These guys are geniuses!

    • @Huang-uj9rt
      @Huang-uj9rt 26 days ago

      I use ComfyUI online on my MimicPC. I loaded the Flux workflow into ComfyUI, and it quickly generated results that surprised me. I think Flux does a pretty great job of detailing characters; at least that's my experience using it on MimicPC!

    • @Huang-uj9rt
      @Huang-uj9rt 21 days ago

      So it really does come down to the strength of the original team. Unfortunately, Flux's graphics card requirements are a little high, so I can only run it through MimicPC online. Its detail handling really is better than SD3's, but I'm not yet very proficient at fine-tuning, so the road to learning Flux will be a long one!

  • @wsippel
    @wsippel 1 month ago +44

    Flux was created by 14 members of the original Stable Diffusion team, so they certainly know what they're doing. Black Forest Labs, the company behind Flux, is headquartered in the Black Forest (southwest Germany), hence the name.

  • @muuuuuud
    @muuuuuud 1 month ago +12

    This is such a thrilling era we live in. I always had too many ideas and nowhere near the time or energy to produce them, and now I can almost produce them in real time :3. It's incredibly actualizing, creatively. Thanks for the videos, Nerdy Rodent! ^-^

  • @runebinder
    @runebinder 1 month ago +8

    Hearing the genuine surprise and you being lost for words with Flux was very funny. I had set up the dev version of Flux earlier, so I had an idea of what it could do and was interested to hear someone else's perspective. It didn't disappoint :)

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      It’s not often something does that 😉

  • @marsandbars
    @marsandbars 1 month ago +10

    9:55 "He's got two hands." It's unthinkable that one could possess two hands.

    • @Lesani
      @Lesani 1 month ago +1

      Two right hands, one with 6 fingers. Yes, pretty unthinkable :)

    • @lalayblog
      @lalayblog 1 month ago

      @@Lesani This is what I mean... SDXL / Pony models have fixed most of the issues with hands and anatomy, and ADetailer and Inpaint allow us to fix minor issues quickly. AuraFlow needs much more work to produce stable anatomy.

    • @MrGenius2
      @MrGenius2 1 month ago

      @@lalayblog The one with 2 right hands is not AuraFlow, it's Flux, and it's not a day-to-day update model. Talking about more work on anatomy after the big leap it just made doesn't make sense; just learn to prompt better.

    • @FoulPet
      @FoulPet 1 month ago

      @Lesani Having 2 right hands is more likely than being an anthropomorphic rat.

  • @Patheticbutharmless
    @Patheticbutharmless 1 month ago +9

    NerdyRodentMan, you holding your breath / losing your mind ("Whaaaaaaa???") is so CUUUUUUUUUUUUUUTE!

  • @KoolenDasheppi
    @KoolenDasheppi 1 month ago +21

    The Flux model is very exciting, I've already tried it out a bit and it's currently blowing my mind. Good thing I have a 3090 lmao

    • @NerdyRodent
      @NerdyRodent  1 month ago +4

      Yes indeed! It's "next level" :D

    • @bolon667
      @bolon667 1 month ago +1

      How much VRAM does this model require?

    • @KoolenDasheppi
      @KoolenDasheppi 1 month ago +5

      @@bolon667 At the moment it requires 24GB of VRAM because it's such a HUGE model. 12 billion parameters. I'm sure the community will optimize it further soon though.

    • @bolon667
      @bolon667 1 month ago +2

      @@KoolenDasheppi Sad. I thought this model would cost less to run since it's the smallest one, but welp, an RTX 3060 12GB is not enough. Thanks for the answer, though.

    • @KoolenDasheppi
      @KoolenDasheppi 1 month ago

      @@bolon667 Yeah, sorry about that. But it is no small model: it's 12 billion parameters, which is HUGE for a text-to-image model. For comparison, Stable Diffusion 1.5 is only 983 million parameters, and SDXL is 3.5 billion.
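
As a rough sanity check on the numbers in this thread, the raw weight footprint is just parameter count times bytes per parameter; actual VRAM use is higher once activations, the VAE, and the text encoders are loaded, so treat this as a lower bound, not a requirements table:

```python
def weight_footprint_gb(params: float, bytes_per_param: float) -> float:
    """Raw size of the weights alone, in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

# Parameter counts as quoted in the comments above
models = {"SD 1.5": 0.983e9, "SDXL": 3.5e9, "Flux.1": 12e9}

for name, params in models.items():
    print(f"{name}: {weight_footprint_gb(params, 2):.1f} GB at fp16, "
          f"{weight_footprint_gb(params, 1):.1f} GB at fp8")
```

This is where the "24GB" figure comes from: 12 billion parameters at 2 bytes each in fp16, which also explains why the fp8 versions mentioned later in the thread roughly halve the footprint.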

  • @JustFeral
    @JustFeral 1 month ago +46

    Holy shit Flux is insane.

    • @97BuckeyeGuy
      @97BuckeyeGuy 1 month ago +7

      Insanely VRAM hungry. Wait, my mistake. It's seriously SYSTEM RAM hungry! 32GB gone!

    • @TheDoomerBlox
      @TheDoomerBlox 1 month ago

      @@97BuckeyeGuy Not really, considering that proper high-quality quantization efforts are very likely to reduce the total footprint of the model to less than 12 GB while offering very similar quality to the full-precision model.
      Look up "stable diffusion BitsFusion" for an example.
      Until now there hasn't been a really good incentive to crunch a model to less than 1/4 of its FP16 size, but with a model that has such broad visual capabilities out of the box, it does make sense to crunch it down.
      Hell, for a "vanilla" model it does risque stuff very well. Crazy good release, considering there are no scheduler funnies, no sampler funnies, not even attention-guidance funnies (there's not even a CFG control lol), and yet it's still super impressive. It'll get even better; that's crazy to think about.
      Training it, on the other hand... haha, yes. lol lmao
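
The quantization being discussed here boils down to storing each weight as a small integer plus a shared scale. This toy sketch shows only that core arithmetic (real schemes like BitsFusion add mixed precision and calibration; the weight values below are made up for illustration):

```python
# Toy uniform quantization: map floats to signed integers with one
# per-tensor scale, then reconstruct and measure the error introduced.

def quantize(weights, bits):
    """Return (integer codes, scale) for symmetric uniform quantization."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize(codes, scale):
    """Reconstruct approximate floats from the integer codes."""
    return [c * scale for c in codes]

weights = [0.031, -0.27, 0.118, 0.9, -0.005, 0.44]  # made-up example values
q8, s8 = quantize(weights, 8)
restored = dequantize(q8, s8)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"8-bit max reconstruction error: {max_err:.4f}")
```

Each weight now needs 1 byte instead of 2 (fp16), and the worst-case error is bounded by half the scale step, which is why a well-quantized 12B model can land under 12 GB with little visible quality loss.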

    • @procrastinatingrn3936
      @procrastinatingrn3936 1 month ago

      Holy Beautiful

  • @jibcot8541
    @jibcot8541 1 month ago +8

    I have been making custom AI birthday cards for over a year now; it's a great use of AI.

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 1 month ago

      Oh, really cool idea. Especially since they can be tailored to the person receiving them.

  • @ickorling7328
    @ickorling7328 1 month ago +2

    Looks like the community missed 'Dimba' from 2 months ago, a transformer-plus-Mamba image AI, which is good. Flux does look better, but progress will come from blending solutions and rich, high-quality data. 🎉

  • @tdfilmstudio
    @tdfilmstudio 1 month ago +1

    Great tutorial! Love your reactions to Flux outputs

  • @NewPhilosopher
    @NewPhilosopher 1 month ago +1

    Took me a while, but I finally got batches of different prompts to work on Flux Schnell in ComfyUI. Set it going and wake up in the morning with a whole bunch of new pictures to look at.

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      Yup, I ran a batch of 200 last night so I had loads to look at!
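
Overnight batches like the ones described above can be scripted against ComfyUI's HTTP API: export the workflow in API format, then swap in each prompt and a fresh seed before POSTing the result to the server's `/prompt` endpoint. A minimal sketch; the node ids `"6"` and `"25"` and the toy workflow are placeholders, since the real ids depend on your own exported workflow:

```python
import copy
import json
import random

def build_jobs(workflow: dict, prompts, text_node="6", seed_node="25"):
    """Return one /prompt payload per prompt, each with a random seed.

    `workflow` is a ComfyUI workflow exported in API format; the node ids
    used here are placeholders for whatever your export actually contains.
    """
    jobs = []
    for text in prompts:
        wf = copy.deepcopy(workflow)          # don't mutate the template
        wf[text_node]["inputs"]["text"] = text
        wf[seed_node]["inputs"]["noise_seed"] = random.randint(0, 2**32 - 1)
        jobs.append({"prompt": wf})
    return jobs

# Toy stand-in for an exported workflow (real ones have many more nodes).
workflow = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
    "25": {"class_type": "RandomNoise", "inputs": {"noise_seed": 0}},
}
jobs = build_jobs(workflow, ["a nerdy rodent", "a misty forest"])
for job in jobs:
    payload = json.dumps(job)  # POST this to http://127.0.0.1:8188/prompt
    print(len(payload), "bytes queued")
```

Queued jobs run one after another, so a list of 200 prompts submitted this way is exactly the "wake up to a folder of images" workflow mentioned in the thread.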

  • @S4f3ty_Marc
    @S4f3ty_Marc 1 month ago +3

    Yep, it's great. This is the model we've been waiting for!

    • @lalayblog
      @lalayblog 1 month ago

      Nope... currently it is too raw in terms of anatomy. Just yet another commercial ad for an early-access buggy model.

  • @vi6ddarkking
    @vi6ddarkking 1 month ago +3

    Honestly, I look forward to AuraFlow 1.0.
    That number alone will spark a rush of fine-tunes in and of itself.

  • @USBEN.
    @USBEN. 1 month ago +2

    Bro, they came out swinging with FLUX and knocked out all the competition. And it gets better: they are working on SOTA models that just might force SORA out into the open.

  • @DOTATOU
    @DOTATOU 1 month ago +1

    Holy moly, Flux is good. I've seen that long-format videos perform well; your channel deserves more!
    Make very long videos about tips and tricks or things you learned along the way with AI.

  • @alanreynolds4262
    @alanreynolds4262 1 month ago +1

    The prompt adherence is insane on Flux. The biggest fault of previous models was the struggle to get exactly what you want in the picture, and this is a huge step forward for open source.

    • @bgtubber
      @bgtubber 1 month ago +1

      Yes, it's really amazing at that. It can still lack fine details sometimes, so you can use Flux as a base model and then an SDXL or SD 1.5 model as a refiner/2nd pass to add in the details.

    • @Huang-uj9rt
      @Huang-uj9rt 26 days ago

      I think the current Flux running on MimicPC is good enough for the detail I need in my work; the generation speed and the detail in the images are great!

  • @bgtubber
    @bgtubber 1 month ago +2

    As soon as I got fal's newsletter, I was waiting for you to do a review of it. This is really mind-blowing! I assume there will be fine-tunes and ControlNets released for it?
    BTW, that's one massive model! Good thing I've got a 24GB GPU. I can't wait to try it out!

  • @Tystros
    @Tystros 1 month ago +6

    Can you also make a video about the full-quality Flux model? Flux Schnell is basically a Turbo version with worse quality. If it's this good, I want to see what the "real" model can do!

    • @MyAmazingUsername
      @MyAmazingUsername 1 month ago +2

      Flux Dev = Turbo model (quantized). Excellent.
      Flux Schnell = Turbo Turbo model (4-bit quantized). Very bad limb understanding.
      Flux Pro = the real base model. The best. Will not be released.

  • @knightride9635
    @knightride9635 1 month ago +3

    Flux looks amazing

  • @MyAmazingUsername
    @MyAmazingUsername 1 month ago +1

    Both the Dev and Schnell models are quantized ("Turbo") models. The true base model is private, so it might not be possible to fine-tune it or create LoRAs. No word about training has come from anywhere, even though everyone is asking.

    • @phoenixfire6559
      @phoenixfire6559 1 month ago

      Their blog post says the following:
      "FLUX.1 [dev]: The base model, open-sourced with a non-commercial license for community to build on top of."
      I think many of the derivative SD models use fp16 precision. Given the blog post, it seems to imply users should make derivatives on top of the fp16 dev model.

    • @MyAmazingUsername
      @MyAmazingUsername 1 month ago

      @@phoenixfire6559 The dev model is fp8 and is distilled, meaning all its parameters are dependent on each other, so you cannot fine-tune it without completely destroying its prior knowledge. But LoRAs might be possible, since they use fewer steps and affect fewer layers.

  • @Neko_Dave
    @Neko_Dave 1 month ago +4

    Flux seems like the biggest step forward for local image models since SD 1.x. Shame it's so demanding to run.

    • @RhysAndSuns
      @RhysAndSuns 1 month ago

      I'm just testing it out on a 16GB card and it's running at 8-9 s/it including CLIP encoding, so about 30s per 1024px image at 4 steps. It looks to be using about 15GB of VRAM on my machine.

  • @VisibleMRJ
    @VisibleMRJ 1 month ago +1

    This changes EVERYTHING

  • @robxsiq7744
    @robxsiq7744 1 month ago +2

    Tried both, and Flux... man, it's fantastic. A different level, remaking the game. It's what we all wished SD3 would have been. It runs like a truck in mud, but the results are fantastic. It (mostly) knows human anatomy, to a degree, a bit like the SDXL base model, and it nails the overall look. This could be the winner if we can tweak it to run a bit better and get some fine-tunes out. It follows prompts better than I ever thought possible. There are still issues, of course: weird feet, some screwy things. But overall, Flux is the king in my book for general artbot prompting. Now, for adult content... yeah, you'll need fine-tunes for sure, as it isn't much for that, though it knows the body. For style, it struggles a bit to get away from realism compared to, say, DALL-E, but again, fine-tunes...

  • @MissingModd
    @MissingModd 1 month ago +2

    Is this your voice, Nerdy?! Ooh, nice!

  • @KnutNukem
    @KnutNukem 1 month ago

    This is some much-needed updraft for the open-source community.
    A MA ZING!

  • @TheGanntak
    @TheGanntak 1 month ago +18

    Sitting in the corner crying into my RTX 2070 8GB

    • @xbon1
      @xbon1 1 month ago +5

      Y'all humans spend thousands on cars but not on the computer you use every day? My 3090 handles it fine.

    • @synthoelectro
      @synthoelectro 1 month ago

      I have a 4GB 1650 and use virtual memory, which works for SDXL, so I think I'll use the same method with this.

    • @ayaneagano6059
      @ayaneagano6059 1 month ago

      @@synthoelectro I don't think that's gonna work... even SDXL at the beginning required a lot less VRAM than this does now... this new model has 3-4x the parameters.

    • @MrGTAmodsgerman
      @MrGTAmodsgerman 1 month ago

      @@xbon1 Not everyone owns a car, let alone would spend that much on one. And using one of those kidney-priced cards every day would also be insane in cost, given the ridiculous amount of energy they eat, while no longer even fitting into big PCs; it's like traveling back in time.

    • @lalayblog
      @lalayblog 1 month ago

      Just use a Pony model with ComfyUI, for example one of the derivatives of DucHaiten's Pony Real.

  • @Halsu
    @Halsu 25 days ago +1

    Just a note about the Schnell version: while it can indeed make decent images at 4 steps, I find that higher step counts often help, for example in cases of missing or extra limbs, and for creating a more detailed image overall. I find the most useful range to be between 6 and 25 steps, maybe.
    I also experimented with much higher step counts. I stress-tested it at up to 640 steps (yes, 640, no extra zero), and it still produced a valid image, though the results were perhaps a little weird; Flux sort of overdid the prompt-following and added odd details.

  • @estrangeiroemtodaparte
    @estrangeiroemtodaparte 1 month ago +1

    You should try the dev one. Amazing stuff!

  • @phoenixfire6559
    @phoenixfire6559 1 month ago +1

    The problems with the Flux model are:
    1. The base model has no commercial license; you need to contact them to get one, and who knows the cost? The base model is where you make derivative models from (they mention this in their blog post).
    2. The model is too large (12B parameters and a 24GB file size) for making LoRAs or fine-tunes locally; you'll probably need to use RunPod and spend hundreds of dollars.
    Together, 1 and 2 mean there are likely to be fewer derivative models, and for me personally the derivative models and making my own LoRAs are the most important factor, which means I probably won't use this model.
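The training-cost worry above comes from training memory being far larger than inference memory: a full fine-tune with Adam keeps weights, gradients, and two fp32 optimizer moments per parameter. A rough back-of-envelope sketch (the function names and the 1% LoRA trainable fraction are illustrative assumptions, and activation memory is excluded, so real numbers are higher):

```python
def full_finetune_gb(params, weight_bytes=2, grad_bytes=2, optim_bytes=8):
    """Weights + gradients + Adam moments (two fp32 values/param), in GB.

    Activation memory is ignored, so this is a lower bound.
    """
    return params * (weight_bytes + grad_bytes + optim_bytes) / 1e9

def lora_gb(params, base_bytes=2, trainable_fraction=0.01):
    """Frozen fp16 base weights plus a small trainable adapter (rough guess)."""
    adapter = params * trainable_fraction
    return (params * base_bytes + adapter * (2 + 2 + 8)) / 1e9

print(f"Full fine-tune of 12B params: ~{full_finetune_gb(12e9):.0f} GB")
print(f"LoRA on 12B params:          ~{lora_gb(12e9):.0f} GB")
```

The two results (~144 GB vs. ~25 GB) illustrate why commenters expect LoRAs on rented multi-GPU hardware long before anyone full fine-tunes Flux locally.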

  • @YTbannedme-g8x
    @YTbannedme-g8x 1 month ago

    7:34 was hilarious

  • @randomanum
    @randomanum 1 month ago +1

    THE BOMB!

  • @electronicmusicartcollective
    @electronicmusicartcollective 1 month ago +1

    WWWWWWWWWWWWOOOOOOOOWWWWWWW FLUX AMAZING

  • @RHYTE
    @RHYTE 1 month ago +2

    so cool!

    • @NerdyRodent
      @NerdyRodent  1 month ago +4

      I'm even more addicted to testing prompts now!

    • @RHYTE
      @RHYTE 1 month ago

      hopefully there will be finetuning options soon

  • @Firespark81
    @Firespark81 1 month ago +1

    Sucks that it takes so much VRAM. Needing over 24GB is kind of nuts. Hopefully they get that down in future models.

    • @douchymcdouche169
      @douchymcdouche169 1 month ago +1

      People on Reddit said they managed to run it with just 6gb VRAM. Someone posted a guide on how to make it work with weak GPUs.

    • @procrastinatingrn3936
      @procrastinatingrn3936 1 month ago

      So my i5 laptop with 8GB of RAM will explode?

    • @squallseeker-i2i
      @squallseeker-i2i 1 month ago

      Working fine on my 3080 with 10GB VRAM and 64GB system RAM.

    • @streetdrums
      @streetdrums 1 month ago

      Works fine with an RTX 3060 12GB and 64GB RAM in ComfyUI with the fp16 model: 5-6 s/it at 1024x1024.

  • @TPCDAZ
    @TPCDAZ 1 month ago

    In Forge and A1111 the image is also the workflow. I'm surprised more people don't know that.

  • @ICHRISTER1
    @ICHRISTER1 1 month ago

    Yay my 3090Ti will start running again! :D

  • @wilbertandrews
    @wilbertandrews 1 month ago +1

    yeah, much better than sd 👍

  • @user-iw4rs2kh6n
    @user-iw4rs2kh6n 1 month ago +1

    I'm using an RTX 3060 with 12GB VRAM and it is very fast! Probably faster than Stable Diffusion, all other things being equal.

  • @phizc
    @phizc 1 month ago +1

    Damn, my investment in a 4090 last year seems to be bearing fruit 😅.

  • @SandCastleMania
    @SandCastleMania 1 month ago

    Thank you, Nerdy!

  • @Ilua1Sud1
    @Ilua1Sud1 1 month ago

    We need comparisons of the Flux model with different parameters and samplers; in my opinion, generations come out better at 30 steps. I'd also like to see your workflow with Inpaint and img2img support. Looking forward to it in the next videos. The model is very promising and needs to be tested thoroughly.

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      All on the patreon post if you’re interested 😉

  • @goodie2shoes
    @goodie2shoes 1 month ago

    the future is HERE!

  • @ahsookee
    @ahsookee 1 month ago +1

    Important: the really good open-source version is called "Flux Dev". "Flux Schnell" is like a lightning version of it with worse quality.

    • @ahsookee
      @ahsookee 1 month ago +1

      Schnell means fast in German, so that's the idea behind it

    • @Dave-rd6sp
      @Dave-rd6sp 1 month ago +2

      Flux Dev isn't open source, just open weights. Flux Schnell is open source.

    • @ahsookee
      @ahsookee 1 month ago

      ​@@Dave-rd6sp source-available, whatever. The outputs can be used commercially, that's what counts

    • @Dave-rd6sp
      @Dave-rd6sp 1 month ago +1

      @@ahsookee No. Open source is about whether the thing has a permissive license. And it doesn't.

  • @tfairfield42
    @tfairfield42 1 month ago +1

    Aw man, getting an error with my 3090: module 'torch' has no attribute 'float8_e4m3fn'. Flux looks awesome and I can't wait to fix this issue and try it out. Too bad it wasn't fixed by something simple like updating torch, transformers, and Comfy. I had the issue with SD3 as well, but there I could just load the FP16 CLIP and ignore it 😅. I'm sure a fix will come through over the next couple of days.

    • @tfairfield42
      @tfairfield42 1 month ago +2

      While updating Comfy and its dependencies didn't help, simply installing the Windows portable build again and running that fresh installation made it work. Probably some weird Python issue with the versions of transformers and torch.

  • @ysy69
    @ysy69 1 month ago

    Wow, Flux seems impressive. Do you know if the models can be fine-tuned?

  • @sb6934
    @sb6934 1 month ago +1

    Thanks!

  • @streetdrums
    @streetdrums 1 month ago

    I got it running with the fp16 model in ComfyUI on my 12GB RTX 3060! It runs in low-memory mode. I have 64GB RAM and am getting 5-6 s/it. That's pretty fine!!!

  • @mikegaming4924
    @mikegaming4924 1 month ago

    I tested Flux Schnell and it sometimes makes good fingers, but it's not much better than SDXL, and img2img would still be needed to fix things.

  • @DavidBrown-tv8fx
    @DavidBrown-tv8fx 1 month ago

    It's amazing!!!!

  • @Ethan_Fel
    @Ethan_Fel 1 month ago +1

    Sadly, Flux's license and high GPU requirements will probably stop it from becoming the new standard.

  • @sdgtr4
    @sdgtr4 3 days ago

    Me, with 8GB of VRAM, watching this with tears in my eyes.

  • @florentChevalier111
    @florentChevalier111 1 month ago +1

    wow

  • @adrianmunevar654
    @adrianmunevar654 1 month ago

    Hello Nerdy 🐭, what do you recommend: installing Comfy locally step by step myself, or through Pinokio? Is there any difference? I'd like to know because I see this is the most capable UI, and I'm on A1111 watching the party get better and better on the Comfy side 🤷🏻‍♂️

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      I’d definitely do a normal install, but that’s me - I just like installing everything the same way!

  • @synthoelectro
    @synthoelectro 1 month ago

    Oddly, it can't find ComfyUI-Lumina-Next-SFT-DiffusersWrapper when you clone it; it simply doesn't exist.

  • @FlowFidelity
    @FlowFidelity 1 month ago

    This may be a noob question, but is that second workflow available in JSON format somewhere?

  • @quercus3290
    @quercus3290 1 month ago

    It's a crazy model. It still carries over a lot of the flaws inherent to models like this, but it's really very impressive.

  • @GeekDynamicsLab
    @GeekDynamicsLab 1 month ago

    I wonder if Flux could be run on a 4070 Ti Super? Nerdy?

  • @yngeneer
    @yngeneer 1 month ago

    So... I previously decided not to download AuraFlow because of my 16GB VRAM, waiting for a lower quant or something... and you're telling me I didn't need to?

  • @drawmaster77
    @drawmaster77 1 month ago

    Any chance of a tutorial on how to get it running on a cloud service? I don't have 32GB of VRAM :(

  • @electronicmusicartcollective
    @electronicmusicartcollective 1 month ago

    BAM...very xiting...thanx

  • @Grunacho
    @Grunacho 1 month ago

    Amazing model! My poor 4090 😢 It's time for an RTX 6000 Ada 😉💰

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      I could do with a 4090 myself! 😄

    • @Grunacho
      @Grunacho 1 month ago

      @@NerdyRodent 😄 I believe you 👍🏼

  • @KDawg5000
    @KDawg5000 1 month ago

    How do we upscale the FLUX images as well?

  • @marcdevinci893
    @marcdevinci893 1 month ago

    Thank you for this. It won't let me select the Flux model in the UNET Loader, and the ae.sft doesn't show up in the VAE loader either (I did refresh and even restarted).

  • @KDawg5000
    @KDawg5000 1 month ago

    Do LoRAs or ControlNet work with Flux?

  • @juanjesusligero391
    @juanjesusligero391 1 month ago +1

    Oh, Nerdy Rodent, 🐭
    he really makes my day; ☀🎵
    showing us AI, 🤖
    in a really British way. 🫖🎶

  • @gobsmacked_digital
    @gobsmacked_digital 1 month ago

    How do I create a UNETLoader node?

  • @artman40
    @artman40 1 month ago

    Perhaps it needs to look less "generic" in terms of style.

  • @marshallodom1388
    @marshallodom1388 1 month ago

    Version 1 looks WAY better than 2.

  • @bgtubber
    @bgtubber 1 month ago

    Hm... it doesn't seem to recognize negative prompts at all, so how do we remove things from our generations?
    Also, what samplers and schedulers are best for maximum quality and sharpness? Euler + Simple are good, I guess, but could it be better?

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      I prefer deis myself, but a few others work too!

    • @bgtubber
      @bgtubber 1 month ago

      @@NerdyRodent Good to know. The uni_pc_bh2 sampler + sgm_uniform scheduler combination seems to work too; I think it's a bit better than the default Euler + Simple.

  • @Satscape
    @Satscape 1 month ago +1

    Gonna try Flux on my 4GB card. Y'never know it might work.

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      Go for it 😄 There’s cut-down models now too, so who knows how low it will go?

    • @Satscape
      @Satscape 1 month ago +2

      @@NerdyRodent Update: It works! It's slow (350s per step, but only 4 steps!) at 1024x1024, but it looks SO GOOD!
      This is some sort of witchcraft!

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      @@Satscape awesome!

  • @INVICTUSSOLIS
    @INVICTUSSOLIS 1 month ago

    Does Flux work on macOS? I'm getting an MPS error.

  • @simongotz8126
    @simongotz8126 1 month ago

    Did you notice the rodent moon?

  • @hatelortechnch
    @hatelortechnch 1 month ago

    What is the minimum hardware needed to run this one?

  • @AlxB17
    @AlxB17 1 month ago

    My ComfyUI crashes at the VAE stage; the sampler generates just fine and then it crashes (4080 + 32GB, fp8 models). Comfy is fully updated.

    • @NerdyRodent
      @NerdyRodent  1 month ago

      Worth checking the error output to see why it's crashing. The hardware is fine, as others are apparently running it in just 4GB!

    • @AlxB17
      @AlxB17 1 month ago

      @@NerdyRodent Solved: in the UNET loader, "default" was not working; select fp8_e4m3fn and it works. Some outputs are a little blurry... maybe it's my prompt...

  • @akasht5
    @akasht5 1 month ago

    Dang it, the results look so good... but I could not load the workflow; my ComfyUI did not recognize .sft files, and when I renamed them it caused an error :(

    • @squallseeker-i2i
      @squallseeker-i2i 1 month ago +1

      You need a FLUX1 directory under both VAE and UNET to put the models in... you can also load them directly from the ComfyUI Manager's models section, and it puts them in the right place.

  • @electronicmusicartcollective
    @electronicmusicartcollective 1 month ago

    🤩

  • @youMEtubeUK
    @youMEtubeUK 1 month ago +5

    00:01 New versions of AuraFlow and Flux Schnell introduced
    01:22 AuraFlow 0.2 improves image quality
    02:39 AuraFlow 0.2 allows for custom birthday card creation
    03:55 AuraFlow 0.2 improves chaos and upscaling
    05:15 Setting up Flux & AuraFlow 0.2 in ComfyUI workflow
    06:35 Introduction to ComfyUI Flux & AuraFlow 0.2
    07:55 Flux & AuraFlow 0.2 showcases impressive Misty letters and beautifully detailed artwork
    09:22 Discussing different models and preferences

  • @ALTINSEA1
    @ALTINSEA1 1 month ago

    I can't wait until I can run Flux on my 4GB VRAM GTX 1050 Ti, lol. So far I am running SD 1.5 only.

  • @sadshed4585
    @sadshed4585 1 month ago

    It feels like I'm missing out on everything because I have an 8GB GPU. A 3090 is $1300-1500; who can afford that with inflation in the US?

    • @NerdyRodent
      @NerdyRodent  1 month ago +2

      Works on 8gb, just slower is all

    • @sadshed4585
      @sadshed4585 1 month ago

      @@NerdyRodent just use the fp8 version?

  • @NoMorePlayTV
    @NoMorePlayTV 1 month ago

    Flux makes only black pictures :( I don't know what's wrong.

    • @bgtubber
      @bgtubber 1 month ago

      Did you update ComfyUI to the latest version before using Flux? This is mandatory for it to work. Also, there's a workflow for the Dev and Schnell versions on ComfyUI's GitHub page. Did you try using it instead of building it yourself? I used that one and it worked fine for me.

  • @graylife_
    @graylife_ 1 month ago

    The specs kill me though... 24GB VRAM 😅

    • @NerdyRodent
      @NerdyRodent  1 month ago

      Try the Flux FP8 options for lower VRAM cards

  • @zecetry4970
    @zecetry4970 1 month ago

    Is it working with ComfyUI portable? I did everything right, but it looks like ComfyUI can't read the files.

    • @sempratempus8671
      @sempratempus8671 1 month ago

      Have you updated ComfyUI to the latest version?

    • @zecetry4970
      @zecetry4970 1 month ago

      @@sempratempus8671 Thank you, that was the problem!

  • @youMEtubeUK
    @youMEtubeUK 1 month ago

    Do we need to do an upscale with Flux?

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      You don’t need to, but you can 😉

    • @youMEtubeUK
      @youMEtubeUK 1 month ago

      @@NerdyRodent Thanks for the quick reply. Do you need a negative prompt as well?

    • @NerdyRodent
      @NerdyRodent  1 month ago +1

      @@youMEtubeUK nope!

    • @bgtubber
      @bgtubber 1 month ago

      @@youMEtubeUK You typically don't, as it is much better at following prompts than SDXL/SD 1.5. Unfortunately, you can't use negative prompts with Flux even if you wanted to (for the rare cases where you'd need them). I tried, and it doesn't work.

  • @MilesBellas
    @MilesBellas 1 month ago +2

    Better than Kolors ?

    • @NerdyRodent
      @NerdyRodent  1 month ago +4

      I think so :)

    • @MilesBellas
      @MilesBellas 1 month ago

      @@NerdyRodent
      Wow!
      Black Forest Labs' Flux 1 just launched.
      The output is amazing quality.
      Former Stability AI members are involved.
      "The company, led by Robin Rombach, Patrick Esser, and Andreas Blattmann, has secured $31 million in seed funding. Andreessen Horowitz (a16z) spearheaded the investment round, with additional backing from notable figures including Brendan Iribe, Michael Ovitz, and Garry Tan."

    • @Cingku
      @Cingku 1 month ago +1

      Miles ahead of Kolors, bro.

  • @Nicodedijon2
    @Nicodedijon2 1 month ago

    Why are my generations really slow with my 4090?

    • @bgtubber
      @bgtubber 1 month ago

      Flux IS slower than SDXL, but it shouldn't be that slow. I'm getting ~1.50 s/it on my RTX 3090 at 1-megapixel resolutions (e.g. 1024x1024). I'm using the workflow from ComfyUI's GitHub page. I'm getting ~3.50 s/it when I render at the max resolution supported by Flux, which is 2 megapixels (1536x1536).

  • @kargulo
    @kargulo 1 month ago

    I have 16GB VRAM and it takes ages to create.

    • @NerdyRodent
      @NerdyRodent  1 month ago

      I know people have said they’re using just 4GB VRAM, so yes… with less VRAM it can take a while as it swaps!

  • @jonmichaelgalindo
    @jonmichaelgalindo 1 month ago

    Flux Dev fp8 on 4090 is *way* better. 20 seconds per image.

  • @lukas5220
    @lukas5220 1 month ago

    Sadge. Tried to run it... welp, 64GB RAM + a 3090 is not really enough for this.

    • @Cingku
      @Cingku 1 month ago +3

      I've got a 12GB RTX 3060 and 32GB RAM. I run the dev version fine, although it takes much longer than SDXL, like 1+ minutes. I used the T5xx FP8, but it's still amazing.

  • @MaxSMoke777
    @MaxSMoke777 1 month ago

    That's 25GB for one model, or what was that, around 40GB for another set? Is the secret to good results just making INSANELY MASSIVE models?
    Also... Canada. Canadian. Its first thought was Asian, and the second was Latino? Did I miss something? I know America has had unhinged migration, but what happened to Canada? Maybe some parts of Vancouver, but the rest of it is pale white. They don't even tan up there. Maybe it snowed when they were making the model and all the white people just got lost in the background? Is this like the Google AI, where you ask for historical Europeans and you get nothing but black people? WHITE PEOPLE ARE REAL, I SWEAR!

  • @lalayblog
    @lalayblog 1 month ago +1

    I don't see where this model is much better than SDXL / Pony. It is 2.5x bigger and 2-3x slower, and it renders garbage anatomy compared to DucHaiten's Pony Real or CinEro Pony. The only thing is slightly better adherence, but in ComfyUI an Omost workflow node with a Pony model gives a much better combination of adherence and zonal control over the picture than this AuraFlow.
    Also, a Pony model can be trained in LoRA mode on a Tesla P40 (24GB), while I can't even imagine how much VRAM is needed to train this AuraFlow.
    I will wait until this architecture is trained to render at least moderate anatomy, comparable with the Pony models.

    • @lalayblog
      @lalayblog 1 month ago +1

      And BTW, this model is incompatible with every conditioning node except the basic CLIP prompt node.

    • @NewPhilosopher
      @NewPhilosopher 1 month ago

      Text is better in Flux. The great thing is you can use different models to suit different purposes. I still even use Disco Diffusion occasionally.