FLUX Comparison SCHNELL vs DEV IMAGE TO IMAGE TEXT TO IMAGE INPAINTING

  • Published: 24 Oct 2024

Comments • 96

  • @Utoko
    @Utoko 2 months ago +27

    Dev is just way better, no competition. Nice inpainting workflow

    • @PixelEasel
      @PixelEasel  2 months ago +5

      agreed!

    • @AaliDGr8
      @AaliDGr8 2 months ago

      Plzzzzzz tell us how to use Flux Dev for free anywhere??? Online or... @@PixelEasel

    • @KingBerryBerry
      @KingBerryBerry 2 months ago

      @@PixelEasel Now use SCHNELL with a high "resolution" in pixels and get a surprise 😎 maybe you need to make a new video for that.

    • @PixelEasel
      @PixelEasel  1 month ago

      checking @@KingBerryBerry

  • @haka8702
    @haka8702 2 months ago +4

    the consistency of the woman with different expressions is quite a game changer

  • @pixelcounter506
    @pixelcounter506 2 months ago +1

    I agree with you that dev looks more photorealistic. Thank you very much for your informative summary!

    • @PixelEasel
      @PixelEasel  2 months ago

      thanks for commenting!!

  • @tobycortes
    @tobycortes 2 months ago +16

    why would you do fewer steps in Schnell and then go on about lower quality through the whole video???? Clearly this is not accurate, Schnell has sick quality with 30+ steps

  • @mic2016
    @mic2016 2 months ago

    This worked. You are a gentleman and scholar sir...

  • @marcoantonionunezcosinga7828
    @marcoantonionunezcosinga7828 2 months ago +3

    Thank you so much. You could make other comparisons with these models (SDXL Turbo, SDXL Lightning, Stable Diffusion 3) vs Dev and Schnell

  • @MrRandomnumbergenerator
    @MrRandomnumbergenerator 2 months ago

    amazing and detailed comparison, liked and subscribed

  • @Nrek_AI
    @Nrek_AI 2 months ago +1

    Thanks for giving us this comparison 🖖

  • @59Marcel
    @59Marcel 2 months ago +2

    Thank you for the great tutorial.

    • @PixelEasel
      @PixelEasel  2 months ago

      thanks for a great comment!

  • @alexanderj.6701
    @alexanderj.6701 2 months ago +3

    I feel like in order to make it fair both SCHNELL and DEV should've been tested with the same number of steps, not 4 vs 20.

  • @cekuhnen
    @cekuhnen 2 months ago +6

    Actually the Schnell illustration is much better than the Dev one - Dev looks like a photo mixed with an illustration

    • @stepfury
      @stepfury 2 months ago +2

      Wow, you've got a point. Maybe the comparison with anime and painting should be the next one.

    • @CodexPermutatio
      @CodexPermutatio 2 months ago

      Both models are awesome! Each one is tailored for a specific use.

  • @florianschmoldt8659
    @florianschmoldt8659 2 months ago +3

    Training on datasets that include tons of copyrighted art and then telling others it's not for commercial use until licensed is mind-blowing

    • @jassimibrahim6535
      @jassimibrahim6535 16 days ago

      I mean, realistically speaking, they won't know if you're using their model or not

    • @florianschmoldt8659
      @florianschmoldt8659 16 days ago

      @@jassimibrahim6535 If you copy & paste the image into a new document, yes. But ComfyUI and Automatic1111 write the model and workflow info into the image metadata.
      ... But if you'll tell nobody, I won't either 😏
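
A minimal sketch of how that embedded metadata can be inspected, assuming Pillow is installed and the file is a PNG saved by ComfyUI or Automatic1111 (the key names below are those tools' usual defaults; "generated.png" is just a placeholder path):

```python
# Hypothetical example: read the generation metadata that ComfyUI / Automatic1111
# typically embed in saved PNGs.
from PIL import Image

img = Image.open("generated.png")
meta = img.info  # PNG text chunks (workflow/prompt JSON, A1111 parameter string) land here

for key in ("prompt", "workflow", "parameters"):
    if key in meta:
        print(f"--- {key} ---")
        print(str(meta[key])[:500])  # print only the first 500 characters
```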

  • @AlistairKarim
    @AlistairKarim 2 months ago +1

    Thanks! Very informative and on point.

  • @TheMadManTV
    @TheMadManTV 2 months ago

    Thank you very much, bro. It is a very clear comparison.

  • @philippeheritier9364
    @philippeheritier9364 2 months ago +1

    a big thank you, very informative.

  • @duskairable
    @duskairable 2 months ago +6

    There's a Schnell & Dev combined model.
    4 steps, near-Dev quality, best of both worlds. Use that instead.

    • @stepfury
      @stepfury 2 months ago +1

      What is that version? 😮 The Pro one?

    • @JackCrossSama
      @JackCrossSama 2 months ago +1

      where can we find the link

    • @mylittleheartscar
      @mylittleheartscar 2 months ago

      Sauce?

    • @sertocd
      @sertocd 2 months ago

      it doesn't understand the prompts as well as the standalone Dev.

    • @franciscobutte
      @franciscobutte 1 month ago +2

      Dude, tell us the source if you're gonna claim something like that

  • @wellshotproductions6541
    @wellshotproductions6541 2 months ago +1

    Great content and video! Keep up the great work!

    • @Huang-uj9rt
      @Huang-uj9rt 2 months ago

      Yes, I think Flux is awesome. I tried Stable Diffusion on Mimicpc, and of course this product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.

  • @muggzzzzz
    @muggzzzzz 2 months ago +2

    Can Flux generate a normal female face without a square jaw and a dimple on the chin? I tried but couldn't get it. All the faces looked the same.

  • @ernatogalvao
    @ernatogalvao 2 months ago

    You broke me. I'm desperate to understand: how do you use an image as a style reference like we do in Midjourney? How do you keep the characters consistent? How do I do these basic inpaintings?

  • @NeVraX
    @NeVraX 2 months ago

    Could you give us the prompt for the city at night with neon? We only see the prompt from the previous images.

  • @ismgroov4094
    @ismgroov4094 2 months ago +1

    ❤😊thx sir

    • @PixelEasel
      @PixelEasel  2 months ago +1

      you're welcome 😊

  • @spiritpower3047
    @spiritpower3047 2 months ago

    very nice!! and now, can you compare "flux dev" and "flux pro"? thank you very much! 😉

    • @Huang-uj9rt
      @Huang-uj9rt 2 months ago +1

      Flux Dev is for developers, providing basic development tools and functionality, while Flux Pro is an advanced version with more features, performance optimizations, and enterprise-level support for professional users who need a more comprehensive service.

  • @janvollgod7221
    @janvollgod7221 1 month ago

    I like Flux. I use both Schnell and Dev with the GGUF extension in ComfyUI. Fast and furious. The only thing I don't like is the trend since SDXL/SD3 that "problematic" stuff, mostly NSFW, is blocked somehow; even when forcing it indirectly through the prompt, you'll barely get a nice result. But hey, that's what SD and SDXL are for

  • @siliconbrush
    @siliconbrush 2 months ago

    I really wish I knew how the model got down to 11 GB; something had to go, right? Perhaps that additional bump in hand training? Quality, etc. This could have a huge impact on your test results. Just think about it: if a slight difference in scaling impacts the model, what does reducing the original model by 50% do? I'm not saying we shouldn't use the smaller model, I have to use it to get anything out in a reasonable amount of time. I'm just curious about the process of reducing model sizes. What exactly is it that we lost?

    • @madrooky1398
      @madrooky1398 2 months ago

      The reduction in model size usually involves a process called quantization, which essentially compresses the data by using fewer bits to represent the numbers (weights) in the model. In machine learning, weights are often stored as floating-point numbers, like FP32 (32-bit floating point) or FP16 (16-bit floating point). Reducing the model size could mean switching from FP32 to something smaller like FP16 or even FP8.
      When you use smaller numbers like FP8, the model consumes less memory and processes faster, but there is a trade-off: you lose some precision in the weights. This loss in precision can affect the model's accuracy or the quality of the generated images.
      However, the impact of this precision loss varies depending on the model and its use case. Specialized models, designed for specific tasks, might still perform well with lower precision because they don't need to generalize across as many scenarios. On the other hand, more generalized models that need to work well across a wide range of inputs may experience a noticeable drop in performance.
      So, when the model was reduced from 30GB to 11GB, some precision was likely sacrificed, which could potentially affect the quality of its outputs, especially in more complex or diverse tasks. And that is exactly what we can see in the comparison.
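
To make the precision trade-off described above concrete, here is a minimal sketch of the downcasting idea (it assumes PyTorch 2.1+ with float8 support; the random tensor is just a stand-in for real model weights, not FLUX itself):

```python
# Illustrative only: measure the error introduced by storing FP32 weights in FP16/FP8.
import torch

weights = torch.randn(1_000_000, dtype=torch.float32)  # stand-in for FP32 model weights

for dtype in (torch.float16, torch.float8_e4m3fn):
    quantized = weights.to(dtype)              # lossy downcast, like FP16/FP8 checkpoints
    restored = quantized.to(torch.float32)     # what the model effectively "sees"
    err = (weights - restored).abs().mean().item()
    size_ratio = quantized.element_size() / weights.element_size()
    print(f"{str(dtype):>22}: mean abs error {err:.2e}, {size_ratio:.0%} of FP32 size")
```

As the comment notes, how much this error matters depends on the model and task; the snippet only shows the memory/precision mechanics, not the effect on image quality.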

  • @mr.entezaee
    @mr.entezaee 2 months ago +1

    I didn't understand how you did it without changing anything... it was too vague for me, and I was really waiting for a tutorial like this...

    • @Huang-uj9rt
      @Huang-uj9rt 2 months ago

      I've had your problem before; if you don't think you're doing anything wrong, you can either reboot and rebuild the model or go to mimicpc and run Flux. I was just frustrated with the fuzzy images I was getting, and used mimicpc's online ComfyUI to run Flux after trying other solutions that didn't work. I think the images it produced were very good, but I don't know what to do with them. I was really surprised by the detail of the images it generated.

  • @TridentHut-dr8dg
    @TridentHut-dr8dg 2 months ago +1

    I feel this is way better than DALL·E, and I don't even need to compare it to Google...

    • @gameon2000
      @gameon2000 2 months ago

      Imagen 3 is even better than Flux at photorealistic people (fingers & toes) and "true to the prompt" accuracy.

    • @TridentHut-dr8dg
      @TridentHut-dr8dg 2 months ago

      @@gameon2000 Will try

    • @TridentHut-dr8dg
      @TridentHut-dr8dg 1 month ago

      @@gameon2000 Ohh thanks for the info I will check it out

  • @LibertyRecordsFree
    @LibertyRecordsFree 2 months ago

    Can you share your template for ComfyUI? Thanks for the video, interesting

  • @grillodon
    @grillodon 2 months ago

    Error in inpaint workflow:
    When loading the graph, the following node types were not found:
    GetImageSizeAndCount
    MaskPreview+
    Image Save
    Nodes that have failed to load will show as red on the graph.

    • @PixelEasel
      @PixelEasel  2 months ago +1

      try to update comfy. it should work

  • @AlistairKarim
    @AlistairKarim 2 months ago +2

    Also, one more detail: Black Forest Labs says that the outputs of the Dev model can be used for commercial purposes. The model itself can't be used for profit, though. So you can't integrate it into your product, or use it as the basis for your commercial fine-tune and such. I'm intrigued how Civitai fits into this.

    • @valheyrie404
      @valheyrie404 2 months ago

      I don't quite understand.
      Is it allowed to sell the images?
      What is forbidden, then?

    • @AlistairKarim
      @AlistairKarim 2 months ago +1

      @@valheyrie404 It is allowed to sell images that you generated. It is not allowed (without a direct agreement) to host the model and sell access to it, like Civitai and other generation services do. Although it seems Civitai has already reached some sort of commercial agreement with the authors. But you also cannot include the "dev" model in some sort of commercial software product, like a photo editor, or train your own commercial (production) model while using the "dev" model as a basis for that.

    • @valheyrie404
      @valheyrie404 2 months ago

      @@AlistairKarim That's clearer. Thanks! :)

    • @CodexPermutatio
      @CodexPermutatio 2 months ago

      @@valheyrie404 Yes, you can sell your generated images.
      What you cannot use commercially, by integrating it into a product such as an application, is the model itself (Dev). But you can sell the generated images.

    • @AlistairKarim
      @AlistairKarim 2 months ago +1

      @@valheyrie404 I apologize. While many people, including myself (and even chatbots like Claude), generally agree that outputs seem viable to sell, the license is actually quite confusing. The more I read it, the more confusing it becomes. Until Black Forest Labs specifically clarifies this point, I advise being careful for now.

  • @Avalon19511
    @Avalon19511 2 months ago

    Wish I could get it to work; I tried it in Pinokio and Comfy and all I get are errors. I have an RTX 3070 w/ 8 GB

  • @Avalon19511
    @Avalon19511 2 months ago +1

    Oh, and by the way, I'm using the DEV 8 model

  • @MikevomMars
    @MikevomMars 2 months ago

    To be honest, it's just a matter of taste, since the differences are mostly placebo-like and depend on the prompting and the random seed. But it's interesting to see that most humans are biased towards the Dev version because they think it MUST be better (the more "expensive" something is, the "better" it must be). That's how the human psyche works 😉

  • @nicejungle
    @nicejungle 2 months ago

    At 3:49, the difference is very obvious. Schnell is artificial, especially the lighting; the pants seem plasticky, like Midjourney, SDXL and many others.
    Dev has a very natural lighting ambience

  • @vladimirzhuruk1418
    @vladimirzhuruk1418 1 month ago

    Is it possible to run Flux Schnell with img2img input? I see it only on the Dev version, but online it costs 10x more, which is frustrating
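
Schnell can be used for img2img locally, since img2img is just denoising from a partially noised input latent. A minimal sketch, assuming a diffusers release that ships FluxImg2ImgPipeline and a machine with enough VRAM or offloading headroom; the file names and prompt are placeholders:

```python
# Hedged sketch: img2img with FLUX.1-schnell via Hugging Face diffusers.
import torch
from diffusers import FluxImg2ImgPipeline
from diffusers.utils import load_image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on cards with limited VRAM

init_image = load_image("input.png")
result = pipe(
    prompt="a neon-lit city street at night",
    image=init_image,
    strength=0.6,           # how far the output may drift from the input image
    num_inference_steps=4,  # Schnell is distilled for very few steps
    guidance_scale=0.0,     # Schnell is meant to run without CFG
).images[0]
result.save("output.png")
```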

  • @creepybeat
    @creepybeat 2 months ago +1

    schnell looks more cartoonish and dev more realistic overall.

  • @michaelt8717
    @michaelt8717 2 months ago

    How much GPU VRAM or CPU RAM do we need for the model?

  • @just51835
    @just51835 2 months ago

    Next: Flux Dev FP8 vs FP32

  • @sertocd
    @sertocd 2 months ago

    This morning I started to use Flux Dev, but it only allowed me to generate 1 image; the next generation was interrupted by this error: "Error occurred when executing SamplerCustomAdvanced: list index out of range". UPDATE: I rolled back to an earlier state from the experimental snapshot manager menu, and the problem is gone. 😊

    • @master-tp6wc
      @master-tp6wc 2 months ago

      What GPU are you using?

    • @sertocd
      @sertocd 2 months ago

      @@master-tp6wc A 4080 Super with 96 GB. Reddit was flooded with this Dev version memory issue, even for those with 4090 cards.

  • @Hearcharted
    @Hearcharted 2 months ago

    Is this your real voice or AI?
    By the way, keep Fluxing, because Flux is just getting started :)

    • @LukasBazyluk1117
      @LukasBazyluk1117 2 months ago

      Clearly an AI voice, as it's lacking inflection.

    • @Hearcharted
      @Hearcharted 2 months ago

      @@LukasBazyluk1117 Interesting

  • @bm63
    @bm63 2 months ago +2

    I think Schnell sometimes produced better images than Dev…
    Take the body cream: Schnell put a label on it without being prompted, and it's a better angle than the pot of white stuff or whatever it was that Dev produced.
    Also, comparing the same two prompts will always produce random images, some good, some bad, whether Dev or Schnell is used.
    I've had some absolutely shockingly bad Dev results, like really bad early first-version DALL·E images, from my detailed prompts that DALL·E / ChatGPT absolutely nailed. These comparison videos, in my opinion, just don't work, because what they produce is just too random.

  • @DataJuggler
    @DataJuggler 2 months ago +2

    This sounds exactly like the "Andrew" voice from Microsoft Azure Speech AI. When your channel won't hit monetization till the year 3918, commercial use doesn't really matter.

  • @SupermanRLSF
    @SupermanRLSF 2 months ago

    Hey mate are you able to post a copy of the complete workflow shown at 9:00?

    • @PixelEasel
      @PixelEasel  2 months ago

      you can find it in the description... just clean it up a bit

    • @SupermanRLSF
      @SupermanRLSF 2 months ago

      @@PixelEasel not exactly sure how to yet, still a noob, but thanks for the reply.

  • @zWaKez
    @zWaKez 1 month ago

    Schnell gives me better results for whatev reason so far

  • @hu-ry
    @hu-ry 2 months ago

    "Schnell" ist german for "fast" btw

    • @PixelEasel
      @PixelEasel  2 months ago

      of course

    • @ryanhart9391
      @ryanhart9391 2 months ago

      Oh interesting! I'm learning Dutch and fast is "snell" in Dutch, which seems counterintuitive to me because it sounds like snail, which is a very, very slow animal.

  • @cosmicrdt
    @cosmicrdt 2 months ago +1

    It's too bad AI voice generation like in this video isn't nearly as good as AI image generation

    • @stepfury
      @stepfury 2 months ago +2

      I'm the one that struggles with talking. This AI is better at talking than me 😅

  • @cmdr_stretchedguy
    @cmdr_stretchedguy 2 months ago +1

    Meanwhile we can do all the same or better with SDXL, and do so 10x faster. Flux is just overhyped mediocrity right now.

    • @valheyrie404
      @valheyrie404 2 months ago

      That's pretty much the comment I was looking for: do you do photorealism with it and get results that are just as good?
      And what about prompt adherence?

  • @vasiliybutenko4810
    @vasiliybutenko4810 2 months ago

    Flux is very bad at nature: no forests, no trees. Only one scene with a path in the middle.

  • @kallethoren
    @kallethoren 2 months ago +5

    Annoying AI voice

  • @JamesBray-qm8gr-q3w
    @JamesBray-qm8gr-q3w 2 months ago

    If they want to capture a large part of the existing market share others have, this is TOO complicated.