PixelWave Model = Artistic Flux in 8+GB VRAM

  • Published: 30 Jan 2025

Comments • 54

  • @USBEN.
    @USBEN. 3 months ago +3

    It's very good, I love the balance it has for colors and styles. Base Flux always leans towards cinematic.

  • @pn4960
    @pn4960 3 months ago +4

    Excellent model! Thanks

  • @jibcot8541
    @jibcot8541 3 months ago +4

    It is really good, cool to get the better art styles back.

  • @wiz-white
    @wiz-white 3 months ago +6

    Pixelwave is great. Been my go-to model since forever. 10/10

  • @97BuckeyeGuy
    @97BuckeyeGuy 3 months ago +3

    I've been using this model for a while now, and I absolutely love it. And yes, it handles NSFW images well too.

  • @OmriSadeh
    @OmriSadeh 3 months ago

    Great comparison, we indeed needed that.
    Would love a bit more about what it does worse than regular Flux (if you found anything).

  • @blackvx
    @blackvx 3 months ago +2

    Thank you👍👍

  • @ShubzGhuman
    @ShubzGhuman 3 months ago +1

    great video again

  • @magimyster
    @magimyster 3 months ago +2

    Wow😮

  • @erans
    @erans 3 months ago +2

    Can we use our face LoRAs that were trained with Flux dev?

  • @olivierniclausse1791
    @olivierniclausse1791 3 months ago +1

    yes thanks a lot

  • @randomnumbers84269
    @randomnumbers84269 28 days ago

    Trying to create that setup but I don't have all of the building blocks. Can't find them when trying to download from the manager either. For example, "Force/Set CLIP Device".

  • @MilesBellas
    @MilesBellas 3 months ago +6

    Kijai's wrapper for Mochi next?👍🐁

    • @eveekiviblog7361
      @eveekiviblog7361 3 months ago +1

      @@MilesBellas how did you insert a hyperlink?

  • @thewaife
    @thewaife 2 months ago

    Great video, mate! Quick question: have you figured out how to use Pixelwave with LoRAs, especially for character LoRAs? I tried the trick suggested by the author with the merge model, but the results were disappointing; it completely ruined all the amazing features of Pixelwave. Thanks for any tips!

    • @NerdyRodent
      @NerdyRodent  2 months ago +1

      As it’s a different model, the easiest way is to use pixelwave as the base and train your LoRAs on that. Makes it a bit tricky to use things like Hyper though 🫤

    • @thewaife
      @thewaife 2 months ago

      @@NerdyRodent Thank you very much for the advice)

  • @SimosFunk
    @SimosFunk 3 months ago +2

    🌊🌊🌊

  • @DaveTheAIMad
    @DaveTheAIMad 3 months ago

    Is there a video on the double sampler / split sigma setup? Really liked the detail in those generations.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      Yup, it’s what I’ve been using for months here on the channel! Think of it like a refiner, where you have one sampler that does part of the image before passing it on to the next. In the original video from months ago, I also showed an image-to-image upscale / hires fix on top - giving essentially 3+ samplers per image. Check the flux playlist for all the fluxy videos 😉

    • @DaveTheAIMad
      @DaveTheAIMad 3 months ago

      @@NerdyRodent will look for the vid in a bit.
      Been using the 10 20 30 method I saw a while back.
      Send it to do steps 0 to 10 (of 10 total), pass the latent on to do steps 10 to 20 (of 20 total), then send that on to do steps 20 to 30 (though I found doing steps 20 to 40 was key to maintaining text quality), making for 30 (or in my case 40) steps per image, with a different seed per stage. I am guessing it's a similar principle, but since you called it split sigma as well, it sounds like it may be different lol
      I was going to look at the workflow, but alas, like many YouTubers of late, it's locked behind a paywall :( less of an issue if there's a guide for it though

    • @NerdyRodent
      @NerdyRodent  3 months ago

      @@DaveTheAIMad I’ve got free stuff on both Patreon and Hugging Face too 😉 Nothing is actually locked behind a paywall, but paying supporters do get extras!

    • @DaveTheAIMad
      @DaveTheAIMad 3 months ago

      @@NerdyRodent The workflow link in another comment states pay £3 to unlock.
      I looked through your other videos on Flux and could not find the one on the dual sampling. Tbh I would rather see a video about it and how it works than just have a workflow that has it; I am curious what it is doing. Having a workflow would be nice, but learning why it does it and getting ideas from the methodology is way better. Do you have a video describing what it is and how it works? Or is it mixed into some other video? I ran out of free time for today so can't look further until after work (or during, if it's quiet).
      I also found that despite watching your videos and having them pop up frequently... I wasn't subbed, so fixed that.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      @@DaveTheAIMad If you’ve a hankering for the extras, or just want to say thanks, then you can indeed buy me a coffee via an individual post! Another option is to add a small biscuit to go with that, and in return you’ll unlock all the course materials there (currently over 70 posts), gain early access, become cool, etc… I know which option I’d pick 😎
      For the full Nerdy Rodent ComfyUI Course focusing on the multi-sampler aspect alone, I’d go back to where it all began around a year ago with the SDXL + refiner workflows (links in the video description). As an optional extra, it’s also worth looking at the workflow basics video. After that, move on to the Pixart Sigma ones (Sigma also has a special double-model version, and I went the most nuts using Sigma, as some of those switch models and use over 5 samplers). Next up would be the video with SD3 as a refiner, and then move on to the Flux videos. My recent Flux ones cover loads of options for extra samplers, schedulers, using latent multiply, and also various noise types. If you finish with the scheduler toolbox video, you should then be able to gain full control over each individual step - likely also gaining total enlightenment by the end (*enlightenment and coolness may go down as well as up, terms and conditions apply, for entertainment purposes only, etc.)
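
For readers wondering what the split-sigma / multi-sampler handoff discussed in this thread actually looks like, here is a minimal, self-contained Python sketch. The Euler sampler and the denoiser are toy stand-ins (the real workflow drives Flux/PixelWave through ComfyUI nodes), and the schedule values, step counts and split point are purely illustrative:

```python
import math
import torch

def toy_denoiser(x, sigma):
    # Stand-in for the real diffusion model: returns a crude x0 estimate.
    return x / (1.0 + sigma ** 2) ** 0.5

def make_sigmas(n_steps, sigma_max=14.6, sigma_min=0.03):
    # Log-spaced noise levels from high to low, with a final 0 appended.
    sigmas = torch.logspace(math.log10(sigma_max), math.log10(sigma_min), n_steps)
    return torch.cat([sigmas, torch.zeros(1)])

def euler_sample(x, sigmas, model):
    # Basic Euler sampler over any slice of the schedule.
    for i in range(len(sigmas) - 1):
        denoised = model(x, sigmas[i])
        d = (x - denoised) / sigmas[i]            # derivative estimate
        x = x + d * (sigmas[i + 1] - sigmas[i])   # step to the next sigma
    return x

torch.manual_seed(0)
sigmas = make_sigmas(n_steps=30)
split = 12                                        # where sampler 1 hands over

latent = torch.randn(1, 16, 64, 64) * sigmas[0]                  # pure noise
latent = euler_sample(latent, sigmas[:split + 1], toy_denoiser)  # "base" pass
image = euler_sample(latent, sigmas[split:], toy_denoiser)       # "refiner" pass
print(image.shape)
```

Dave's 10/20/30 variant, as described above, is the same idea run as three slices, rebuilding the schedule with a larger total step count (and a fresh seed for any noise the sampler injects) at each stage.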

  • @RichardSekmistrz
    @RichardSekmistrz 3 months ago +1

    Do you have the workflow? I came from MJ recently, so I still struggle to build them from scratch. Either way, thanks!!

    • @NerdyRodent
      @NerdyRodent  3 months ago +2

      It’s just a standard flux workflow like you get with comfy, but you can grab the exact one used in the video from www.patreon.com/posts/pixelwave-flux-114819050

    • @MrCai01
      @MrCai01 3 months ago

      As NerdyRodent says, it's the bog-standard Flux workflow, the only difference, apart from the layout, being the inclusion of the split sampling shown at the 1:40 mark - not something I've seen before, but I'll give it a go and see what it produces. Nice video as always.

  • @Larimuss
    @Larimuss 3 months ago +1

    Force CLIP cpu 😮 and force VAE cuda 0... interesting.
    Does this split the checkpoint and VAE to the GPU and the CLIP to CPU and RAM? Because I've been looking for something like that to take some load off my poor 12 GB of VRAM.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      Yup. Love saving me a bit of VRAM 😁
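
For anyone curious what the Force/Set CLIP Device trick amounts to under the hood: the text encoder only runs once per prompt, so it can live on the CPU while the diffusion model and VAE keep the GPU, and only the small embedding tensor crosses over. Below is a toy, torch-only sketch of that split; the modules are stand-ins, not the real CLIP/T5, Flux transformer or ComfyUI nodes:

```python
import torch
import torch.nn as nn

gpu = "cuda:0" if torch.cuda.is_available() else "cpu"

# Stand-ins for the real components.
text_encoder = nn.Embedding(1000, 768)               # kept on CPU / system RAM
denoiser = nn.Conv2d(16, 16, 3, padding=1).to(gpu)   # kept on the GPU
vae_decoder = nn.Conv2d(16, 3, 3, padding=1).to(gpu)

token_ids = torch.tensor([[12, 42, 7]])              # tokenised "prompt", on CPU
with torch.no_grad():
    cond = text_encoder(token_ids)                   # encoded on the CPU
    cond = cond.to(gpu)                              # small tensor moved to the GPU

    latent = torch.randn(1, 16, 64, 64, device=gpu)
    latent = denoiser(latent) + cond.mean()          # heavy work stays on the GPU
    image = vae_decoder(latent)                      # decode on the GPU

print(cond.device, image.shape)
```

If you are in diffusers rather than ComfyUI, `pipe.enable_model_cpu_offload()` gives a similar VRAM saving by shuttling whole components on and off the GPU automatically.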

  • @joechip4822
    @joechip4822 3 months ago

    Used it in Forge but it doesn't work as expected. If I only add an image style like 'cubist' or 'psychedelic' to the prompt, with CFG = 1 it doesn't do much and always gives a more or less impressionist image output. If I up the CFG scale, the style creeps in - but soon becomes overcooked. Does this only work in ComfyUI at the moment? Or what is the trick?

  • @researchandbuild1751
    @researchandbuild1751 2 months ago

    Can you still use regular Flux ControlNets with it?

    • @NerdyRodent
      @NerdyRodent  2 months ago

      Nope! It’s OmniGen 😉

  • @glendaion-vk6pf
    @glendaion-vk6pf 3 months ago +2

    Where can I download the workflows from this video?

    • @bushwentto711
      @bushwentto711 3 months ago +2

      make it yourself

    • @glendaion-vk6pf
      @glendaion-vk6pf 3 months ago +2

      Oh really? You don't say, XD. The video does not explain which nodes he is using, nor is it clear what interconnections between them are needed to create it yourself. However, I have already made a similar one.

    • @bushwentto711
      @bushwentto711 3 months ago +1

      @@glendaion-vk6pf Where is the download for this workflow that you just made then?

    • @NerdyRodent
      @NerdyRodent  3 months ago +2

      It’s just a model, so use any Flux workflow you like. For the exact one in the video, see www.patreon.com/posts/pixelwave-flux-114819050
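
For anyone working outside ComfyUI: because PixelWave is just a fine-tuned Flux checkpoint, one way to try it programmatically is to swap its transformer into a stock Flux pipeline. A rough sketch with diffusers is below; the local filename is illustrative, and single-file Flux loading assumes a reasonably recent diffusers release:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load the PixelWave fine-tune from a local single-file checkpoint
# (filename is illustrative - use whichever PixelWave file you downloaded).
transformer = FluxTransformer2DModel.from_single_file(
    "pixelwave_flux1_dev_bf16.safetensors",
    torch_dtype=torch.bfloat16,
)

# The base FLUX.1-dev repo supplies the text encoders, VAE and scheduler.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()   # helps fit on 8-12 GB cards

image = pipe(
    "a cubist painting of a nerdy rodent at a computer",
    num_inference_steps=30,
    guidance_scale=3.5,
).images[0]
image.save("pixelwave_test.png")
```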

  • @BroJo420Cafe
    @BroJo420Cafe 3 months ago +1

    Greek to me, but here to show support

    • @NerdyRodent
      @NerdyRodent  3 months ago +1

      Get yourself an Nvidia graphics card and join the fun! 😉

  • @jeffbull8781
    @jeffbull8781 3 months ago

    The single-sampler versions are generally better imo; composition-wise they are just less generic.

  • @SeaScienceFilmLabs
    @SeaScienceFilmLabs 3 months ago +1

    Rodent! 👋

  • @joecarioti629
    @joecarioti629 3 months ago +2

    None of these fine-tunes will ever be usable for commercial use, right?

    • @sherpya
      @sherpya 3 months ago

      They need to use Schnell as the starting model

  • @DrMacabre
    @DrMacabre 3 months ago

    I only get terrible results out of this model. I tried the fp8 and bf16 with the recommended sampler and they are equally bad. :/

  • @cosmicrdt
    @cosmicrdt 3 months ago +1

    It's a great model but I think the sampler you're using for the original model is what's causing all the bad results.

  • @juanjesusligero391
    @juanjesusligero391 3 months ago +1

    Oh, Nerdy Rodent! 🐭🎵
    He really makes my day! ☀
    Showing us AI, 🤖
    in a really British way! ☕🎶

  • @kyle-bensnyders3147
    @kyle-bensnyders3147 3 months ago +1

    Why not just use SDXL or even SD1.5 for this? You can get similarly styled results in a fraction of the time and with much less fuss

    • @Elwaves2925
      @Elwaves2925 3 months ago +2

      You can get the styles, but you don't get the same prompt adherence, text, details, higher resolutions and so on that Flux gives. It all depends on what you want and how you feel about the result; they all have pros and cons.

    • @kyle-bensnyders3147
      @kyle-bensnyders3147 3 months ago

      @@Elwaves2925 Not true, if you know what you're doing you can get good results. Don't get me wrong, Flux is great and all, I just fear people are charging ahead and using Flux everywhere and forgetting about even SD1.5, which is still a very powerful and fast model if used right. But you're right about pros and cons.

    • @Elwaves2925
      @Elwaves2925 3 months ago

      @@kyle-bensnyders3147 I didn't say you couldn't get good results, but in no way does SD1.5 match Flux for the things I mention, not out of the box. So what I said is true, and text, as just one example, is nowhere near as good in SD1.5. Sure, you can get there with external editing or whatever, but with Flux none of that is needed.
      However, I kind of get your point, but it's not so much about forgetting, it's that Flux (and SD3.5) are the new kids on the block. SD1.5 and SDXL aren't new, we all know what they can achieve, and that's why Flux and SD3.5 are getting all the attention right now.
      Personally, as much as I'm loving Flux (especially with the new Pixelwave model), SDXL (RealVis checkpoint) is still my main model and I don't see that changing. That's partly because of keeping consistency with projects on the go, but also because I like what I can get out of it and it's a hell of a lot quicker right now. 🙂

    • @Elwaves2925
      @Elwaves2925 3 months ago

      @@kyle-bensnyders3147 I didn't say you couldn't get good results from SD1.5. You certainly can, but Flux is objectively better at certain things out of the box, like those I mentioned. So what I say is true.
      However, I kind of get what you're saying, but it's not people forgetting. It's that SD1.5 and SDXL are relatively old and aren't offering anything new, while Flux is the shiny new toy on the block, and that's why it's getting all the attention at the moment. 🙂