Blazing Fast AI Generations with SDXL Turbo + Local Live painting

  • Published: 31 Jan 2025

Comments • 135

  • @sebastiankamph
    @sebastiankamph  1 year ago +1

    Detailed text guide (Automatic1111 and ComfyUI) for Patreon subscribers here: www.patreon.com/posts/sdxl-turbo-guide-94305599

  • @speedy_o0538
    @speedy_o0538 1 year ago +5

    The live painting workflow is literally a glimpse into the future. I imagine at some point we'll be creating everything with this "instant refresh" technique, even inpainting. Zoom in on an image that has mutated hands, paint over them a few times until it looks right, retouch the hair, retouch a few things in the background, and finally put it through a very high quality upscaler like Magnific.

  • @clumsymoe
    @clumsymoe 1 year ago +9

    I was surprised to learn that adding LCM to my model would not only speed up generation inference, but genuinely helped reach near-SDXL quality at higher resolutions, and even fixed hands and a few other things. LCM with the LCM sampler is godlike.

  • @weirdscix
    @weirdscix 1 year ago +14

    I actually prefer Krita for this; it can even take my scribbles and make them amazing, plus it has layers, lots of tools, etc. It hooks into ComfyUI and can either generate standard images, or you can use it in a live mode for both SDXL and 1.5, and it supports turbo models.

    • @westingtyler1
      @westingtyler1 1 year ago +2

      I agree. I've been using Epic Realism in Krita for live landscape concepting for my Twin Peaks-inspired game, and it generates an image about every 4 seconds on my RTX 2060 Super. I have not tried SDXL Turbo in there but will soon, since the speed seems nuts here. And being able to add pose vector layers to add people and pose them is great in Krita.
      PS. How do you keep it from "muting" the colors? I use vivid ones, and the result is muted.

    • @sznikers
      @sznikers 1 year ago +1

      @@westingtyler1 Did you check that your VAE is loading?

    • @westingtyler1
      @westingtyler1 1 year ago

      @@sznikers Hm, not sure how to check if it loaded, but I'll check the settings to ensure it's part of the profile. Also, I know in A1111 there's a tickbox that says "do color correction to keep original colors in image to image", and I wonder if there is something like that inside Krita.

    • @sznikers
      @sznikers 1 year ago

      @@westingtyler1 Try forcing your own VAE in A1111; maybe your model doesn't have one baked in.

    • @weirdscix
      @weirdscix 1 year ago

      @@westingtyler1 I haven't had any issues with muted colours, although I tend to stick to a couple of models: realisticvision, dreamscaper 8/SDX, Juggernaut, and now a couple of Turbo models (I particularly like Pixelwaveturbo). I would try different VAEs: the 840000-ema-pruned, Anything-v3, and the SDXL VAE.

  • @lambgoat2421
    @lambgoat2421 1 year ago +4

    The live painting is incredible; it's literally like translating thoughts into images.

    • @visuhall9298
      @visuhall9298 1 year ago

      Not even close.
      Try to really imagine an image.
      Then try to generate your REAL thought to an image. GOOD LUCK!

  • @mada_faka
    @mada_faka 1 year ago +1

    WOW, thanks, sir Sebastian. You always bring the best videos for AI. I'm glad I subbed. Best content.

  • @Clupea101
    @Clupea101 1 year ago +3

    Great guide

  • @MrNorBro
    @MrNorBro 1 year ago

    "I'm just gonna stop this so my hard drive isn't full of images of bottles with sunrises in them" :D I don't know why, but that was funny! Nice work, Seb! I've already seen a similar way of live painting using a plugin for Krita. You can use SD for generative fills as well, similar to Photoshop! Although in Krita you have more control (like the pose editor, etc.), this workflow is simpler to use!

  • @81HM
    @81HM 1 year ago

    My daughter just told me the hailing taxis joke last night. So funny to hear you say it today.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      Hah, she must have amazing taste in jokes!

  • @JohnnyAirbag
    @JohnnyAirbag 1 year ago +7

    Very informative! And thanks for covering *both* Comfy and A1111. I'm sure, like many, I use both. I don't like seeing development/support for A1111 decline, and I think a lot of that comes from some kind of elitism that's started, where if you don't use Comfy you're not doing it right. Eh, you can use either... sometimes you don't want overcomplexity just to make a run of images. Cheers.

    • @76abbath
      @76abbath 1 year ago

      Absolutely! I totally agree with you. I prefer A1111 but have tried ComfyUI. Both are interesting.

    • @bandinopla
      @bandinopla 1 year ago +2

      Comfy is cleaner; it allows you to organize the nodes how you like and be creative in mixing and matching nodes in new, interesting ways. A1111 is limited in that sense.

    • @ShiroCh_ID
      @ShiroCh_ID 10 months ago

      @@bandinopla But sometimes, I must say, the A1111 UI is more comfortable than ComfyUI, because node-based things always feel daunting, even Blender's nodes. And I'm not kidding: even though I'm okay with nodes, my friends just say no to that sort of thing.

  • @pruebalandia1234
    @pruebalandia1234 1 year ago +4

    Where can I get the "painter node"? It doesn't appear when I search for it within the nodes in ComfyUI.

    • @gandulfo77
      @gandulfo77 1 year ago

      AlekPet/ComfyUI_Custom_Nodes_AlekPet

    • @fretts8888
      @fretts8888 1 year ago

      Search for AlekPet, I think; it's part of a suite.

  • @jjcs2917
    @jjcs2917 1 year ago +2

    Another great vid! Thank you! I am having a bit of trouble with the workflow... I installed the Inpaint custom nodes, but these nodes are missing: "seed" and "image scale side to side"... I'd really appreciate a few pointers. TY again, and keep it up!!

    • @FriedMonkey362
      @FriedMonkey362 11 months ago +2

      Found anything? I have the same problem.

  • @DemoEvolvedGaming
    @DemoEvolvedGaming 1 year ago

    @sebastian kamph For the live painting mode, what is needed is for the software to write the result to the same filename until you press a button. That way turbo live paint would not fill your hard drive with infinite pictures, but would only save the results the user actually wants.

    • @weirdscix
      @weirdscix 1 year ago +7

      You could just have the image result as a preview; then it won't write to disk, and you can just right-click on it and save it. Or have a switch and pass it on to an upscaler.
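
The same preview-instead-of-save idea can be scripted for anyone driving ComfyUI through its HTTP API rather than the browser. A minimal sketch, assuming the stock `SaveImage`/`PreviewImage` node names and ComfyUI's API-format graph (a dict of node id → node):

```python
def previews_instead_of_saves(graph: dict) -> dict:
    """Return a copy of a ComfyUI API-format graph with every SaveImage
    node swapped for PreviewImage, so auto-queued results are only kept
    in memory instead of being written to the output folder."""
    out = {}
    for node_id, node in graph.items():
        node = {**node, "inputs": dict(node.get("inputs", {}))}
        if node.get("class_type") == "SaveImage":
            node["class_type"] = "PreviewImage"
            # PreviewImage only takes the image input; drop the filename prefix
            node["inputs"] = {"images": node["inputs"]["images"]}
        out[node_id] = node
    return out


# Example: a lone save node wired to the output of node "8" (e.g. a VAE decode)
graph = {"9": {"class_type": "SaveImage",
               "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}}}
print(previews_instead_of_saves(graph)["9"]["class_type"])  # PreviewImage
```

The transformed graph can then be POSTed to the server's queue as usual; nothing touches disk until you deliberately save.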

  • @dasomen
    @dasomen 1 year ago +2

    Excellent video! Thanks a lot, Sebastian, great as usual. But where is the live painting workflow?

    • @sebastiankamph
      @sebastiankamph  1 year ago +2

      Hi, I added it in the description now.

    • @dasomen
      @dasomen 1 year ago

      @@sebastiankamph Much appreciated!

  • @jaydendwyer-read3712
    @jaydendwyer-read3712 1 year ago +1

    really useful, thank you :)

  • @crwde
    @crwde 7 months ago

    You are my hero! :O Thanks (CFG 1.0, my god....... thanks)

  • @ashishgupta1060
    @ashishgupta1060 1 year ago

    Turbo is absolutely amazing.

  • @hleet
    @hleet 1 year ago

    very nice. I’ll try that 🎉

  • @TPCDAZ
    @TPCDAZ 1 year ago +8

    You had me at Automatic1111. So glad you haven't gone fully to the dark side of Comfy like other YouTubers. Appreciate you trying to be neutral.

    • @kevinm4x
      @kevinm4x 1 year ago +3

      Yeah, Comfy is not comfy at all

    • @westingtyler1
      @westingtyler1 1 year ago +1

      Yeah, I don't get the appeal of ComfyUI unless someone is doing some unique, complex chain. Comfy DOES seem to render images a bit faster than A1111, though, in my experience.

    • @sznikers
      @sznikers 1 year ago +1

      It's a pain to switch, but once you've built your graphs it's way quicker.

    • @TPCDAZ
      @TPCDAZ 1 year ago

      @@sznikers It really isn't that much quicker. You can do all the same presets in Auto1111; the PNG Info tab also lets you drag in a photo generated with Auto1111 and auto-fill the settings, just like ComfyUI.

    • @sznikers
      @sznikers 1 year ago +1

      @@TPCDAZ Well, I don't have to do anything in Comfy once my workflow is done. That's the whole point of Comfy: you don't have to sit and click things like in A1111. You start the workflow, and after many automatically executed steps, all of which you've adjusted to your needs, you get your end results.

  • @maikelkat1726
    @maikelkat1726 11 months ago +1

    Nice vid on doing the live painting; however, the pics keep filling the drive because it saves every picture... how can we disable the auto-saving in ComfyUI?

    • @maikelkat1726
      @maikelkat1726 11 months ago +1

      Ah, I see: use a preview node instead of an image save node.

  • @bry_n
    @bry_n 1 year ago +2

    So how do we control how much weight the drawing has on the output image? It doesn't seem to affect it enough.

  • @sabbib007madness
    @sabbib007madness 1 year ago

    Strangely enough, I've set it up exactly as the ComfyUI guide shows, which I can see you're doing as well, but for some reason my generations take multiple minutes. I'm trying to discover what is going wrong.

  • @hotlineoperator
    @hotlineoperator 1 year ago +1

    Awesome

  • @enescelik1845
    @enescelik1845 1 year ago

    wow that's very useful

  • @doords
    @doords 1 year ago +1

    Does this need the LCM thing you mentioned previously? Does it use way less RAM then?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      This does not require LCM.

    • @doords
      @doords 1 year ago +1

      @@sebastiankamph But it still uses way less RAM, right?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      @@doords It'll still need enough to run SDXL, but yes, a little less over time since it needs fewer steps.

  • @StefanPerriard
    @StefanPerriard 1 year ago +1

    Is live painting available for A1111?

  • @titanstudios4843
    @titanstudios4843 1 year ago +2

    you have not linked the turbo workflow!

  • @MrSongib
    @MrSongib 1 year ago +1

    What sampler, steps, and CFG do you recommend for SDXL Turbo?
    nvm, I think we're just stuck with no negative prompt, at 1 CFG and 2 steps. xd
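
For reference, those settings (no negative prompt, CFG effectively off, one or two steps) map onto other front ends too. A rough sketch using Hugging Face diffusers and the published `stabilityai/sdxl-turbo` checkpoint; note that diffusers expresses "no CFG" as `guidance_scale=0.0`, which A1111 surfaces as CFG 1:

```python
def turbo_settings(steps: int = 1) -> dict:
    """Generation kwargs for SDXL Turbo: guidance off, very few steps,
    and the 512x512 resolution the model was distilled at."""
    if not 1 <= steps <= 4:
        raise ValueError("SDXL Turbo is tuned for 1-4 sampling steps")
    return {
        "num_inference_steps": steps,
        "guidance_scale": 0.0,  # no CFG, so negative prompts are ignored
        "height": 512,
        "width": 512,
    }


if __name__ == "__main__":
    # Needs a CUDA GPU and a several-GB model download; treat the exact
    # pipeline call as an assumption and check the diffusers docs.
    import torch
    from diffusers import AutoPipelineForText2Image

    pipe = AutoPipelineForText2Image.from_pretrained(
        "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
    ).to("cuda")
    image = pipe("a frog in a bottle at sunrise", **turbo_settings(1)).images[0]
    image.save("turbo.png")
```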

  • @MaisnerProductions
    @MaisnerProductions 1 year ago

    cool stuff!

  • @xxraveapexx2750
    @xxraveapexx2750 11 months ago

    OK, I used the same prompt, the same checkpoint, and the same settings as in your video (Automatic1111). While generating was at 50%, the image looked good, just blurry, but when it finished it was always a total mess. How come the quality of your frog is muuuch better than mine?

  • @ohnoitsaninja
    @ohnoitsaninja 10 months ago

    So I'm trying to load these workflows... unless I'm mistaken, you've only uploaded them as PNG images? Why not the JSON file of the workflow template?

  • @johndoefpv
    @johndoefpv 1 year ago

    My A1111 is up to date, but I do not have the R-ESRGAN 4x+ upscaler, only ESRGAN_4x. I'm guessing they are not the same, since the results I'm getting are not good at all: a bunch of fused faces.

  • @MrErick1160
    @MrErick1160 1 year ago

    Hey Seb, could you do a series of videos on animation with Stable Diffusion A1111, or perhaps ComfyUI? I've been looking into doing animation, but I'm too distracted to compile all the info, plus everybody seems to have a different way of doing it and it's quite confusing... Would love your help in making a nice animation.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      I've got a few animation videos, but will probably do more over time 🌟

  • @KDawg5000
    @KDawg5000 1 year ago +1

    Weird, I've been using Turbo in A1111 with standard SDXL resolutions (1024x1024, 1152x896, etc.), and I didn't have any problems. Not sure what I'm doing differently 🤷‍♂
    EDIT: I see what I'm doing differently. I'm using 10 sampling steps and a CFG of 4.

  • @KDawg5000
    @KDawg5000 1 year ago

    It would be nice if there was a way for Auto-queue to just overwrite the last image so it doesn't fill up your hard drive.

    • @fretts8888
      @fretts8888 1 year ago +2

      Can't you just use the preview node (not the save node)? Maybe have a disabled save node for the ones you want to keep; when you connect it, it will only save one image unless you make changes upstream.

    • @KDawg5000
      @KDawg5000 1 year ago

      @@fretts8888 That's a good idea

  • @ericfoster5267
    @ericfoster5267 1 year ago

    I really just come here for the jokes, the learning is a bonus

  • @MisterWealth
    @MisterWealth 1 year ago +1

    What are the drawbacks of a turbo model?

    • @SlizenDize
      @SlizenDize 1 year ago +1

      This. I would assume it's quality, but an answer would be nice.

    • @LuisAFlorit
      @LuisAFlorit 1 year ago +1

      Not good enough quality for people.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      The quality is not as great, but compared to LCM and similar, the quality-vs-speed trade-off is still very, very good. Especially with custom SDXL Turbo models. For some of my output images, I couldn't tell whether it was Turbo or not.

  • @DJVibeDubstep
    @DJVibeDubstep 1 year ago

    I'm using the DirectML version because I have an AMD card, and I have to use my CPU, and it's PAINFULLY slow. Will this help with that? Or is it only for those using GPUs?
    I actually have a really decent GPU (RX 5700 XT), but sadly I can't use it since SD hardly supports AMD.

  • @JustFor-dq5wc
    @JustFor-dq5wc 11 months ago

    I was watching this 2 months ago and was thinking: he has a 4090, that's why it's so fast. And only today I installed the Turbo model. It's 1-sampling-step generation! WTF? O.o

  •  1 year ago

    Could you update the new install for 2023? There has been so much new content added, and following your tutorials, your Stable Diffusion UI is different from mine, despite me using the latest version.

  • @thegreatujo
    @thegreatujo 1 year ago

    I'm puzzled by this. I am using the exact same settings: CFG scale 1, sampling steps 1, 512x512 with Hires. fix 2x, same sampler, same upscaler. All the images have the symptoms of using 768 or 1024 back in the days of SD 1.5: multiple eyes, cascading limbs, etc. I'm truly confused. Classic SDXL works just fine. Using AUTO1111. Any idea what I'm missing?

  • @subtly_improvised
    @subtly_improvised 1 year ago

    Having issues getting my manager to look like yours; the image scale node also won't load. Is there a git repo for that one?

    • @fretts8888
      @fretts8888 1 year ago

      You might have to go to the extensions folder, then into the ComfyUI Manager directory (on the command line), and do a git pull to update the manager.

  • @parthwagh3607
    @parthwagh3607 1 year ago

    Can you please provide a specification for a $2400 PC build that will run AI models locally as fast as possible at this price? What should we consider when building a PC solely for running AI models locally (and rarely gaming)? What really helps run these models fastest locally? Thank you.

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      An Nvidia GPU with as many GB of VRAM as possible. The rest is currently irrelevant.

    • @parthwagh3607
      @parthwagh3607 1 year ago

      @@sebastiankamph Thank you so much.

    • @parthwagh3607
      @parthwagh3607 1 year ago +1

      @@sebastiankamph Can you please provide the minimum requirement for VRAM?

  • @davewills6121
    @davewills6121 1 year ago

    Question!! My son is doing a project on "The Mesolithic period", and he wants to use some examples of AI art for his talk on the subject. The problem is, all my attempts using ComfyUI come out as mutated groups of cavemen. He's nervous as it is, and he said my AI art will make him the laughing stock. So, are there any simple PROMPTS I could use to produce good results? Cheers.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      I've got prompt styles on my Patreon. But for easy, quick images, check out Fooocus.

  • @twilightfilms9436
    @twilightfilms9436 1 year ago

    Can it be used with img2img and controlnet?

  • @TravisEugeneBrown
    @TravisEugeneBrown 1 year ago

    Do they make any for non-SDXL models, Sebastian?

  • @gpatil4456
    @gpatil4456 9 months ago

    How do I install it in Forge? It's lagging in Forge; generation time is too long. Please make a tutorial on it.

  • @TravisEugeneBrown
    @TravisEugeneBrown 1 year ago

    I'm using an Nvidia Tesla card on Azure and it does not seem to work: with 5 steps I get an image, but 1 step does not.

  • @TazzSmk
    @TazzSmk 8 months ago

    Despite the video title, there's no live painting in A1111 -_-

  • @jonathaningram8157
    @jonathaningram8157 1 year ago

    It's SDXL, but the base latent is 512x512?

  • @Shingo_707
    @Shingo_707 1 year ago +3

    But weren't SDXL models trained on 1024x1024 datasets? That's confusing.

    • @sebastiankamph
      @sebastiankamph  1 year ago +3

      I agree, very confusing indeed. Apparently that wasn't the case for Turbo.

  • @RSV9
    @RSV9 1 year ago

    Can it work as fast with inpainting?

  • @tr1pod623
    @tr1pod623 1 year ago

    My LCM LoRA is not working for SDXL :(

  • @vince2nd
    @vince2nd 1 year ago

    What graphics card are you using?

  • @JSwanson547
    @JSwanson547 1 year ago

    Dragging and dropping PNG images to load workflows... I don't know how it works, but it's legit.
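
It works because ComfyUI embeds the workflow as JSON in the PNG's metadata: a `tEXt` chunk keyed `workflow` (plus a `prompt` chunk holding the API-format graph). A stdlib-only sketch that recovers those chunks from a saved image:

```python
import struct
import zlib


def png_text_chunks(path: str) -> dict:
    """Return the tEXt/zTXt metadata of a PNG as a dict. ComfyUI stores
    the editor workflow under 'workflow' and the API graph under 'prompt'."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    chunks, pos = {}, 8
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":                       # keyword\0 latin-1 text
            key, _, val = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = val.decode("latin-1")
        elif ctype == b"zTXt":                     # keyword\0 method\0 zlib data
            key, _, rest = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
        pos += 12 + length                         # length + type + data + CRC
    return chunks
```

`json.loads(png_text_chunks("image.png")["workflow"])` then yields the same graph the browser rebuilds when you drop the file in.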

  • @wernerblahota6055
    @wernerblahota6055 1 year ago

    👍👍👍👍

  • @slashkeystudio
    @slashkeystudio 1 year ago

    A potato computer would not have the VRAM for SDXL. Or would it?

  • @irisfabianagonzalezpacua7388
    @irisfabianagonzalezpacua7388 1 year ago

    I can't control the denoising in that workflow.

    • @bry_n
      @bry_n 1 year ago

      Yeah, seriously, it's kind of an issue. Not really a usable workflow without that.

  • @bentp4891
    @bentp4891 1 year ago +2

    I come for the Dad joke but stay for the AI

  • @guilhermerodrigues3073
    @guilhermerodrigues3073 11 months ago +1

    My 3060 Ti took 3 times longer with this model than with standard 1.5.

  • @Onsearching
    @Onsearching 1 year ago

    👍

  • @kallamamran
    @kallamamran 1 year ago

    SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)..... Yay :/
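
That error usually means the dropped or loaded file isn't a single valid JSON document (a renamed file, a stray BOM, or extra text after the JSON body). A small sketch for pre-checking a workflow file before loading it; the function name here is hypothetical:

```python
import json


def check_workflow_json(path: str) -> dict:
    """Parse a ComfyUI workflow .json file, raising a readable error if
    it is not a single valid JSON document."""
    # utf-8-sig strips a byte-order mark, which some editors prepend and
    # which trips strict JSON parsers right at the start of the file
    with open(path, encoding="utf-8-sig") as f:
        text = f.read()
    try:
        return json.loads(text)
    except json.JSONDecodeError as exc:
        raise ValueError(f"{path}: not valid workflow JSON ({exc.msg} "
                         f"at line {exc.lineno}, column {exc.colno})") from exc
```

If this raises, re-export the workflow from ComfyUI rather than hand-editing the file.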

  • @sumsii1
    @sumsii1 1 year ago

    Are you American or something?

    • @sebastiankamph
      @sebastiankamph  1 year ago +1

      I thought the Swedish accent gave it away.

  • @1Know1tHurts
    @1Know1tHurts 1 year ago +11

    Honestly, I am not impressed. 1.5 models are much better than SDXL Turbo, and standard SDXL is better than 1.5. I have a powerful GPU, so I'd rather wait a little longer for a much better result.

    • @aegisgfx
      @aegisgfx 1 year ago +12

      Well, that might not be the point of this model; the point might be to get a lot of concepts quickly that you can then take into img2img to get higher-res versions of. Honestly, I'd rather use a model like this to get hundreds of base images so I can choose a few good ones to upscale.

  • @kevinm4x
    @kevinm4x 1 year ago

    Can we just stop using Comfy? It's so dumb and overly complicated, without any benefit.

    • @sebastiankamph
      @sebastiankamph  1 year ago

      I would love to use A1111 even more, as it has been my go-to for a long time; however, all the new tech and advanced workflows get released to Comfy first 🥲❤️

    • @fretts8888
      @fretts8888 1 year ago +1

      To answer your question: "Yes", yes you can stop using it... it's not compulsory at all. Also, if you find something complicated, I wouldn't jump to the conclusion that *it's* dumb.

  • @oldleaf3755
    @oldleaf3755 1 year ago

    Please blink more often and don't do the weird slow zoom-ins on your face

  • @eepypilot
    @eepypilot 1 year ago +2

    Can you use medvram and xformers with it?