1000% FASTER Stable Diffusion in ONE STEP!

  • Published: 23 Sep 2024
  • Up to 10x faster Stable Diffusion in automatic1111 and ComfyUI after just downloading this LCM LoRA.
    Download LCM Lora huggingface.co...
    Blog post huggingface.co...
    Prompt styles for Stable diffusion a1111 & Vlad/SD.Next: / sebs-hilis-79649068
    ComfyUI workflow for 1.5 models: / comfyui-1-5-86145057
    ComfyUI Workflow for SDXL: / comfyui-workflow-86104919
    Get early access to videos and help me, support me on Patreon / sebastiankamph
    Chat with me in our community discord: / discord
    My Weekly AI Art Challenges • Let's AI Paint - Weekl...
    My Stable diffusion workflow to Perfect Images • Revealing my Workflow ...
    ControlNet tutorial and install guide • NEW ControlNet for Sta...
    Famous Scenes Remade by ControlNet AI • Famous Scenes Remade b...
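
    For anyone who wants the same speedup outside the web UIs, here is a minimal sketch of the LCM-LoRA recipe using the diffusers library. This is a sketch only, assuming diffusers with LoRA support, a CUDA GPU, and the latent-consistency/lcm-lora-sdv1-5 repo as the source of the 1.5 LoRA (check the HuggingFace links above for the exact files):

        import torch
        from diffusers import DiffusionPipeline, LCMScheduler

        pipe = DiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")

        # Swap in the LCM scheduler and attach the LCM LoRA.
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

        # The whole trick: very few steps and a very low CFG scale.
        image = pipe(
            "a cinematic photo of a lighthouse at dusk",
            num_inference_steps=8,
            guidance_scale=1.5,
        ).images[0]
        image.save("lcm_test.png")

    The same two settings (4-8 steps, CFG 1-2) are what the video dials in for A1111 and ComfyUI; only the delivery mechanism differs.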

Comments • 264

  • @tungstentaco495
    @tungstentaco495 10 months ago +27

    As others have mentioned, not using this LCM at full strength helps if you are having issues with messy/distorted images. I'm getting pretty good results setting the LCM at 0.5 with 16 steps. Still really fast, but with better-looking generations. I also recommend trying this if you are having issues with the LCM while using models and LoRAs that are trained on a particular subject.

    • @haggler40
      @haggler40 10 months ago +1

      One issue is that it makes AnimateDiff not work well, since AnimateDiff usually needs more steps, like 25-30, to get good motion. Just wanted to put that out there; it does work with AnimateDiff though.

    • @alsoeris
      @alsoeris 7 months ago

      How do you change the strength, if it's not in the prompt?

    • @tungstentaco495
      @tungstentaco495 7 months ago

      @@alsoeris In automatic1111, when the LCM LoRA is used in the prompt, it looks something like this:
      <lora:lcm:0.5> for half strength,
      <lora:lcm:1> for full strength,
      <lora:lcm:0.2> for 20% strength,
      etc.

  • @leecoghlan1674
    @leecoghlan1674 10 months ago +20

    You've made my day, no more waiting 30 mins on my potato PC for a generation. Thank you so much

    • @CoconutPete
      @CoconutPete 7 months ago +7

      I installed it but must have done something wrong, as the quality seems poorer... back to the drawing board lol

  • @ovworkshop3105
    @ovworkshop3105 10 months ago +8

    It actually works very well for creating small samples and then upscaling them with img2img; even SDXL is quick.

  • @marlysilva2816
    @marlysilva2816 10 months ago +16

    Sebastian, I really like your videos and your simple way of explaining things. Could you create a tutorial, or recommend a video, for Stable Diffusion or ComfyUI on how to insert an object that has been generated into other scenes, i.e. generate the same element in different scenes? For example, I generated the design of a new bottle and the prompt gave me a perfect result; after that, I want to create an image of this same bottle in a scene with different angles or different poses (like a new photo of someone holding the bottle of juice, for example). It would be very interesting to have this type of video.

  • @Dzynerr
    @Dzynerr 10 months ago +4

    Sometimes you give us quite the gems from the industry. Your research and knowledge sharing are highly appreciated.

  • @joppemontezinos2092
    @joppemontezinos2092 9 months ago +5

    I am also using an RTX 4090 setup and I gotta say that I don't see much of a speed difference. However, finding out about the comparison capabilities made it so much easier to choose what model to use based on what I wanted to create. Thank you for the info

    • @joppemontezinos2092
      @joppemontezinos2092 9 months ago

      It may also be noted that I was doing about 80 sampling steps at an upscale value of 2.3

    • @memb.
      @memb. 9 months ago

      @@joppemontezinos2092 You're supposed to use 4 to 10 sampling steps AND CFG 1 to 3. It's very fast and yields good results, and it's honestly a godsend for mass-producing images. You can make 100+ images SO FAST that you can just pick the best one and high-res that with a better config to get the absolute best of the best results.

    • @user-cute371
      @user-cute371 2 months ago

      SAME
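
      The "mass produce, then cherry-pick" workflow memb. describes above, as a script. This is only a sketch under the same assumptions as the sketch near the top of the page (diffusers, a CUDA GPU, the latent-consistency/lcm-lora-sdv1-5 LoRA); the prompt and batch sizes are placeholders.

          # Generate a pile of cheap LCM candidates (4-10 steps, CFG 1-3),
          # then re-run the winner at higher quality.
          import torch
          from diffusers import DiffusionPipeline, LCMScheduler

          pipe = DiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")
          pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
          pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

          prompt = "a cozy cabin in a snowy forest, golden hour"
          for batch in range(4):
              images = pipe([prompt] * 4, num_inference_steps=6, guidance_scale=2.0).images
              for i, img in enumerate(images):
                  img.save(f"candidate_{batch * 4 + i:02d}.png")
          # Pick the best candidate by eye, then regenerate it with more steps
          # (or hires fix) for the final image.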

  • @davewxc
    @davewxc 10 months ago +24

    Tip for experimentation: use it like a regular LoRA and play with the weight. Some custom models that give horrible colors at 1 will actually work better at 0.7 (a script version of this tip is sketched at the end of this thread).

    • @sebastiankamph
      @sebastiankamph  10 months ago +4

      Great tip!

    • @KrakenCMT
      @KrakenCMT 10 months ago +2

      I've discovered the same, and also that increasing the steps helps home in on the right quality. Maybe not a 1000% increase, but 500% is still pretty good :) Even going all the way down to .1 will allow some models to work much better and still get the speed increase.

    • @cyberprompt
      @cyberprompt 10 months ago

      Yes, I'd feel more comfortable using the standard LoRA syntax instead of this black-box method from the dropdown. Same with my saved styles. Anyone know how to see them again, and not just the tabs to add them? (Please don't mention styles.csv, that's where I edit them.)

    • @jonathaningram8157
      @jonathaningram8157 10 months ago

      It doesn't appear under the regular LoRA networks for me. I can just choose it from the dropdown menu

    • @wilsonicsnet
      @wilsonicsnet 8 months ago +1

      Thanks for the tip, I've seen my anime models get really dim after applying LCM.
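
      For script users, the weight tip at the top of this thread has a diffusers equivalent. A sketch, assuming the scale entry of cross_attention_kwargs as the script-side analogue of an A1111 prompt tag like <lora:lcm:0.7>:

          import torch
          from diffusers import DiffusionPipeline, LCMScheduler

          pipe = DiffusionPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
          ).to("cuda")
          pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
          pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

          # Dial the LCM LoRA down to 0.7 and allow a few more steps, as suggested above.
          image = pipe(
              "portrait of a knight, dramatic lighting",
              num_inference_steps=16,
              guidance_scale=1.5,
              cross_attention_kwargs={"scale": 0.7},  # LoRA strength
          ).images[0]
          image.save("lcm_weight_0_7.png")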

  • @irotom13
    @irotom13 10 months ago +8

    I made the same grid as in the video with 8 sampling steps for 2 cases: 1) with this LoRA and 2) WITHOUT it / None.
    The time to generate is basically the same (actually, without this LoRA it's 10 seconds faster), so the speed depends on the sampling steps rather than the LoRA.
    Quality depends on the sampler, but there are some VERY good results without this LoRA at all for the same sampling steps.
    I can't see much difference in either speed or quality if the right sampler is used.

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      The point of using this LoRA and sampler is that you can achieve results in 8 steps that might otherwise need 25 or more steps with other samplers. For the best quality, I'd recommend the Comfy route, using the LCM sampler together with the LoRA, as a1111 with another sampler is more of a half-measure atm.

    • @petec737
      @petec737 10 months ago +1

      @@sebastiankamph Let's be honest, nobody uses LCM if they are looking for the best quality. The only people using LCM are the ones with old PCs who want to have some fun poking at a couple of 512x512, still-unusable images.
      On any high-end graphics card, 8 steps vs 25 steps is only 1 second of difference, no matter the model or sampler used, so something like LCM makes no sense for professional users.

  • @duskairable
    @duskairable 10 months ago +30

    I've tried this with my ancient GPU, a GTX 970 😂. Generating a 512x768, CFG 7, 30-step image usually takes 42 seconds. With LCM it takes only 7 seconds, and the result is comparatively good 👍

    • @jibcot8541
      @jibcot8541 10 months ago +7

      You should be able to do it in 4-8 steps with LCM; my 3090 can make a 512x512 image in 0.25 seconds

    • @eukaryote-prime
      @eukaryote-prime 10 months ago

      980ti user here. I feel your pain.

    • @TheMaxvin
      @TheMaxvin 10 months ago

      I tried a GTX 1080 Ti generating 768x768, CFG 8, 30 steps: with or without LCM, the same result, 30 sec. ((((

    • @petec737
      @petec737 10 months ago

      @@jibcot8541 Which 100% looks like trash and is totally unusable. Not sure what's up with people wanting to brag about being able to generate some tiny (512x512px) low-quality images in a second.

    • @mehmetonurlu
      @mehmetonurlu 9 months ago

      I'm wondering what would happen if I used this with a Vega 8. Hope it helps.

  • @pavi013
    @pavi013 8 months ago +3

    This helped a lot, I don't want to wait 1 hour to generate one image 😅

  • @UHDking
    @UHDking 2 months ago

    I am a big fan of yours. Thanks for sharing knowledge in easy-to-follow language, with everything explained in detail, not like others just repeating information that sometimes is not fully useful. Your stuff is good. You got my like and sub, and a long-time follower. I am one of you, an AI researcher. Thanks very much.

    • @sebastiankamph
      @sebastiankamph  2 months ago +1

      So nice of you!

    • @UHDking
      @UHDking 2 months ago

      @@sebastiankamph Thanks man. I meant it from the heart, and I've benefited a couple of times from your videos. Good job sharing info like a champ.

  • @bankenichi
    @bankenichi 10 months ago +2

    Duuuude, I've been using the SDXL one for a few days and it is a gamechanger. Didn't know there was one for 1.5, awesome!

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      Sweet! How have you been liking it for SDXL?

    • @bankenichi
      @bankenichi 10 months ago +1

      @@sebastiankamph It's been amazing honestly, an order of magnitude faster on my 1080, going from 20+ mins with hires fix to about 1.5-3 mins using LCM. I was trying it out with 1.5 yesterday and it's great too; it went from about 3 mins to just 30 secs. It honestly makes the experience much more enjoyable for me, being able to see this kind of improvement.

  • @sinisterin5832
    @sinisterin5832 8 months ago +2

    My not-so-"potato PC" and my impatience thank you very much, I am your fan. I already passed the information on to my brother; I'm sure he will be happy too.

  • @VooDooEf
    @VooDooEf 10 months ago +1

    Fuck, this is the best SD video this year, I can't believe how fast you can work with it now! Nvidia can throw their TensorRT extension in the trash!

  • @rycrex7986
    @rycrex7986 3 months ago

    Just started a week ago and I've been loving it. Switching to Comfy

  • @timhagen1426
    @timhagen1426 10 months ago +5

    Doesn't work

  • @alderdean6112
    @alderdean6112 10 months ago +4

    The SDXL LoRA does not seem to work for me. My RTX 3060 with 12GB VRAM gets 100% loaded and freezes the whole system for several seconds on each iteration. The resulting images are usually a jumble of pixels. The SD1.5 LoRA, however, does seem to somewhat accelerate things for SD1.5-trained models.

  • @marhensa
    @marhensa 10 months ago +3

    I found that the picture quality is worse ONLY when applied to custom SDXL models; when applied to vanilla SDXL or SDXL SSD-1B, it's about on par in quality yet SUPER FAST!!! (Tested in ComfyUI, LCM SSD-1B, LCM sampler, 8 steps.)

    • @taiconan8857
      @taiconan8857 10 months ago +1

      Useful info, thanks! Unfortunately, in my case, I'm often on custom checkpoints, but the methodology could be instrumental in making future iterations faster. 👏🤩

    • @marhensa
      @marhensa 10 months ago +2

      @@taiconan8857 Yeah, it's surely doable for helping AnimateDiff, which needs many frames to generate.

    • @taiconan8857
      @taiconan8857 10 months ago

      @@marhensa OH! I HADN'T EVEN CONSIDERED THAT YET! You're totally right! I'ma definitely need to revisit this when I'm at that stage. 👌😲
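
      The same recipe works for SDXL, in line with marhensa's report above that the vanilla base model behaves best. A sketch only, assuming the latent-consistency/lcm-lora-sdxl repo and a GPU with enough VRAM for SDXL in fp16:

          import torch
          from diffusers import DiffusionPipeline, LCMScheduler

          pipe = DiffusionPipeline.from_pretrained(
              "stabilityai/stable-diffusion-xl-base-1.0",
              torch_dtype=torch.float16,
              variant="fp16",
          ).to("cuda")
          pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
          pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

          image = pipe(
              "macro photo of a dew-covered leaf",
              num_inference_steps=8,
              guidance_scale=1.0,
          ).images[0]
          image.save("lcm_sdxl.png")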

  • @ArchangelAries
    @ArchangelAries 10 months ago +1

    Dunno if it's because I'm on an AMD Windows system on the DirectML branch of A1111, but it doesn't seem like I get any improvement in speed with this LoRA, and even with the weight reduced to 0.5 it still seems like all it does is reduce generation quality. Oh well. Thanks for sharing Seb, still love your content!
    Edit: Finally got it to work; my generations went from 38 sec/image with hires fix and ADetailer inpainting all the way down to 12 sec/image... The only downside is that the quality is worse than I'd prefer, most likely because the required low CFG scale basically ignores negative prompts and embeddings

  • @Mowgi
    @Mowgi 10 months ago +1

    LCMs are what we call Rice Crispy Treats in Australia. Used to love when Mum put them in my lunch box for school 🤣

  • @2008spoonman
    @2008spoonman 8 months ago +1

    FYI: install the AnimateDiff extension in A1111; this will automatically install the LCM sampler.

  • @markusblandus
    @markusblandus 10 months ago +1

    Any chance you can show how the live webcam setup can be done?
    Thanks!

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      For the quickest answer, I'd guide you towards my Discord and ask kiksu himself.

  • @DJVibeDubstep
    @DJVibeDubstep 8 months ago +1

    I'm using the DirectML version because I have an AMD card, so I have to use my CPU, and it's PAINFULLY slow. Will this help with that? Or is it only for those using GPUs?
    I actually have a really decent GPU (RX 5700 XT) but I sadly can't use it, since SD hardly supports AMD.

    • @LinkL337
      @LinkL337 7 months ago +1

      Did you try it? I have an RX 7800 XT and have the same problem. Looking for options to improve rendering performance. AMD released a video with a tutorial, but I haven't tried that yet.

    • @DJVibeDubstep
      @DJVibeDubstep 7 months ago +1

      @@LinkL337 I have not, I just sucked it up and am using the painfully slow CPU way lol. I spent 7+ hours trying all types of things though and nothing worked. I literally have to use my CPU, it seems.

  • @CoconutPete
    @CoconutPete 7 months ago +1

    Update: I wasn't able to get it to work, then found a post on Reddit which suggested deleting the "cache.json" file in the webui directory. I renamed mine to cache2.json (just in case) and sure enough the Lora tab was showing ssd-1b in it, and I noticed speed improvements. Must be a bug of some sort, as the cache.json file showed up again and everything seems to be working

  • @maikelkat1726
    @maikelkat1726 7 months ago +1

    Thanks, but it doesn't make it faster... it's the same speed, 3-4 secs for SDXL with or without the LoRA... any ideas why? I have an old RTX 3090, 8G

  • @ulamss5
    @ulamss5 10 months ago

    Thanks for the mega grid comparison - most of the comparisons so far are probably using DPM 2M Karras, the long-time best performer, which is seemingly terrible with LCM. I'll let the community do a few more evaluations of samplers and CFG before switching over.

  • @aegisgfx
    @aegisgfx 10 months ago +6

    Wow so instead of creating a hundred images every day that nobody cares about I can create 10,000 images a day that nobody cares about, fantastic!!!

    • @politicalpatterns
      @politicalpatterns 10 months ago +2

      Why are you so salty over this? It's a tool that some people use in their workflow. 😂

  • @sidejike438
    @sidejike438 5 months ago

    I already did the --xformers edit; can I still use this LoRA, or would the quality of the images be affected?

  • @hjjubnh
    @hjjubnh 10 months ago +4

    In A1111 I don't see any difference in speed; the results are just worse

  • @biggestmattfan28
    @biggestmattfan28 3 months ago

    Do you know how to make it faster for Pony Diffusion? I don't think this works for Pony models

  • @TheSparkoi
    @TheSparkoi 4 months ago

    Hey, do you think we can get more than 0.7 frames per second if you render only 500x500 with a 4090 as hardware?

  • @andreassteinbrecher458
    @andreassteinbrecher458 10 months ago +1

    Hey :) Did the KSampler change with the last update? I get errors on all my AnimateDiff workflows since I updated all of ComfyUI.
    Error occurred when executing KSampler:
    local variable 'motion_module' referenced before assignment

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Hmmmmm, good question 🤔

    • @keymaker.3d
      @keymaker.3d 10 months ago

      Me too!

    • @andreassteinbrecher458
      @andreassteinbrecher458 10 months ago

      Today I did another UPDATE ALL in ComfyUI, and now AnimateDiff is working fine again :)

    • @keymaker.3d
      @keymaker.3d 10 months ago

      @@andreassteinbrecher458 Yes, 'UPDATE ALL' is the key

  • @CoconutPete
    @CoconutPete 7 months ago

    I'm confused about trying to get this working with SSD-1B. I downloaded it, put it in the correct folder, renamed it, and it shows in the add-network-to-prompt dropdown, but so far I notice no improvements and the quality seems poor. I keep seeing something about diffusers but I'm not sure what that's all about. Back to the drawing board lol

  • @daan3898
    @daan3898 10 months ago

    Thanks for the research, will try it out !! :)

  • @matthallett4126
    @matthallett4126 10 months ago +1

    I've got a 4090 as well, and I can't reproduce your results in A1111. Will keep trying.

    • @sebastiankamph
      @sebastiankamph  10 months ago

      I am running with sdp memory optimization. Similar speed increase as xformers.

  • @claudiox2183
    @claudiox2183 8 months ago

    Thank you! It works nicely, in both A1111 and Comfy. But I have a rookie question: I can't save the Comfy workflow explained in the video with the LoRA loader node installed. If I save it as a .JSON file or PNG image, it does not reload...

  • @BlueSentinel-o1r
    @BlueSentinel-o1r 8 months ago

    After my first generation, the following generations are much slower. Any idea why this happens and how to avoid it?

  • @spiritsplice
    @spiritsplice 8 months ago

    Vladmandic can't even see the files. They won't show up in the list after dropping them in the folder and restarting.

  • @N-DOP
    @N-DOP 10 months ago +1

    Is there also a way to enhance performance for image2image generations?
    I selected the LoRA and adjusted the steps and the CFG scale, but the render time is still the same, if not worse. Please help :'D
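
    One likely culprit for img2img: the effective step count is num_inference_steps multiplied by strength, so a low strength can swallow the few LCM steps entirely. Here is a sketch of an LCM img2img pass in diffusers, under the same assumptions as the sketch at the top of the page; the input image path is a placeholder.

        import torch
        from diffusers import AutoPipelineForImage2Image, LCMScheduler
        from diffusers.utils import load_image

        pipe = AutoPipelineForImage2Image.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")

        init = load_image("low_res_draft.png").resize((768, 768))
        image = pipe(
            "detailed fantasy castle, crisp focus",
            image=init,
            num_inference_steps=8,
            strength=0.5,  # 8 steps * 0.5 strength = 4 actual denoising steps
            guidance_scale=1.5,
        ).images[0]
        image.save("img2img_lcm.png")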

  • @ScorgeRudess
    @ScorgeRudess 10 months ago +1

    This is amazing!!! Thanks!

  • @april11729_
    @april11729_ 7 months ago

    My god! It works!!!! Thank you!!

  • @clay6440
    @clay6440 4 months ago

    Your link for Civitai is no longer working

  • @Steamrick
    @Steamrick 10 months ago +1

    At a cfg scale of 2, how well does it adhere to complicated prompts?
    I get that it's amazing for AnimateDiff or real-time applications, but is the quality good enough to replace workflows for image generation?

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Probably less than usual. But try shorter prompts and weight them more.

  • @stanTrX
    @stanTrX 9 months ago

    Thanks, but mine is still very, very slow... what else can I do?

  • @타오바오-h8l
    @타오바오-h8l 9 months ago

    Thanks as always! I have an off-topic question: is there any way to make Stable Diffusion show not people but only clothes? I put no human, no girl, etc. in the negative prompt and it still shows people.

  • @intelligenceservices
    @intelligenceservices 9 months ago

    I have a 3060 12GB GPU and was getting VRAM errors with this workflow on XL; the process was rerouted to the CPU, 50-70 seconds. I suspected my VRAM was being squatted by orphan processes. Rebooted, and it's now working the way you describe. Thanks.

  • @olvaddeepfake
    @olvaddeepfake 9 months ago

    I don't have the option to add the LoRA setting to the UI

  • @victorvaltchev42
    @victorvaltchev42 10 months ago

    Great video. What I don't get is why the CFG needs to be so low?

  • @ADZIOO
    @ADZIOO 10 months ago +1

    Not working for SDXL, always bad quality. Should it also be 8 steps / CFG scale 1 for SDXL?

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Works great for me with the LCM sampler. Not well without it.

    • @ADZIOO
      @ADZIOO 10 months ago

      @@sebastiankamph Okay then, now I know. I am on A1111; there is still no patch with the LCM sampler, but at least 1.5 is working with Euler A.

    • @2008spoonman
      @2008spoonman 8 months ago

      @@ADZIOO Install the AnimateDiff extension; this will automatically install the LCM sampler.

  • @ortizgab
    @ortizgab 10 months ago

    Hi! Thanks for the lessons, they are great!!!
    I can't set the sampling steps below 20... Am I missing something?

  • @_trashcode
    @_trashcode 10 months ago

    You mentioned AnimateDiff? How can you use LCM with AnimateDiff? Great video, btw

  • @ferluisch
    @ferluisch 9 months ago

    How much faster is it really? A comparison would be nice. Also, could this be used with the new TensorRT?

  • @Christian-iu3lo
    @Christian-iu3lo 10 months ago +1

    Lmao, this crashes the crap out of my AMD card. I have a 7800 XT and it steals all of my VRAM immediately, which forces me to restart

  • @ComplexTagret
    @ComplexTagret 10 months ago

    And how do you manage the weight of the LoRA in that upper menu? If you add the LoRA to the prompt field, it is possible to manage it as <lora:name:weight>.

  • @zahrajp2223
    @zahrajp2223 8 months ago

    How can I use it with Fooocus?

  • @khalifarmili1256
    @khalifarmili1256 1 month ago

    Can this work on SD 3?

  • @gorge.p96
    @gorge.p96 10 months ago

    Cool video. Thank you

  • @kahosin890
    @kahosin890 6 months ago

    How did you get that UI?

  • @SupremacyGamesYT
    @SupremacyGamesYT 10 months ago

    I assumed this video would be about the RT in A1111, what's going on with that, is it out yet? I've been on a break from AI since March.

  • @stableArtAI
    @stableArtAI 4 months ago

    Ok, first run of the video, very confused about what the one step is that makes it 1000% faster??? Download "1" file?? You started downloading several files, and... so lost..

  • @HaiderAli-l1y2w
    @HaiderAli-l1y2w 10 months ago

    Hello Sebastian, love your videos.
    Can you also make a video on how to use 2 character LoRAs in image-to-image generation without inpainting?
    Thank you

  • @tuhinbiswas98
    @tuhinbiswas98 9 months ago

    Will this work with Intel Arc???

  • @AndyHTu
    @AndyHTu 10 months ago

    Does this trick only work with the Dreamshaper model, or would it work on any model?

  • @jordanbrock4142
    @jordanbrock4142 6 months ago

    I'm kinda new, but isn't it a problem if I have to use this LoRA? I mean, I can only use 1 LoRA at a time, right? And if I'm using this one it means I can't use another, which sort of defeats the purpose...

    • @zuriel4783
      @zuriel4783 5 months ago

      You can use as many LoRAs at a time as you like. There could possibly be a limit that I'm not aware of, but I know for sure you can use at least 4 or 5 at a time

  • @micbab-vg2mu
    @micbab-vg2mu 10 months ago

    Amazing!!! Thank you :)

  • @henrischomacker6097
    @henrischomacker6097 10 months ago +1

    Hmm... why is it working for you and not for a lot of us in automatic1111?
    * Downloaded and renamed both LoRAs and put them into their Lora directory
    * Enabled sd_lora in the User Interface options in the main UI
    * Reloaded the UI
    * Updated all of automatic1111 with all extensions
    * Restarted automatic1111 (ORIGINAL)
    * The LCM LoRAs do NOT appear in the Lora tab gallery, only in the dropdown list (unusable if you have a lot of LoRAs)
    * Tried all my models AND samplers for 1.5 and XL, all with really bad results at 8 sampling steps
    My options in the main UI (like the "Add network to prompt" dropdown) are shown in the left column under CFG scale, seed, etc.
    Are you using a different version of automatic1111, or is there something else that has to be enabled that a lot of us maybe don't have?

  • @MatichekYoutube
    @MatichekYoutube 10 months ago

    Tested LCM on Stable Diffusion - it seems that img2img LCM and vid2vid have an error - TypeError: slice indices must be integers or None or have an __index__ method

  • @cyberprompt
    @cyberprompt 10 months ago +1

    Oh and @sebastiankamph... I almost always laugh at your jokes, even if my wife hates when I tell them to her. Said the facial hair one to her yesterday because I DON'T like facial hair and she knows that! :)

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Hah, I love it! Keep spreading the dad jokes for everyone to enjoy 😊🌟

  • @alexvovsu675
    @alexvovsu675 10 months ago

    Is it possible to do this on Apple Silicon M2? I tried, but have some issues

  • @flareonspotify
    @flareonspotify 10 months ago

    I have an M1 MacBook Air with 16GB unified memory; I wonder how it would be on it

  • @athenalong
    @athenalong 10 months ago

    HAHAHA 😅
    I ::: honestly ::: look forward to the Dad jokes 🤣
    Even if I don't have time to watch the entire video when I initially see it, I will watch until the joke and then come back later 😆👏🏾

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      Hah, glad to hear it! And great that you're coming back too 😅😁

  • @NamikMamedov
    @NamikMamedov 10 months ago

    How do you make a combined image like yours, with all the generation results in one table with methods and samplers?

    • @sebastiankamph
      @sebastiankamph  10 months ago

      XYZ plot, in the Script section at the bottom. You can see my settings in the video

  • @FearfulEntertainment
    @FearfulEntertainment 7 months ago

    Does having A1111 installed on an HDD vs an SSD matter?

    • @scarekrow1264
      @scarekrow1264 6 months ago

      Absolutely - an SSD is way faster

  • @palax73
    @palax73 10 months ago

    Thanks bro!

  • @unowenwasholo
    @unowenwasholo 10 months ago

    This is WILD! This ecosystem continues to boggle the mind. There's certainly some amount of "too good to be true" in here, such as the LoRA not playing nice with a lot of samplers, but cool nonetheless.
    Btw, a couple of things I would have liked discussed / to see are how this performs with common current settings (i.e. higher steps ~20 / CFG ~5), and on other models, even if just SD1.5/SDXL-based models. Even if it was just like 15-30 seconds showing a good model vs a bad model that you've found. Ofc, there's always the whole "try it in your workflow to see how it is for you," it just would be nice to know if I can expect this to work outside of vanilla SD.

  • @solidkundi
    @solidkundi 8 months ago

    Can you use it with Turbo XL?

  • @cyberprompt
    @cyberprompt 10 months ago

    Still have to experiment with this more, but wow... zoom! A 960x640 usually takes at least 1.5 minutes (RTX 1080); this is done in seconds. Not quite happy with the detail yet, however. But great for a quick try of a prompt, I guess, until I do more tweaking.

  • @PerChristianFrankplads
    @PerChristianFrankplads 10 months ago

    Will this work on Apple silicon like M1?

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      Actually, Apple M1 saw the biggest speed improvements (10x). I haven't tested it myself, but the claims seem solid.

  • @thegreatujo
    @thegreatujo 10 months ago

    How do I make the interface look like yours? At the top, where you select the model/checkpoint, you have two more dropdowns to the right called SD_VAE and Add Network to Prompt. If somebody other than the video creator has the answer, feel free to reply

    • @drabodows
      @drabodows 10 months ago

      Watch the video, he shows you how...

    • @xyzxyz324
      @xyzxyz324 9 months ago +1

      01:38 - 01:57

  • @consig1iere294
    @consig1iere294 10 months ago

    I am super confused. When I go to download the LCM model for SDXL, are we downloading the "pytorch_lora_weights.safetensors" file? I did that and used it as a LoRA, and it's stuck! I am using an RTX 4090.

    • @sebastiankamph
      @sebastiankamph  10 months ago +2

      Yes! One for 1.5 and one for SDXL. Rename them so you know which is which, and put them in the Loras folder
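
      A scripted version of that download-and-rename step, for anyone who prefers it to clicking through the browser. A sketch assuming the huggingface_hub package, the latent-consistency repos as the LoRA sources, and models/Lora as your A1111 LoRA folder:

          import shutil
          from huggingface_hub import hf_hub_download

          for repo, name in [
              ("latent-consistency/lcm-lora-sdv1-5", "lcm-lora-sdv15.safetensors"),
              ("latent-consistency/lcm-lora-sdxl", "lcm-lora-sdxl.safetensors"),
          ]:
              path = hf_hub_download(repo_id=repo, filename="pytorch_lora_weights.safetensors")
              shutil.copy(path, f"models/Lora/{name}")  # rename so you know which is which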

  • @MisterWealth
    @MisterWealth 10 months ago

    But does this work with SDXL?

  • @_trashcode
    @_trashcode 10 months ago

    I would like to find a way to use this with Deforum and ControlNet. Does anybody have an idea how to make it work in automatic1111?

  • @marcus_ohreallyus
    @marcus_ohreallyus 10 months ago +1

    Is this LoRA affecting the look or style of the artwork, other than speeding it up? If it changes quality for the worse, I don't see the point of using it, because SD is pretty fast as it is.

  • @Wunderpuuuus
    @Wunderpuuuus 10 months ago

    I am seeing a lot of ComfyUI and Automatic1111. Is there an advantage to using one over the other? Is one better at "A" and another at "B"?

    • @jonathaningram8157
      @jonathaningram8157 10 months ago +2

      It's a very different philosophy. I would recommend automatic1111 for beginners and also for flexibility. ComfyUI, in my opinion, is more specialized, but you don't have as much creative power (the inpainting, for instance, is quite annoying to set up). I tried ComfyUI and I'm back to automatic1111; it gives me the best results (also, I kinda lost my node setup for ComfyUI and it's a pain to redo).

    • @Wunderpuuuus
      @Wunderpuuuus 10 months ago

      @@jonathaningram8157 Thank you! I also have been using Automatic1111 atm, but saw so many videos for ComfyUI, so I thought I'd ask. Thanks for the response!

  • @DerXavia
    @DerXavia 10 months ago +3

    It's even slower for me and looks much, much worse using XL

  • @maestromikz
    @maestromikz 9 months ago

    Will this work on Mac M1?

  • @ArcaneRealities
    @ArcaneRealities 10 months ago

    Can this be done with animation? AnimateDiff or video-to-video? Not sure I am setting it up right - in Comfy

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Yes!

    • @elowine
      @elowine 10 months ago

      ​@@sebastiankamph I tried it too but I only get "weight" errors and noise. The creator of AnimateDiff seems to be working on a fix, not sure why some people claim it works for them?

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      @@elowine I used it just a few hours ago and it worked ok. Not amazing, but ok.

    • @elowine
      @elowine 10 months ago

      @@sebastiankamph Ah nice, thanks for checking. Maybe an issue with certain GPUs

  • @ragnarmarnikulasson3626
    @ragnarmarnikulasson3626 10 months ago

    Tried this with SDXL with no good results; SD v1.5 worked great though. Any ideas? I was using sd_xl_base_1.0.safetensors [31e35c80fc] with the lcm-lora-sdxl on a Mac M1, if that makes any difference

  • @nermal93
    @nermal93 10 months ago

    Is this working with img2img?

  • @luxecutor
    @luxecutor 10 months ago

    Does the LCM model work only with SDXL, and not SD 1.5 based models?

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      This is available for both. Works best in Comfy atm.

    • @luxecutor
      @luxecutor 10 months ago

      @@sebastiankamph Thank you. I look forward to trying it out. Still haven't taken the plunge on comfy yet. I really need to take some time and get it set up.

  • @povang
    @povang 10 months ago

    Not optimized for a1111 yet. I'm using a custom checkpoint in a1111, 1.5, same settings as in the video. I'm on a 1080 Ti: the generation speeds are faster, but the image quality is worse.

  • @donschannel9310
    @donschannel9310 9 months ago

    Mine is not even generating any pic

  • @xorvious
    @xorvious 10 months ago

    I'm not seeing where to get the LCM scheduler for Comfy; can someone point me in the right direction?

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      It comes automatically when you update comfy

    • @xorvious
      @xorvious 10 months ago

      @@sebastiankamph Found it, thank you!

  • @rakibislam6918
    @rakibislam6918 10 months ago

    How do I add the cinematic styles file?

  • @metanulski
    @metanulski 10 months ago +1

    I am confused. My pictures look worse using this :-(

    • @sebastiankamph
      @sebastiankamph  10 months ago

      Make sure to use the LCM sampler in Comfy for best results.

    • @metanulski
      @metanulski 10 months ago

      @@sebastiankamph I used Auto1111. I put the 1.5 LoRA in the Lora folder, loaded a 1.5 model, added the LoRA to the prompt and set the steps to 8 with Euler. The result looks worse than without the LoRA.

    • @metanulski
      @metanulski 10 months ago

      I did not use the LoRA dropdown like you did. Is this a must?

    • @sebastiankamph
      @sebastiankamph  10 months ago

      @@metanulski Not at all, just an easy way of using it. But it limits the use of weights.

    • @metanulski
      @metanulski 10 months ago

      @@sebastiankamph Thanks, will try again today. :-)

  • @neuraldee
    @neuraldee 10 months ago

    Thanks, but it's not working on Mac M2 :(

  • @peacetoall1858
    @peacetoall1858 10 months ago

    Newbie question - would this speed up image generation on my Nvidia GTX 1060 gaming laptop with 4GB VRAM?

    • @user-jw9kg5rt4d
      @user-jw9kg5rt4d 10 months ago

      Yup, should speed up image generation on any platform so long as you make use of the lower steps "requirement" for a decent image.

    • @peacetoall1858
      @peacetoall1858 10 months ago

      @@user-jw9kg5rt4d That's awesome. Thanks!

  • @duskairable
    @duskairable 10 months ago

    When is the LCM sampler gonna be in A1111?

    • @sebastiankamph
      @sebastiankamph  10 months ago

      I have no idea, hopefully soon 😅

    • @2008spoonman
      @2008spoonman 8 months ago

      Install the animatediff extension in A1111, this will automatically install the LCM sampler.

  • @sinanisler1
    @sinanisler1 10 months ago

    SDXL doesn't work, not sure why.
    Probably need the latest pip packages. Will test again later.

  • @the_smad
    @the_smad 10 months ago

    Need to try this on my GTX 1060. Yesterday, with xformers and medvram, it took 30 minutes to do a single image with SDXL and no refiner

    • @sebastiankamph
      @sebastiankamph  10 months ago +1

      Let me know what speed improvements you get 😊