ComfyUI - SUPER FAST Images in 4 steps or 0.7 seconds! On ANY stable diffusion model or LoRA

  • Published: 28 Sep 2024
  • Today we explore how to use the latent consistency (LCM) LoRA in your workflow. This fantastic method can shorten your preliminary model inference to as little as 0.7 seconds, using only 4 steps, with ComfyUI and SDXL. It also makes it a lot easier to run these models on older hardware, and it is just mind-blowingly fast! It isn't perfect, but it sure helps you find some base images quickly. A minimal code sketch of the same recipe is included at the end of this description.
    #comfy #stablediffusion #aiart #ipadapter
    You can download the LCM LoRA models from Hugging Face here:
    huggingface.co...
    Interested in the finished graph and in supporting the channel as a sponsor? I will post this workflow (along with all of the previous graphs) over in the community area of YouTube. Come on over to the dark side! :-)
    / @sedetweiler
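    For readers who want to try the same recipe outside ComfyUI, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint and LoRA repo names below are the standard public ones and are assumptions, not something shown in the video:

        # Minimal LCM-LoRA sketch with diffusers (assumed repo names).
        import torch
        from diffusers import StableDiffusionXLPipeline, LCMScheduler

        pipe = StableDiffusionXLPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
        ).to("cuda")

        # Swap in the LCM scheduler and attach the LCM LoRA on top of the base model.
        pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
        pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

        # 4 steps and a cfg of about 1.0 is the whole trick.
        image = pipe(
            "a lighthouse on a rocky coast at sunset",
            num_inference_steps=4,
            guidance_scale=1.0,
        ).images[0]
        image.save("lcm_test.png")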

Comments • 157

  • @JustFeral · 10 months ago +35

    You just made me redo my whole workflow, since this alone allows me to iterate on ideas so much faster. This stuff moves so damn fast. I get off YouTube for a week and so much changes in AI.

    • @sedetweiler · 10 months ago +4

      Yeah, it is a bit insane for sure.

  • @jameslafritz2867 · 3 months ago

    This is awesome. It even works on the SDXL Turbo models, taking my time from about a minute per sample to ~14 secs.

  • @Satscape · 10 months ago +17

    On 4 GB of VRAM it normally takes 2 to 5 minutes; this takes 20 seconds. Great for use as a starting point!

  • @EpochEmerge · 10 months ago +5

    Could you please explain what you meant at 4:40? Do I need to use the ModelSamplingDiscrete (lcm) node AFTER the LCM LoRA if I want to stack LoRAs?
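
    (A rough sketch of one common wiring, offered as an assumption rather than an official answer: Load Checkpoint -> Load LoRA (your style LoRAs) -> Load LoRA (LCM LoRA) -> ModelSamplingDiscrete with sampling set to lcm -> KSampler with sampler_name lcm, scheduler sgm_uniform, 4 steps, cfg 1.0. In other words, the ModelSamplingDiscrete node sits after the whole LoRA stack, right before the sampler.)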

  • @mo5909 · 10 months ago +3

    I have ADHD, so I love your short but very informative videos. Please don't stop!

  • @sparkilla · 5 months ago

    Thanks for the information. I want extremely detailed, hyper-realistic images and this helps out a lot, adding a sampler before my main sampler. Doing face swapping and SUPIR upscaling in the same workflow, the results are terrific and also about 10-15 seconds faster per pic now as well.

  • @bronkula · 10 months ago +7

    It really feels like you skipped 5 steps here, considering that it is not at all clear how to get lcm into the sampler_name, or that you renamed the LoRA. Can you sticky a comment that explains a couple of extra steps to get to your initial position?

    • @KINGLIFERISM · 10 months ago +6

      Yeah, he wants a cash grab... does not care about people watching his YouTube, or why would he do that? Here: Latent Consistency Models LoRAs vs Latent Consistency Models Weights. The first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD1B LCM LoRA), the second one is the LCM models (SDXL, SSD1B, Dreamshaper7). This video uses the smaller one, the LoRA. Just rename the downloaded LoRA pytorch_lora_weights.safetensors from lcm-lora-sdxl to lcm_sdxl_lora_weights.safetensors to match his.
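
      (A small sketch of that rename step in Python; the ComfyUI path is an assumption, so point it at wherever your loras folder actually lives:)

      # Rename the downloaded LCM LoRA so it is easy to tell apart in the LoRA list.
      from pathlib import Path

      loras = Path("ComfyUI/models/loras")  # assumed install location
      src = loras / "pytorch_lora_weights.safetensors"   # as downloaded from lcm-lora-sdxl
      dst = loras / "lcm_sdxl_lora_weights.safetensors"  # name used in the video

      if src.exists() and not dst.exists():
          src.rename(dst)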

    • @gordonbrinkmann · 10 months ago +4

      If you update ComfyUI with the Manager, the lcm sampler will be there. When watching tutorials on new features, make sure your software is updated...

    • @whatwherethere · 10 months ago

      @gordonbrinkmann So I went through ComfyUI Manager and selected Update ComfyUI. I still don't have the lcm sampler. It's a relatively new install altogether, maybe from Friday. Any idea what I am doing wrong?

    • @gordonbrinkmann · 10 months ago

      @whatwherethere I updated it after watching the video and not seeing the lcm sampler, then I restarted ComfyUI - did you restart it? The changes will only be applied after closing ComfyUI and starting it again. That was all I did, and then the lcm sampler was there.

    • @rinpsantos · 10 months ago

      I didn't need to rename the LoRA files. It is optional.

  • @hleet · 10 months ago

    Thank you for showing all these new features! AI still has so much to show the world!

    • @sedetweiler · 10 months ago

      It sure is coming along fast!

  • @moon47usaco · 10 months ago +2

    Great Scott (pun intended)... This is amazing. SDXL may be back on the table again. 1024 SDXL generation went from less than 20 seconds for one image to less than 5... =0

    • @sedetweiler · 10 months ago +1

      It's pretty wicked for sure!

    • @moon47usaco · 10 months ago

      @sedetweiler Unfortunately it seems to degrade quickly with ControlNet added into the flow. =\

  • @MarkDrMindsetChavez · 10 months ago +1

    keep 'em coming bro!

  • @spiralofhope · 10 months ago +3

    [edit] - an update fixed it
    nope.
    There is no sampler_name "lcm". I had to try euler_ancestral, but that looks mostly shit.

    • @bwheldale · 10 months ago +2

      I had the same issue until I updated ComfyUI via Manager -> Update ComfyUI (the update .bat file didn't do it, but going through the Manager did).

    • @AlIguana · 10 months ago

      Yeah, the LCM k-sampler just came out today; you need to update Comfy.

    • @sedetweiler · 10 months ago +1

      Update Comfy. This is less than 24 hours old.

    • @spiralofhope · 10 months ago +1

      @sedetweiler
      I updated and see it, thanks!
      I now have a problem with LoRAs not being used, but I'll look around for answers.

  • @rinpsantos · 10 months ago

    Works like a charm for me. Thank you!!!

  • @jonmichaelgalindo · 10 months ago +2

    It's fantastic for quick prompt experimenting!

  • @jasoa · 10 months ago +9

    Sometimes it's hard to find the model you're using on Hugging Face. Did you rename the downloaded LoRA? Edit: I think I found it. I downloaded pytorch_lora_weights.safetensors from lcm-lora-sdxl and renamed it to lcm_sdxl_lora_weights.safetensors to match yours.

    • @sedetweiler · 10 months ago +5

      Yes, sorry. They are all named the same thing, so you will always need to rename them. I should have mentioned that; the generic names make this a very consistent problem.

    • @jasoa · 10 months ago +2

      Thanks for the tutorial. I wonder how much amazing stuff is hiding in comfyui and the stable diffusion world that we'd never know about without your videos.

    • @sedetweiler · 10 months ago +7

      There is a ton! There are also things in AUTO1111 that no one has covered yet that I will probably make videos on as well. So much, and it is constantly evolving!

    • @bwheldale · 10 months ago

      I'm at the download-site crossroads feeling lost: Latent Consistency Models LoRAs vs Latent Consistency Models Weights. I downloaded one of each and am no wiser; it's not easy being a noob. PS: pytorch_lora_weights.safetensors = 380 MB, the other = 4.5 GB. My guess is it's the smaller one.

    • @marhensa · 10 months ago +4

      @bwheldale Latent Consistency Models LoRAs vs Latent Consistency Models Weights: the first one is the LCM LoRA (SDXL LCM LoRA, SD1.5 LCM LoRA, SSD1B LCM LoRA), the second one is the LCM models (SDXL, SSD1B, Dreamshaper7). This video uses the smaller one, the LoRA.

  • @Queenbeez786 · 10 months ago +1

    To install, put the file in your LoRA models folder.

  • @erperejildo · 5 months ago

    Don't you have to connect the Load LoRA node to the prompt? Does it really matter?

  • @CY_max · 10 months ago +1

    I did exactly what you did, but it's taking way longer. Don't know why. I'm using an RTX 4060 (laptop).

  • @kachuncheng-s1v · 10 months ago

    Very clear! Thanks!

  • @raven1439 · 10 months ago

    Could you make a video on how to properly connect this LoRA with the SDXL base and refiner from the earlier video?

    • @sedetweiler · 10 months ago

      We do that in live streams as well. It's very similar to what we did here.

  • @MrPlasmo · 10 months ago +1

    Wow - do the LCM LoRAs only work with ComfyUI, or do they also work in A1111?

    • @sedetweiler · 10 months ago

      I have no idea on A1111.

  • @francescobriganti6029 · 10 months ago +1

    Hi! I'm trying it with AnimateDiff but I keep getting this error: "'VanillaTemporalModule' object has no attribute 'cons". Have you also run into it / any solutions? Thanks!!

    • @sedetweiler · 10 months ago +2

      I will have to mess around with it.

  • @kasoleg · 3 months ago

    How do I enable upscaling?

  • @Al_KR_t · 10 months ago +1

    Is it possible to combine it with AnimateDiff? I ran into a lot of errors when I tried, and a lot of models don't seem to be compatible.

    • @sedetweiler · 10 months ago

      I have seen people doing so, but I don't tend to do a lot of animation.

  • @loubakalouba · 10 months ago

    Thank you for a great tutorial.

  • @AI-Efast · 10 months ago

    I tried it; it works fine for still image generation, but when working with AnimateDiff, why does the image quality drop significantly?

  • @Cocaine_Cowboy · 10 months ago

    Amazing. The AnimateDiff speed is just wow! Thank you very much!

  • @paul606smith · 10 months ago

    If you switch the SDXL model for the cut-down Segmind SSD model, this workflow works even faster, and it will run on a low-end laptop with a 4 GB GTX 1650 and 8 GB of RAM.

  • @jonmichaelgalindo · 10 months ago

    If you're on A1111 and don't have LCM sampler, Euler A works well enough to test this. (It's not perfect, but it's usable.)

  • @victorvaltchev42 · 10 months ago +1

    Great content. What if you make it 30-50 steps? Is the quality better than without this LoRA, or is it just a speed boost at low step counts?

    • @sedetweiler · 10 months ago

      Nope, it often seems to get worse and it changes a ton as it advances.

    • @Utoko · 10 months ago +1

      Not with sgm_uniform, since it adds constant noise and just keeps changing, but I got higher quality on SD1.5 with 20 steps using the exponential scheduler.
      For SDXL I also had the best results with exponential, at around 8 steps so far.

  • @stephantual · 9 months ago +1

    Thank you! This is useful for quickly making video frames. I use Comfyroll and it works well with it (but it's not easy - maybe you could make a tutorial? - see what I did there ;) ). Great vid as usual.

    • @sedetweiler · 9 months ago +2

      Great suggestion!

    • @stephantual · 9 months ago +1

      Thanks for replying @sedetweiler. Comfyroll, rgthree and Trung's 0246 + Anything Everywhere are my go-to nodes right now.

  • @Bikini_Beats · 10 months ago +1

    Very cool, thanks!

  • @gordonbrinkmann · 10 months ago

    I have a technical question. In the video you say that cfg values of 1 or below make the sampler ignore the negative prompt. I have no idea if this is true, because I usually have no or just a short negative prompt, so I could not see a difference in the images.
    But here is what I saw, and this brings me to the technical question: at a cfg value of exactly 1.0 (and only there, neither above nor below 1), the steps took only about 50% to 60% of the usual time on my GPU. So could it be that something different happens exactly at 1.0, like it ignores the negative prompt only at 1, but not at the other values?
    And if so, is there a way for people who usually don't use negative prompts to speed up rendering by making the KSampler ignore the negative prompt? Because the speed increase only appears at cfg 1; simply leaving the prompt empty at, e.g., higher values does not work. Unplugging the negative prompt does not work either, because it simply throws an error for an unconnected socket and doesn't start to render.
    Maybe this doesn't even happen on other machines... but if it does, I would really like to find a way to deliberately ignore the negative prompt even at higher cfg values (because my images at cfg 1 are usually not detailed enough). Or maybe this has nothing to do with ignoring the negative prompt and is just happening because of that specific cfg value?
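
    (A likely explanation, sketched here as an assumption rather than anything stated in the video: classifier-free guidance blends a conditional and an unconditional/negative prediction, and at cfg = 1.0 that blend collapses to the conditional prediction alone, so a backend can skip the negative-prompt pass and roughly halve the work per step.)

    # Sketch of classifier-free guidance, not ComfyUI's actual code.
    def guided_noise(model, x, t, cond, uncond, cfg):
        cond_pred = model(x, t, cond)
        if cfg == 1.0:
            # uncond + 1.0 * (cond - uncond) == cond, so the negative-prompt
            # pass can be skipped entirely, giving roughly 2x speed per step.
            return cond_pred
        uncond_pred = model(x, t, uncond)  # the "negative prompt" pass
        return uncond_pred + cfg * (cond_pred - uncond_pred)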

  • @--signald · 10 months ago +1

    Does it work with A1111?

    • @sedetweiler · 10 months ago +1

      Probably in a few weeks.

  • 10 months ago

    Cool video! Thanks

  • @ywueeee · 10 months ago +3

    Can you use AnimateDiff with LCM?

    • @christianblinde · 10 months ago

      Just tried it, works better than I thought!

    • @sedetweiler · 10 months ago

      Yup, should work just fine.

  • @aerofrost1 · 10 months ago +1

    My KSampler doesn't have the LCM sampler?

    • @sedetweiler · 10 months ago +1

      Make sure everything is updated. This sampler is less than 24 hours old.

    • @aerofrost1 · 10 months ago

      @sedetweiler Updated everything and it's there now, thank you! I was running around 16 steps with DDIM, but the quality with LCM at 4 steps is even better than it was with DDIM at 16 steps. Do you know if quality keeps increasing with more LCM steps, or does it max out at 4?

  • @telemole9427 · 10 months ago +1

    I have no sampler called 'lcm' - have I missed a step? :(

    • @sedetweiler · 10 months ago

      Yup, this is a day old, so if you are not up to date you will not have it.

    • @telemole9427 · 10 months ago

      @sedetweiler - I discovered that! Thanks so much for this one - this is SO fast - reworking tons of workflows now ;)

  • @oranguerillatan · 10 months ago

    Hi Scott, great video, thank you.
    When doing "vid2vid" with Comfy and AnimateDiff/ControlNet, do you pass the video frames straight into the KSampler, or do you push empty latents into it?
    I'm getting subpar results with the former, and have not tried the latter yet.

    • @sedetweiler · 10 months ago +1

      I will look into it. I don't do much in the way of video at this time.

    • @oranguerillatan · 10 months ago +1

      @sedetweiler Results have gotten better thanks to some help from the wonderful Coffee Vectors and Purz and others; SD1.5 vid2vid is working quite well with LCM and AnimateDiff now, results on my Twitter from last night. Definitely still needs some tweaks.
      SDXL vid2vid with LCM and AnimateDiff is still proving a little more elusive, but I am doing a lot of tests to find the right combo of weights and ControlNets. Results coming soon.
      Thank you for your amazing walkthroughs, you've really helped get me going with Comfy in recent weeks.

  • @Fouadfmtv · 10 months ago

    Amazing, thank you!

  • @vilainm99 · 10 months ago

    Like many others, I have no LCM sampler. Tried many updates through the Manager, with a restart every time, but no luck...

    • @sedetweiler · 10 months ago

      And you are pulling the latest from all of the extensions? These releases are less than a day old, so everything needs to be updated to keep up. Sorry you are unable to find it, but it isn't hidden, something just isn't updating.

    • @sedetweiler · 10 months ago

      You can actually see it here in the comfy code, added 3 days ago. You need to be sure to do a "git pull" on comfy.
      github.com/comfyanonymous/ComfyUI/commit/002aefa382585d171aef13c7bd21f64b8664fe28

    • @vilainm99 · 10 months ago

      Nuked Pinokio, reinstalled and tadaaa!! Working great!

  • @Smashachu · 10 months ago

    Does this work with TensorRT?

  • @fanyuworld · 10 months ago

    Decline in quality or

  • @andresz1606 · 10 months ago +3

    Don't forget to install WAS Node Suite and the LCM Sampler with the Manager before trying to build this workflow. Also, your ModelSamplingDiscrete node seems mostly useless; I have 3 LoRAs chained, and adding your MSD node makes no remarkable difference or improvement whatsoever.

    • @sedetweiler · 10 months ago

      Yes, those nodes are critical, and they are so good I actually feel they should be part of the base product.

  • @LouisGedo · 10 months ago

    👋

  • @Deadgray · 10 months ago +7

    The idea here is that you just use the LCM LoRA with any model, and with any sampler and scheduler; you don't have to use lcm as the sampler. You can use slow ones like heun and get great results, but so much faster. Also, with the Comfy Efficiency Nodes it feels like a 5x5 XY plot is made as fast as 1 image was before. Try that, see the difference 🙂
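
    (Reading that comment literally, a rough settings sketch - an assumption, not something confirmed in the video: keep the LCM LoRA and the ModelSamplingDiscrete (lcm) node in the model chain, but in the KSampler set sampler_name to e.g. heun, pick a scheduler such as sgm_uniform or exponential, and try roughly 6-10 steps with cfg around 1.0-1.5 instead of the 4-step lcm sampler setup.)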

    • @sedetweiler · 10 months ago

      That's a great idea!

    • @gordonbrinkmann · 10 months ago +2

      I tried it with other samplers instead of lcm, and the results were actually terrible, no matter whether I did a few or my usual number of steps, and no matter what cfg value 😆
      However, when using lcm with settings that give good results, I found that no matter which scheduler I used, all images were very similar, so I ended up using karras because it was (marginally) the fastest. Even ddim_uniform, which Scott mentions as not seeming to work with lcm, gave great results - just very different from all the other similar-looking images.

  • @skycladsquirrel · 10 months ago +7

    Running on a 4090 and it's incredible with AnimateDiff. What a dream. Thanks for the incredible video.

    • @sedetweiler · 10 months ago +1

      Glad you enjoyed it!

  • @ga1205 · 6 months ago +1

    Does the model still exist? I went to the link but can't find the same LoRA.

  • @othoapproto9603 · 10 months ago +2

    Don't attempt this if you don't understand Hugging Face. Love how he assumes you know how to download and install.

    • @itscaptainterry · 4 months ago

      If downloading from Hugging Face is where you get stuck, you should probably take a couple of steps back before you delve into running all of this locally.

  • @thinkright5611 · 8 months ago +1

    Lol... I was hoping you'd show how the LoRA model connects to the CLIP O_O. Guess not. lol

  • @JimPenceArt · 10 months ago +4

    Thanks! I've got a pretty slow system and it greatly improved the speed: 45 sec/img, down from 3+ min/img 👍👍

  • @dylanfrercks · 7 months ago +1

    Even with this LoRA (1.5) it's taking me 2 minutes to generate a single image. Is my MacBook Pro (8 GB RAM) actually that bad, or is there something else I could be missing?

    • @Tofu3435 · 7 months ago

      Maybe you're running it in CPU mode.

  • @dkamhaji · 10 months ago +2

    Where can I find the LCM sampler for my KSampler? It's not in my list and I just updated my Comfy.

    • @francoisneko · 10 months ago

      I have the same issue

    • @dkamhaji · 10 months ago +1

      Just update ComfyUI through the Manager and restart.

    • @sedetweiler · 10 months ago

      Yes, always be sure you are updated. 99.9% of the time that will be the cause of most issues.

  • @maestromikz · 9 months ago +1

    Is this the same on a Mac M1? Because I have the lcm sampler and the LoRA in my ComfyUI, but the loading time is still around 300 sec.

  • @sewn1 · 13 days ago

    Using a 1660 Ti and DreamShaper 8, 512x512 images only take 2 seconds to generate. Super crazy!

  • @zoemorn · 9 months ago +1

    My KSampler doesn't show an image like Scott's does; can someone advise why? Is it a special KSampler?

  • @___x__x_r___xa__x_____f______ · 10 months ago +1

    Would love some video time on Swarm. I really want to run generations on it, but it's still a little awkward.

  • @JackTorcello · 10 months ago +1

    Where do I find the LCM sampler?

    • @sedetweiler · 10 months ago +1

      Make sure you are on the latest of all nodes and comfy. It is in the comfy core, so a git pull should get you all you need.

  • @Disco_Tek · 10 months ago +1

    Yeah, this is an instant go-to tool now to get the prompts and weights close before I really get to work.

  • @Darkfredor · 8 months ago +1

    Impressive, thanks for the tip!

  • @lastlight05 · 4 months ago

    LOL, how do you install this LCM?

  • @JLITZ88 · 10 months ago

    Is the workflow posted?

  • @0A01amir · 10 months ago

    Sadly it's slower than a normal 30-step generation on a low-end machine (caused mainly by the LoRA itself; ComfyUI is slow at loading the LoRA before the KSampler).
    512x512 - 4 steps - 1 image = 115 seconds, ~7 s/it (30 steps normally takes ~20 seconds, less than 1 s/it)
    512x768 - 5 steps - 1 image = 130 seconds, ~4 s/it (30 steps normally takes ~35 seconds, less than 2 s/it)
    644x1000 - 5 steps - 1 image = 161 seconds, ~6 s/it (30 steps normally takes ~50 seconds, less than 3 s/it)

  • @paul606smith · 10 months ago

    Doesn't work for me. I get out-of-memory errors with 8 GB RAM and a 4 GB 1650 for SDXL. ComfyUI normally runs SDXL on this system, just really, really slowly, but at least it works.

  • @twistedcraftproductions1697 · 10 months ago

    For some reason this LoRA unloads the checkpoint from memory after every generation, so you have to wait a whole minute for it to load back into memory before the KSampler even starts to do anything. I saw the processes in the background status window, so that's how I know. Using SDXL without the LoRA, there is no 60-second wait for the model to load before the KSampler starts making an image.

  • @Skettalee · 10 months ago

    Wait, I'm confused. Does anyone else have the LCM sampler? I don't know how to get it into my list, but it's not there.

    • @sedetweiler · 10 months ago

      Make sure you always update before attempting new workflows. It is not even a day old.

  • @extraframe6376 · 10 months ago

    Damn! Just a month away and the world of AI has turned upside down.

  • @AI-Efast · 10 months ago

    Have you tried it with an SD1.5 checkpoint?

  • @MrPlasmo · 10 months ago

    How do you get the ModelSamplingDiscrete node to show up? I don't have it.

  • @paul606smith · 10 months ago

    It does work with a GTX 1060 6 GB and 16 GB RAM. Makes an excellent speed improvement.

  • @AI.ImaGen · 10 months ago

    😛 It's... AWESOME!!! Especially for making videos.

  • @minimalfun · 10 months ago

    Incredibly useful, thank you very much, really awesome!

    • @sedetweiler · 10 months ago +1

      You're very welcome!

  • @tartwinkler1711 · 10 months ago

    I don't see the graph under the member section 🤔

    • @sedetweiler · 10 months ago

      Are you at sponsor level or higher? You should see it there with all of the other graphs.

  • @23rix · 10 months ago

    Is this just for SDXL models?

    • @sedetweiler · 10 months ago +1

      You can use it with any model as long as you use the proper LoRA.

  • @aiqinggirl · 10 months ago +1

    Hi, thanks. Does anybody know if the LCM LoRA leads to lower quality images? Because it is so fast, I am totally confused and can't stop thinking about its quality. Anyone?

    • @sedetweiler · 10 months ago +2

      Yes, it does take a bit of a hit, but it is just different, perhaps not lower quality.

  • @Kavsanv · 10 months ago

    Thank you! Do you have any tutorials on changing a pose without changing the character's details as much as possible?

    • @sedetweiler · 10 months ago +1

      Not yet. That will be a bit of a challenge, but we can probably do it using a few techniques.

    • @Kavsanv · 10 months ago

      @sedetweiler Thank you for answering. Yeah, to get there I mostly use inpainting, and some combos of IPAdapter, Posex and ControlNet help a bit. I found a very cool trick using the right LoRA over it, but the character seems to look too similar to the previous one.

    • @sedetweiler · 10 months ago +1

      @Kavsanv I was leaning on the IPAdapter for a lot of that for sure, but I also think a bit of Roop in combination with that would help too.

  • @feisimo5479 · 10 months ago +1

    I was lost on how to get this installed... figured it out and already love it. Thanks for all your great demos and tutorials!

    • @sedetweiler · 10 months ago

      Great to hear!

    • @cheese6870 · 10 months ago

      @sedetweiler How do I get it installed?

    • @aiqinggirl · 10 months ago

      Just put the LoRA into the loras folder, like a normal LoRA!

  • @WhySoBroke · 10 months ago +1

    Not instructive at all; I was very lost.