7GB RAM Dreambooth with LoRA + Automatic1111

  • Published: 3 Jun 2024
  • The day has finally arrived: we can now do local Stable Diffusion Dreambooth training with the automatic1111 webui using a new technique called LoRA (Low-rank Adaptation for Fast Text-to-Image Diffusion Fine-tuning). We walk through how to use this new method to create some higher-quality images.
    Discord: / discord
    ======= Parameters =======
    training steps per img: 150
    batch size: 1
    lora unet learning rate: 0.0008
    lora text encoder learning rate: 0.00006
    lr scheduler: constant
    horizontal flip: on
    resolution: 768
    Use LORA: on
    Use 8bit Adam: on
    mixed precision: fp16
    Memory Attention: xformers
    don't cache latents: off
    Train Text Encoder: off
    Pad Tokens: off
    NO REGULARIZATION IMAGES
    ======= Links =======
    Image Cropping Tool: www.birme.net/?target_width=7...
    LoRA Reddit Post: / p_using_lora_to_effici...
    LoRA Github: github.com/cloneofsimo/lora
    ======= Music =======
    Music from freetousemusic.com
    ‘Animal Friends’ by LuKremBo: • lukrembo - animal frie...
    ‘Onion’ by LuKremBo: • (no copyright music) l...
    ‘Rose’ by ‘LuKremBo’: • lukrembo - rose (royal...
    ‘Snow’ by LuKremBo: • lukrembo - snow (royal...
    ‘Sunset’ by ‘LuKremBo’: • (no copyright music) j...
    #LoRA #stablediffusion #aiart #art #ai #techtutorials #tutorial #dreambooth
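    The "Use LORA" switch in the parameter list above refers to the low-rank update idea itself: the pretrained weight stays frozen and only a small low-rank pair of matrices is trained. A minimal numpy sketch of that mechanism (shapes and values are illustrative, not SD's real attention dimensions):

```python
import numpy as np

# LoRA in miniature: the pretrained weight W stays frozen; training only
# touches the low-rank pair A (r x d_in) and B (d_out x r).
def lora_forward(x, W, A, B, alpha, r):
    # effective weight = W + (alpha / r) * B @ A
    return x @ (W + (alpha / r) * (B @ A)).T

d_in, d_out, r, alpha = 8, 8, 2, 2.0
rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: delta starts at 0

x = rng.normal(size=(1, d_in))
# Until B is trained away from zero, the adapted layer equals the frozen one.
assert np.allclose(lora_forward(x, W, A, B, alpha, r), x @ W.T)
```

    Because only A and B are saved, the resulting file is tiny compared to a full checkpoint, which is what makes the low-VRAM training in the video possible.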
  • Science

Comments • 172

  • @lewingtonn
    @lewingtonn  1 year ago +14

    There's another really good video on this topic btw: ruclips.net/video/gw2XQ8HKTAI/видео.html&ab_channel=NerdyRodent he does some really good ablation on different params, hard recommend

    • @JJ-vp3bd
      @JJ-vp3bd 1 year ago +1

      Are there any ckpt files that have Dreambooth baked in? Where can I find the ones that have Dreambooth as part of them?

  • @jgodvliet
    @jgodvliet 1 year ago +3

    Finetuning with SD can be quite overwhelming. Thank you for your tutorial about LORA :)

  • @salvador9431
    @salvador9431 1 year ago +1

    Thanks, I'll try it tomorrow. Everyone have a beautiful Christmas!

  • @devnull_
    @devnull_ 1 year ago

    Thanks for the rundown!

  • @mygamecomputer1691
    @mygamecomputer1691 1 year ago +2

    Your explanation was the easiest to follow and resulted in immediate success. Thank you I have subbed :-)

    • @lewingtonn
      @lewingtonn  1 year ago

      ChatGPT PROMPT: someone just made a nice comment on my youtube video """
      Your explanation was the easiest to follow and resulted in immediate success. Thank you I have subbed :-)"""
      write a nice reply thanking them
      REPLY: Thank you so much for the kind words! I'm glad that my explanation was able to help you and that you were able to achieve immediate success. It's always a pleasure to know that my content is able to make a positive impact on others. I really appreciate your support and subscription, and I look forward to continuing to share helpful content with you in the future. Thank you again for your kind words and for taking the time to leave a comment.

  • @VanadiumBromide
    @VanadiumBromide 1 year ago +3

    I trained a model for the first time using this vid. Thanks!

  • @LilShepherdBoy
    @LilShepherdBoy 1 year ago +3

    Was waiting for this one.

  • @cryptidsNstuff
    @cryptidsNstuff 1 year ago +5

    Thanks for making these. Also, I think tintin is tired of playing around, we need to be alert.

  • @olvaddeepfake
    @olvaddeepfake 1 year ago +4

    Can't get this to work. I have an RTX 3070 8GB but I keep getting "Exception training model: No executable batch size found, reached zero."

  • @friendofai
    @friendofai 1 year ago

    That's awesome. I have a few rendered-out 3D models that I no longer have the software or time to use properly. I am thinking about exporting them all as .pngs, so the character will be shown at every angle, but only one pose. Do you think that would make a good, accurate model to revamp the character?

  • @zb4984
    @zb4984 1 year ago

    THANK YOU! I finally got it!

  • @ZeroCool22
    @ZeroCool22 1 year ago +5

    There is a 2.1 512 version model too, so you can Train with that one instead of the 768.

  • @ohthehuemanatee
    @ohthehuemanatee 1 year ago

    My training keeps coming out wrong. I'm trying to do a person but when I test the model it gives me a baby, bird or mattress. Any suggestions on what I need to do?

  • @koto9x
    @koto9x 1 year ago

    ur videos are so good

  • @koto9x
    @koto9x 1 year ago

    what software do you use to record your videos? i like the rounded corner and layout. is it OBS?

  • @MA-ck4wu
    @MA-ck4wu 1 year ago

    Would adding captions in text files corresponding to each training image make the model even better? e.g. ''tintin/a man/custom identifier walking outside in a blue jacket''
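    It usually does; per-image captions are the common convention read by kohya-ss-style trainers and by the extension's filewords mechanism: a sidecar .txt next to each image. A hypothetical sketch; the folder, filenames, and the "zkz" identifier token are all made up for illustration:

```python
from pathlib import Path

# Sidecar captions: one "img.txt" next to each "img.png". Folder name,
# filenames, and the "zkz" identifier token are invented for this example.
data_dir = Path("train_images")
data_dir.mkdir(exist_ok=True)
captions = {
    "img001.png": "photo of zkz man walking outside in a blue jacket",
    "img002.png": "photo of zkz man smiling, indoor lighting",
}
for img_name, caption in captions.items():
    (data_dir / img_name).with_suffix(".txt").write_text(caption)
```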

  • @adlerdec2425
    @adlerdec2425 1 year ago

    thank you g
    working good

  • @ivizlab622
    @ivizlab622 2 months ago

    on training, I got this error: Exception training model: 'type object 'LoraLoaderMixin' has no attribute '_modify_text_encoder''.
    any thoughts?

  • @mattfx
    @mattfx 1 year ago

    Bravo, great video!!! What kind of graphic card do you use ? With my rtx3080 it doesn't work above 256X256 resolution.

  • @generalawareness101
    @generalawareness101 1 year ago

    I am so lost: there are a lot of new features now, while a lot of what's covered here has been removed, so this helpful video is now obsolete.

  • @jonathaningram8157
    @jonathaningram8157 1 year ago

    Is Dreambooth broken or something? I'm able to do training but my model makes no difference at all for my prompt. And the previews are nothing like the images I gave it.

  • @treeseeker5884
    @treeseeker5884 11 months ago

    Bro please help, as soon as I start training, it says "import of xformers halted; None in sys.modules"

  • @MarcinGornyTricking
    @MarcinGornyTricking 1 year ago

    Fuck yess. Finally a tutorial that worked for me

  • @YenTube
    @YenTube 1 year ago

    Ey! I followed your tutorial and I had the same problem. It only trains 500 steps. Do you know what the problem was? Thanks!

  • @TheCopernicus1
    @TheCopernicus1 1 year ago +2

    Awesome work!!

    • @lewingtonn
      @lewingtonn  1 year ago

      eeeey bass! merry Christmas ya drongo!

    • @TheCopernicus1
      @TheCopernicus1 1 year ago

      @@lewingtonn LOL mate, a big Merry Christmas to you too, bro! Hoping you have a good one with the fam :) as always, love your work!

    • @TheCopernicus1
      @TheCopernicus1 1 year ago +1

      @@lewingtonn Oh actually a question: I am on a Mac, not sure if you can run xformers on it. However, if I am trying to train a STYLE, what would the filewords and prompts actually be?

    • @lewingtonn
      @lewingtonn  1 year ago +1

      @@TheCopernicus1 If you're training a style you'd still use normal filewords (e.g. the style is monet, you'd still put "an impressionistic image of a bridge" or something like that)... and you don't NEED xformers (except for some 2.0 models), it just might take more mem

    • @TheCopernicus1
      @TheCopernicus1 1 year ago +1

      @@lewingtonn Thanks my friend!

  • @DJVARAO
    @DJVARAO 1 year ago +1

    Awesome😁

  • @simonbronson
    @simonbronson 1 year ago

    Thanks!

  • @alexm498
    @alexm498 1 year ago

    Thank you for this tutorial, but I can't make my own model :\ I get a long, complex error that ends with "tuple index out of range" when I try to create a new model :(

  • @LoFiLunatics
    @LoFiLunatics 1 year ago +3

    my version of Automatic1111 says "settings" instead of parameters and seems to be missing some options you have. Also, when I installed Dreambooth it had an error saying I did not have permission to install LoRA, but I can see LoRA after rebooting Automatic1111. I matched the settings as best I could and got "Returning result: Exception training model: 'No executable batch size found, reached zero.'", and I am finding nothing online that can help me. If anyone has any ideas please let me know.

    • @nikobellic1993
      @nikobellic1993 1 year ago

      I'm getting the same error, if you got it running please let me know

  • @SkyGeekWave
    @SkyGeekWave 1 year ago

    Man, I don't know how to get this xformers thing installed on my PC (Windows 10). When my automatic1111 boots, it shows that it has no xformers; I have only either Default or Flash_attention. I tried to install it with Anaconda and get an error at the end. I tried CUDA 11.7, 11.8, 12; I'm on Python 3.10 (do I need to get 3.9?) with all the damn Visual Studios and stuff, still no use. Why is it not working at all? Should I maybe get another automatic1111?

  • @swfsql
    @swfsql 1 year ago +1

    Today I used kohya-ss and could train a 512x512 LoRA model (32 dim, 32 alpha) on a 6GB VRAM card.
    But I could not manage to do so in auto1111, so I think the latter is more bloated, and that makes a difference when every MB of VRAM matters!
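    For a sense of why a rank-32 LoRA fits where full fine-tuning doesn't: the trainable update for a d×d projection is only r·2d parameters. Quick back-of-the-envelope arithmetic, using 768 as an illustrative SD 1.x attention width:

```python
# Rough parameter-count comparison for one 768x768 projection layer.
d, r = 768, 32
full = d * d           # 589,824 weights updated by full fine-tuning
lora = r * (d + d)     # 49,152 weights in the rank-32 A and B matrices
print(lora / full)     # roughly 0.083, i.e. ~8% of the full update
```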

  • @calebweintraub1
    @calebweintraub1 11 months ago

    I would love to see a video that explains how to combine a style checkpoint with a character checkpoint so that one can generate an image of a person/character in the style. I have had success creating checkpoints for styles and characters separately, but the checkpoint merger has never worked for me to create a model. Ditto with Loras. I am hoping there is an alternate solution.

  • @HanSolocambo
    @HanSolocambo 1 year ago +2

    "Clone a model for training"... I was basically looking for "how to train a model from scratch". But I guess that's impossible, right? I have to start with the SD 1.5 model, or any model downloaded around, in order to train on my own images. Right?

    • @lewingtonn
      @lewingtonn  1 year ago

      yeah lol, you'd need like $50,000 to train one from scratch... come back in 12 months tho...

    •  1 year ago

      Hard, not impossible. Keep in mind that AI has been around for some years, and most of that time has been spent downloading images from sites and then training the AI on them, done by universities and companies; to do the same alone would take a very long time if you want the same result. So it is best to just train on things you are interested in using in art; maybe start with your own face, then move on to what you like or want, like art and cartoon styles, a location, and so on.
      But do not give up; the technology rushes forward, and in the future there may be tools to help single users build more complex models. AI creation needs folk who are willing to put in the time to create models.

  • @RexelBartolome
    @RexelBartolome 1 year ago +13

    when lora got released i was amazed at how much people can optimize these things. Training seems to be the next big thing compared to prompt engineering.
    hopefully i can run it sooner or later when it becomes available using a 2060 (6gb vram)

    • @lewingtonn
      @lewingtonn  1 year ago

      you might be able to run it @ 512x512!

    • @RexelBartolome
      @RexelBartolome 1 year ago +2

      @@lewingtonn hmmm, wouldn't that cause some distortion? or is there a 2.X model that's designed for 512? Either way, I can always do it on RunPod as I've done with my other models :)

    • @oldman5564
      @oldman5564 1 year ago +3

      The "LORA for Stable Diffusion" video on YouTube by Nerdy Rodent says it works on 6GB cards; it does show him unchecking fp16 and using "none" (he mentions that's sometimes needed for older cards)

    • @generalawareness101
      @generalawareness101 1 year ago

      @@oldman5564 My 1060 does fp16, and many have been using it with it checked at 768x768 with much better results than him. On pre-10x0 cards you don't have a choice and must pick "none"; I believe cards prior to Pascal couldn't do fp16 (I can even do fp32 with mine, but oh my is it ever slow, and the VRAM issues).

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 1 year ago +1

      @@RexelBartolome yes there is a 2.0 512 model.

  • @RanDappProductions
    @RanDappProductions 1 year ago +1

    finally cracked it... for the best settings to insert yourself via LORA Dreambooth, use 10x the number of pics you're feeding it as class images, LORA learning rate 0.0009, LORA text encoder learning rate 0.0002, LORA ticked, 8bit Adam ticked, fp16, xformers, don't cache latents or pad tokens. Every guide I've read or seen so far has the text encoder learning rate set way too low, at like 0.000005, and it does nothing but waste VRAM that low. I cranked it up and it fixed my problem.

    • @RanDappProductions
      @RanDappProductions 1 year ago

      512x512 max for 500-600 steps per image. I've tried higher resolutions, but unless you have more than 8GB, the highest that will train the text encoder is 704x704, and it doesn't work: it says it trains, but it doesn't really. 768x768 with no text encoder, or 512x512 with it; for a person that doesn't already exist in the model, it's important to include it. Gonna try 640x640 as well, but I don't hold high hopes for it after the 704x704 results. 512x512 works pretty well with 1.5 versions, and it worked great with the 512 base version of 2.1 when I tried it, so who knows 🤷🏻‍♂️

  • @BrandosLounge
    @BrandosLounge 1 year ago

    When I click on Training model/person, first it gives me a warning and then raises my steps to 150; if I click it again, it changes Image generation > Class images per instance image to like 16. In every video I watch this is in the 100s, but if I set it to like 100, then it has to go through over 2000 images. Also my models are coming out not looking like me.

  • @mchamster7
    @mchamster7 1 year ago

    Trying this at the moment, with an 8GB 2080, and the newer version of Dreambooth - does not work.

  • @DJVARAO
    @DJVARAO 1 year ago +4

    Help. I get this message:
    Returning result: Exception training model: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 8.00 GiB total capacity; 7.26 GiB already allocated; 0 bytes free; 7.30 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
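    For the fragmentation case the error message itself points at, PyTorch's documented `PYTORCH_CUDA_ALLOC_CONF` variable can be set before launching the webui. A sketch; 128 MB is a common starting value to experiment with, not a guaranteed fix:

```shell
# Cap the caching allocator's split size to reduce fragmentation,
# per the "max_split_size_mb" hint in the OOM message above.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# ...then launch the webui as usual from this same shell, e.g. ./webui.sh
```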

    • @JohnEliot1978
      @JohnEliot1978 1 year ago +2

      yeah i get the same cuda out of memory error and also have 8gb vram, very confusing

    • @lds_drive
      @lds_drive 1 year ago +1

      @@JohnEliot1978 10GB and getting the same error. :(

    • @bobjohnson9354
      @bobjohnson9354 1 year ago

      did you fix the problem? I have 12GB and get the same

  • @DarkFactory
    @DarkFactory 1 year ago +1

    I get "join() argument must be str, bytes, or os.PathLike object, not 'list'" error after webui finished generating class images.
    using 3060Ti with xformers

    • @dysnomia1738
      @dysnomia1738 1 year ago

      im getting the same error with the same GPU

  • @JohnEliot1978
    @JohnEliot1978 1 year ago +6

    any chance you could update your guide now that it doesn't work with the new version using 8gb vram? pretty please :)

    • @odawgthat3896
      @odawgthat3896 1 year ago +1

      Isn’t it so annoying! I just tried to run it and only just found out that it doesn’t even work for 8GB because of the update 🙉

  • @yutupedia7351
    @yutupedia7351 1 year ago

    sorry, so what would the source checkpoint be?

  • @RemusRichard
    @RemusRichard 1 year ago +3

    Still not able to run Dreambooth + LORA on my RTX 3080 without getting OOM exceptions :( No clue what is wrong, even turning everything off and keeping just 1 monitor.

    • @lewingtonn
      @lewingtonn  1 year ago

      that's super strange, do you know how much memory IS being used (e.g. the performance tab for the GPU in task manager)?

    • @RemusRichard
      @RemusRichard 1 year ago

      @@lewingtonn Using nvitop to check the memory usage before launching SD, I manage to get to about 300MB VRAM by killing everything and unplugging one of the monitors.
      After starting to train, it skyrockets, and the exception says it had reserved about 9.2GB and had 0 free.

    • @lewingtonn
      @lewingtonn  1 year ago

      @@RemusRichard yeah, ok then it's a very straightforward too-much-VRAM issue. For me it only took 7 GB, so there is some kind of difference in our configs. I know that the mixed precision setting is very important, as is using 8bit Adam... possibly you might need to install xformers too (ruclips.net/video/ZVqalCax6MA/видео.html&ab_channel=koiboi)

    • @JohnEliot1978
      @JohnEliot1978 1 year ago

      @@lewingtonn installing xformers didn't solve the oom errors for me either :( also have 8gb vram

  • @RanDappProductions
    @RanDappProductions 1 year ago +1

    i'd also like to add that if you're using this method to model a particular person (I'm making one of myself right now, for example), you need to generate class images, but the wizard options under the LORA box make this easy by filling in an appropriate number of classifiers for the task when you click one, be it person or object. Without generating them, it wasn't really able to add my face to other characters like a Colab-generated Dreambooth otherwise would, but I have a feeling this is what I was missing before... I'll update in the comments if my theory holds

    • @RanDappProductions
      @RanDappProductions 1 year ago

      I was wrong about this. It's the training of the text encoder that allows for better prompting but at the cost of more vram overhead. My RTX 3060 tie fighter oomed on a 768x768 set but it's chugging thru a 512x512 with text encoding enabled

    • @RanDappProductions
      @RanDappProductions 1 year ago +2

      A ghetto workaround for this would be to train a 512x512 model with the text encoder ticked on then training the same model again for a duplicate number of steps with 768x768 and no encoder 🤷🏻‍♂️

    • @RanDappProductions
      @RanDappProductions 1 year ago +7

      After some tinkering: if you're training a person, use 10x concept images as the number of source images, 300 steps per picture, lora learning rate 0.0009, lora text encoder learning rate 0.00005, a polynomial or cosine learning rate scheduler instead of constant (box still unticked), text encoder checked, 704 resolution, no other programs eating VRAM, and she'll go...
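    The reply's recipe, restated as a config-style dict for readability. Key names here are invented for illustration; map them onto the extension's actual UI fields yourself:

```python
# The commenter's suggested person-training recipe. Key names are made up
# for readability, not the Dreambooth extension's real field names.
person_recipe = {
    "class_images_per_instance": 10,   # 10x the number of source photos
    "steps_per_image": 300,
    "lora_unet_lr": 9e-4,
    "lora_text_encoder_lr": 5e-5,
    "lr_scheduler": "cosine",          # or "polynomial", instead of constant
    "train_text_encoder": True,
    "resolution": 704,
}
# Sanity check: the UNet rate is ~18x the text encoder rate in this recipe.
assert abs(person_recipe["lora_unet_lr"] / person_recipe["lora_text_encoder_lr"] - 18) < 1e-6
```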

  • @swannschilling474
    @swannschilling474 1 year ago +1

    Nice one... still not sure about the whole mess of merging the LORA back into the checkpoint; it would be awesome if this worked kind of the same way embeddings do!
    And Merry Christmas to you toooooo!!!!!! 🎄✨🎁

    • @lewingtonn
      @lewingtonn  1 year ago +2

      haha thanks, really though, it's very easy to merge back, just a click and then like 20 seconds of processing

    • @swannschilling474
      @swannschilling474 1 year ago +2

      @@lewingtonn true, but the thing I really like about embeddings is that you can change them on the fly...without having to reload the checkpoint.
      But of course they are also less powerful than LORA or Dreambooth...

    • @lewingtonn
      @lewingtonn  1 year ago

      @@swannschilling474 huh, I wasn't aware that frequent swapping was a part of anyone's workflow, interesting!

    • @swannschilling474
      @swannschilling474 1 year ago

      @@lewingtonn there are a lot of new embeddings popping up right now, and you can also use multiple embeddings or merge multiple embeddings into one... so when prompting, it is very easy to change an embedding vs changing the checkpoint!

    • @generalawareness101
      @generalawareness101 1 year ago +3

      That is how it was supposed to work but Automatic never did it.

  • @niknitro8751
    @niknitro8751 1 year ago +3

    can someone please help me? i try to run the dreambooth extension locally on an rtx 3070 laptop 8gb and i always get the same error. it says cuda out of memory, but from what I've gathered it allocates 0.0GB although there should be enough to allocate. I'm not a programmer and don't understand anything I'm doing, but a musician trying to use SD for music videos. So please talk to me like a dummy and specify exactly what to change in what file or setting.
    thanks a lot in advance ❤️

    • @lewingtonn
      @lewingtonn  1 year ago

      what are you doing exactly? just normal dreambooth training? I would try checking (1) are your images too large and (2) what does the task manager say about your GPU RAM usage while training vs at rest?

    • @niknitro8751
      @niknitro8751 1 year ago +1

      @@lewingtonn I made a folder with sample images for the instance that are 512x512 and am trying to create a ckpt to make stable diffusion able to draw members of my band. I also tried both letting dreambooth automatically make the sample images to compare to, and doing it beforehand and designating a folder for dreambooth to load from. i pretty much tried every combination of source checkpoint, lora/no lora, adam, xformers etc. The problem starts at the very beginning because it always allocates 0.0 GB of VRAM and finishes training without doing a single training step. Also I'm on my third clean install of both SD and the dreambooth extension.

    • @niknitro8751
      @niknitro8751 1 year ago

      @@lewingtonn i will check my taskmanager but from what I've understood from the Errors I never even got to training.

    • @lewingtonn
      @lewingtonn  1 year ago

      @@niknitro8751 hmmmmmm, perhaps you have too many images (it might be loading them all into memory at once, but that's doubtful). One good debug step is to make sure that you can at least do normal image generation, and see how much RAM that takes up

    • @niknitro8751
      @niknitro8751 1 year ago

      @@lewingtonn Normal generation works fine; hypernetworks work too. How many images do you recommend? Should I just try with one single image to test it out?

  • @LaurelledMSK
    @LaurelledMSK 1 year ago

    Error:
    Exception training model: 'No executable batch size found, reached zero.'.

  • @Nick-vd7cg
    @Nick-vd7cg 1 year ago

    Hey, does anybody know why I can't create a LoRA that's only a few hundred MB at most? Whenever I create a LoRA from a base model, my output results in a multi-GB file size. Please help!

  • @dylanhaze1455
    @dylanhaze1455 1 year ago +1

    Still having an OOM when saving.. I checked my files and I have the two fixes that have been merged.. 🤔

    • @dylanhaze1455
      @dylanhaze1455 1 year ago

      Tried to allocate 16.00 MiB 🥲(GPU 0; 6.00 GiB total capacity; 5.19 GiB already allocated; 0 bytes free; 5.26 GiB reserved in total by PyTorch)
      Any clue? Which parameters should I reduce to cut VRAM usage when saving? It was trying to allocate 20 MiB; I checked "don't cache latents" and it went to 16... almost there

  • @JoseGudino-lc3hr
    @JoseGudino-lc3hr 1 year ago +3

    hello friend, i have updated the dreambooth version and the interface has changed; now i can't train any more. i have a quadro rtx 4000 gpu with 8gb vram, and before the update i could train.

    • @lossk8350
      @lossk8350 1 year ago +3

      I have the same question. I hope someone can upload the old version,
      or tell me how to set up the new version to train normally with 8GB VRAM.

  • @RanDappProductions
    @RanDappProductions 1 year ago +4

    you got the error at 500 steps because, for whatever reason, even if you leave it unchecked, it attempts to save a checkpoint and generate an image every 500 steps unless you set the Save Checkpoint and Save Preview frequencies to 0. Hope this helps with future trainings!

    • @lewingtonn
      @lewingtonn  1 year ago +2

      thaaaaaaaank you!

    • @RanDappProductions
      @RanDappProductions 1 year ago

      I remembered this step from the Dreambooth training video by Olivio Sarikas, and it fixes the problem with LORA as well. Up to 1050ish steps so far on my first attempt after remembering it, when mine had OOMed at 500

    • @RanDappProductions
      @RanDappProductions 1 year ago +1

      Also check out the Dreamlike Diffusion model sometime if you haven't already. I'm really loving it so far, and it produces images quite similar to Midjourney. It's a 1.5 base, but I honestly haven't been all that impressed yet by 2/2.1 because of the added requirement to run at full precision if you want it to work properly, which doesn't go over so well on 8-gig cards, as I've found out. Either way... Dreamlike Diffusion 👌

    • @RanDappProductions
      @RanDappProductions 1 year ago

      Merry Christmas from my side of the world to yours!

    • @lewingtonn
      @lewingtonn  1 year ago +2

      @@RanDappProductions sounds mint, I'll check it out

  • @itsmenord1993
    @itsmenord1993 1 year ago

    What is the number of class images per instance for a man?

  • @tomm5765
    @tomm5765 1 year ago

    Australian accent, are you wearing that hat in the middle of summer 🤨😄

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 1 year ago +1

    Koiboi, waiting for Krita extension v2.0 with impatience.

    • @lewingtonn
      @lewingtonn  1 year ago +1

      There are so many good webui's these days that I don't even know if krita is worth it tbh :(

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago +1

      @@lewingtonn But with Krita, one can actually draw. Mixing traditional art and AI-generated images may make the life of artists easier. I think it's a workflow that can speed up results exponentially. For instance, I can draw the characters and then generate better coloring and backgrounds. I don't think outpainting webuis can do that.

  • @keisaboru1155
    @keisaboru1155 1 year ago +1

    Still the only dude that makes stuff work. But he only gets 4000 views.

    • @lewingtonn
      @lewingtonn  1 year ago +2

      quality audience > quantity audience mate

  • @andrewkarn9255
    @andrewkarn9255 1 year ago

    ya i just get > Exception training model: No executable batch size found, reached zero.

  • @AgustinCaniglia1992
    @AgustinCaniglia1992 1 year ago +1

    I am doing this but it says it's doing 56700 steps. It will be a while, it seems. I don't know what I did wrong because I did exactly as you did.

    • @lewingtonn
      @lewingtonn  1 year ago

      you must have many more images than I did, I only had 12

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 1 year ago +1

      @@lewingtonn i have 18 images only. But I used regularization images and it was multiplying by that. I don't know why, though.

    • @lewingtonn
      @lewingtonn  1 year ago

      @@AgustinCaniglia1992 yeah, short of reading the code, or looking at the automatic1111 github issues page, you might just have to wait for an update or manually stop it

    • @AgustinCaniglia1992
      @AgustinCaniglia1992 1 year ago

      @@lewingtonn I solved it now. I did a working model although the resemblance isn't good. Trying again with new settings.

    • @lewingtonn
      @lewingtonn  1 year ago

      @@AgustinCaniglia1992 what was the issue in the end?

  • @generalawareness101
    @generalawareness101 1 year ago

    It finished and never saved the LORA files. Edit: it didn't, because no matter what I do it wants 20-30MB more before it saves. It does train it, though.

  • @HAPKOMAHFACE
    @HAPKOMAHFACE 1 year ago

    Hello, soviet comrade! Where is your balalaika?

    • @lewingtonn
      @lewingtonn  1 year ago

      I have a bouzouki, is that close enough?

  • @NorikoTakedaSL
    @NorikoTakedaSL 1 year ago +2

    12 images x 150 steps = 1800 steps, if you trained 501 steps that means LORA stopped working after 3.3 images
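    The arithmetic in this comment, spelled out as a sanity check (not the extension's actual step accounting):

```python
# Planned vs. completed training steps from the comment above.
images, steps_per_image = 12, 150
planned = images * steps_per_image                 # 1800 total steps
completed = 501
print(round(completed / steps_per_image, 1))       # prints 3.3 (images' worth)
```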

    • @zeuszl1566
      @zeuszl1566 1 year ago

      How many steps per image?

  • @Akumetsu-wy3nd
    @Akumetsu-wy3nd 1 year ago +1

    The problem is you forgot the negative prompt

    • @lewingtonn
      @lewingtonn  1 year ago

      yeah, I'm still working out how to use those properly... Is there a good guide somewhere?

  • @tioedu_
    @tioedu_ 1 year ago

    TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union

  • @androidgamerxc
    @androidgamerxc 1 year ago

    pip install -U xformers
    this is what it's telling me to do

  • @kallamamran
    @kallamamran 1 year ago

    Dreambooth extension has updated, so...... :P

  • @aDAMoscar
    @aDAMoscar 1 year ago +1

    can u just use the pt file?

    • @lewingtonn
      @lewingtonn  1 year ago

      no sir, you cannot, but the conversion is very easy

  • @quartz9369
    @quartz9369 1 year ago

    can you make lora in google colab?

  • @kallamamran
    @kallamamran 1 year ago

    As expected: "Exception training model: module 'tensorflow' has no attribute 'io'"

  • @springheeledjackofthegurdi2117
    @springheeledjackofthegurdi2117 1 year ago +1

    has anyone else been unable to get anything but green and black results out of 2.1 on a 6gb card?

    • @oldman5564
      @oldman5564 1 year ago

      I think the black screen is from when it runs into an NSFW filter; not sure what the green screen is from.

    • @aDAMoscar
      @aDAMoscar 1 year ago

      might be a wrong parameter or a bad ckpt

    • @springheeledjackofthegurdi2117
      @springheeledjackofthegurdi2117 1 year ago +1

      @@aDAMoscar just using the standard 2.1 512 ckpts

    • @springheeledjackofthegurdi2117
      @springheeledjackofthegurdi2117 1 year ago +1

      @@oldman5564 don't have any nsfw filters, running locally

    • @aDAMoscar
      @aDAMoscar 1 year ago +2

      @@springheeledjackofthegurdi2117 I'm using the 768 and had a lot of black screens; usually restarting the whole thing helps. I'm currently using the arguments --no-half and --xformers, and it's working after the last git pull, but I never know when it might go black again

  • @arthurjeremypearson
    @arthurjeremypearson 1 year ago

    I'm an artist and I have several thousand images. They're finished, but I also have thousands more sketches that need to be finished. I need to train an AI to finish my sketches into completed pictures

  • @drkgld
    @drkgld 1 year ago

    Another easy-to-follow tutorial, but could you include your computer specs please? Your 'small potato' may still outweigh my serving of French fries.

  • @solarisone1082
    @solarisone1082 1 year ago

    It's depressing how quickly graphics cards with 8GB VRAM are becoming obsolete. It's not just in the AI space, either. 8GB of VRAM is also restrictive when working with 3D programs like Daz Studio.

    • @jonathaningram8157
      @jonathaningram8157 1 year ago

      The 10GB RTX 3080 was such a rip-off. It's so low for a "4K" graphics card. I barely manage to make stable diffusion work with that.

  • @devnull_
    @devnull_ 1 year ago +2

    It is annoying that all these training methods (TI, DB, hypernetworks) have a ton of magic numbers and tokens and whatnot. It would be nice if there were at least some tooltips, and better yet docs, for these; right now it seems like all the articles and videos have their own idea of what to put in these fields...

    • @lewingtonn
      @lewingtonn  1 year ago

      I usually find that the automatic1111 wiki is a pretty darn good place to start but I know what you mean, sometimes you have to trawl reddit for some good hyperparameters

  • @baagrooves
    @baagrooves 1 year ago

    2:48

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 1 year ago +5

    I must say, I am very disappointed by LORA fine-tuning. I was waiting for its implementation for weeks, and the results do not live up to the hype. The original Dreambooth is perfect in my opinion, and I just wish it could work on my RTX 3070 Ti with its 8GB of VRAM.

    • @oldman5564
      @oldman5564 1 year ago +1

      You can just alter the settings. It seems to run out of memory around 3000 steps with unaltered settings at 512x512 (3070 non-Ti); 2000 steps ran to completion after a few settings changes.

    • @oldman5564
      @oldman5564 1 year ago +1

      The "LORA for Stable Diffusion" video on YouTube by Nerdy Rodent says it works on 6GB cards, so your 70 Ti should run just fine. His vid goes into more detail; might help ya set it up

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago +1

      @@oldman5564 I watched it first and tried it, and I had no issues with memory. It's just that the results are... meh! I'll wait for an even faster version of Dreambooth.

    • @mikealbert728
      @mikealbert728 1 year ago +1

      You can already use Dreambooth with no GPU at all

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago

      @@mikealbert728 Have you tried it? It takes about 19 hours on my PC. The problem with CPU fine-tuning with DB is that it runs on a single core! It takes forever.

  • @ZeroCool22
    @ZeroCool22 1 year ago +1

    Activate Windows please.

  • @adriangpuiu
    @adriangpuiu 1 year ago +4

    lora is pure sheet, my trained person models don't have anything in common

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 1 year ago +9

    Please activate your windows😆🤣

    • @lewingtonn
      @lewingtonn  1 year ago +5

      hmm... maybe I'll start a GoFundMe...

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago

      @@lewingtonn I think you should. Keep up the good work and I am sure you will be rewarded!

    • @2PeteShakur
      @2PeteShakur 1 year ago

      @@lewingtonn lol you get win keys for as little as a few pints these days on ebay! ;)

    • @2PeteShakur
      @2PeteShakur 1 year ago

      @ramanauskiene edita hehe, if only it was as easy as that lol

  • @fantoons_comics_creator
    @fantoons_comics_creator 1 year ago

    trying to understand why my trained lora shows up with a lora.safetensors file extension.

  • @VladislavPunk
    @VladislavPunk 1 year ago +1

    Is that an ushanka on him? At least it's not a ruZZian one? Glory to Ukraine, russkies