UPDATED: SDXL Local LoRA Training Guide: Unlimited AI Images of Yourself

  • Published: 28 Sep 2024
  • Science

Comments • 141

  • @weeliano
    @weeliano 3 months ago +2

    Thank you for producing this video; it has helped me tremendously in figuring out the training settings. I realized that the LoRA I trained without any regularization images looks better than those trained with them. I've been having great fun rendering many iterations of my alter egos.

  • @metanulski
    @metanulski 4 months ago +23

    What I would do differently is use WD14 captioning, since it captures more detail in the picture. Also, my settings need only one hour of training on my 4060; I have to check the difference. Here is a nice trick: once the training is done, it saves a settings file in the results folder. So if you need to train another model, you just load the settings file again, change the pictures, captions, and model name, and then hit start. :-)

    • @metanulski
      @metanulski 4 months ago +5

      Also, I think it is a good idea to let Kohya generate a picture each epoch, so it is easy to pick the correct epoch afterwards. Something like "Full body picture of a man standing in a forest": low epochs will not look like you, and at some point the forest might disappear because of overfitting. Use the highest epoch where the forest is still in the picture.

    • @Heldn100
      @Heldn100 1 month ago

      How in one hour?
      I have a 4070 and it took 9 hours for me!

    • @metanulski
      @metanulski 1 month ago +1

      @@Heldn100 20 pictures max, 768 max resolution and 2000 to 3000 total steps max :-)

    • @Heldn100
      @Heldn100 1 month ago

      @@metanulski
      Mine is 13 pictures at 1024x1024, but I don't touch max steps or anything; I just do what he does.
      I will try your settings too, thanks.

    • @metanulski
      @metanulski 1 month ago +2

      @@Heldn100 21:44:24-383786 INFO Valid image folder names found in: D:\SDXL Lora Training\models\img
      21:44:24-384788 INFO Folder 40_Test woman: 21 images found
      21:44:24-385788 INFO Folder 40_Test woman: 840 steps
      21:44:24-385788 INFO Total steps: 840
      21:44:24-386789 INFO Train batch size: 1
      21:44:24-387791 INFO Gradient accumulation steps: 1
      21:44:24-388791 INFO Epoch: 4
      21:44:24-389792 INFO Regulatization factor: 1
      21:44:24-390792 INFO max_train_steps (840 / 1 / 1 * 4 * 1) = 3360
      21:44:24-391794 INFO stop_text_encoder_training = 0
      21:44:24-392795 INFO lr_warmup_steps = 336
      21:44:24-393795 INFO Saving training config to D:\SDXL Test\Test_20240811-214424.json
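
For reference, the step math Kohya prints in that log works out as follows. In Kohya's folder convention, the "40_" prefix on "40_Test woman" is the per-image repeat count, so 21 images x 40 repeats = 840 steps per epoch. A minimal Python sketch reproducing the log's numbers (treating the 336 warmup figure as a 10% warmup setting, an assumption):

    # Reproduce the totals from the log above.
    images, repeats = 21, 40      # "Folder 40_Test woman: 21 images found"
    batch_size, grad_accum = 1, 1
    epochs, reg_factor = 4, 1

    steps_per_epoch = images * repeats // batch_size // grad_accum  # 840
    max_train_steps = steps_per_epoch * epochs * reg_factor         # 3360
    lr_warmup_steps = round(max_train_steps * 0.10)                 # 336
    print(max_train_steps, lr_warmup_steps)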

  • @undertalebob3207
    @undertalebob3207 3 months ago +20

    For those of you wondering why he has the Network Rank (Dimension) so high (256): I am fairly certain it is because of the thousands of reference images of men he is using in his training. If you aren't using that many pictures and are just sticking to your 15-50 reference pictures, you are probably fine leaving it at 32-64, unless you're training something less human-like. This will also cut down your training time immensely! Also, yes: unless you want your training to stop early (around 3 epochs out of 10), make sure to change "Max train steps" to 0 (the default is 1600). Good video though, thank you!

    • @Heldn100
      @Heldn100 1 month ago

      Thanks for that, this was so useful.

  • @Art0691p
    @Art0691p 2 months ago +2

    Great video. Clear, no hype and to the point. Thanks.

  • @hleet
    @hleet 1 month ago +3

    Thank you, it works! But the final result is not really what I expected (I didn't use the extra man/woman dataset). Anyway, I was able to make a LoRA file with your tutorial, and that's the main point :)

  • @birbb0
    @birbb0 4 months ago +3

    This video was really good, but I was wondering why you had the Network Rank at 256 while the Network Alpha was at 1, which is a really small value compared to the network rank. I've seen people use 64/32 (a 2:1 ratio) or just the same number for both. I'd love to hear your explanation!

  • @Mranshumansinghr
    @Mranshumansinghr 4 months ago +1

    Exactly what I was looking for. Thank you.

  • @NickPanick
    @NickPanick 3 months ago +1

    I'm completely new to all of this. Will this work using sd3_medium as the pretrained model or should I stick with your template from your Patreon for SDXL base 1.0?

  • @lostinoc3528
    @lostinoc3528 1 month ago +1

    What resolution should the training images be? Should they all be the same resolution, or is a mix of square, landscape, and portrait fine, or even preferred? Also, when you choose to train against the base checkpoint, could you instead train against a specific one somehow? Or is it better to train on the base one?

    • @elizagarcia8799
      @elizagarcia8799 1 month ago

      Watch the video again; you can train on WHATEVER checkpoint you like.

  • @xnauwfallproductions
    @xnauwfallproductions 1 month ago +1

    You remind me of a very famous actor, a very versatile actor with a wide range of roles. I don't know who his manager/director is, but he always ends up tired in his films.

    • @allyourtechai
      @allyourtechai  1 month ago

      haha, he must have kids. I always end up tired too.

  • @andresmartin8895
    @andresmartin8895 1 month ago +1

    To avoid the long process of resizing every image, you can run "mogrify -resize 1024 *.jpg" from the terminal; it resizes all files ending in .jpg in the current folder.
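
If ImageMagick isn't installed, a rough Python equivalent using Pillow is sketched below. Note it caps the longest side at 1024 (mogrify's bare "1024" caps the width) and overwrites the originals, so work on a copy of the folder:

    from pathlib import Path
    from PIL import Image

    # Shrink every .jpg in the current folder so its longest side is 1024 px.
    for path in Path(".").glob("*.jpg"):
        im = Image.open(path)
        im.thumbnail((1024, 1024))  # preserves aspect ratio, never upscales
        im.save(path)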

  • @RayMark
    @RayMark 4 months ago +3

    Thanks for the tutorial! Your last one gave me my best results, so I was excited to try this. Question: I only got 3 tensor files after 10 hrs, and they're all quite big (over 1 GB). Not sure where I went wrong? I have epochs set to 10 like you said. Thanks!

    • @allyourtechai
      @allyourtechai  4 months ago +3

      What type of GPU do you have? The network rank setting at 256 takes a long time to train and produces large files, but the quality is also higher. You can lower the rank setting to train faster and get smaller files, but with lower quality.

    • @RayMark
      @RayMark 4 months ago +1

      @@allyourtechai RTX 3080 (10 GB)... everything seemed to work fine throughout the process, and the LoRAs worked pretty well even though I only got 3 of them to test. Not sure if I had some setting wrong that made it create only 3. Thanks for your help, really appreciate the guides!

    • @avaloki9577
      @avaloki9577 4 months ago +4

      @@RayMark I was having a similar problem. Look for the "Max train steps" option under Parameters: it is set to "1600" by default; change it to "0" so it does not limit the max number of training steps. That worked for me; now it does the whole training.
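
That default cap is also the likely reason several commenters only get 3 of their 10 epoch files: training halts as soon as max_train_steps is reached, so later epochs are never saved. A back-of-the-envelope sketch with hypothetical per-epoch numbers:

    # Hypothetical figures, to illustrate the mechanism only.
    steps_per_epoch = 530      # e.g. 53 images x 10 repeats at batch size 1
    max_train_steps = 1600     # Kohya's default cap
    saved_epochs = max_train_steps // steps_per_epoch
    print(saved_epochs)        # 3 -> only 3 of 10 epoch files appear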

  • @febercantes
    @febercantes 3 months ago +2

    And how about training styles?

  • @Heldn100
    @Heldn100 1 month ago

    Thanks for this. I tried it and got a great result, but 1.7 GB for one LoRA is too much; we need to know how to fix this.
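
File size tracks network rank: at the video's rank 256, an SDXL LoRA comes out around the 1.7 GB reported here, while the rank-32 runs mentioned elsewhere in these comments produce roughly 210 MB files. Training at a lower rank is the usual fix. Kohya's sd-scripts also includes a resize utility that can shrink an already-trained LoRA; the invocation below is from memory and the flags may differ between versions:

    python networks/resize_lora.py --model my_lora.safetensors --save_to my_lora_rank32.safetensors --new_rank 32 --save_precision fp16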

  • @timeisupchannel
    @timeisupchannel 1 month ago

    Hello, is there any way to continue training after stopping? Thank you!

  • @ollyevans636
    @ollyevans636 3 months ago +2

    I’m experiencing an issue with my model training process. When I clicked “train,” the terminal indicated that it was done in less than 30 seconds, but no safetensor files appeared. Do I need to leave it overnight, or did my machine not execute the process correctly because the terminal said it was complete?

    • @allyourtechai
      @allyourtechai  3 months ago +1

      If the terminal said complete without any errors then something went wrong. Hard to say without errors though, so I would go through the settings and folders again

    • @HiramosCM
      @HiramosCM 3 months ago

      Same here!

    • @Jammy1up
      @Jammy1up 3 months ago

      Same for me, did you ever find a fix? Followed this awesome tutorial to a T

    • @slann303
      @slann303 3 months ago

      You most probably ran out of VRAM. Try not using the regularization images; this should lower the VRAM usage.

    • @Jammy1up
      @Jammy1up 3 months ago

      @@slann303 Well, I had the same issue with 16 GB of VRAM, and I did not use regularization images. Pretty sure that's not it.

  • @jungbtc
    @jungbtc 4 months ago +1

    lol, I was just getting confused by your older tutorial!
    Thanks for the update.

    • @Fanaz10
      @Fanaz10 4 months ago

      Yeah, this seems like what civitai uses for training?

  • @artyfly
    @artyfly 1 month ago

    Sorry, I missed it: why do we have 10 trained files at the end? Where is that setting; when did we set it up? Can we get just one file at the end? Thanks :)

  • @insurancecasino5790
    @insurancecasino5790 4 months ago +1

    Wow. I'm thinking of getting an external GPU now. I can do comics with this.

  • @superhachi
    @superhachi 3 months ago

    Best tutorial so far!!

  • @El_Rey_Diamante
    @El_Rey_Diamante 2 months ago +1

    @allyourtechai Can you do this in ComfyUI instead of Kohya?

    • @allyourtechai
      @allyourtechai  2 months ago

      I should be able to do a guide on that :)

  • @satoshidarikotamasara910
    @satoshidarikotamasara910 3 months ago

    Can you make a tutorial on how to train a LoRA slider, such as a LoRA detailer, detail tweaker, etc.?

  • @Beauty.and.FashionPhotographer
    @Beauty.and.FashionPhotographer 4 months ago +1

    Tried it in Pinokio on a Mac, where this Kohya SS can be installed with one button. YET, as is the case with 99% of all AI apps, it does not work. Same settings. The terminal gave me some errors after the start button was pushed, and it would stop its processes just a few seconds later. So it's a dead-end street and yet another useless phantom AI exercise.

    • @allyourtechai
      @allyourtechai  4 months ago +1

      What errors were in the terminal?

    • @Beauty.and.FashionPhotographer
      @Beauty.and.FashionPhotographer 4 months ago

      @@allyourtechai No BLIP prompts in any of the generated TXT BLIP captions besides the name, neysalora (the LoRA name I chose), and no description for any of the 120 images in their respective text files... so BLIP never really did anything. This is how it starts to go wrong, and where I can detect it being wrong, being a total NEWBIE myself. Here is the terminal output: The above exception was the direct cause of the following exception:
      Traceback (most recent call last):
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/sd-scripts/finetune/make_captions.py", line 21, in
      import library.train_util as train_util
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/sd-scripts/library/train_util.py", line 46, in
      from diffusers import (
      File "", line 1075, in _handle_fromlist
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/venv/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 701, in __getattr__
      value = getattr(module, name)
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/venv/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 701, in __getattr__
      value = getattr(module, name)
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/venv/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 700, in __getattr__
      module = self._get_module(self._class_to_module[name])
      File "/Users/akos/pinokio/api/kohya_ss.pinokio.git/app/venv/lib/python3.10/site-packages/diffusers/utils/import_utils.py", line 712, in _get_module
      raise RuntimeError(
      RuntimeError: Failed to import diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion because of the following error (look up to see its traceback):
      Failed to import diffusers.loaders.ip_adapter because of the following error (look up to see its traceback):
      module 'torch' has no attribute 'compiler'
      15:32:53-239016 INFO ...captioning done

    • @Beauty.and.FashionPhotographer
      @Beauty.and.FashionPhotographer 3 months ago

      @@allyourtechai I did reply and pasted the few lines of terminal output here, but I'm guessing YouTube deleted my reply...? The first issue is that BLIP captioning does not return text files with descriptions of what's in the images, besides the LoRA name I decided on, in my case "neysalora", so there's only one word in these text files. The terminal does say there was an error loading a diffusers file. The people at Pinokio, which I used to install this, tried to help; they are super great over there, amazing support, yet it still does not work. I'm going to assume that without image-descriptive words in these text files, besides neysalora, it's never going to work, right?

  • @kenshisaan2207
    @kenshisaan2207 4 months ago +1

    At 2:55, can you train on other checkpoints like Juggernaut?

    • @allyourtechai
      @allyourtechai  4 months ago

      Yes, definitely

    • @kenshisaan2207
      @kenshisaan2207 4 months ago +1

      @@allyourtechai I'm sorry to bother you. First, thanks for the 1-click setup and the regularization images; I appreciate it and subscribed to your Patreon. At 10:10 you cut away from the reg file directory: is it the new folder it made at the end, or the original one with all the pictures?

  • @TyreII
    @TyreII 2 months ago

    Such a frustrating process for me. I got it all set up, but when I hit the training button it throws an error message. I have reinstalled everything like 12 times.
    torch.distributed.elastic.multiprocessing.redirects: [WARNING] NOTE: Redirects are currently not supported in Windows or MacOs.
    [W socket.cpp:663] [c10d] The client socket has failed to connect to

    • @allyourtechai
      @allyourtechai  2 months ago +1

      It is a massive pain in the ass to be honest. I’ve done a half dozen of these guides and it’s still painful to install and use every time. I’m building a model trainer into PixelDojo to simplify this whole training process.

    • @PretendBreadBoy
      @PretendBreadBoy 2 months ago

      Yeah it's a pain. It's frustrating when you install everything and just know it's not going to work lol.

  • @daviduartep
    @daviduartep 3 months ago

    Thanks for the amazing tutorial! Sadly, I couldn't manage to get it working.
    I am getting "A tensor with all NaNs was produced in Unet." during generation. Loss also appears as NaN during training. It seems related to the optimizer: things work when removing relative_step=False, but model quality becomes very poor.
    A similar training run with the Prodigy optimizer worked, though. I have a 4090.
    I recall I trained a LoRA with these extra optimizer params for Adafactor months ago when I had a 3080, and it worked.

    • @allyourtechai
      @allyourtechai  3 months ago

      Interesting, I’ll check it out as well

  • @uKnowMister
    @uKnowMister 3 months ago

    Tried everything, and when I start training I get the following error:
    'Parameters: scale_parameter=False relative_step=False warmup_init=False
    ' is not a valid settings string.
    What is wrong?

    • @yourhighnessla
      @yourhighnessla 3 months ago +1

      Delete the mark before the P.

    • @uKnowMister
      @uKnowMister 3 months ago

      @@yourhighnessla I will try again later, but I double-checked and there wasn't a mark before the P. Maybe an imaginary one 🤣
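
For anyone else hitting this "is not a valid settings string" error: the Optimizer extra arguments field expects nothing but space-separated key=value pairs. If you copy the line from the video description, drop the "Parameters:" label and any stray quote or leading character, leaving exactly:

    scale_parameter=False relative_step=False warmup_init=False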

  • @gabrieljuchem
    @gabrieljuchem 3 months ago

    Thank you so much for this. I have an RTX 4060 Ti 8GB; do you think it's possible to train an SDXL LoRA with only 8 GB of VRAM? If so, what settings would you recommend in Kohya? Thanks!

  • @moc_santos
    @moc_santos 2 months ago

    I tried EVERYTHING and I can't get the same results as you did. They must have updated it again, because I followed what you did and got only 2 final models instead of 10 like you. I'm getting sick of this Kohya. I have an RTX 3060 12GB.

    • @allyourtechai
      @allyourtechai  1 month ago

      They seem to break the workflow every couple of weeks. It's one of the reasons I built PixelDojo.ai: I wanted to be able to control the process and the quality.

  • @prismoZTN
    @prismoZTN 4 months ago

    I can't find my safetensors files :(

  • @jahinmahbub8237
    @jahinmahbub8237 3 months ago

    After clicking start training, I'm getting an error saying accelerate was not found.
    01:46:14-810348 WARNING Regularisation images are used... Will double the number of steps required...
    01:46:14-811350 INFO Regulatization factor: 2
    01:46:14-811350 INFO Total steps: 18300
    01:46:14-812351 INFO Train batch size: 10
    01:46:14-813352 INFO Gradient accumulation steps: 1
    01:46:14-813352 INFO Epoch: 4
    01:46:14-814353 INFO Max train steps: 1600
    01:46:14-815354 INFO stop_text_encoder_training = 0
    01:46:14-815354 INFO lr_warmup_steps = 160
    01:46:14-820872 ERROR accelerate not found

    • @allyourtechai
      @allyourtechai  3 months ago

      Accelerate is a Hugging Face library used for PyTorch training, and it must be missing from your system for some reason. You should be able to install it manually, though.

    • @jahinmahbub8237
      @jahinmahbub8237 3 months ago

      @@allyourtechai I pip installed it, and it still doesn't show up. How can I install and configure it manually?

  • @JaysterJayster
    @JaysterJayster 3 months ago

    I have SDForge rather than Automatic1111; would this work with that?

  • @yoyosfsf9021
    @yoyosfsf9021 2 months ago

    I have an RTX 4060 8GB. Can I do that?

    • @allyourtechai
      @allyourtechai  2 months ago

      12 GB of VRAM is about the minimum for an XL model. You can probably train a Stable Diffusion 1.5 model.

  • @thedevilgames8217
    @thedevilgames8217 4 months ago

    I click on Start Training, but nothing happens.

    • @allyourtechai
      @allyourtechai  4 months ago +2

      Check your command prompt window for errors.

  • @valentinotrinidad
    @valentinotrinidad 2 months ago +1

    I understand nothing in the parameters tab, but it works 🤣

  • @silentsubz
    @silentsubz 4 months ago

    Where are the epochs? Can't seem to find them.

    • @allyourtechai
      @allyourtechai  4 months ago

      They should be in the output folder you specified before starting the process.

  • @brunohof2972
    @brunohof2972 3 months ago

    My character with the LoRA is pretty bad; it's like what we got from AI two years ago. I guess I have to play with all the available settings before training.

    • @flow9463
      @flow9463 3 months ago

      No one cares

    • @allyourtechai
      @allyourtechai  3 months ago

      The prompting afterwards plays a major role as well.

  • @Fanaz10
    @Fanaz10 4 months ago

    Does anyone know how to make this work on colab?

  • @Strawberry_ZA
    @Strawberry_ZA 4 months ago +2

    Kohya is painful to get working. For whatever reason, the optimizer extra arguments you provided were causing errors and preventing Kohya from initializing training.

  • @Ishmaam
    @Ishmaam 4 months ago

    Thank you so much for the useful video. My GPU is an Nvidia 3060; can I set the Network Rank to 256?

    • @allyourtechai
      @allyourtechai  4 months ago

      How much VRAM does your card have? 256 requires at least 12 GB.

    • @Ishmaam
      @Ishmaam 4 months ago

      @@allyourtechai Thank you, it's a 12 GB card.

  • @vk28a12
    @vk28a12 4 months ago

    I'm getting "returned non-zero exit status 1" in the error log. Any insights?

    • @allyourtechai
      @allyourtechai  4 months ago

      Are you loading Stable Diffusion 1.5 instead of SDXL? github.com/kohya-ss/sd-scripts/issues/1041

    • @vk28a12
      @vk28a12 4 months ago

      @@allyourtechai I was pretty sure I set it to SDXL as in the video, but I'll redo the whole thing from scratch and try again. I'll keep an eye on the model! Thanks.
      EDIT:
      I retraced the steps and made sure it was SDXL base, and double-checked the path as well. This time I paid more attention to the log and saw: "NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:", some stuff about caching, and xformers.
      I tried toggling a bunch of things related to caching with no luck, until I switched xformers to sdpa in the advanced section. Now I'm getting further than before, and it appears to be working!

    • @magneticanimalism7419
      @magneticanimalism7419 4 months ago +1

      Have you tried deleting the "Optimizer extra arguments" that he pasted as "Parameters: scale_parameter=False relative_step=False warmup_init=False" in his description? This worked for me.

    • @vk28a12
      @vk28a12 4 months ago

      @@magneticanimalism7419 I have not tried that, but I will. Thanks!

    • @vk28a12
      @vk28a12 4 months ago

      @@magneticanimalism7419 I have tried that. In the end I had to go with OneTrainer, as Kohya just refuses to work. I used essentially the same settings in OneTrainer and got some decent results.
      @allyourtechai Have you considered diving into OneTrainer for tutorial purposes? It might be helpful for viewers like me who struggle to get Kohya working.

  • @quinn479
    @quinn479 2 months ago

    Should my GPU be going brrrr?

  • @jasonlisenbee3747
    @jasonlisenbee3747 3 months ago +7

    I noticed your Optimizer was set to Adafactor by default. Mine wasn't, so I changed it. You didn't mention the setting for LR Scheduler, but I see in the video yours is set to Constant; mine was set to Cosine. I changed it to match yours, but my LoRAs came out goofy and I somehow got 3 instead of 10. Could that have anything to do with it?

  • @OCGamingz
    @OCGamingz 4 months ago +13

    That's probably the best and most useful LoRA guide I've seen so far. Thank you very much, it helped me a lot!

    • @allyourtechai
      @allyourtechai  4 months ago +2

      Thank you!!

    • @maxdeniel
      @maxdeniel 2 months ago +1

      It certainly is the best one. I have seen some others, and they skip important steps while focusing on less important ones. This guy knows how to deliver a master class.

  • @maxdeniel
    @maxdeniel 2 months ago +7

    Bro, this tutorial is so straightforward, and I really appreciate that you took the time to do an updated version. The AI world is evolving so fast that tutorials made 6 months ago are outdated; the Kohya interface changed a little bit, and your tutorial walked me through the new version step by step... I just clicked Start Training and am waiting for it to finish so I can run it and check how the LoRA comes out.
    Thanks again!

    • @Heldn100
      @Heldn100 1 month ago

      I made a LoRA and it came out great, better than I wanted...
      but 1.7 GB for one LoRA is too much; we need to know how to fix this.

  • @mnedix
    @mnedix 3 months ago +2

    EDIT: for some reason it started working just fine; I have no idea what I did. I think it's OK, but I have to do more testing with the optimizers. So far the training is: 30 training pics / 20 repeats / 5 epochs / rank 32 / alpha 16 = 3-4 hrs.
    Thank you for the tutorial, I really hoped I could create LoRAs. I followed it to the letter and got a RuntimeError: NaN detected in latents. I'm on a brand-new 4070 and the resolution is 512x512, so I should have enough VRAM for it.

    • @Heldn100
      @Heldn100 1 month ago

      I do 1024x1024 with a 4070 and there was no problem.
      You need to close games, background programs that use your VRAM, animated wallpapers, or anything else that draws a lot of power.
      I got a really great LoRA of Wonyoung with it.

  • @NateMac000
    @NateMac000 4 months ago +2

    Thanks for the tutorial... one thing I did differently was use BLIP2 for captioning, which IMO produced a much more detailed caption of each image. At that point I didn't have a prefix, so I used ChatGPT to make me a Windows bat file to add the prefix (trigger word) to all the txt files (a Python version is sketched after this thread). Great tutorial, thanks again!

    • @allyourtechai
      @allyourtechai  4 months ago +1

      Great tips! Thanks for sharing, I need to test BLIP2 now!
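
A small Python script can do the same prefixing as that bat file; the trigger word and folder path below are placeholders to adapt:

    from pathlib import Path

    TRIGGER = "mytriggerword"                                # placeholder trigger word
    caption_dir = Path(r"D:\SDXL Lora Training\models\img")  # placeholder path

    # Prepend the trigger word to every caption file that doesn't start with it.
    for txt in caption_dir.rglob("*.txt"):
        text = txt.read_text(encoding="utf-8")
        if not text.startswith(TRIGGER):
            txt.write_text(f"{TRIGGER}, {text}", encoding="utf-8")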

  • @TravelPostcards
    @TravelPostcards 1 month ago

    Thank you! ... I get this error when I hit Prepare training data:
    RecursionError: maximum recursion depth exceeded while calling a Python object

  • @zei_lia
    @zei_lia 3 months ago +2

    Very good tutorial! Everything worked on my end; I just had to create the "log", "images" and "models" folders, as it didn't do that automatically.
    My model works perfectly, thank you! 🙏

  • @theironneon
    @theironneon 1 month ago

    Am I the only one comparing the result images with his face in the bottom right? :D

  • @jasonlisenbee3747
    @jasonlisenbee3747 3 months ago +1

    I think mine stopped. I checked it after maybe an hour and it said it was complete, but I only had 3 finished files, not 10, and they were named Final 1, 2, and 3, which is strange. I closed the command window, and they're all pretty bad. I've got 16 GB of VRAM and matched the Network Rank to the numbers shown in the video; I'm wondering if that was a mistake. I'm trying again with it lowered to 101, with 13 for the Network Alpha, and going to bed to see what I come back to in the morning.

  • @KlausMingo
    @KlausMingo 1 month ago +1

    Great guide, but you didn't tell us why you generated 10 tensor files.

  • @omegablast2002
    @omegablast2002 19 days ago

    I thought Adafactor was a self-adjusting learning-rate optimizer, and you set the learning rate to 1 just like with Prodigy. Can someone chime in?

  • @Nikida18
    @Nikida18 4 months ago +1

    I'm training a LoRA at this moment. I had to delete scale_parameter=False relative_step=False warmup_init=False because I got the error "returned non-zero exit status 1".
    Anyway, why is GPU usage around 0% while GPU memory is at 100%?
    I have an RTX 4070 laptop and I set Network Rank (Dimension) 32, Network Alpha 16.
    After 30 minutes I'm at 8%; this is what I see:
    steps: 8%|████▍ | 133/1600

    • @Nikida18
      @Nikida18 4 months ago

      Update.
      I had 12 of my own pictures and 4,990 regularization images of men.
      After 5 hours it finished, creating only 2 tensor files, about 210 MB each.
      I tried them in Fooocus and didn't get good results.
      I will repeat the training with different settings.

  • @greengenesis
    @greengenesis 1 month ago

    I always get "no data found"... -.-

  • @alexalves3293
    @alexalves3293 23 days ago

    How can I train the model using the CPU? I know it's not ideal...

    • @allyourtechai
      @allyourtechai  23 days ago

      That would take an insanely long time, if it's even possible. You can use something like pixeldojo.ai

  • @Gamer4Eire
    @Gamer4Eire 2 months ago

    The approach should be: select images, set resolution to 1024x1024, add tags, edit tags, refine tags, and do it again; repeats x images x epochs / batches = steps. Always, always use epochs: they ensure you generate a number of evolving LoRAs so you can try each one and see what fits.

  • @lilly2379
    @lilly2379 4 months ago

    Thank you so much, I'm so happy you updated this! However, I can't seem to find your low-VRAM config file. The Patreon link only leads to the 3090 one along with the regularization files. I may have missed something (and it's not a big deal), but I thought I'd bring it up just in case. Thanks!

  • @FirstLast-tr3ub
    @FirstLast-tr3ub 1 month ago

    This has been really helpful and I've gotten good results from it, thank you.

  • @webtrabajoscolombia4124
    @webtrabajoscolombia4124 2 months ago

    Thank you very much, this is very valuable.

  • @geoffreybirt8899
    @geoffreybirt8899 2 months ago

    Followed it exactly and double-checked. It was done in about an hour, and I only have 1 LoRA file...

    • @geoffreybirt8899
      @geoffreybirt8899 2 months ago +1

      Found the issue. Set max training steps from 1600 to 0.

  • @-Belshazzar-
    @-Belshazzar- 2 months ago

    Thanks for the tutorial. I am wondering, though: what has really changed since the last tutorial? If I remember correctly, nothing really, the exact same settings, no? Except that this time you remembered to point out the prepare-training-data step and model selection. Which brings me to the question: why do you choose SDXL base rather than a better-trained checkpoint to start from, like Juggernaut for example? Also, I noticed that even without regularization images I get good results, and with an RTX 3090 and 23 high-res images, training takes about an hour and a half (same settings from your tutorial). Not sure if it's the lack of reg images, but you said 10 hours!? That seems a bit much, I think. Anyway, thank you again!

    • @allyourtechai
      @allyourtechai  2 months ago

      The settings were the same, but the software changed completely. They moved everything around in the UI, so I end up with 10+ questions a day about where to edit various settings. Hopefully it stays the way it is for a bit lol

    • @KingBerryBerry
      @KingBerryBerry 2 months ago

      @@allyourtechai What are the most common questions and responses? I followed every step from this today and IT WORKS really well; I maybe changed one thing or two. On a 4090, 20 images (of varying quality) take me 2 hours. Is that normal?

  • @agrocoding-ia
    @agrocoding-ia 4 months ago

    Is anyone else getting avr_loss = nan?

    • @daviduartep
      @daviduartep 3 months ago

      Yes, I am. My LoRAs also produce all NaNs during generation. I fixed it by removing relative_step=False, but that produces a LoRA with very poor quality.

  • @Bulfwyne
    @Bulfwyne 6 days ago

    Yay, I'm thumbs-up #800 on the like button!!