Train a Stable Diffusion Model Based on Your Own Art Style

  • Published: 16 Apr 2023
  • In this video, you will learn how to use Dreambooth to train a Stable Diffusion model based on your own art style. Artists, get inspired by your own art style and create stylized reference material for your next creation.
    Here are some links connected to this video.
    Link to Google Colab Notebook - 1 Click Execution of Dreambooth Stable Diffusion: colab.research.google.com/git...
    How to set up Automatic 1111 and Stable Diffusion using a Google Colab Notebook, by The Ming Effect: • Stable Diffusion - How...
    How to install Automatic 1111 on your local Windows computer. User interface explained by Sebastian Kamph: • Stable diffusion tutor...
    How to install Automatic 1111 on your local Mac M1 Chip computer, Blog Article by Stable-Diffusion-Art.com: stable-diffusion-art.com/inst...
    How to install Automatic 1111 Stable Diffusion on your Mac Computer, Video Tutorial by Star Morph AI: • How to Install Automat...
  • Science

Comments • 107

  • @coulterjb22
    @coulterjb22 7 months ago +3

    Making your own Lora really looks like the way to go for targeted results. Thank you for this.

  • @RogalevE
    @RogalevE 6 months ago +2

    This is a very straightforward and easy tutorial. Thanks!

  • @RektCryptoNews
    @RektCryptoNews 1 year ago +13

    Honestly this was perhaps the most direct and easy-to-follow tutorial on creating an SD style... thanks, and great vid!

    • @wizzitygen
      @wizzitygen  1 year ago

      Thanks for the kind words. Glad you found it helpful.

    • @KINGLIFERISM
      @KINGLIFERISM 8 months ago

      #Facts

  • @lorenzmeier754
    @lorenzmeier754 1 year ago +5

    Straightforward, easy to follow, worked smoothly. Thanks for the tut!

    • @wizzitygen
      @wizzitygen  1 year ago

      Thanks for the feedback. Glad you enjoyed!

  • @latent-broadcasting
    @latent-broadcasting 11 months ago +7

    Thanks so much! I made it, I really made it thanks to you. Trained a model with 39 images, took more than an hour but the results are amazing. I'm so happy!

    • @wizzitygen
      @wizzitygen  11 months ago

      Congrats! So happy you found the video useful.

  • @dapper5314
    @dapper5314 11 months ago +1

    For the first time with YouTube tutorials, I understood everything. Thank you.

    • @wizzitygen
      @wizzitygen  11 months ago

      Glad you found the video helpful.

  • @calebweintraub1
    @calebweintraub1 11 months ago +1

    Thank you! This is well done and helpful.

    • @wizzitygen
      @wizzitygen  10 months ago

      Glad it was helpful!

  • @DanielSchweinert
    @DanielSchweinert 1 year ago +2

    Thanks, straight to the point.

  • @eli-shulga
    @eli-shulga 1 year ago +1

    Man I was looking exactly for this thank you thank you thank you!!

    • @wizzitygen
      @wizzitygen  11 months ago

      Glad it was helpful.

  • @minigabiworld
    @minigabiworld 11 months ago +1

    Thank you so much for this video, will give it a try 🙏

    • @wizzitygen
      @wizzitygen  11 months ago

      Thanks for your message! Best of luck!

  • @MacShrike
    @MacShrike 11 months ago +1

    Thank you, good video.
    As for your art; I really like that guy/person with the phones showing him/her/it. Really good.

    • @wizzitygen
      @wizzitygen  11 months ago

      Thanks for the kind words!

  • @willemcramer8951
    @willemcramer8951 1 year ago +2

    Beautiful background music :-)

    • @wizzitygen
      @wizzitygen  1 year ago +1

      The music is from audiio.com :)

  • @yutupedia7351
    @yutupedia7351 1 year ago +1

    Pretty cool! ✌

  • @ekansago
    @ekansago 3 months ago +1

    Thank you for the video. I have a question: I can't find the file "model.ckpt" in my Google Drive. I've checked my entire Google Drive several times. Where can it be?

  • @Queenbeez786
    @Queenbeez786 8 months ago

    Currently in the process of training. I was actually looking for the styles panel on the right in your locally installed SD. How do you make a style like that?

  • @sameer01bera
    @sameer01bera 3 months ago +1

    Does this model work as image-to-image? I hope I get your reply, or a link to the work if possible.

  • @kamw8860
    @kamw8860 1 year ago +1

    Thank you ❤

  • @pressrender_
    @pressrender_ 1 year ago

    Thanks for the video, it's really great and helpful. Quick question: do I have to redo all the model calculations every time I re-enter, or is there a way to skip that?

    • @wizzitygen
      @wizzitygen  1 year ago +1

      Hi Renato. I'm not sure I understand your question. But once the model is trained you can use it freely in Stable Diffusion Automatic 1111 Interface. No need to retrain every time.

  • @user-lq9qs7dr8e
    @user-lq9qs7dr8e 5 months ago

    Super!

    • @wizzitygen
      @wizzitygen  4 months ago

      Glad you found it helpful.

  • @VulcanDoodie
    @VulcanDoodie 1 year ago +1

    Very clear and easy to follow. Google stopped giving permission to use SD on Colab though; I don't know what that means, they warned me twice today.

    • @wizzitygen
      @wizzitygen  1 year ago

      Thanks for the kind words. I think Google blocked only the free tier.

  • @solomani5959
    @solomani5959 7 months ago +1

    Curious, must you use a specific square size for the images? Most of my pictures are different sizes and mostly rectangular.

    • @silveralcid
      @silveralcid 6 months ago +2

      512x512 pixels is the best size to use as Stable Diffusion itself is trained on that size.

  • @Fitzcarraldo92
    @Fitzcarraldo92 1 year ago

    How does the Google Colab do regularization images? When training directly in Dreambooth, you have to provide a large dataset of images to contrast against the object or style you are creating.

    • @wizzitygen
      @wizzitygen  1 year ago +1

      Hi Jasper. Not sure I understand your question, but you are not creating an original model; you are actually including your data as part of the larger data set/model. So when you call for an image that was not part of your training images, it will reference the larger model for that, then apply your style. Not sure if this answers your question.

  • @timothywestover
    @timothywestover 1 year ago +1

    Did you find the resolution of each photo mattered? I had heard from others that they needed to be 512x512, but I'm seeing some of yours are varying resolutions. Thanks!

    • @wizzitygen
      @wizzitygen  1 year ago +1

      Hi Tim. Yes, my originals had varying resolutions, but I made them all 512 x 512 before training on them.

    • @timothywestover
      @timothywestover 1 year ago

      @@wizzitygen Got it thanks!

    • @matbeedotcom
      @matbeedotcom 10 months ago +2

      If it's varying resolutions, you'll have trained "buckets" of resolutions, which should result in higher quality.
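
The 512x512 preparation step discussed in this thread can be sketched in Python with Pillow. This is only a sketch of one common approach (center-crop, then resize); the folder names in the comments are hypothetical and not prescribed by the video:

```python
from pathlib import Path

from PIL import Image  # Pillow

def square_512(src: Path, dst: Path, size: int = 512) -> None:
    """Center-crop an image to a square, then resize it to size x size."""
    img = Image.open(src).convert("RGB")
    w, h = img.size
    side = min(w, h)
    left, top = (w - side) // 2, (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((size, size), Image.LANCZOS).save(dst)

# Batch example with hypothetical folder names:
# for p in Path("training_raw").glob("*.png"):
#     square_512(p, Path("training_512") / p.name)
```

Center-cropping keeps the subject if it is roughly centered; for off-center subjects a manual crop will give better training data.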

  • @user-cf1xc1rw9o
    @user-cf1xc1rw9o 3 months ago

    This was a great video and the Colab workflow was very easy to follow. I wanted to create a model based on my own art style. This notebook worked perfectly up to the very end. I was able to generate the sample images and also generate images using my own prompt, but after checking my Drive folder the checkpoint was never saved to the AI_PICS/models folder. All permissions were given to Colab regarding access to my Drive account. So I used my other Google account, which has 14 GB of available space, built another model, and again the safetensors file did not appear in the Drive folder. Has anyone else experienced this problem?

  • @cuanbutfun3282
    @cuanbutfun3282 11 months ago +1

    These are great. May I know how I can change the base model that I want to merge with the new one?

    • @wizzitygen
      @wizzitygen  11 months ago

      You can find different base models on Hugging Face.

  • @colinelkington1360
    @colinelkington1360 10 months ago +1

    Would increasing the number of 'input' images into the model improve the accuracy at which the model is able to replicate the art style? Or will you achieve the same results as long as you input roughly 20-30 as mentioned?
    Great tutorial, thank you for your time.

    • @wizzitygen
      @wizzitygen  10 months ago

      My understanding is that 20-30 is the sweet spot. Training with more could result in overtraining and could affect your results. But I would experiment and see what works best for you; if you have more, give it a try and compare.

    • @dadabranding3537
      @dadabranding3537 9 months ago

      Absolutely not. Fewer is better; make sure to choose the best.

  • @lukaszgrub9367
    @lukaszgrub9367 5 months ago

    Hey, the faces and details are not really good in my model. Is it possible to train it further to improve the details, etc.? Or maybe I should use better prompts?

    • @lukaszgrub9367
      @lukaszgrub9367 5 months ago

      My model is "cartoonish drawings" but realistic, and the faces still seem bad after adding negative prompts and using a realistic LoRA. Do you know how to fix this?

  • @chloesun1873
    @chloesun1873 9 months ago

    Thanks for the tutorial! I wonder how many images to use for the best result; is it the more the better? I first used about 30 images, and later 200 images, but the latter doesn't give a better outcome.

    • @wizzitygen
      @wizzitygen  9 months ago +1

      My understanding is 20-30 images. More than that, you could risk overtraining the model and not getting any benefit or perhaps seeing poorer outcomes.

  • @remain_
    @remain_ 5 months ago

    I'm curious about the img2img functionality. Rather than typing a prompt of a cat, I'd love to see if it could translate an image of a specific cat into vrcty_02.

    • @wizzitygen
      @wizzitygen  4 months ago

      Hi there. Not sure I understand what you mean. Like a Siamese cat or the like?

  • @madrooky1398
    @madrooky1398 11 months ago +3

    Actually it is better to train a model with the largest picture size your GPU can handle. Of course, if you take the base model that was purely trained on 512x you will have issues in the beginning, but just take a custom model that has been trained with larger pictures.
    The obvious advantage is the level of detail. It might be that a certain art style does not require much detail, but some do, and 512x pics simply can't carry much detail; being limited to 512x output is another strain.
    I was actually surprised myself not so long ago how well a 512x model can handle larger sizes, and it basically rendered all my prepared sets useless, because there is such a big difference in quality only going up to 768x. I do not use a specific aspect ratio on purpose, because I figured that would limit the model to that aspect ratio; instead I use the number of pixels my GPU can handle and try to input as much variety as possible, and I have seen the duplication issues decrease since doing so. It's basically not happening any more. What I mean by number of pixels is this: I figured my GPU can handle around 800,000 pixels very well. This could be, for example, an 800x1000 picture or a 2000x400 one. The format does not really matter; the maximum is the total number of pixels. A model just needs to learn a few examples of different formats for the subjects so it will not start duplicating things on the image grid.
    I am not certain, however, how large the dataset must be to expand a model's capability in that regard, since I start my own models from merges of other custom models in an attempt to get the best base for my own ideas. And the base model is actually not trained very well; if you see some of the dataset and how the images were described, it is no wonder things are so often deformed, because there was simply no real focus on proper image descriptions. That is no surprise if you have worked on it; you know how much effort it takes even for smaller datasets.

    • @wizzitygen
      @wizzitygen  11 months ago

      I haven't experimented with resolutions other than 512 yet. Perhaps once there are more models that do so, I'll give it a try.

    • @madrooky1398
      @madrooky1398 11 months ago +1

      @@wizzitygen There are many models; I would actually assume most of the high-quality models on civitai were trained with larger sizes. And many of them are also based on merges. I would say it is by now hard to find a model that was not at some point trained with larger sizes.

  • @mortenlegarth1047
    @mortenlegarth1047 11 months ago

    Do you not need to add text to the training images to let SD know what they depict?

    • @wizzitygen
      @wizzitygen  11 months ago

      Hi there. That is what the "Class Prompt" is useful for. It helps classify the image.

  • @paulrobion
    @paulrobion 5 months ago

    Damn, I followed all the steps and the model seemed to work correctly in Dreambooth but not in Automatic1111: the .ckpt shows in the dropdown menu but I can't select it for some reason. Safetensors files work though. What did I do wrong?

    • @wizzitygen
      @wizzitygen  4 months ago

      Hi there, I'm not sure why that would be. Sometimes the latest commit can be buggy, but I am by no means an expert in these matters. I'm sorry I can't be of more help.

  • @user-el3hr7jt4u
    @user-el3hr7jt4u 10 months ago

    How can I train one model multiple times? For example: I trained a model to recognize a new art style, and now I want the same model to be able to draw a specific bunny plush in this new art style. The Colab says to change model_name to a new path, but a path from where? Google Drive or the Colab folder?

    • @wizzitygen
      @wizzitygen  10 months ago +1

      I believe you would have to add your model to Hugging Face and link to it.

    • @user-el3hr7jt4u
      @user-el3hr7jt4u 10 months ago +1

      @@wizzitygen Looks like you're right. Thanks for the consult.

  • @bcraigcraig4796
    @bcraigcraig4796 6 months ago

    How did you download Stable Diffusion WebUI on your Mac?

  • @Queenbeez786
    @Queenbeez786 8 months ago

    OMG it worked aaaaaa! I've been stuck on this issue for months, and I'm a noob with this, so little issues would last weeks. Thanks so, so much. Can't believe I did this on my own lol. Do you have a Discord or community?

    • @wizzitygen
      @wizzitygen  7 months ago

      Hi there, I'm so happy it worked for you. I do have a Discord Channel #wizzitygen but I am not very active on it. I made this video a while back to give artists a leg up on working with their own images and haven't posted many other videos since. My business takes up much of my time as well as the poetry I write. Thank you for your comment, it is nice to know this video is helping people.

  • @bcraigcraig4796
    @bcraigcraig4796 7 months ago

    Do I need code? I don't know anything about code, but I do want to use my own artistic style.

  • @joshuadavis6574
    @joshuadavis6574 3 months ago

    I keep getting an error that says "ModuleNotFoundError: No module named 'diffusers'". Can you help me, or can someone explain what this means?

  • @Biips
    @Biips 10 months ago

    I do mostly abstract illustration. I'm wondering how a model could be trained on my art style if objects aren't recognizable.

    • @wizzitygen
      @wizzitygen  10 months ago +1

      I'm not sure exactly, but I believe it will recognize patterns, shapes, color, etc. I'm unsure how you would prompt it though. I would suggest trying it and experimenting; that would be the only way to know.

  • @dronematic6959
    @dronematic6959 11 months ago

    Do you put any regularization images?

    • @wizzitygen
      @wizzitygen  11 months ago

      Hi there. I'm not exactly sure what you mean by regularization images. Do you mean training images?

  • @nickross6245
    @nickross6245 5 months ago

    I want to make sure this doesn't sample other artists. I'm fine with it using pictures of objects for reference but is it 100% only sampling my artwork for style?

    • @wizzitygen
      @wizzitygen  4 months ago

      It samples your art for style but if you refer to other things in your prompts, i.e. tree, house, etc. it uses the larger model to generate those ideas. In the style you trained it on.

  • @bcraigcraig4796
    @bcraigcraig4796 6 months ago

    I know you need to export as 512 x 512, but does it need to be PNG?

  • @Queenbeez786
    @Queenbeez786 8 months ago

    Please upload a tutorial for training a LoRA as well.

  • @anuragbhandari3776
    @anuragbhandari3776 10 months ago

    Is having image file names with the same prefix mandatory?

    • @wizzitygen
      @wizzitygen  10 months ago

      Yes, you will get better and more consistent results.
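
Giving every training image the same prefix, as discussed above, can be sketched in Python. The folder name is hypothetical; `vrcty` is the instance-token example that appears elsewhere in these comments:

```python
from pathlib import Path

def rename_with_prefix(folder: str, prefix: str) -> list[str]:
    """Rename every file in `folder` to prefix_01.ext, prefix_02.ext, ...
    Returns the new file names in order."""
    files = sorted(p for p in Path(folder).iterdir() if p.is_file())
    new_names = []
    for i, f in enumerate(files, start=1):
        target = f.with_name(f"{prefix}_{i:02d}{f.suffix}")
        f.rename(target)
        new_names.append(target.name)
    return new_names

# Example: rename_with_prefix("training_512", "vrcty")
```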

  • @BirBen-mt9eq
    @BirBen-mt9eq 10 months ago +1

    When making a model, is it private? No one can use it aside from me, right?

    • @wizzitygen
      @wizzitygen  10 months ago

      Hi there, to be completely honest I am not 100% sure. The model is forked from Hugging Face, so I am not sure if your trained model goes elsewhere. The person who created the Colab notebook can be found here: github.com/ShivamShrirao. It would be best to ask him.

    • @wizzitygen
      @wizzitygen  10 months ago +4

      Hi there, I decided to write Shivam Shrirao and ask him your question. This was his response: "It's only available to you."

  • @cezarybaryka737
    @cezarybaryka737 7 months ago

    Can I save the file with a .safetensors extension instead of .ckpt?

    • @wizzitygen
      @wizzitygen  4 months ago +1

      In this instance it saves as a .ckpt.

  • @bcraigcraig4796
    @bcraigcraig4796 6 months ago

    Where do I download Stable Diffusion on my Mac?

    • @wizzitygen
      @wizzitygen  4 months ago +1

      Try googling "Stable Diffusion WebUI for Mac, GitHub". If you have an M1 chip, add that to your search.

  • @CRIMELAB357
    @CRIMELAB357 11 months ago

    Can someone show me the process of uploading the .ckpt to Hugging Face and using the model online? Please... anyone?

    • @wizzitygen
      @wizzitygen  10 months ago

      This might help you. huggingface.co/docs/hub/models-uploading

  • @omuupied6760
    @omuupied6760 9 months ago

    Why can't I find model.ckpt in my Drive?

    • @wizzitygen
      @wizzitygen  8 months ago

      Not sure. Have you tried searching the entire drive?

  • @rwuns
    @rwuns 4 months ago +1

    I don't see a file called model.ckpt, and I did everything correctly!

    • @themaayte
      @themaayte 4 months ago

      I have the same issue, @euyoss did you find a solution?
      @wizzitygen

    • @rwuns
      @rwuns 4 months ago

      Nope ;C @@themaayte

    • @wizzitygen
      @wizzitygen  4 months ago

      Hi there, not sure what the problem might be. Have you searched the entire drive? Search for ".ckpt".

    • @themaayte
      @themaayte 4 months ago +2

      @@wizzitygen Hi, so I've found the solution: the code doesn't give you a .ckpt file, it gives you a .safetensors file, which is the same thing.

    • @rwuns
      @rwuns 3 months ago +1

      Thanks!! @@themaayte
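
Several commenters above couldn't locate the output file. A small Python sketch like this lists every checkpoint-style file under a folder, whether the notebook saved it as .ckpt or .safetensors (the Drive path in the comment is a guess at a typical Colab mount, not from the video):

```python
from pathlib import Path

def find_checkpoints(root: str) -> list[str]:
    """Recursively list .ckpt and .safetensors files under `root`."""
    exts = {".ckpt", ".safetensors"}
    return sorted(str(p) for p in Path(root).rglob("*")
                  if p.suffix.lower() in exts)

# Example: find_checkpoints("/content/drive/MyDrive")
```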

  • @clyphx
    @clyphx 11 months ago

    18min to go

    • @wizzitygen
      @wizzitygen  10 months ago

      Hope it turned out to your liking!

  • @gerardcremin
    @gerardcremin 1 year ago +1

    👍 "Promosm"

    • @wizzitygen
      @wizzitygen  11 months ago

      Thanks for the thumbs up.

  • @goatfang123
    @goatfang123 10 months ago

    Waste of time; it stops working the next day.

    • @wizzitygen
      @wizzitygen  10 months ago +1

      Hmm. That is unusual. The model I created in this video is still working fine. One thing to check is to make sure you have the correct model loaded up when generating the image. I.e. Select the right .ckpt (Stable Diffusion checkpoint) file.

  • @judge_li9947
    @judge_li9947 10 months ago

    Hi, thank you very much. Used this before, and it's for sure the easiest and best video out there. Today though I got the following error, can you help? "ValueError: torch.cuda.is_available() should be True but is False. xformers' memory efficient attention is only available for GPU." Not sure what to do. Much appreciated, thanks.

    • @wizzitygen
      @wizzitygen  10 months ago

      Hi there, thanks for the kind words. Your best bet with errors is to paste the error in Google. It is usually a bit of a hunt but the solutions are usually out there if you Google the error. Sorry I can’t be of more help.

  • @matbeedotcom
    @matbeedotcom 10 months ago +1

    You didn't have to annotate each image?