Put Yourself INSIDE Stable Diffusion

  • Published: 19 Sep 2024
  • [patreon] / cg_matter

Comments • 78

  • @theplayerformerlyknownasmo3711
    @theplayerformerlyknownasmo3711 1 year ago +24

    There's so many tutorials out there on midjourney. But I only trust ONE channel. Thanks fam.

  • @joshwoloszyn
    @joshwoloszyn 1 year ago +1

    I did this process back when it first came out and you had to rent a monster GPU online; it's cool to see it becoming a more user-friendly process now. Thanks for sharing!

  • @liamcheetham9333
    @liamcheetham9333 1 year ago +4

    Tried this when it first came out; I got better results when using more varied lighting, poses, makeup, etc. Trying to get more variation in the dataset made the results feel more like me.
    Epic video btw, glad to see more tuts popping up for this stuff 👍

  • @travisyee7278
    @travisyee7278 1 year ago +4

    Now I can finally make kinky deepfakes of CGMatter. ( ͡° ͜ʖ ͡°) Thanks bro.

  • @Wintheria0
    @Wintheria0 9 months ago +1

    Hey! Even though I typed the embedding name in the "Create embedding" section, I can't select that name in the "Train" section, and it gives an "error" in the console.

  • @Zhincore
    @Zhincore 1 year ago +4

    Never really had any damn luck with training, hopefully I'll get something this time with your help

  • @rdskew
    @rdskew 1 year ago +2

    good stuff brother...

  • @celiocarvalho64
    @celiocarvalho64 1 year ago +2

    I'm getting "Training finished at 0 steps."
    How to fix it?

    • @OldToby53
      @OldToby53 11 months ago

      use batch size 1

    • @celiocarvalho64
      @celiocarvalho64 10 months ago

      I've tried, it doesn't work @@OldToby53
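
One way a too-large batch size can lead to "Training finished at 0 steps": if the trainer derives its step count from the number of full batches per epoch, a batch size larger than the number of training images rounds down to zero. The snippet below is only a sketch of that arithmetic, not A1111's actual training code.

```python
# Illustration only (not A1111's trainer): integer division drops the
# final partial batch, so batch_size > dataset size gives zero steps.
def batches_per_epoch(num_images: int, batch_size: int) -> int:
    return num_images // batch_size

dataset_size = 5  # e.g. five training photos
for bs in (8, 1):
    print(f"batch size {bs}: {batches_per_epoch(dataset_size, bs)} batch(es) per epoch")
# batch size 8: 0 batch(es) per epoch  -> training ends immediately
# batch size 1: 5 batch(es) per epoch  -> training actually runs
```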

  • @IAcePTI
    @IAcePTI 1 year ago +2

    This, combined with Lora and controlnet👌😏

  • @whyjordie
    @whyjordie 11 months ago

    I'm not understanding how some people are getting it to spit these out so quickly. I have a 1060 and 16 GB of RAM, I'm using all of these settings with only 8 pictures, and it's saying it's going to take 700 hours.

  • @IzeIzeBaby
    @IzeIzeBaby 1 year ago +2

    hmm... 1. install stable diffusion, 2. generate a non-existent hot woman, 3. start an onlyfans with a virtual woman that looks real, ???, get rich (maybe)

  • @therookiesplaybook
    @therookiesplaybook 1 year ago

    Discovered you don't need to go into the inversions folder and copy a PT file; the embedding was already created in the embeddings folder, so just type the name of the embedding that's already in there.

  • @tigermunky
    @tigermunky 1 year ago +2

    I put in 20 images of my face, and the test images came out showing a fucking TRAIN! Then they got a bit better and showed warped versions of my face, merged with a CANARY! One image was a sewing machine. What the hell? The images I put in were all head shots, very similar to the ones Thom used. How the hell did stable diffusion decide that my face looks like an old steam locomotive?

    • @hemu8452
      @hemu8452 11 months ago

      I have the same issue. Did you solve it?

    • @tigermunky
      @tigermunky 11 months ago

      @@hemu8452 Nope. I tried a totally different series of faces and just got really random stuff. Sofas, cats, cars...
      Nothing that looked anything like what I had put in.

    • @RaptorT1V
      @RaptorT1V 10 months ago

      +

  • @BriggsMullen
    @BriggsMullen 1 year ago

    I love the tuxedo hoodie at the end

  • @Funzelwicht
    @Funzelwicht 1 year ago +1

    VERY good step-by-step tutorial, please do more of that!

  • @leegosling
    @leegosling 1 year ago

    Dude! You’re in the matrix… now you’re immortal… Woah!

  • @Megabeboo
    @Megabeboo 1 year ago

    What hardware are you using? Image generation looks buttery smooth. I feel like the training part might blow up my 8 GB RAM MacBook Air M1.

  • @GreenMagic0
    @GreenMagic0 1 year ago +1

    You rock that fake mustache!

  • @Zhincore
    @Zhincore 1 year ago +1

    Are you planning on teaching us Dreambooth, master? I'd really love to make LoRAs

  • @popixel
    @popixel 1 year ago

    I always have to click "Train Embedding" every 15 steps or so. Is there a reason it doesn't keep going on its own?

  • @ederdalpizzol
    @ederdalpizzol 1 year ago +3

    Hello Thomtutorial, great work as usual. I have a question: you mentioned that you can train it on your own art style if you have one (I'm not sure if it's in this video or another, haha). Is the process roughly the same as this?

  • @immineal
    @immineal 1 year ago +2

    Although I have 16 GB of VRAM, I get "CUDA out of memory". Is there any way to prevent this?

    • @bingobangolol
      @bingobangolol 1 year ago

      EDIT: got it working with 512x512. Had to go to Settings in the sd-webui, then "Training", and check the following:
      - Move VAE and CLIP to RAM when training if possible. Saves VRAM
      - Use cross attention optimizations while training
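
A rough PyTorch sketch of the idea behind those two settings, written against the diffusers API rather than A1111's internals (the model ID and exact calls are illustrative assumptions): modules that aren't needed on the GPU at a given moment can be parked in system RAM, and memory-efficient attention lowers the UNet's peak VRAM.

```python
# Sketch of the VRAM-saving idea, using diffusers (not A1111's own code).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative v1.5 checkpoint ID
    torch_dtype=torch.float16,
)

pipe.unet.to("cuda")         # the UNet is what actually needs the GPU
pipe.vae.to("cpu")           # park the VAE in system RAM ...
pipe.text_encoder.to("cpu")  # ... and the CLIP text encoder too
# (A1111 moves these back to the GPU whenever it actually needs them.)
torch.cuda.empty_cache()     # hand the freed blocks back to the allocator

# Roughly what "cross attention optimizations" toggles (requires xformers):
pipe.enable_xformers_memory_efficient_attention()
```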

  • @MasterFX2000
    @MasterFX2000 1 year ago

    Funny, I just did the same yesterday and arrived at almost the same workflow. But I used "Preprocess images" with BLIP for captions to label my pictures, so I could tell the network what in each picture belongs to me.
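
For anyone curious what that preprocessing step does, here is a hedged sketch of BLIP captioning using the Hugging Face transformers library directly, instead of the WebUI's built-in "Preprocess images" tab. The folder name is hypothetical, and writing a .txt sidecar next to each image is an assumption about how you feed captions to a [filewords] training template.

```python
# Sketch: caption a folder of training images with BLIP (transformers),
# roughly what the WebUI's "Use BLIP for caption" preprocessing does.
from pathlib import Path
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

for img_path in Path("training_images").glob("*.png"):  # hypothetical folder
    image = Image.open(img_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    caption = processor.decode(out[0], skip_special_tokens=True)
    img_path.with_suffix(".txt").write_text(caption)  # one caption sidecar per image
```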

  • @justacherryontop6538
    @justacherryontop6538 1 year ago

    What GPU are you using? Mine takes like 1.5 minutes minimum on an RX 580, but here you're just clicking through.

  • @tomschuelke7955
    @tomschuelke7955 1 year ago

    So now I have several questions.
    First: can I train the model on one person, then train the next one, and then merge the models somehow,
    so I could get an image of, say, me and my wife sitting at the beach?
    Next:
    What's the difference between Dreambooth, Train, and LoRA? What is what, and when do I use which?

  • @Sssseytha
    @Sssseytha 1 year ago +1

    Training finished at 0 steps. 🙁 What am I doing wrong? I followed every step (RTX 2070 ti)

    • @kotsylwester5572
      @kotsylwester5572 1 year ago

      Try batch size 1

    • @joshuamallek1937
      @joshuamallek1937 1 year ago

      @@kotsylwester5572 and turn off "Use cross attention optimizations while training" under Settings > Training

    • @jaysee6320
      @jaysee6320 1 year ago +1

      @Riya Singh see if there are empty new line characters in your subject.txt file. When I got rid of them it fixed the problem

  • @YoIomaster
    @YoIomaster 1 year ago +1

    Best man!

  • @KDawg5000
    @KDawg5000 1 year ago

    If you use restore faces, does it overwrite your face and it no longer looks like you? Or does it just fix any errors like weird eyes?

  • @krystof.3d
    @krystof.3d 1 year ago +1

    I just keep getting the same "error": when I do a batch of 8, it instantly spits out "Training finished at 0 steps. Embedding saved to...".
    It works with a batch of 1. I have pretty high PC specs; what could cause this problem? Thanks.

    • @Sssseytha
      @Sssseytha 1 year ago +2

      Did you find a solution?

    • @riyasingh9280
      @riyasingh9280 1 year ago +1

      Getting same error

    • @jaysee6320
      @jaysee6320 1 year ago

      @@riyasingh9280 see if there are empty new line characters in your subject.txt file. When I got rid of them it fixed the problem

    • @monster-tr7qh
      @monster-tr7qh 1 year ago

      Try a smaller batch size

    • @celiocarvalho64
      @celiocarvalho64 1 year ago

      @@jaysee6320 what do you mean by "new line characters"?
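
"New line characters" here means blank lines in the prompt template. Below is a minimal sketch of the fix several replies describe: strip empty lines out of subject.txt so the trainer never sees an empty prompt. The path shown is the usual A1111 location and is an assumption about your install.

```python
# Sketch: remove blank lines from the textual-inversion prompt template.
from pathlib import Path

# Typical A1111 location; adjust if your install keeps templates elsewhere.
template = Path("textual_inversion_templates/subject.txt")

lines = [ln for ln in template.read_text(encoding="utf-8").splitlines() if ln.strip()]
template.write_text("\n".join(lines) + "\n", encoding="utf-8")
```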

  • @judgeworks3687
    @judgeworks3687 1 year ago +1

    This was great, easy to follow, thank you. If I wanted to train on my drawing style, would I do everything the same but choose 'style' instead of 'subject'? Would the embedding go to the same folder location as where you put your embedding from the portrait training? This was a great tutorial. Thank you.

  • @polyhedralcathedral
    @polyhedralcathedral 1 year ago

    Can't find textual inversion? Do I need to be logged into Hugging Face or something?

  • @lofipooper9080
    @lofipooper9080 1 year ago

    GPU CUDA memory runs out. Any suggestions? I've installed and tried different versions of the GitHub repos and SD setups, nothing is working.

  • @PiginaCage
    @PiginaCage 1 year ago +1

    I just know it's veiny and thicc and curved slightly to the right

  • @friendlyfather6007
    @friendlyfather6007 1 year ago +1

    Man, I did the training up to 1850 steps and the results I got were amazing. However, when I put the embedding file I created into its folder, I get errors and it won't load. No one online seems to have this issue so I'm kinda SOL.

  • @dm1i
    @dm1i 1 year ago +2

    You say these are 512x512 resolution images, while the tooltip says they are actually 800x800 😄

  • @walidflux
    @walidflux 1 year ago

    LoRA or embeddings?

  • @steveos111
    @steveos111 1 year ago +1

    Training finished at 0 steps.😐

    • @riyasingh9280
      @riyasingh9280 1 year ago

      Found a solution?

    • @steveos111
      @steveos111 1 year ago

      @Riya Singh no I'm afraid not

    • @jaysee6320
      @jaysee6320 1 year ago +2

      @Riya Singh see if there are empty new line characters in your subject.txt file. When I got rid of them it fixed the problem

    • @TVARVWNV
      @TVARVWNV 1 year ago

      @@jaysee6320 You're right, as simple as that.

  • @erikvz
    @erikvz 1 year ago

    Does anyone know if the dataset you generate yourself stays on your local machine, or does it also get transferred to some server in the cloud?

    • @lukemullineux
      @lukemullineux 1 year ago

      Good question. F

    • @ipodtouchiscoollol
      @ipodtouchiscoollol 1 year ago +4

      Depends on the method you used to generate said dataset. If you're talking about feeding the dataset into A1111, no, it will not go to the cloud: A1111 is just a web UI for a Stable Diffusion model running in Python on your own local machine (unless you are running A1111 in the cloud, in which case A: why tf would you do that, and B: get a proper computer; if you're using the cloud for computation services it's not worth it). If you are using third-party solutions for your dataset, then there is no guarantee.

    • @erikvz
      @erikvz 1 year ago

      @@ipodtouchiscoollol alright makes sense.

  • @AgallowayGFX
    @AgallowayGFX 1 year ago

    Your name is thom

  • @kvakduck3278
    @kvakduck3278 1 year ago

    thank youuuuuuuuuuuuuuuuuuuuuuuuuu

  • @ob4359
    @ob4359 1 year ago

    👑

  • @Goobermanguy
    @Goobermanguy 1 year ago

    I appreciate that you explain what all the numbers actually mean 👍 great video mr. matter!

  • @nicogonzalez8171
    @nicogonzalez8171 1 year ago

    Honestly I'm really disappointed to see you get so into this AI stuff. I followed you for your genuinely good 3D tutorials, but I just cannot condone AI art as it stands right now; it's rife with moral issues.

    • @amiri7392
      @amiri7392 1 year ago +1

      It's a tool like any other; the morality depends on how it's used. If someone uses it to create art that is a rip-off of another and just passes it off as their own, it's not the AI that's at fault, it's the human using it to do that. There are also some really cool uses for it, like ANIME ROCK, PAPER, SCISSORS

    • @radioreactivity3561
      @radioreactivity3561 1 year ago

      Cry me a river.

    • @celiocarvalho64
      @celiocarvalho64 1 year ago

      sad?