Inject Yourself into the AI and Make Any Image With Your Face! (100% FREE Method)

  • Published: 16 Nov 2022
  • I love to teach this kind of stuff! Here's how you can train Stable Diffusion to use your face when generating AI digital art.
    Here's the long link to Dreambooth:
    colab.research.google.com/git...
    🛠️ Explore hundreds of AI Tools: FutureTools.io/
    🐤 Follow me on Twitter: / mreflow
    🐺 My personal blog: MattWolfe.com/
    🌯 Buy me a burrito: ko-fi.com/mattwolfe
  • Science

Comments • 507

  • @r34ct4 9 months ago +6

    Hey man, kind of a meta comment: I remember when you uploaded the video about running Stable Diffusion locally, and thinking I wanted to contribute by commenting with some instructions on how to troubleshoot some potential errors, hoping I could contribute in some way instead of always being a lurker. I remember hoping that your channel would become what it is today. Ultimately, you've helped me finally realize that I always give up following my passions not because I'm afraid of failure, but because I am afraid of success. Thank you

  • @MarianaTraxel 1 year ago +9

    A-MA-ZING!!!! I do have to admit that for a non-programmer this is A LOT of information and room for making mistakes ... it took me two hours to get same-ish results. But I CAN DO IT now, thanks to you.

  • @justjewellent8129 11 months ago +2

    You, my friend, have gained a friend. If every tutorial could be taught by you, this world would be a better place!

  • @DubsCP 1 year ago +22

    THIS IS ABSOLUTELY AMAZING....Thank you so much for your guided lesson. Extremely easy to understand, and QUICK to the point! Working on mine RIGHT NOW!

  • @Tim_Black 1 year ago

    When trying to reconnect I get to the "Inference" point and it will not connect to my GDrive. Any thoughts?

  • @stevePurvis1 1 year ago

    Brilliant lesson, Matt. Please keep up the good work, and thank you

  • @melissakampers 1 year ago +21

    I recently discovered ChatGPT and I've been diving into the world of A.I. ever since. Your channel has been a valuable source of information and inspiration, a true treasure trove. Keep up the great work!

    • @smokedes2 1 year ago

      Same here. I've been saving all of these videos

  • @maxinewairimu6478 1 year ago +2

    Thanks for sharing this gem, Matt. I had no idea that this was/is even possible. At first all the numbers intimidated me, but not at all anymore. I have heard quite a number of people talk about a new AI tool called BlueWillow, and I am curious to learn more. Please share in your next video

  • @thedemigodstudios 8 months ago +1

    Hey Matt, I love your channel! I have a question: would you still say this is the best way to train an AI model for yourself?

  • @buchneski 1 year ago +20

    Great content. Well-explained, and everything works locally (which is amazing) as described.

    • @iwantit539 1 year ago +3

      Npc spotted😊

    • @CrxzyShxrts 7 months ago

      frr🤣@@iwantit539

    • @matix676 3 months ago

      @@iwantit539 NPC that pays? Sign me up

  • @hey_utkarshh 8 months ago

    This is truly insane! Thanks for sharing, Matt, but if I want to train another pic, do I need to repeat the process again?

  • @steampoweredtv 10 months ago

    I got this to work! Thank you for the no-nonsense tutorial!

  • @mexihcahcoatl4105 8 months ago +1

    Thank you for your videos; you help me understand more precisely and in the overall context because of the way you teach. Greetings from Tijuana, Mexico.

  • @DS-ul2ir 8 months ago

    What a great guy! Thank you so much for the teaching!

  • @xixtixspace2660 8 months ago

    For all those having trouble with the license issue: just get your token, continue with the process, and ignore the message saying you need to click and accept. I just followed the process Matt laid out and it worked. You can ignore the missing parts on training as well.

  • @Soumya-Faria72677 3 months ago

    This is SO cool! Can't wait to see myself in all kinds of crazy pictures.

  • @subhacasual 9 months ago

    Great explanation and a very detailed tutorial, sir. I was wondering: if I mount Google Drive, how much storage will the whole process take (saving the weights and all)? We generally get 15 GB of free storage; will the full process fit into that?

  • @SmudgeOfficialUK 1 year ago +4

    Hey mate, thanks for the great video! Just wondering, why did you pick the number 12 for num_class_images?

  • @googlehomemini9823 1 year ago +4

    Matt, can you tell me the minimum GPU you would suggest? My machine is clearly not up to snuff… Is the GPU the only part that's important, or do the CPU and RAM play much of a part in this?

  • @Kiri-Saumya8712 3 months ago

    Subscribed! Love learning new tech tricks like this.

  • @johnhingkung662 1 year ago

    Thanks so much for this inspiring free content. May God bless you!

  • @bustedd66 1 year ago +2

    Thanks for the help. They keep changing this Colab and I was really lost

  • @nlyanke3 9 months ago

    Thanks! It still works really well.

  • @stevejordan7275 1 year ago +26

    1:00 The "aspect ratio" of a 512x512 image is not 512 to 512; it's 1:1. If the pixel dimensions were - for example - 300x400, the aspect would be 3:4. We look for the smallest whole numbers that, multiplied by a common factor, give the pixel dimensions.
    (If the required dimensions were 512x511, that would also be the "aspect ratio." If it was 500x512, the aspect ratio would be 125:128, etc. 1920x1080 is 16:9.)
    Still, good stuff. An informative tour. Thank you!

    • @deadpooh 1 year ago

      So 512 IS indeed the ideal width and height of the image in PIXELS, and not an aspect ratio? Perhaps you might be able to lend more clarity.

    • @deadpooh 1 year ago

      Also, are you, or anyone else, aware of a way to "batch resize" these images quickly, and ideally for free? I feel like this supplementary knowledge could be a good candidate for being a pinned comment.

    • @stevejordan7275 1 year ago +1

      @@deadpooh Photoshop has automation that could likely do this. Or at least it used to; I haven't needed to batch process images for some time.
      The process involves starting the script recorder, performing one resize, and then turning the recorder off. Then you select the directory containing the files you want to process, and a directory where the processed images will be saved.
      Adobe's forums can probably give you the specifics for the CC version.
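A free, scriptable alternative to the Photoshop batch action described above is a short Pillow script (`pip install Pillow`). This is a sketch, not part of the tutorial; the folder names are placeholders, and it center-crops to a square before resizing so faces aren't distorted:

```python
from pathlib import Path

from PIL import Image


def batch_resize(src: Path, dst: Path, size: int = 512) -> int:
    """Center-crop each image in src to a square, resize to size x size,
    and save it into dst. Returns the number of images processed."""
    dst.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in src.iterdir():
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        with Image.open(path) as img:
            side = min(img.size)                     # shorter edge
            left = (img.width - side) // 2
            top = (img.height - side) // 2
            square = img.crop((left, top, left + side, top + side))
            square.resize((size, size), Image.LANCZOS).save(dst / path.name)
        count += 1
    return count
```

Example call: `batch_resize(Path("raw_photos"), Path("resized_512"))` where `raw_photos` holds your source pictures.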

    • @stevejordan7275 1 year ago +1

      @@deadpooh That's correct. Start with the pixel dimensions and divide by the greatest common divisor (GCD); the rest is just maths.
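The reduction described in this thread is exactly a greatest-common-divisor computation; a minimal sketch:

```python
from math import gcd


def aspect_ratio(width: int, height: int) -> str:
    """Reduce pixel dimensions to the smallest whole-number ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"


print(aspect_ratio(512, 512))    # 1:1
print(aspect_ratio(1920, 1080))  # 16:9
print(aspect_ratio(500, 512))    # 125:128
```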

  • @gautamsharma8386 1 year ago

    Amazing, I just loved the info provided. But is it possible to load the previously trained model the second time we run this program? :)

  • @Marcus_Ramour 9 months ago

    brilliant guide, many thanks for sharing!

  • @JonathanKingFC 1 year ago +30

    Doing the lord's work mate

  • @danvorosmarty9854 1 year ago +1

    Question: I want to "sequentially" train a model. In other words, I want to train it on pictures of myself and then repeat the process on the same model and train it on pictures of, say, my daughter, with a new instance prompt word for her as well.
    This would enable things like being able to prompt for images with both instance prompts, for myself and my daughter, at the same time. And/or being able to switch between images of her or myself without having to load a new model each time (just use one unique instance prompt word or the other).
    Currently I have a handful of custom-trained models for various friends and family, but my goal would be to have a single custom model where I could prompt for anyone/anything I've trained it on, in whatever combinations I want (including all at once, say for a group photo).
    My question is: you mentioned being able to "use the model later," which I think you meant narrowly, as in downloading the ckpt file and importing it into a local application (and/or running it from GDrive?)... This is what I do with my custom models currently... But I'm wondering what exactly it is pulling for files when you designate a model in the
    "Name/path of the initial model" field... I understand it pulls the model from huggingface, but is there a way to modify this Colab notebook to have it use a ckpt file that you upload? Or somehow point it to the saved file(s) in GDrive?
    My novice thinking was to perhaps upload the file to my huggingface account so that it would have the same pathway, but all I'd have to upload is the ckpt file... Is that enough? Or are there additional files I would need to upload? And if so, where/what are they? Thanks!

    • @CupcakeUnicorn 1 month ago

      @inteligenciamilgrau seems to have figured it out further down in the comments.

  • @TheGadgetGuru 1 year ago +4

    Do you have to use Google Drive, or can you save to a local hard drive? And... love your Midjourney videos... keep 'em coming!

  • @joanmemoz-he9ec 1 year ago +2

    Does this take a long time to generate? I am looking into BlueWillow; it would be wonderful if you could share a tutorial on it! 😄 Reading through your comments, I am glad I am not the only one to request it. I feel like I can learn a lot from your breakdowns of the tools.

  • @DanielSuchanek-qx6on 1 year ago +2

    Awesome tutorial, bro. Thank you so much 😄

    • @mreflow 1 year ago +1

      Thank you for taking the time to watch! :)

  • @einar_stray 9 months ago +2

    You said I can use the trained model again in the future. How? Where? Thank you!

  • @marksutherland774 1 year ago +1

    Great video, thank you. Have you got a video on how to load a model? E.g. I do the training and save the ckpt file, leave Colab, and start a new session; how do I load the existing model? Thank you.

  • @Orsvideo 1 year ago

    Great tutorial, thanks! Is there a way to add a second person to the trained model?

  • @greenghostuk 1 year ago +1

    Hi Matt, great content as ever; it really helped us starting out in Midjourney.
    Can you help me, please? I keep getting lots of generated images that are canvases stood up against a wall, or a person holding up the canvas, so I can't actually use the picture. In all my prompts I request (white background), and in some I have tried (no background), but it still keeps happening. Do you know what negative prompting is needed to remove this and make the graphic flat and usable?

    • @mreflow 1 year ago +1

      You can take the resulting image over to a site like remove.bg and remove the background that way. Canva can also do this. Sometimes the AI struggles to remove backgrounds though.

  • @siunaldosenior8605 1 year ago

    Great video! I'm just amazed by the results it generated.
    But I cannot upscale my photos. How do I increase the resolution of a picture that I just generated?

  • @nelsongc2368 8 months ago

    Hi Matt, great video. I have a question, please: how can I get high-resolution images from this option? Thank you

  • @CesarAugustoTejada 1 year ago

    Thanks! The tutorial was very helpful. How can you reuse the model again on Google Colab?

  • @canaldetestes4517 1 year ago

    Hi @mreflow, here I am. Thank you very much for this video; it's perfect, and we can use it to create other images for many kinds of jobs.

  • @freakyninjaman3 1 year ago +9

    This is great! I'm having a bunch of fun.
    Do you know if I can upload some art of mine into a folder (all a similar style) and make illustrations with the same colors/style? Would I change "person" to "style," or how would that work?

    • @mreflow 1 year ago +5

      Honestly, I haven't tried that yet. I'm not 100% sure if the process is the same or not. It's something I'm looking into though.

    • @Raylative 1 year ago +4

      @@mreflow The Corridor Crew released a new video on how to consistently push a specific kind of art style without changing the character's profile. It might interest you. They use reverse Stable Diffusion with a fixed seed. I'm still in the process of understanding this, but YO, the outcome is great. They made a whole anime from picture-to-picture generation. Amazing.

    • @hustler6069 1 year ago

      @@mreflow Hi Matt, may I know which tool you use to remove your background and add a virtual background to your video?

    • @mnrvaprjct 1 year ago +2

      @@mreflow Hello, I tried to find the check box you mentioned but can't seem to find it. Where would it be, @mreflow?

    • @krizh289 1 year ago

      Yes, you can train it on artistic styles as well; just replace "person" with design or art or something like that.

  • @inspire_optiminds 1 year ago +1

    Very helpful, full of useful information, thank you very much.

  • @PrinceWesterburg 6 months ago +4

    The "Install xformers" step is missing now, and it warns the code is no longer compatible with Google whatever - video update time already?

  • @jerryjack6976 1 year ago

    Thanks for this great tutorial!!!

  • @frankywright 1 year ago

    You are a true legend mate. Thanks for this.

  • @eri7-11 9 months ago

    If I want to come back next week to make more images of me, do I have to go through this whole process again? Or, with that "Convert weights to ckpt to use in web UIs like AUTOMATIC1111" file, how do I use it to get more images? Thanks kindly for any help!

  • @dadungrpj8314 1 year ago +12

    Great tutorial, Wolfe. After several attempts I have some observations:
    1. I'm using a MacBook Pro and it couldn't get past the point of uploading the images; the error message kept saying "instance file not found", so I ran it successfully on a Windows system.
    2. The terms-and-agreement check box is no longer there.
    3. "Install xformers from precompiled wheel" is no longer available.
    Despite the fact that 2 and 3 were no longer available, I successfully injected my pics and generated fun AI images. Thanks Wolfe

    • @dadungrpj8314 1 year ago +2

      @Viral Spook I did not, I guess it just does that automatically

    • @HSR 1 year ago +1

      @@dadungrpj8314 I am on a Mac too. I got this error when I tried to upload the images: MessageError: RangeError: Maximum call stack size exceeded.

    • @dadungrpj8314 1 year ago +1

      @@HSR
      If you can lay your hands on a Windows system, you'll be done in an hour or so.
      For some reason, some of these AI tools and apps don't do well on macOS, and I don't know why. Going forward, I'm getting VirtualBox on my Mac and installing Windows; let's see if that does the trick.

    • @GoomusicLtd 5 months ago +1

      I'm using a Mac Mini M1, and it can't even get past the install requirements; it says
      "ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
      lida 0.0.10 requires kaleido, which is not installed.
      llmx 0.0.15a0 requires cohere, which is not installed.
      llmx 0.0.15a0 requires openai, which is not installed.
      llmx 0.0.15a0 requires tiktoken, which is not installed.
      tensorflow-probability 0.22.0 requires typing-extensions

    • @internetHate 4 months ago

      @@GoomusicLtd and then press the play button, this should fix it.

  • @JoeRubalcaba 8 months ago

    Long-time watcher, first-time commenter. First, I love the tutorials and news; thank you very much. Second, when I tried to follow your instructions, I found my page was missing the "Install xformers from precompiled wheel" section. Are you still using this process, and if so, are you finding this to be an issue? Thanks!

  • 10 months ago

    At num_class_images you can put the number of "classes" you are training; in your case, just one: "mreflow person"! If you want to train a model and add your wife and son, that's 3 classes, each folder with its images kept separate, so you can generate an image with all 3 together! You can also set the save interval to 1000 and max_train_steps to 3000; in that case it will save the model at 1000, 2000 and 3000, and you can compare the 3 of them at the end! (But note this will keep 3 saved models, almost 15 GB of your drive! lol) Thank you so much for sharing this knowledge!!
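For reference, a multi-concept setup like the one described above is usually expressed as a list of concept dictionaries written out as JSON for the training cell to read. The keys below follow the layout commonly used by Dreambooth Colab notebooks, but the exact schema may differ in your copy; the tokens and folder paths are placeholders:

```python
import json

# Each concept pairs a unique instance token with its own folder of
# training images. Tokens and paths here are placeholders -- adjust
# them to match your own folders and the notebook you are running.
concepts_list = [
    {
        "instance_prompt": "photo of mreflow person",
        "class_prompt": "photo of a person",
        "instance_data_dir": "/content/data/mreflow",
        "class_data_dir": "/content/data/person",
    },
    {
        "instance_prompt": "photo of mywife person",
        "class_prompt": "photo of a person",
        "instance_data_dir": "/content/data/mywife",
        "class_data_dir": "/content/data/person",
    },
]

# Write the list where the training cell expects to find it.
with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```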

  • @lordbaron104 1 year ago

    Brilliant stuff brother 😊

  • @davidedasilva2207 1 year ago

    Great! But I have a question about minute 5:42: let's suppose my dataset contains images of a person, a dog, a cat... Do I just add more objects inside the "concepts_list" array, in the same order the images are sorted in the dataset?

  • @sukilbide 1 year ago +1

    Thanks. Clear and useful.

  • @vivalsdne3239 1 year ago +2

    Hi, how do I use a custom model in this Stable Diffusion Google Colab? I want to use another model like Protogen or Dreamlike Diffusion.

  • @eugenevorster8810 1 year ago +3

    Hey Matt, great video as always. Very informative.
    Say, how do I go about reusing previously trained models? You mentioned something about the ckpt file, but I'm clueless about implementing it.

    • @jp1551 1 year ago +1

      Just load the trained model with `StableDiffusionPipeline.from_pretrained` and run as usual
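A minimal sketch of that reuse step with the `diffusers` library, assuming the Dreambooth run saved diffusers-format weights to Google Drive in numbered step folders (e.g. `.../2000`); `latest_checkpoint_dir` is a hypothetical helper and the paths are illustrative:

```python
from pathlib import Path


def latest_checkpoint_dir(weights_root: str) -> str:
    """Return the highest-numbered step folder (e.g. .../2000) under the
    Dreambooth output directory -- a guess at the layout this notebook
    writes; adjust if your folders are named differently."""
    steps = [p for p in Path(weights_root).iterdir() if p.name.isdigit()]
    if not steps:
        raise FileNotFoundError(f"no step folders under {weights_root}")
    return str(max(steps, key=lambda p: int(p.name)))


def load_trained_pipeline(model_dir: str):
    """Load the saved model instead of retraining (needs a GPU runtime)."""
    import torch
    from diffusers import StableDiffusionPipeline  # heavy import kept local

    pipe = StableDiffusionPipeline.from_pretrained(
        model_dir, torch_dtype=torch.float16
    )
    return pipe.to("cuda")
```

Usage might look like `pipe = load_trained_pipeline(latest_checkpoint_dir("/content/drive/MyDrive/stable_diffusion_weights/my_model"))`, then `pipe("photo of mreflow person").images[0]` with your own instance token.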

    • @eugenevorster8810 1 year ago

      @@jp1551 You absolute legend. I appreciate the assistance; I'm quite shit with this code stuff, so thanks for the info.

    • @11ui55 1 year ago +7

      @@eugenevorster8810 What did he say? The comment's gone

    • @dushyantsinghshekhawat793 1 year ago +1

      The ckpt file will be saved in GDrive when you first train a model.
      I'm not getting how I can use it. He mentioned in the code comments that we can add the ckpt file path and use the pretrained model, but it's throwing the following error. Any clue how to resolve this?
      HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name':
      '../content/drive/MyDrive/stable_diffusion_weights/mrblack408/2000/model.ckpt'. Use `repo_type` argument if needed.

  • @DeletedComment 1 year ago +12

    There's no box to press at 2:28. I have the same screen as Matt despite not having checked the button. Reading the comments below, it seems like this is a common problem and no one seems to have fixed it. Any ideas?

    • @marazu043 1 year ago

      I have the same problem; have you found a solution?

    • @cryptomillie 1 year ago

      I haven't found the box yet either.

    • @mooniproductions9782 1 year ago

      Have you found any solution yet? MATT HELP PLZ 🥲

    • @DeletedComment 1 year ago +5

      OK, update: it turns out it works now without the box tick. It used to be essential. Now I suppose it's just removed and ticked by default.

  • @FixitantwerpBe2100 5 months ago +3

    This tutorial is outdated; I already get errors when installing the requirements:
    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    lida 0.0.10 requires kaleido, which is not installed.
    llmx 0.0.15a0 requires cohere, which is not installed.
    llmx 0.0.15a0 requires openai, which is not installed.
    llmx 0.0.15a0 requires tiktoken, which is not installed.
    tensorflow-probability 0.22.0 requires typing-extensions

  • @MrAgoof 1 year ago +4

    Hi Matt, I did everything you showed in the video. It works just fine. Thanks! But when I reopen the page, where can I paste the location so I can use it again?

  • @santicodaro 10 months ago +1

    Amazing! Is it possible to do the process with multiple people? I want to do it with all my friends' faces (with their permission, of course)

  • @hansimeier6587 1 year ago

    Awesome! Could this process also work from Android devices?

  • @bazacoughlan0 8 months ago

    Hi Matt, I can't see a button to accept the terms and conditions. Am I doing something wrong?

  • @lualgomo3920 2 months ago +1

    Awesome video. Sadly it's not working anymore due to dependency issues. Any update to the method would be very much appreciated.

  • @IvarDaigon 1 year ago

    Does the training continue to process if you refresh the page? If so, you can use a page-refresh browser extension to keep it active.

  • @brazatolz213 11 months ago

    Cool tutorial. So much work to get it going; easier to just take a picture. Thanks for breaking it down though.

  • @m.j.mcintear793 1 year ago

    What do you use to cut out your background? It's perfect. Are your live streams this good too?

  • @micapoan9 1 year ago

    Hi Matt,
    Great video and channel. Do the photos of yourself get used by these companies and then passed on?

  • @daddiosunny 1 year ago

    I sure do appreciate you showing all these interesting AI programs; very helpful indeed. I do have a question for you about a possible AI program: let's say a person were to write and sing lyrics; is there an AI program that can play instrumentals for said new music? My kiddo asked me, and I wasn't sure, so I decided to ask one of the experts :) I just Liked & Subscribed as well. :)

  • @kk47shooter91 1 year ago +6

    ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
    torchaudio 2.0.2+cu118 requires torch==2.0.1, but you have torch 2.0.0 which is incompatible.
    torchdata 0.6.1 requires torch==2.0.1, but you have torch 2.0.0 which is incompatible.
    torchtext 0.15.2 requires torch==2.0.1, but you have torch 2.0.0 which is incompatible.
    torchvision 0.15.2+cu118 requires torch==2.0.1, but you have torch 2.0.0 which is incompatible.

    • @FinalFlameProductions 11 months ago

      Did you find a fix for this?

    • @kk47shooter91 11 months ago

      @@FinalFlameProductions Not yet :(

    • @KK47.. 11 months ago

      help

    • @carolynholland1962 11 months ago

      Same issue, I tried to just keep going but training stopped with runtime error PyTorch and torchvision compiled with different CUDA versions. 😕

    • @kk47shooter91 11 months ago

      @@carolynholland1962 It's fixed now; I guess there was an update.

  • @filiphaskovic9748 1 year ago +1

    Thank you for the help, sir!

  • @Bangada 1 year ago

    Is the processing done locally on your computer or externally on some servers? I have a pretty old CPU/GPU and I'm not sure it can do the training in less than ages ^^

  • @iliacevermeulen4330 1 year ago

    Hey, great tutorial, but I don't have the seed number on Lexica. And when it generates, it mostly doesn't look similar; maybe it's because of the seed number that I don't have?

  • @kaizenacos8183 1 year ago +2

    Excellent, Matt, but the button to accept the terms and conditions doesn't appear. Would someone please tell me what I can do?

  • @ewwkl7279 1 year ago

    Thank you for your great tutorial. I've tried to use Lexica to get a seed number from the image I want to regenerate, but it only showed the model name instead of the seed number. Where can I find the seed number of the image I want to generate, as in the Tom Cruise example you showed in this tutorial?

  • @RM-yr1yy 1 year ago +2

    Great walkthrough, Matt! Got it to work, but can you make a walkthrough on how to reuse the trained model from my Google Drive? I'm struggling with that part; I don't want to keep retraining every time. Cheers!

    • @edsonmatheus7976 1 year ago

      Very simple. On the "Inference" section there's a variable: model_path = WEIGHTS_DIR. Change the value of this variable for the path where your model is saved in your google drive. Something like this:
      model_path = '/content/drive/MyDrive/stable_diffusion_weights/my_model/2000'

    • @kk47shooter91 1 year ago

      Did you find any other use for the trained model?

  • @anovin82 1 year ago +5

    Thanks for this, super helpful! Do you know whether the more images you feed it, the better the final product will be, while still using the 100 x total number of images formula?

    • @mreflow 1 year ago +7

      I actually tested with 20 images and 40 images. The model with only 20 images actually performed better and resulted in better images.

    • @_The2Times 11 months ago

      @@mreflow What should I do if I'm getting distorted faces? Please help.

  • @empressheather5999 1 year ago

    I'm definitely going to try this.

  • @skwjisin 10 months ago +1

    What do you do when you finish generating the photos? Do you just close the web page? Do you need to delete the uploaded photos of your face?

  • @undead504 9 months ago

    Love your videos, but I keep getting this error at the max train steps: "ZeroDivisionError: integer division or modulo by zero". I did some googling, but all I could find was a post telling me to increase the save_interval, so I did, but no luck.

  • @ralphwhite4278 1 year ago

    Nice! Thanks for this

  • @EliteConfiDance 1 year ago +1

    Where do we find the seed numbers for Lexica art? Also, how do I save this page?

  • @brennersydney 1 year ago

    Superb tutorial thanks

  • @JaMarrJ 1 year ago +1

    Amazing tutorial, Matt! Quick question: do all 20 of your input photos have to have the same hairstyle?

    • @mreflow 1 year ago +2

      It's probably better if it's not the same hairstyle honestly. The more variety you show the AI, the better it seems to perform.

  • @SteveFrame_devonuto 1 year ago +1

    This is cool, but do you have a tutorial for using these trained models in an AUTOMATIC1111 web UI? I'm running a 3070 and didn't have the VRAM to be able to train, but I should be able to generate images locally using these ckpts?

  • @SUDO-gm2if 9 months ago

    At 2:52: if the new token button is greyed out, it means the email has not yet been confirmed. The prompt to do so can be seen at the upper left.

  • @krizh289 1 year ago +4

    If you try this, use this as a negative prompt to get a lot better outputs (credit SECourses):
    Negative prompt: (blue eyes, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime:1.4), fat, text, cropped, out of frame, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck

  • @dannysosa9983 10 months ago +1

    I cannot find the button that you are talking about at 2:30. I am sure I never clicked anything and have followed all the steps up until now. Any advice? Thanks, Danny

  • @deanthonydupree3955 6 months ago

    Hey, cool content; you are my go-to for AI. However, I'm following the steps, and when running my pics I get the (Errno 2) No such file or directory error. Would you be able to recommend a remedy for this? Thanks

  • @phoneflasher 1 year ago +2

    How do you reuse the ckpt when you come back another day? How do you use it in your own SD?

  • @heypett6619 7 months ago +9

    Update: October 2023, it works really well. For those having a pip error, just execute that step again, that way worked for me. Thanks for this video, Matt!

    • @anasdroby2802 6 months ago +4

      HOW, dude?? I've been trying for the past 3 days and it didn't work

    • @heypett6619 6 months ago

      @@anasdroby2802 I'mma try again rn and will let you know.

    • @sheeesh404 5 months ago

      @@heypett6619 Did it work?

  • @Maisonier 1 year ago +1

    How can we train a model other than Stable Diffusion 1.5? Can we use Realistic_Vision_V2.0? Can we use Stable Diffusion 2.1 v768, or do we need more RAM on the Colab GPU? Thank you; liked and subscribed.

  • @ShowbizShortsz 5 months ago +2

    I really love how you explained this! I do however have a question. I got through everything, but xformers was missing from Dreambooth, and when I clicked the play button to run and train my model, it said I didn't have xformers. Is there something I'm missing? I am on a laptop and unfortunately my GPU doesn't meet the requirements. So where do I go from here?

  • @Ryhha 1 year ago +2

    Great tutorial :) Please, is there a way to save my training after it is done and continue generating the images the next day? Or do I have to start over and over again? Thank you

    • @programasinmer 1 year ago +5

      Save the notebook into your Google Drive: top left, under File, "Save a copy in Drive". I highly recommend you do it before anything else and start Google Colab from the copy; that way anything you did will be saved.
      Another thing: you still need to run the installation and all of that again every time you close Google Colab (or it stops for any reason), in order to recreate all the Stable Diffusion files.

    • @FinalFlameProductions 1 year ago +1

      @@programasinmer Thanks for your help. So we need to add the pics and do the 30-minute bit every time?

    • @lokeshchadha7715 1 year ago

      @@FinalFlameProductions Did you get an answer to this? I also find it weird to run the 30-minute photo upload every time I restart my PC. Please let me know if you find a solution. Thanks

    • @FinalFlameProductions 1 year ago +1

      @@lokeshchadha7715 Yeah, I just redo everything again, whether that's right or not. 🤣 The 30-minute part is sometimes 16 minutes; it's actually learning the face, I think.

    • @pressrender_ 1 year ago

      Could anyone do this?

  • @nadav13IOY 1 year ago

    Hey, thanks for the video; it is very cool.
    Is it possible to use more than 20 images if I want it to be more accurate?

    • @nadav13IOY 1 year ago

      And also, can I create more than one dataset? For example, one for me and one for my dog?

  • @ParisMasiel 11 months ago

    Once you time out (after you've trained the set), how do you start using the same model again?

  • @gonewild6596 1 year ago +1

    How can I use my old training data on this site itself? Where should I put that ckpt file?

  • @classichumor3370 1 year ago

    Hey, just want to say thanks~~
    (other programs didn't work for me :()
    but your tutorial does!!~ so thanks~~

  • @user-pg4yb6bj4g 1 year ago +1

    Hi, an OutOfMemoryError is shown when doing the training.
    What needs to be done? I don't have that much storage in Google Drive; please suggest an alternative.
    Thanks

  • @Azroy6229 9 months ago

    I followed the steps, but unfortunately there's no "Install xformers from precompiled wheel". Do you know what is wrong with my system? Does anyone else have this issue?

  • @terrinhaverde2873 1 year ago +1

    Thanks for your video, but in the training part there is an error: python3: can't open file '/content/train_dreambooth.py': [Errno 2] No such file or directory.

  • @jamiecurran8163 1 month ago +1

    Have you made an updated version of this, Matt? It doesn't seem to work anymore due to an install-requirements error

  • @heinzdelf 1 year ago

    Master Class...Bravo 👏..

  • @user-mi8tw2ic6v 11 months ago +1

    Very informative video. Is it able to run on 8 GB of VRAM? I have an error saying:
    NameError Traceback (most recent call last)
    1 #@markdown Can set random seed here for reproducibility.
    ----> 2 g_cuda = torch.Generator(device='cuda')
    3 seed = 111 #@param {type:"number"}
    4 g_cuda.manual_seed(seed)
    NameError: name 'torch' is not defined
    at the [Can set random seed here] stage. Everything was fine before that stage; the play buttons all worked and generated images. I got stuck there.