Stable Diffusion for Flawless Portraits

  • Published: 1 Jan 2025

Comments •

  • @dialusdudas
    @dialusdudas Год назад +2

    Bravo! Thanks Vladimir

    • @Geekatplay
      @Geekatplay  Год назад

      *Thank you for your support!*

  • @IAcePTI
    @IAcePTI Год назад +13

    This is the best tutorial I've seen on how to use it. Really great.

  • @thays182
    @thays182 Год назад +3

    Holy crap this is great. I'm 6 days down the rabbit hole of A1111/Stable Diff and I can't get enough. I've been looking for this exact video! Thank you!

  • @luclenders
    @luclenders Год назад +12

    This could save the trouble of training models for different faces. Very helpful! Thanks

  • @jbe8690
    @jbe8690 Год назад

    From an art perspective, Vlad is the best A.I. mentor on RUclips, by far.

    • @Geekatplay
      @Geekatplay  Год назад +1

      Thank you for your support!

    • @jbe8690
      @jbe8690 Год назад

      He is to A.I. art what Da Vinci was to his era. Imagine if Vlad had lived in Da Vinci's time. 🤔

    • @jbe8690
      @jbe8690 Год назад

      @@Geekatplay I just now looked up a RUclips video about Stable Diffusion, and it brought me back here, brother. The algorithm knows where to take me for education. It's so good.

  • @CreativePunk5555
    @CreativePunk5555 Год назад +43

    This is an amazing workflow Vladimir, great job! So many people fighting to get exactly this for so long. Again, great job!

  • @mdahsenmirza2536
    @mdahsenmirza2536 Год назад +5

    THANK YOU!! That face trick is something I've been trying to work out for months; now I can make better portraits!!

  • @ObaedaKorani
    @ObaedaKorani Год назад

    Thank you my friend

  • @DeGameBox_SRBT
    @DeGameBox_SRBT Год назад +1

    09:10 Would it be possible to use openpose_full instead of inpaint, since it also captures the face?

  • @draganzoddomo9890
    @draganzoddomo9890 Год назад +1

    amazing thx, you explained it very well

  • @stephankotter6921
    @stephankotter6921 Год назад

    Thx so much! That's a super nice tutorial.

  • @vitim1979
    @vitim1979 Год назад

    Vladimir, thank you very much. Excellent tutorial. I'm going to try something similar.

  • @半生轻狂客-阿美莉卡
    @半生轻狂客-阿美莉卡 Год назад

    Awesome!! You're amazing. I spent ages researching this, and after watching your video I finally got it. Thank you. 👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻👍🏻

  • @amir2k469
    @amir2k469 11 месяцев назад

    Thank you 😮 master, you're the GOAT ❤

  • @Elaneor
    @Elaneor Год назад

    Thank you very much for the tutorial. Went to find those model you used.

  • @SANTOSHKUMAR-zo4ef
    @SANTOSHKUMAR-zo4ef Год назад

    brilliant video !! thanks

  • @palakush7650
    @palakush7650 Год назад +1

    Amazing, amazing content, thank you

  • @30MinsGaming
    @30MinsGaming Год назад +2

    Genius. The video and workflow technique are very much appreciated!

  • @ManAtPlay
    @ManAtPlay 3 месяца назад

    Should it work with every checkpoint?

    • @Geekatplay
      @Geekatplay  3 месяца назад

      No, checkpoints need to match the other components in how they were trained.

    • @ManAtPlay
      @ManAtPlay 3 месяца назад

      @@Geekatplay Got it, because I tried it with what I already had and it wasn't working. Thanks
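
A side note on the compatibility point above: a ControlNet model trained against SD 1.5 only pairs with SD 1.5-family checkpoints (and SDXL ControlNets only with SDXL checkpoints). A minimal diffusers sketch of a matched pairing, using common public model IDs rather than the exact files from the video:

```python
# Sketch: pairing an SD 1.5 checkpoint with an SD 1.5 ControlNet model.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# ControlNet trained on SD 1.5 -- it must be combined with an SD 1.5 checkpoint.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # SD 1.5 base checkpoint
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Mixing families (e.g. an SDXL checkpoint with this SD 1.5 ControlNet) either
# errors out or produces garbage, which matches the behaviour described above.
# out = pipe("portrait photo of a woman", image=pose_image).images[0]
```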

  • @rahul-qm9fi
    @rahul-qm9fi Год назад

    Exactly what I was looking for. Thank you.

  • @alessandromaia7471
    @alessandromaia7471 Год назад +2

    The Preview annotator result button doesn't show. Any tip for getting this option to appear? (ControlNet 1.1.02)

    • @makulVR
      @makulVR Год назад +2

      Thanks for mentioning this. Same issue for me ControlNet v1.1.112

    • @Anyway-give-me-a-drink
      @Anyway-give-me-a-drink Год назад

      I have the same problem, did you solve it?

    • @konigstiger93
      @konigstiger93 Год назад

      Looks like it's set up differently now: you have to check the box that says Allow Preview and then click Run Preprocessor (the little explosion icon next to the preprocessor field).

    • @Geekatplay
      @Geekatplay  Год назад

      It was changed; now it is a small icon to the right of the dropdown box. It looks like a spark.
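
For anyone who wants to sanity-check what a preprocessor actually produces outside the WebUI, you can run the annotator step yourself. A minimal sketch for the Canny case using OpenCV (the WebUI's Canny preprocessor is essentially this kind of edge map; the file name and thresholds are just illustrative):

```python
# Sketch: reproduce a Canny "annotator preview" outside the WebUI.
import cv2

img = cv2.imread("portrait.jpg")               # source image (illustrative path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # Canny expects a single channel
edges = cv2.Canny(gray, 100, 200)              # low/high thresholds, tweak to taste
cv2.imwrite("canny_preview.png", edges)        # roughly what the spark/preview icon shows
```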

  • @ayushkastruggle3572
    @ayushkastruggle3572 Год назад +1

    I followed it entirely, but I'm getting my face pasted onto the generation (I just want it to keep the structure of my face); it's not blending the face with the image. How do I do that, and which settings should I adjust? Please help.

  • @OmekeDavidson-ut6fg
    @OmekeDavidson-ut6fg Год назад

    I'm lost for words. Subscribed. This is too accurate and detailed to be free

  • @shirekerling5151
    @shirekerling5151 Год назад +1

    Hello, great video. However, I'm not sure how you got the ControlNet section there, or the models. Can you add an explanation for that? There are many results when searching for it, and the link you provided has no explanation of it. Thank you.

  • @thinhnguyensb
    @thinhnguyensb Год назад

    ❤❤❤ great

  • @SlimmeRevolutie
    @SlimmeRevolutie Год назад

    Thank you sir for sharing your knowledge with the world! I fully watch all ads for you😂😅

  • @erwintan9848
    @erwintan9848 Год назад

    How do I get Composable LoRA at the bottom?

  • @dreamzdziner8484
    @dreamzdziner8484 Год назад +2

    Such a great trick.❤ Watching these vids makes me realize that I'm still a noob when it comes to SD. 😉

  • @willjames7119
    @willjames7119 Год назад

    Quality walkthrough -- can you explain in more detail what the LoRA configuration means and what it is doing? Thanks in advance
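
On the LoRA configuration question: in the A1111 prompt syntax a LoRA is activated with <lora:filename:weight>, where the weight scales how strongly the LoRA's learned adjustments are blended into the base checkpoint (1.0 is full strength, lower values are subtler). A rough diffusers sketch of the same idea; the LoRA file name is a placeholder, and the scaling mechanism shown is the older cross_attention_kwargs route:

```python
# Sketch: applying a LoRA on top of a base checkpoint and scaling its influence.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("loras", weight_name="my_face_lora.safetensors")  # placeholder file

# The 0.7 plays the same role as the weight in A1111's <lora:my_face_lora:0.7> syntax.
image = pipe(
    "portrait photo, studio lighting",
    cross_attention_kwargs={"scale": 0.7},
).images[0]
```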

  • @ИльяДедов-и4ъ
    @ИльяДедов-и4ъ Год назад

    Even though the video is a step-by-step guide to portraits, you managed to explain how many of the parameters work along the way. Thanks for the video.

  • @antoniovoto
    @antoniovoto 6 месяцев назад

    Thanks for the tutorial. I can't find Control sd15 canny. Where can I download it? Thanks.

  • @HoangNguyen-nz4xe
    @HoangNguyen-nz4xe Год назад

    Genius. The video and workflow technique are very much appreciated! thank you

  • @rpgecho
    @rpgecho Год назад

    Wow, this is amazing! I updated the Civitai page to announce that I started training RPG V5.0. I will ship that version with a set of ControlNet images to help people have more control over the model.

  • @therookiesplaybook
    @therookiesplaybook Год назад

    What video card do you have running to be able to get results that fast with all these controlnets and script running?

  • @ad2m989
    @ad2m989 Год назад +1

    Vladimir, thank you brother, you are great, and all the settings are covered. Best person ♥️ I hope you also look into the topic of video frames; I wanted to get a more realistic animation setup with the same settings as this video.

    • @Geekatplay
      @Geekatplay  Год назад +1

      I will check it out

    • @ad2m989
      @ad2m989 Год назад

      ​@@Geekatplaythanks I really appreciate this 😘

  • @y0uKWT
    @y0uKWT Год назад

    Love the video, thanks, but when I use inpaint to paint the face and click generate with the same settings, it just puts the face in a random place in the image and does not replace the face.

    • @Geekatplay
      @Geekatplay  Год назад +1

      Be sure you set the masking correctly; it may be inverted.

    • @WillowbarkPetPhotography
      @WillowbarkPetPhotography Год назад

      Did you follow the prior steps to match the pose first?
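
On the masking point: in most Stable Diffusion inpainting setups the white part of the mask is what gets regenerated and the black part is preserved, so a face landing somewhere random is often a sign the mask (or the inpaint masked / not masked toggle) is inverted. A minimal diffusers sketch, not the exact A1111 workflow from the video, with placeholder file names:

```python
# Sketch: inpainting with an explicit mask; white = repaint, black = keep.
import torch
from PIL import Image, ImageOps
from diffusers import StableDiffusionInpaintPipeline

init = Image.open("portrait.png").convert("RGB").resize((512, 512))
mask = Image.open("face_mask.png").convert("L").resize((512, 512))

# If the new face shows up everywhere EXCEPT the painted region,
# the mask is probably inverted -- flip it:
# mask = ImageOps.invert(mask)

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="photo of the same person, detailed face",
    image=init,
    mask_image=mask,
).images[0]
out.save("inpainted.png")
```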

  • @EkhyOk
    @EkhyOk Год назад

    It's amazing. Man, do you think it's possible to apply this technique to food photography or products?

  • @toddaway
    @toddaway Год назад +10

    Very cool, and helpful! Have you figured out a way to make the in-painted face match the style of the rest of the picture?

    • @cryptojedii
      @cryptojedii Год назад +2

      Yes...using Affinity Photo you can do just that!

    • @tamroberts7303
      @tamroberts7303 Год назад +1

      Could do another img2img pass at low denoising with the ControlNet.

    • @tstone9151
      @tstone9151 Год назад

      @@cryptojedii you mind linking a tutorial? Thanks for the recommendation of affinity photo, never heard of it

    • @Frontesque
      @Frontesque Год назад

      @@tstone9151 I use the whole Affinity suite for a bunch of stuff. It's not really AI driven, just a photoshop/lightroom alternative (in the case of photo)
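
To expand on the img2img suggestion above: a second pass over the whole composited picture at low denoising strength tends to pull the inpainted face and the rest of the image into the same style and colour grading, because everything is re-rendered lightly without changing the composition. A rough diffusers sketch of that idea; the strength value is a starting point, not a rule:

```python
# Sketch: a light img2img "harmonisation" pass over an already-inpainted image.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

composited = Image.open("inpainted.png").convert("RGB")

out = pipe(
    prompt="portrait photo, consistent lighting and film grain",
    image=composited,
    strength=0.3,        # low denoising: restyle gently, keep composition and identity
    guidance_scale=7.0,
).images[0]
out.save("harmonised.png")
```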

  • @nancygtec6298
    @nancygtec6298 Год назад

    Excellent!! Thanks!

  • @blackrainboots
    @blackrainboots Год назад

    Great work! Thanks so much, very comprehensive!

  • @ferraravfx
    @ferraravfx Год назад

    Amazing! I think the inpaint will solve my lipstick issues for singing videos! And I could learn more about the ControlNets! Thanks a lot

  • @chenqingzhi6845
    @chenqingzhi6845 Год назад

    great lesson, learned a lot , thanks

  • @gamersgold4984
    @gamersgold4984 Год назад +2

    Thanks for this tutorial. But I can't find the model under the preprocessor. I think I ticked all the right stuff in ControlNet and restarted the UI. Any suggestions?

    • @Geekatplay
      @Geekatplay  Год назад +1

      You need to be sure the models are located in the correct folder; I will make a video about it.

    • @PCtutorijali
      @PCtutorijali Год назад +1

      @@Geekatplay I don't have this model either; can you please post a link to it and write where to put the model, in which directory/folder?
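
For anyone stuck at the same step: with the Automatic1111 ControlNet extension, the downloaded .pth / .safetensors model files usually need to sit in the extension's models folder (or in models/ControlNet) before they appear in the model dropdown after a UI reload. A small sketch that just lists those usual locations; adjust the web UI path to your own install:

```python
# Sketch: check the usual Automatic1111 locations for ControlNet model files.
from pathlib import Path

webui = Path("stable-diffusion-webui")            # adjust to your local install path
candidates = [
    webui / "extensions" / "sd-webui-controlnet" / "models",
    webui / "models" / "ControlNet",
]

for folder in candidates:
    if folder.is_dir():
        found = sorted(
            p.name for p in folder.iterdir()
            if p.suffix in (".pth", ".safetensors")
        )
        print(folder, "->", found or "no ControlNet models here")
    else:
        print(folder, "-> folder does not exist")
```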

  • @Cobra-gx96
    @Cobra-gx96 Год назад

    How do you get those prompts? Is there any tool or site for good prompts?

    • @Geekatplay
      @Geekatplay  Год назад

      Yes, I will release a video soon about creating prompts (prompt generators).

  • @matteo.g2213
    @matteo.g2213 Год назад

    Another great video! thanks!

  • @bingshenlee3477
    @bingshenlee3477 Год назад

    The video looks awesome at generating portraits. May I know which program this is?

    • @Geekatplay
      @Geekatplay  Год назад

      Stable Diffusion, local installation. ruclips.net/video/oTrmgXuc3e8/видео.html

  • @sonygodx
    @sonygodx Год назад

    Very nice tutorial about the AI workflow.

  • @domingo71
    @domingo71 Год назад

    Hello Vladimir, beautiful tutorial; only I don't have the "Preview annotator result" button in the ControlNet section. Do you know how I can get it?

    • @Geekatplay
      @Geekatplay  Год назад +1

      In the new version it is an icon that looks like a spark, next to the preprocessor dropdown.

  • @sarpsomer
    @sarpsomer Год назад

    Smoooth 👍

  • @checkmate559
    @checkmate559 10 месяцев назад

    Please upload the same tutorial with the new version; a lot is different and it's getting confusing for me, and it's not showing the preview option.

  • @Thiagomartins306
    @Thiagomartins306 Год назад

    Congratulations on the job! Can I use this technique to create pets?

    • @Geekatplay
      @Geekatplay  Год назад +1

      Thank you. I will make a video specifically about pets, and yes, it does work. I've made a lot of photos/videos with my Border Collie.

    • @Thiagomartins306
      @Thiagomartins306 Год назад

      🎉🎉🎉

  • @sirousghaffari9556
    @sirousghaffari9556 Год назад

    Can this method be used for architectural rendering?

    • @Geekatplay
      @Geekatplay  Год назад

      Yes, if you use a ControlNet model with architectural preprocessing. I can't recall it off the top of my head, but I will check and post.

    • @sirousghaffari9556
      @sirousghaffari9556 Год назад

      @@Geekatplay If you make a post about architecture, that would be great

    • @Geekatplay
      @Geekatplay  Год назад

      Thank you for the suggestion, I will.

  • @Sagadyn
    @Sagadyn Год назад

    Thank You

  • @stephanemorin7145
    @stephanemorin7145 Год назад

    I do not have ControlNet in my settings?

    • @Geekatplay
      @Geekatplay  Год назад

      You need to install it as an extension first.

  • @I_am_Alan
    @I_am_Alan Год назад

    Very nice video explainer.

  • @Fiveash-Art
    @Fiveash-Art 9 месяцев назад

    How do you find out what size the model was trained on to get the best results? I'm finding that adjusting the size proportions of the canvas really drastically affects my image output.

    • @Geekatplay
      @Geekatplay  9 месяцев назад +1

      It is in the model description if you are downloading from Hugging Face or Civitai.

    • @Fiveash-Art
      @Fiveash-Art 9 месяцев назад

      @@Geekatplay Thanks for the reply. I found out all the specific sizes for the model I was using. Turns out I was using an outdated version of SDXL.

  • @hmza_xd
    @hmza_xd Год назад

    Why don't I have an image upload in ControlNet img2img?

  • @MrAlexramos8485
    @MrAlexramos8485 Год назад

    Great video, it really helped me understand how to keep the face structure. Is it possible to do batch inpainting in order to create videos that retain the face structure? I'm working through your other video on creating flicker-free video and wanted to use this feature to keep the face structure consistent with my model's face.

    • @Geekatplay
      @Geekatplay  Год назад

      Thank you. It is possible, but you will need to load masks for the inpainting.

  • @donvittoriophoto
    @donvittoriophoto Год назад

    I really enjoyed this video. All of your videos are great. Thanks.

  • @DrSteveMorreale
    @DrSteveMorreale Год назад +1

    WOW! You have me very excited. I need to see where to get started with this. Looks exactly like what I want to start doing! Liked and subscribed!

  • @charlym59
    @charlym59 Год назад

    hello Vladimir. I appreciate your videos 👍

  • @JD-RetroRideCZ
    @JD-RetroRideCZ Год назад

    Hello, I'm missing Preview annotator results (and also Create blank and Hide annotator) in ControlNet. Is there something I can do?

    • @Geekatplay
      @Geekatplay  Год назад +1

      Click on the small icon next to the preprocessor selection.

  • @lyesakkouche8285
    @lyesakkouche8285 Год назад

    Hi, it looks like the iMac version is different from the Windows one! How do I install it on Windows?

  • @AmodeusR
    @AmodeusR Год назад +1

    How are you using stable diffusion like that?

    • @Geekatplay
      @Geekatplay  Год назад

      It is the Automatic1111 installation (UI) plus ControlNet.

  • @I_OptimusPrime
    @I_OptimusPrime Год назад

    Great video. Is it possible to replicate the same face from the input?

    • @Geekatplay
      @Geekatplay  Год назад +1

      Yes, absolutely

    • @I_OptimusPrime
      @I_OptimusPrime Год назад

      @Geekatplay I have been struggling with it for several weeks now. Do we need to mask and generate again for the face and features?

    • @Geekatplay
      @Geekatplay  Год назад

      You have the option to invert the mask for inpainting. You can send me an email with the problem; I need more info on what you are trying to do.

  • @Maltebyte2
    @Maltebyte2 11 месяцев назад

    I just can't get anywhere. I have image A, and when I generate something in image-to-image I get a cow, for example!

  • @boogieman7233
    @boogieman7233 Год назад +1

    Are you using the Automatic1111 GUI? Yours is very similar to mine, but I don't have ControlNet.

    • @Geekatplay
      @Geekatplay  Год назад

      You need to install it in the Extensions tab.

  • @eliasaca
    @eliasaca Год назад

    Aren't those controlnet modules unsafe due to pickle imports being detected?

    • @Geekatplay
      @Geekatplay  Год назад

      They use some calls that can be misused, which is why I usually check the Python code itself if the code is not covered by the safeguard settings.
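
Some context on the pickle warning: classic .pth / .ckpt files are Python pickles, and unpickling can execute arbitrary code, which is why scanners flag "pickle imports". The .safetensors format stores only tensors and avoids that class of problem. A brief sketch of the safer loading options (file names are illustrative):

```python
# Sketch: loading model weights without trusting arbitrary pickle code.
import torch
from safetensors.torch import load_file

# Preferred: .safetensors files contain only tensors, no executable code.
state_dict = load_file("control_v11p_sd15_canny.safetensors")

# If only a .pth/.ckpt pickle is available, recent PyTorch can refuse to
# unpickle anything except plain tensors and primitive containers:
# state_dict = torch.load("control_v11p_sd15_canny.pth",
#                         map_location="cpu", weights_only=True)
```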

  • @ChristianBueno81
    @ChristianBueno81 Год назад

    Thanks. I was looking for a tutorial about this.

  • @andreleo4826
    @andreleo4826 Год назад +2

    Hey, thanks for the tutorial it helped a lot! But I have a quick question, how to make the face and the rest of the body have matching colors and tones? Which settings do I need to change? Thanks!

    • @devesh2796
      @devesh2796 Год назад +2

      Yea i was thinking the same

    • @lexmitchell4402
      @lexmitchell4402 Год назад +1

      He didn't mention it in the video, but there is another ControlNet model simply called 'color' (search for t2iadapter color) that will make a mosaic-grid-like sampling of your source image colors and apply them to the generated image.

    • @Geekatplay
      @Geekatplay  Год назад +1

      Use a full-body shot in the imported image.

    • @erenbelvin
      @erenbelvin Год назад

      @@Geekatplay could you please explain in detail? how is this done?
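
To expand on the t2iadapter color suggestion: the color adapter is conditioned on a coarse colour mosaic of the source image (roughly, the picture downscaled to a small grid and blown back up), so the generation inherits the overall palette. A rough diffusers sketch under that assumption, using the public TencentARC adapter ID rather than anything from the video:

```python
# Sketch: nudging generated colours toward a source image with the T2I color adapter.
import torch
from PIL import Image
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter

# Build the coarse colour mosaic the color adapter expects (tiny palette, upscaled).
src = Image.open("reference.png").convert("RGB")
color_grid = src.resize((8, 8), Image.BICUBIC).resize((512, 512), Image.NEAREST)

adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_color_sd14v1", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

out = pipe(
    "portrait photo, natural skin tones",
    image=color_grid,
    adapter_conditioning_scale=0.8,   # how strongly the palette is enforced
).images[0]
```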

  • @wg218
    @wg218 Год назад

    very cool👍

  • @Safeshelter
    @Safeshelter Год назад

    Top!

  • @iamjustanowl4109
    @iamjustanowl4109 Год назад

    Name of the toolkit, please?

  • @Enchantaire
    @Enchantaire Год назад

    Very interesting

  • @wrylac
    @wrylac Год назад

    Could you do this with a photo of a building or house, keeping an accurate representation of the subject and placing it in a different environment?

  • @TeaBroski
    @TeaBroski Год назад

    How is your computer so fast? Great stuff, thanks for sharing!

  • @aftielschwinn3535
    @aftielschwinn3535 Год назад

    I noticed something about your Stable Diffusion setup when you were using the ControlNet features:
    there was a LoRA feature just above the ControlNet menu.
    How do I get that in my SD?

  • @Bulldog-Chelista
    @Bulldog-Chelista Год назад

    excellent, really nice

  • @PlayGameToday
    @PlayGameToday Год назад

    Hello, I've installed ControlNet, but I can't see the "Preview annotator result" buttons. Should I install another extension, or what?

    • @Geekatplay
      @Geekatplay  Год назад +2

      In the newer version it looks like a spark icon, next to the preprocessor dropdown selector.

    • @PlayGameToday
      @PlayGameToday Год назад

      @@Geekatplay Ok I got it, thanks!

  • @faktyimityreligijne
    @faktyimityreligijne Год назад

    Very good tutorial!

  • @Kotwurf
    @Kotwurf Год назад

    Wow! This is amazing! But how do I get this crazy tool? This isn't Leonardo or the Stable Diffusion website?

    • @Geekatplay
      @Geekatplay  Год назад

      This is a local Stable Diffusion installation; check my channel for the videos on how to install it.

  • @yutupedia7351
    @yutupedia7351 Год назад

    good skills 👋

  • @StivenTowers
    @StivenTowers Год назад +2

    Awesome tutorial, man. Some people have recommended Stable Diffusion to me as the most accurate image2image tool; currently I'm using Midjourney, and it always changes the facial features of my character. Can you tell me if Stable Diffusion is really accurate with character consistency most of the time, or is it as tricky as Midjourney? Thanks for your time.

  • @jamesalxl3636
    @jamesalxl3636 Год назад

    I have all the same settings as you, but when I'm in "inpaint" it just generates the face and doesn't keep the body or background. Why is this?

  • @muhammadagramaulana4347
    @muhammadagramaulana4347 Год назад

    How do I install Composable LoRA?

    • @Geekatplay
      @Geekatplay  Год назад

      I need to make a video about it.

    • @muhammadagramaulana4347
      @muhammadagramaulana4347 Год назад

      @@Geekatplay I think I just found it. But thanks, if you want to make an explanation about it go ahead please

  • @ajaywilsonb8671
    @ajaywilsonb8671 Год назад

    What is the website he is using for Stable Diffusion?

    • @Geekatplay
      @Geekatplay  Год назад

      it is local installation: ruclips.net/video/oTrmgXuc3e8/видео.html

  • @lion_king9461
    @lion_king9461 Год назад

    Great video 👏👏👏 subscribed 👍

  • @LorenzoItaly
    @LorenzoItaly Год назад

    I don't know why, but when I use the inpaint it completely ignores the previous controls for the pose and just pastes the face onto a completely different image randomly.

    • @Geekatplay
      @Geekatplay  Год назад +1

      Be sure to check which inpainting area you want to use; it should be either inpaint masked area or inpaint not masked area.

    • @LorenzoItaly
      @LorenzoItaly Год назад

      @@Geekatplay thank you!!!

  • @Kimimarusan
    @Kimimarusan Год назад

    Where can I find the ControlNet extension???

    • @Geekatplay
      @Geekatplay  Год назад

      It is in the "Extensions" tab.

    • @Kimimarusan
      @Kimimarusan Год назад

      @@Geekatplay OK, thanks, I'll try it. But if the style of the model is unrealistic, like anime or something else, how do I change the style of the face?

  • @PaonSol
    @PaonSol Год назад +1

    Well, I installed everything as well as I could, but after inputting a control net image I can't see "Preview annotator result". There's just nothing there.

    • @lost-frequency
      @lost-frequency Год назад +1

      Same issue, I do not have the preview result button.

    • @wenyuangao9198
      @wenyuangao9198 Год назад

      @@lost-frequency Have you found the solution?

    • @emiletetrt
      @emiletetrt Год назад

      @PaonSol and @Lost Frequency Band
      Check the "Allow Preview" checkbox, then a little boom button will appear next to your choice of preprocessor.

    • @onejatt
      @onejatt Год назад

      The preview button is not visible now. When you try the tool, you'll see there is an icon that looks like this 💥: this is the preview button. Just enable preview and tap this icon, and your preview will be there…

  • @GarethOakey
    @GarethOakey Год назад

    How was this software set up? What's the install process?

    • @Geekatplay
      @Geekatplay  Год назад

      it is Stable Diffusion, Automatic1111 installation.

  • @OnlyBeginners
    @OnlyBeginners Год назад

    How do I configure the RPG4 model?

    • @Geekatplay
      @Geekatplay  Год назад

      The link to the manual is in the description. They have recommended settings in there.

  • @RoboMagician
    @RoboMagician Год назад

    Can this be achieved using Leonardo or Midjourney?

  • @jambrud123
    @jambrud123 Год назад

    Nice nice thank you

  • @paulmp9805
    @paulmp9805 Год назад

    Hey man, great video but I just can't manage to get full-body persons. It's always cropped to head or upper body. Any ideas what I can do?

    • @cjsim2
      @cjsim2 Год назад

      This might help: I changed the first prompt to 'full body pose' and it gives a near enough full body.

    • @Geekatplay
      @Geekatplay  Год назад +1

      It was originally a 2/3 photo. For a full body you can use tricks like adding (hat), (shoes), (floor), (sky), etc.: something above the subject and something below it.

    • @paulmp9805
      @paulmp9805 Год назад

      @@Geekatplay ah nice, that sounds smart. thanks!
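
The trick above generalises: mention things that only exist at the top and the bottom of the frame (headwear, footwear, the ground) so the model is pushed to include them, and give the canvas a portrait aspect ratio so a standing figure actually fits. A small illustrative sketch; the prompt wording is only an example:

```python
# Sketch: prompting for a full-body shot -- anchor the top and bottom of the figure
# and use a tall canvas so the whole body fits.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = ("full body photo of a woman standing, (wide-brim hat), (leather boots), "
          "(wooden floor), head to toe, photographed from a distance")
negative = "close-up, cropped, portrait framing"

image = pipe(
    prompt,
    negative_prompt=negative,
    width=512,
    height=768,     # tall canvas favours full-body compositions
).images[0]
```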

  • @ifoundthistoday
    @ifoundthistoday Год назад

    thank you !!

  • @moonduckmaximus6404
    @moonduckmaximus6404 Год назад

    Where do you get the checkpoint, and how do I install it, please?

    • @Geekatplay
      @Geekatplay  Год назад

      You can copy the checkpoint to the models folder.

  • @official_harshjhunjhunuwala
    @official_harshjhunjhunuwala Год назад

    How do I install this software?

    • @Geekatplay
      @Geekatplay  Год назад

      check this video: ruclips.net/video/oO3zIfH4LRE/видео.html

  • @andersistbesser
    @andersistbesser Год назад

    I tried this on my phone and it works, but I can't find the option to use my own picture. Where is it?

  • @EZZYLAND
    @EZZYLAND Год назад

    How can we batch inpaint, for the purpose of processing PNG sequences?

    • @Geekatplay
      @Geekatplay  Год назад

      You can create multiple masks and load them as a batch.
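
A rough sketch of the batch idea for a PNG sequence: pair each frame with a mask of the same file name, reuse one inpainting pipeline, and pin the seed so the regenerated face stays consistent across frames. The folder layout and the fixed-seed trick are assumptions for illustration, not the exact workflow from the videos:

```python
# Sketch: batch-inpaint a PNG sequence, one mask per frame, fixed seed for consistency.
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

frames_dir, masks_dir, out_dir = Path("frames"), Path("masks"), Path("out")
out_dir.mkdir(exist_ok=True)

for frame_path in sorted(frames_dir.glob("*.png")):
    frame = Image.open(frame_path).convert("RGB").resize((512, 512))
    mask = Image.open(masks_dir / frame_path.name).convert("L").resize((512, 512))

    generator = torch.Generator("cuda").manual_seed(1234)   # same seed every frame
    result = pipe(
        prompt="photo of the same person, detailed face",
        image=frame,
        mask_image=mask,
        generator=generator,
    ).images[0]
    result.save(out_dir / frame_path.name)
```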