You're very confident; this is the first video of yours I've watched. How long did it take you to teach yourself this? There's very little I completely understand about tweaking arbitrary values and knowing how that changes the image. I just read about how to get the software running on Google Colab, which I never knew existed or how anyone would use it until now.
I appreciate your observation! It took me several evenings of dedicated learning and experimentation to grasp everything fully and understand the intricacies of the values. If Google Colab works for you, great. I read that A1111 would no longer run on Colab, but that info is at least two months old. If you find yourself challenged by the basics, I recommend starting with my foundational tutorials. Here's a playlist that can guide you: ruclips.net/video/SHikMK39Q30/видео.html.
Do you do tutorials for us cheapskates who can't afford a powerful enough computer? I'm on a $1000 Windows Surface 4, using a Radeon/Ryzen GPU (?), and it's taken me almost 24 hours to get it up and running to an acceptable level (lots of code changing, deleting venv files, and setting (having to, lol) the arguments to make it work on this computer). It's been an absolute nightmare for me, but a small render with a handful of prompts is now coming in at under 2 minutes. Another problem I'm having is running out of GPU memory when I use too many prompts or sampling steps etc. It's been a real head scratcher! @@AIKnowledge2Go
I'm glad to hear that tip was helpful for you! I don't know why this isn't fixed already. Maybe one day I'll file a bug ticket on the Automatic 1111 GitHub. On the other hand, this move has become second nature to me, so even if this bug is fixed one day I will clear the mask anyway. :) Thanks for watching and happy creating!
Damn, this is a very nice workflow! I usually use ADetailer to fix the face from the start, and for upscaling I go to img2img and use tiling with Ultimate Upscale. I don't know why... that's just the workflow I've had. I'll try this now.
It's always interesting to hear about others' workflows! The beauty of these AI tools is that there are so many ways to use them. It sounds like you have a process that works well for you, which is fantastic. My workflow is just one way to approach creating AI art, and I encourage you to try it out and see if it suits your style. Thanks for watching and happy creating!
If you're interested in learning how to save time inpainting body parts using a handy tool in Automatic 1111, I recommend checking out this video next: ruclips.net/video/y3DxX9s0NhQ/видео.html
Really good video :) By the way, do you think the "vae-ft-mse" is worth having? I downloaded it but can't see it anywhere in my Stable Diffusion, so is it worth trying to get it to work?
@@Marcus-si7su Thanks for your feedback. You can set up vae-ft-mse in A1111 under Settings -> Quicksettings list -> add sd_vae there, then restart A1111 and you get the dropdown. But you will have to download the VAE first.
To the entire AI art community: Your lack of creativity is astounding. Here you have this amazing new art tool that allows a new multiverse of possibilities, and all you guys come up with is essentially the same few pinup girls. It's quite cringe, guys.
The ideal computer for running Automatic1111 largely depends on your budget. If cost is not a concern, I would suggest a system with an Nvidia RTX 4090 graphics card and an Intel i7-9700K processor or faster. However, keep in mind that when it comes to running Automatic1111, the GPU and VRAM are significantly more important than the CPU.
How much was your computer? lol I mean, I'm utterly jealous of how fast yours works, and I don't know where to even start looking for good brands @@AIKnowledge2Go
More than obvious, you're German ^^ However, I have to say it was hard to understand what really does what. I use Easy Diffusion as a web UI on my PowerEdge R630. A lot of information was packed into one video, but I would have liked to see a few examples. I'd love a video series along the lines of: let's create an image today, then maybe pick some topics or throw a few random words together, basically what a beginner would do. It would surely be interesting to simply accompany you through the process and hear your reasoning for how and why you do things the way you do. But very informative, and it has already helped me quite a bit.
Ha, you got me there 😂 German through and through 😂 You're absolutely right, the video is aimed more at people who have already had some contact with Automatic 1111. I can recommend this video of mine where I explain the individual settings: ruclips.net/video/SHikMK39Q30/видео.html If you follow the playlist, I've covered the individual topics in order. I hope that helps.
Good video and explanation! I'm just a bit confused about one thing: during inpainting, is it really necessary to have an accurate prompt for what you want specifically in that area? And what happens when you leave it blank, does it just try to "autofill"?
Hey there! Your prompt can guide the AI for better inpainting, but feel free to experiment and see what surprises the blank canvas brings! What I didn't do in my video was change the prompt; I actually should have. Leaving it blank can work, but I suggest you use ControlNet Inpaint.
"Before watching your video clip, I kept trying to use 'Hires fix' foolishly, and the result is that the pictures look very, very bad. Thank you very much." and I'm a newbie
I totally get that! The 'Hires fix' can be quite tempting for many, especially when you're just starting out. I've been there. Glad my video could steer you in a different direction. Happy creating.
@@AIKnowledge2Go "My English is quite poor, I can only follow what I see. I have to watch your video clip a few times before I can do it, well, a bit slow but anyway I feel lucky that you shared your experience. Once again, thank you very much." Chat GPT Translate :D
I'd actually like to know more about the syntax: what to set in brackets, the values, and what options there are. I see a lot of stuff like (purple hair:1.2), sometimes with or without a colon, sometimes with multiple brackets, and so on... While I use prompts like that myself and play around with it, I feel like I roughly get how to use it, but I could very well be completely wrong on this as well. If you already have a video on this just let me know; otherwise this would make a very good video imo.
Absolutely understand your query! Check out this video for a detailed understanding: ruclips.net/video/IEYMVIbPbQQ/видео.html However, if that feels too advanced, I recommend starting with this one: ruclips.net/video/SHikMK39Q30/видео.html I'm also currently working on a comprehensive guide on prompting. Stay tuned!
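For quick reference, here's how the attention syntax is usually summarized for Automatic 1111 (based on the A1111 wiki; exact multipliers can vary between versions):
(purple hair) - attention multiplied by 1.1
((purple hair)) - multiplied by 1.21 (1.1 applied twice)
[purple hair] - attention divided by 1.1
(purple hair:1.2) - attention set explicitly to 1.2
\(purple hair\) - escaped: literal parentheses, no weighting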
Do you have some source we can use to guide our research? A lot of the parameters I've seen in this video weren't shown anywhere. I assume part of it is empirical knowledge; still, if you can link us to a great in-depth tutorial explanation it would save me a lot of hard work! Still, I learned a ton from this video, and it helped me fix some issues I've run into many times!
Hi there! I'm thrilled to hear that my video was informative and helpful for you. Regarding the settings you're curious about, could you specify which parameters or aspects you're looking into? This will help me guide you better. Also, I have two basic videos on prompting and basic settings on my channel which might be just what you're looking for. I'll drop the links here for easy access: ruclips.net/video/MftRapF4AaU/видео.html ruclips.net/video/SHikMK39Q30/видео.html Feel free to check these out. I also have a whole tutorial series on Stable Diffusion.
My problem with using img2img for upscaling is that I work with models trained on specific faces and when I do it this way the likeness is lost. So for me Hires Fix works better. But nice video and many great tips! I didn’t quite understand the reason for decreasing the resolution on the face though.
Thank you for sharing your experience with img2img for upscaling. If the likeness is getting lost, it might be due to the denoising strength being set too high. However, if Hires Fix works better for you, that's absolutely fine. I often inpaint faces later and then use a detailer, which I've found to be quite helpful. Thanks for your feedback and for watching the video!
Thank you so much for your kind words and feedback! I'm thrilled to hear that you're finding my techniques helpful. As for your suggestion, I absolutely plan on sharing my workflow for hyperrealistic and photorealistic images in a future video. While I can't specify a date at the moment, I promise it's on my list. Stay tuned, and thanks again for your support!
Hello @AIknowledge2Go, this is a really helpful video. Thank you for coming up with such great content. I have a quick question and would appreciate it if you could provide some valuable suggestions. I have always had challenges working with image2image generation; the final result is nowhere close to my input image. For example, if I want to make minor edits to normal human images, like changing clothes color, hairstyle, or hair length, while keeping the rest of the details intact, should I go for the image2image option in SD or should I use some other method? I am using the absolutereality checkpoint to ensure the pics are realistic. Any advice / suggestions would be greatly appreciated.
Thank you for your kind words! If you want to make minor edits to your normal human images while keeping the rest of the details intact, using the inpaint option of Image2Image in Automatic 1111 is a good choice. However, for specific modifications like changing clothes color, hairstyle, or hair length, you may need to experiment with different prompts and parameters to achieve the desired results. In addition to Image2Image, you can also explore the ControlNet model for inpainting, as it can be effective in preserving the overall details while making specific modifications. Remember to adjust your prompt accordingly to focus on the areas you want to edit. It's important to experiment and iterate with different prompts, models, and parameters to achieve the desired outcome.
@@AIKnowledge2Go Thank you so much for your response . I totally agree , detailed prompt + config details are the key here, which I haven't mastered yet, still learning 🙂. Btw, if you don't mind me asking I would like to know if there is a way I can have a model trained to simply take input and change hairstyle on the same pic as output. Considering I have to work on multiple images, it would be difficult to keep writing prompt on each image to get the desired result. Thank you so much for looking into this 🙂
Thank you for the feedback. I always aim to provide the most clarity in my tutorials. I'll keep your suggestion in mind for future videos. Happy creating.
Thank you, I'm glad you enjoyed the tutorial! To change to a dark theme, there are a few options you can try:
1. Add /?__theme=dark at the end of your browser's URL when you're on the Automatic 1111/Stable Diffusion page.
2. Try the Dark Reader plugin for your browser.
3. Open your webui-user.bat file and add set COMMANDLINE_ARGS=--theme dark.
Please remember to make a copy of your webui-user.bat before making any changes! Hope one of these solutions works for you!
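For option 3, a minimal webui-user.bat might look like this (a sketch of the default Windows launcher; keep your other settings as they are):
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--theme dark
call webui.bat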
I never considered sending an initial image I like to img2img. I'll have to try that. I normally copy and paste the seed of an image I like and tweak settings from there.
I found it a bit more intuitive compared to the high res fix workflow and in my experience, it often led to better results. But the beauty of these tools is the flexibility they offer. Definitely give it a try and see how it works for you!
@@AIKnowledge2Go I have been trying this method, and I am amazed how much better this technique is. I get so much more detailed and hi-res results. Thank you for sharing this.
When you refer to the refiner, you mean SDXL models, right? Actually, I haven't done upscaling with it. My system crashes when I go higher than 1024 x 1024 with SDXL models. I still use SD 1.5 a lot, because as of now, in my opinion, you can get better results with 1.5 if you know what you are doing.
Yes there is. Just change the prompt. In fact, that is the reason why I have this little "spider thingy" on her leg: I did not change the prompt. Happy creating.
Took hours to generate the first set of 8 images... My graphics card is "MSI Gaming GeForce RTX 3070 LHR 8GB GDRR6 256-Bit HDMI/DP Nvlink Torx Fan 4 RGB Ampere Architecture OC Graphics Card (RTX 3070 Gaming Z Trio 8G LHR)" Is this normal/expected? Every other setting I think I got to match yours in 1111. Thanks for the great video!
10 minutes is still very long with an RTX 3070. I needed about 1.5 minutes for 8 images with my old 2080 Super. Do you have xformers installed? In your stable diffusion webui folder, find the webui-user.bat file and open it with a text editor like Notepad. Add --xformers if your "set COMMANDLINE_ARGS=" line does not have it, so it looks like this: set COMMANDLINE_ARGS= --xformers. Also, between set COMMANDLINE_ARGS= and call webui.bat, write "git pull" on a new line; this keeps Automatic 1111 up to date. Installing xformers sometimes takes 3 - 4 restarts of the Automatic 1111 server. It's strange. After you've saved the file, you have to start A1111 via webui-user.bat instead of webui.bat. Hope that helps.
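As a sketch, the edited webui-user.bat described above would then look roughly like this (assuming an otherwise default file):
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS= --xformers
rem keep Automatic 1111 up to date on every launch
git pull
call webui.bat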
Hey, really a very good tutorial, thanks for that. I still have one problem, maybe you can help: when I've masked a certain area in inpaint, let's say the legs, and then press generate, it doesn't give me new legs; instead it looks as if it recreates the complete image inside the masked area. I used the settings from the video and also experimented with different noise strengths.
Thanks for the feedback. If it generates that or a similar image again inside the masked area, your denoising strength is too high. What I don't show in the video (because I didn't know better back then) is that you can/should adjust your prompt. If you want to inpaint a face, for example, write something like "image of a face of..."; what you should leave in the prompt is everything that concerns the rendering (HDR, 4K, cinematic shot). Inpainting always involves a bit of trial and error. Hope that helps.
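To make that concrete, a hypothetical example (the prompt wording here is invented for illustration):
Original prompt: photo of a woman in a red dress walking through a neon city, HDR, 4K, cinematic shot
Prompt while inpainting the face: image of the face of a woman, detailed eyes, HDR, 4K, cinematic shot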
Great tutorials! Thanks to your videos my results have gotten many times better! I only have one problem with the inpainting. I followed your video step by step, but for me almost nothing changes. No matter which setting I change, even if I turn the denoising strength all the way up or set the seed to -1, I get four nearly identical results after generating. Do you happen to have any idea what could be causing this?
Hi, yes it is. Maybe you want to use ControlNet for this. Here is a newer version of this video: ruclips.net/video/wyDRHRuHbAU/видео.html To remove objects, just write what you want to have instead (the background) in the prompt when inpainting. You need to experiment with the denoising strength.
Thank you for your positive feedback! I'm glad to hear that you tried and liked the workflow. Your support is appreciated! If you have any questions or if there's anything else you'd like to see in future videos, please feel free to share.
For me, during the img2img inpainting, if I render with "restore faces" on, the eyes always come out blurred and wonky. If I turn it off, the eyes are fine, but of course they are not "upgraded" like they are (in theory) when "restore faces" is turned on.
That's indeed an interesting behaviour. The image resolution could play a role in what you are experiencing, or you have too many LoRAs, textual inversions, etc. active. What you can try: I have a very handy video about After Detailer. It's an extension that can automate inpainting of faces and other body parts and produces great results. I didn't mention it in this video as I wanted to keep things simple. Here is the link to the After Detailer tutorial video: ruclips.net/video/y3DxX9s0NhQ/видео.html. It might give you some insights on how to achieve better results with face restoration.
I don't understand what I am doing wrong. When I am inpainting the face I follow your steps precisely and all my settings are the same as yours but I never get a new face on the image. I have tried inpainting other parts of the image and get the same results. I have tried changing all the settings individually just to get some change but nothing ever changes. All the images are output precisely the same as the original. What am I missing?
I'm sorry to hear that you're experiencing difficulties with generating new faces using the inpainting process. It's possible that there may be some issues with your installation or settings. Here are a few suggestions to troubleshoot the problem:
1. Make sure that your Automatic 1111 software is up to date. The latest version as of now is 1.4.0. If you're using an older version, consider updating to the latest release.
2. Double-check that all your settings match the ones shown in the tutorial video. Pay close attention to any specific prompts or parameters that are mentioned. Even a small difference in settings can affect the output.
3. If possible, try generating images with different prompts or inputs to see if the issue persists. This can help determine whether the problem is specific to the face inpainting process or a more general issue.
4. Consider performing a fresh install of Automatic 1111. Uninstall the current version, then download and install the latest version from the official source. This can help resolve any potential installation issues or conflicts.
If you continue to experience difficulties, it may be helpful to seek assistance from the Automatic 1111 community or support channels. They may be able to provide more specific guidance based on your specific setup and issue.
First of all, a greeting and thanks for the video; it's a beautiful image. I wanted to ask why you didn't use the Hires fix. From what I understand, it does the same as what you did but saves you a step, i.e. starting an img2img process. What is the reason you advise against its use?
Thank you for your comment and your kind words about the video. @hoasiai is spot on. The High Res Fix does indeed have the potential to significantly alter the image, as does changing the sampler. My preference is for a straightforward workflow. I start with prompt engineering, and once I'm satisfied with the composition, I move to Image2Image to boost the quality when changing the sampler, without affecting the composition. I hope this clarifies my approach, and thanks again for your question!
Newer Version of this Video: ruclips.net/video/wyDRHRuHbAU/видео.html
When you select "only masked" and then set the resolution lower, it's only using the lower resolution for the inpainted area. The reason the image overall still looks good is because it stays at the same resolution. Using a higher or lower resolution while inpainting a mask doesn't have any impact on anything other than the inpainted area. Using this, you can actually get more detail in your inpainted area by maintaining the high resolution (either with 1x if using resize-by or by manually typing a higher resolution): it will generate, for example, the face at the selected full-size resolution and then shrink the inpaint down to fit inside the overall resolution of the image.
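A worked example of that mechanic (numbers invented for illustration): say the full image is 1024x1536 and the masked face covers roughly 200x200 pixels. With 'only masked' and a 768x768 target resolution, the face region is generated at 768x768 and then scaled back down into its 200x200 slot, so the face picks up far more detail than the surrounding image ever had.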
Thank you for your insightful comment! You're absolutely right about how the 'only masked' function and resolution settings work in Automatic 1111. I must admit, there was a misunderstanding on my part regarding this. Your explanation is spot on and it's very helpful to me and, I'm sure, to other viewers as well. I will make sure to address this in a future video to correct this misunderstanding and to further enhance the learning experience for everyone. I truly appreciate your input and contribution to our community.
@@AIKnowledge2Go Was that response generated by Chat-GPT? Because it looks like it hahaha
@@freakdeer2486 100%
How do you make sure that you don't go too high in resolution and clash with the larger picture in amount of details? Is there a way to calculate the actual size of the selection? I'd be curious to use x/y/z plots for the whole process to save a lot of time.
But wouldn't the resolution make a difference to what result you get? Generating a 1024x1024 image gives you a different image compared to 512x512. So if he wanted to get something close to original as far as general image properties, it would make sense to do the inpainting at that original resolution. Unless it really doesn't matter in this particular case...
Finally someone was able to provide very descriptive and helpful tips, thank you
I'm really glad to hear that you found the tips helpful! I strive to make my content as clear and informative as possible, so it's fantastic to know it's hitting the mark. Thanks for the kind words!
This is by far the most useful tutorial on stable diffusion I've run into!
Thank you so much! I'm glad you found it helpful. Happy diffusing! 🚀
I got into Stable Diffusion last year and I had a lot of fun figuring out how everything works by myself. I'm pretty much self-taught, so I knew there would be some features I didn't know about, and I thought I would finally look up how other people do their generations. Boy, did I need to hear about the inpainting function. I knew what it was supposed to do, but I could never figure out the specifics of how to get it to work. This is a major game changer for me! Thank you so much!
I love the German accent, by the way. You sound like a very friendly person.
I'm so glad to hear that the inpainting function has made such a difference for you! It's always rewarding to discover new features that enhance our creative process. Keep experimenting! Thanks for the remarks on my accent. 😊
The img2img upscale workflow is amazing. It completely saved an image that I would have thought was complete garbage!
That's fantastic to hear! The AI-powered img2img upscale truly is a game-changer for revitalizing images. I'm thrilled it helped you save an image.
Thanks for watching and sharing your success!
Man... Your tutorial is so good, way better than any I saw before. Thanks!
I'm really glad you found the tutorial helpful! Your support means a lot to me.
@@AIKnowledge2Go Well deserved ;)
If you want to dump more detail use controlnet tile + ultimate SD upscale
Thank you for your suggestion! The combination of controlnet tile and ultimate SD upscale is indeed a powerful technique for getting more detail. I have plans to cover this and other advanced techniques in an upcoming part II tutorial. Stay tuned for that, and I appreciate your input!
This is easily the best tutorial I have seen on AI. Subscribed.
Wow, thank you so much for the high praise and for subscribing! I'm thrilled to hear that you found the tutorial to be so valuable. Your support means a lot, and it motivates me to keep creating high-quality content. Stay tuned for more AI insights and tutorials!
This video of yours has given me whole new level of understanding of what I am supposed to do, thank you!
Glad it was helpful! You're welcome!
This video has some serious early-2010s Movie Maker vibes - you just know it's gonna be good and helpful!
And bam! I wasn't disappointed - super precise & to the point. A complete workflow highlighting each step, thanks so much 👍 The German accent is just the cherry on top 🍒
Thank you for the fantastic feedback! Thrilled to hear the video hit the mark. Your appreciation, especially for the German accent, brings a huge smile to my face! 🍒 Exciting news: the next video will also focus on a workflow, this time harnessing the power of ControlNet. Your support inspires me to keep creating. Don't miss out, and thanks for being an awesome part of our community!
@@AIKnowledge2Go good to hear that I could put a smile on your face 😋 I’m totally excited for the next video as well! ControlNet is so far the one thing I haven't touched, but would love to know more about and slowly master.
I know this might get lost here, but, thank you, you really made my day, been a bit sad lately with life and everything and I just wanted to learn something new to keep myself occupied. With something. Just to not think about it all and produce art. You made that all possible. So thank you and I hope that you know that you made someone's life better.
I'm deeply touched by your words, and I'm so grateful to know that my videos have been able to provide you with some comfort and a positive distraction during this time. Life can be challenging, but remember that it's okay to take time for yourself and do something that brings you joy. The beauty of creating art is that it allows us to express ourselves, to lose ourselves in the process, and to make sense of our experiences. Never hesitate to reach out if you have any questions or just want to share your creations. Thank you for being a part of this community, and please take care of yourself.
Been doing this for a while (still a noob, of course), but this is by far the most useful info I've seen. Thanks!
Glad to hear it! Happy creating.
Just started getting into stable diffusion and this video completely changes things. Thank you so much!
I'm glad the video could help! It's amazing how much of a difference understanding stable diffusion can make. Happy creating!
That's really insightful. I don't have to throw away good images with small blemishes. Thanks!
Glad it was helpful!
That's what I call a quality content tutorial! The tutor knows what he is doing here...
Thank you so much for your kind words! I'm really glad to hear that you found the tutorial helpful and of high quality. It's comments like yours that motivate me to keep creating and sharing more content. Stay tuned for more tutorials!
I like the robot thingy on her leg. Very cool. Thanks for sharing
Thank you, I am glad you liked it. Stay tuned for more.
More videos please... I learned so much from this video.
Thank you for your feedback! I'm glad to hear that you found the video informative and valuable. The next video releases in a few minutes, actually :) Stay tuned!
I know I'm late to this video but this helped me out big time! Needless to say I'm now subscribed and following your videos.
Thank you for your kind words and subscribing. There's actually an improved version of this workflow you can find in this video ruclips.net/video/wyDRHRuHbAU/видео.html
It uses ControlNet, so it's a little more advanced. Happy creating!
I use Draw Things on my iPad and iPhone but even so this video has been very helpful! Your presentation is clear and simple, with no filler. Fantastic!
Draw Things is indeed a great piece of software. I'm thrilled that you found the video helpful and appreciate your kind words about the presentation. Thanks for taking the time to comment!
German attitude 😂 - no filler, more information, more efficiency. Right?
I had to chuckle when you said she'd need another leg and immediately masked her face :)
I'm glad my antics gave you a chuckle! I admit, sometimes my attention wand... - "Oh, look a bird!"😂 Jokes aside, there was some re-recording involved in creating the video. When it came to editing, I figured it would be easier to add some explanatory text rather than reshoot everything again. Thanks for noticing and watching!
Fantastic and quick video - nicely concise and it'll be very useful!
I am glad it was helpful to you!
With the same settings and just some changes to the negative prompt, I was able to get much better faces and fewer disfigured limbs in the initial generation, making the later inpainting much quicker.
You've made a great observation! This video primarily showcases this particular workflow. However, it's worth noting that when your prompts become more complex, or you're using multiple LoRAs, as is often the case for creating stunning art in SD 1.5, the quality of faces tends to diminish. That's exactly where this workflow proves incredibly useful.
It's like listening to myself with my German accent.
Top video, brother!
Hi and thanks for your feedback! I just can't get rid of the accent 😂. Luckily, most English-speaking viewers seem to find it more amusing than annoying.
Thanks!
This has renewed my interest in virtual world-building!
I'm thrilled to hear that this has reignited your interest in building virtual worlds! It's such a fascinating field with so much potential. I'm glad my content could be part of your renewed journey. Thanks for sharing your experience and happy world-building!
I've been tinkering with BlenderAI, Stable-Diffusion that works inside of the Blender modelling system.
Blender renders a source image which is passed into Diffusion (along with animated prompts!)
The Blender scenes (3d animations) are a great starting point, but I can see that using this work-flow is going to greatly improve my results.
My current project is to 'convert' a short video clip into a nightmarish vision using BlenderAI to re-work frames of video.
Being able to play with 'sliders' on a frame-by-frame basis is pretty wild!
Thank you for making such awesome videos! Love the German accent. Just liked and subscribed.
Thank you for your kind words and support! I'm glad you're enjoying the content and my German accent! Stay tuned for more exciting videos.
I'm already feeling overwhelmed
I'm sorry to hear that you're feeling overwhelmed. I'd suggest starting with my basic tutorial to build a foundation, then gradually progress from there. Here's the link to help you get started: ruclips.net/video/SHikMK39Q30/видео.html.
Looking forward to Part 2!
Thanks so much for your excitement about Part 2! 🌟 Your wait is over: ruclips.net/video/mrWmEWEZwDw/видео.html. Just a heads-up: since Parts I and II have been around for a bit, some of the settings might have changed. I'm planning to update them as soon as I can find a slot in my busy schedule. Stay tuned and happy creating! Your support means a lot! 🚀
@@AIKnowledge2Go Wow good timing! I just saw the video, thanks for letting me know. I already liked and posted a couple of comments!
Thank you so much!!! 💎 This is reaaaally helpful!
Awesome! Happy to help you out with my content!
A good beginner/intermediate tutorial, I have been using a very similar workflow for a while, except I gave up using extras to upscale (because it never looked as good as I wanted) and now use SD Ultimate upscale Script in conjunction with Control Net Tile (CN tile not 100% necessary but can improve the image) as a final step, then maybe some touchups in GIMP if needed.
That sounds like a solid workflow you've developed there! Thank you for sharing it with us. It's always interesting to hear about different methods people are using. Your suggestion about using Control Net Tile has come up a few times and I can see the value it brings. In light of these discussions, I'm planning to create a Workflow Part II tutorial where I'll cover this more advanced technique. Stay tuned for that!
Best tutorial for newbies, I like it!
Thank you for the positive feedback! Happy creating.
I know it doesn't really matter, but I absolutely love your accent. Very helpful video as well ofc!
I'm glad you enjoy my accent. I wasn't sure how native speakers would perceive it. Thanks for the feedback!
German coastguard speaking 😍 great tutorial, will follow for more
Thanks a lot! 😊 I'll do my best to keep up the quality. Stay tuned for more tutorials!
Looks fun! I can't wait to start messing around with this. I just got a new GPU so I could.
That's great to hear! Getting a new GPU can make a huge difference. Happy creating!
@@AIKnowledge2Go Thanks, it's only a 3060, but 12 GB. My old one was a 760 and it would not even try.
This video is a gem, thank you very much!
Thank you for your kind words, it means a lot to me!
My 1080 can't upscale so much. Welp. But nice demonstration.
That model is right up my alley. I hope the dev will resume maintaining it after his hiatus.
I understand your concerns with the 1080. To address the VRAM issue, you might want to try adding the --lowvram or --medvram parameters when starting via webui-user.bat. Another approach for upscaling with limited VRAM is to utilize ControlNet Tile. It essentially breaks your image into smaller tiles and scales each one separately. This can be especially helpful for hardware with memory limitations. Hope that helps!
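For instance, the arguments line in webui-user.bat might read (a sketch; pick whichever flag your VRAM requires):
set COMMANDLINE_ARGS=--medvram
rem or, for even tighter VRAM budgets:
rem set COMMANDLINE_ARGS=--lowvram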
@@AIKnowledge2Go Where do I find the ControlNet Tile? I'm not ready to follow your tutorial yet cos my internet is being throttled and that file for the revAnimated model you suggested is taking hours to download :/
Great content! Thank you so much!
Glad you liked it!
very quick and detailed tutorial, you earned a sub!
Thank you so much! Welcome to the community! 🚀
Thanks man :D the tutorial really helped me; you've definitely got my sub!
Glad to hear that! I'm happy the tutorial could help you. Thank you very much for subscribing; I really appreciate your support!
Really awesome video.
It took me a real step forward!
Thanks for the feedback, glad it helped.
@@AIKnowledge2Go if you have any good tips for "hands", I'll gladly take them. 😄
@@AIKnowledge2Go YouTube just hid my link... the biatch
@@Asterex try negative embeddings like e.g. civitai.com/models/116230/bad-hands-5
Otherwise, After Detailer also has the option to inpaint hands; I have a video on After Detailer too, although the focus there is on faces.
@@AIKnowledge2Go I'll try the negative embedding! Thank you very much!
Awesome video, excited to see your other and future videos!
Thank you for your enthusiasm! It's great to know you found the video helpful. I'm excited to continue creating more content for you and the community. Stay tuned for more!
Awesome video man!! Thank you!
Thanks for watching! I'm glad you enjoyed the video!
Great job, the angle on the phone thing she is holding is a little off but overall your workflow and tips are appreciated.
Thank you for pointing that out! You're absolutely right, there is still room for improvement. Happy creating!
Really great guide, a well earned sub and I hope you continue to produce content of this calibre and earn success!
Thank you so much for the kind words and support! I'll do my best to keep delivering quality content. Stay tuned for more. Happy creating!
As a non-German mathematician, I find it very confusing to hear Germans say "Yoo-ler" instead of "Oy-ler" hahaha. Great video!
Yeah, I took a guess on how to pronounce that. 😂 Looks like I guessed wrong. Thanks for the feedback and for watching!
Thiss isss the besssst sstable diffusion vid🖤🙏🏾 ssthanku sir
Thank you so much for the kind words! I'm glad you found the video helpful. Stay tuned for more content!
Very nice workflow. good tips. This can be helpful to a lot of people.
Thanks for sharing
I'm really glad to hear that you found the tips and workflow helpful! Sharing this knowledge and helping others is exactly why I create these videos. Your support and feedback are much appreciated. Thank you!
This is more or less my workflow, only that I prefer DDIM over Euler a, and DPM++ 2S a Karras over DPM++ 2M Karras.
I didn't know it when I was first experimenting, but the two that I picked add noise between steps, which further randomizes and creates variation. Most other samplers (ones that aren't DDIM and don't have 'a' in the name) end up converging at about 150 steps to the same result because they don't add noise between steps.
Interesting! I've noticed similar patterns with different samplers. The noise addition in DDIM and those with 'a' really makes a difference in creating unique variations. I'll definitely consider experimenting more with DPM++ 2Sa for refinement. Thanks for sharing your insights!
Very good video, well explained; I'll try it out in the next few days.
Thank you very much for your positive feedback! I'm glad to hear you liked the video. I hope the tips and tricks help you when you try it out. Have fun!
You're getting a hand and a robot thing in your leg inpainting because they're still part of the original prompt.
It's important to trim out portions that aren't applicable to what you're inpainting.
You're absolutely right. My skills in inpainting have improved since then. I'm considering re-recording this video in the future. Thanks for pointing that out, happy creating.
Adding keywords for the face tends to decrease the chances of getting mangled faces you'll have to fix later. Try (highly detailed face, textured skin, detailed eyes).
That's a great tip, thanks for sharing! However, it's also worth mentioning that there will come a time when you'll want to upscale your image to enhance the overall quality. In that case, you might still need to deal with some mangled features, but hopefully fewer with your suggested keywords.
I am using ADetailer; it's amazing!!
Absolutely! Adetailer has been a game-changer. If you haven't seen it yet, check out my video on it; I delve into its benefits and how it can save you a lot of time. Cheers!
Omg you're a life saver!!!
I have one issue though, and maybe more people can relate: inpainting does not work for me. I tried giving it prompts, painting one small area, multiple areas, restarting the whole webui; nothing seems to work.
I can see the image being rendered nicely and it looks good, but I get the same result in my folders.
Does anyone know any fix?
Thank you for the kind words! Regarding your issue, I've experienced similar problems when using ControlNet for inpainting in the current version. If you're not using ControlNet, perhaps a fresh installation might help resolve the problem?
@AIKnowledge2Go Hello!
I've solved my issue, I don't have them right now but I've had to put some commands in the webui batch file and it works fine.
I don't know why, maybe because I have an AMD computer
Thank you for the great tips.
You're welcome! I'm glad you found the tips helpful. Happy experimenting!
Thanks for the helpful video, Chris! It gives me a great head start! Does this workflow basically work for photorealistic images too (or for other things)? With a different model and parameters then, of course, I assume...
Thanks for the feedback! I'm very glad the video helped you. Yes, I use this workflow for all kinds of images, whether anime or photorealistic. Occasionally I use After Detailer specifically for faces. If you're not familiar with it yet, I have a video about it on my channel.
Love this video. I mainly learn through YouTube vids, and I've been an avid Stable Diffusion user since the onset. Keep making content like this that explains your actions & choices, and I know you're gonna have tons of subs soon.
Thank you so much for your kind words and support! I'm thrilled to hear that you find the videos helpful. I will certainly continue to create content that explains my process in detail. Your feedback and encouragement mean a lot to me. Stay tuned for more content!
fantastic video, ever so helpful, liked and sub'ed🍻
Thank you so much for your kind words and support! I'm thrilled to hear you found the video helpful. Cheers to more learning and exploration together! 🍻
Really amazing, thanks! I was using it completely wrong before.
I'm so glad to hear that the video was helpful for you! It's wonderful that you're now able to use the tool more effectively. Thanks for watching and happy creating!
I don't have a 'resize by' option. When I go to inpaint and img2img I just have another set of width and height sliders.
To see the 'resize by' option, you may need to update your version of Automatic 1111. The latest version as of today is 1.3.2. I've confirmed that this feature isn't related to any extensions, as I still had access to it even after disabling all of my extensions. Please try updating your version.
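If you installed via git (the standard install method), updating is usually just a git pull in the installation folder; the folder name below is the default and may differ on your system:

cd stable-diffusion-webui
git pull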
Hello, could you please tell me how you enabled the bars on the side of your image at 4:16? The ones that let you pull the image up and down and to the sides. I can't find anywhere how to enable them; for me images just get squeezed smaller, and for really wide images that makes it so hard to work with.
Oh, those scroll bars were actually a result of using ctrl + mousewheel to zoom in on my browser. It appears that feature might have been changed or removed in newer versions. 😞
However, I'd recommend the Canvas Zoom extension for A1111. You can find it under Extensions -> Available -> Load from, or check this direct link. I haven't tried it myself, but it seems to do what you are looking for:
github.com/richrobber2/canvas-zoom
Hope this helps!
Great tutorial, thanks!
Glad you enjoyed it!
Hi there! Thank you for the guide and tutorial. Very useful information, especially for users like me who only know how to put in prompts and click generate. Thank you again!
You're very welcome! I'm glad to hear that you found the guide and tutorial helpful. It's always my aim to make the process of AI art generation more accessible and easy to understand. Don't hesitate to explore my other videos for more tips and tricks. Thank you for your kind words and support. Stay tuned for future videos!
Wow man, this is exactly what I was looking for. This really helped a lot with learning a better workflow; I'm making all kinds of crazy new images now. Keep up the good work! Liked and subbed.
Thank you so much for your enthusiastic feedback! I'm thrilled to hear that you found the workflow helpful and that it's inspiring you to create all sorts of new images. Your support really means a lot to me. Keep experimenting and creating, and stay tuned for more content. Thanks again for subscribing and liking!
@@AIKnowledge2Go I don't suppose you know of a good way to reproduce the same character? I know about LoRAs, but I am trying to generate multiple pictures of the same character that I can then turn into a LoRA... at least I think that's how it's done, LOL, I'm a noob :)
Thanks for the tutorial.
You're welcome! I'm glad you found the tutorial helpful. Stay tuned for more.
@@AIKnowledge2Go yes
Do you struggle with prompting? 🌟 Download a sneak peek of my prompt guide 🌟 No membership needed: ⬇ Head over to my Patreon to grab your free copy now! ⬇ www.patreon.com/posts/sneak-peek-alert-90799508?Link&
You're very confident; this is the first video of yours I've watched. How long did it take you to teach yourself this? There's very little I completely understand about tweaking arbitrary values and knowing how that changes the image. I just read about how to get the software running on Google Colab, which I never knew existed or how anyone would use it until now.
I appreciate your observation! It took me several evenings of dedicated learning and experimentation to grasp everything fully and understand the intricacies of the values. If Google Colab works for you, great. I read that A1111 would no longer run on Colab, but that info is at least two months old. If you find yourself challenged by the basics, I recommend starting with my foundational tutorials. Here's a playlist that can guide you:
ruclips.net/video/SHikMK39Q30/видео.html.
Do you do tutorials for us cheapskates who can't afford a powerful enough computer? I'm on a $1000 Windows Surface 4 with an AMD Ryzen and Radeon graphics (I think?), and it's taken me almost 24 hours to get it up and running at an acceptable level (lots of code changing, deleting venv files, and setting (having to, lol) the arguments to make it work on this computer). It's been an absolute nightmare for me, but a small render with a handful of prompts is now coming in at under 2 minutes.
Another problem I'm having is running out of GPU memory when I use too many prompts or sampling steps etc. It's been a real head scratcher! @@AIKnowledge2Go
That tip about the mask still being there although it doesn't appear is priceless! You just saved my entire life.
I'm glad to hear that tip was helpful for you! I don't know why this hasn't been fixed already. Maybe one day I'll file a bug ticket on the Automatic 1111 GitHub. On the other hand, this move has become second nature to me, so even if this bug is fixed one day I will clear the mask anyway. :) Thanks for watching and happy creating!
u earned a new sub
Thanks for joining the community! I’m excited to have you here!
insane dude, thank you very much
Thank you for your comment! I'm glad you found the content helpful and enjoyed it.
Damn, this is a very nice workflow. I usually use ADetailer to fix the face from the start, and for upscaling I go to img2img and use tiling with Ultimate SD Upscale. I don't know why... that's just the workflow I've had. I'll try this now.
It's always interesting to hear about others' workflows! The beauty of these AI tools is that there are so many ways to use them. It sounds like you have a process that works well for you, which is fantastic. My workflow is just one way to approach creating AI art, and I encourage you to try it out and see if it suits your style. Thanks for watching and happy creating!
Good job! I like it!
Thank you so much! I'm glad you enjoyed it. Your support means a lot! 😊
If you're interested in learning how to save time inpainting body parts using a handy tool in Automatic 1111, I recommend checking out this video next: ruclips.net/video/y3DxX9s0NhQ/видео.html
Really good video :) By the way, do you think the 'vae-ft-mse' VAE is worth having? I've got it but can't see it anywhere in my Stable Diffusion, so is it worth trying to get it to work?
@@Marcus-si7su Thanks for your feedback. You can set up vae-ft-mse in A1111 under Settings -> Quicksettings list; there you add 'sd_vae' and restart A1111, and then you get the dropdown. But you will have to download the VAE first.
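For anyone else setting this up: the downloaded VAE file goes into the models/VAE folder of your installation, roughly like this (the file name is just an example):

stable-diffusion-webui/
  models/
    VAE/
      vae-ft-mse-840000-ema-pruned.safetensors

The entry to add in the Quicksettings list is sd_vae; after a restart the dropdown appears at the top of the UI.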
Thanks a lot for this. How could you have fixed the arm? It looks too small. The same way as the legs?
To the entire ai art community:
Your lack of creativity is astounding: here you have this amazing new art tool that allows a whole new multiverse of possibilities, and all you guys come up with is essentially the same few pinup girls.
It's quite cringe, guys.
What is the best computer to use for Automatic?
The ideal computer for running Automatic1111 largely depends on your budget. If cost is not a concern, I would suggest going for a system with an Nvidia RTX 4090 graphics card and an Intel i7-9700K processor or faster. However, keep in mind that when it comes to running Automatic1111, the GPU and VRAM are significantly more important than the CPU.
How much was your computer? lol. I mean, I'm utterly jealous of how fast yours works, and I don't know where to even start looking for good brands @@AIKnowledge2Go
Amazing! Thank you so much for creating this! Amazing insights - love your work looking forwards to learning more!
Thank you for your kind words! I'm thrilled you found the content insightful. Stay tuned for more! 🌟
It's more than obvious that you're German ^^ However, I have to say it was hard to understand what actually does what. I use Easy Diffusion as a web UI on my PowerEdge R630.
It was a lot of information packed into one video, but I would have liked to see a few examples. I'd love a video series along the lines of "let's create an image today", maybe picking some topics or randomly throwing a few words together; basically what a beginner would do too. It would certainly be interesting to simply follow you through the process and hear your reasoning for how and why you do things the way you do.
But very informative, and it has already helped me quite a bit.
Ha, you got me there 😂 German through and through 😂
You're absolutely right; the video is aimed more at people who have already had some contact with Automatic 1111. I can recommend this video of mine, where I explain the individual settings:
ruclips.net/video/SHikMK39Q30/видео.html
If you follow the playlist, I've covered the individual topics in order. I hope that helps.
Good video and explanation! I'm just a bit confused about one thing: during inpainting, is it really necessary to have an accurate prompt for what you want specifically in that area? And what happens when you leave it blank, does it just try to 'autofill'?
Hey there! Your prompt can guide the AI for better inpainting, but feel free to experiment and see what surprises the blank canvas brings! What I didn't do in my video is change the prompt; I actually should have. Leaving it blank can work, but I suggest you use ControlNet inpainting.
"Before watching your video clip, I kept trying to use 'Hires fix' foolishly, and the result is that the pictures look very, very bad. Thank you very much." and I'm a newbie
I totally get that! The 'Hires fix' can be quite tempting for many, especially when you're just starting out. I've been there. Glad my video could steer you in a different direction. Happy creating.
@@AIKnowledge2Go "My English is quite poor, I can only follow what I see. I have to watch your video clip a few times before I can do it, well, a bit slow but anyway I feel lucky that you shared your experience. Once again, thank you very much."
Chat GPT Translate :D
I'd actually like to know more about the syntax: what to set in brackets, the values, and what options there are.
I see a lot of stuff like (purple hair:1.2), sometimes with or without the colon, sometimes with multiple brackets, and so on...
While I use prompts like that myself and play around with it, I feel like I roughly get how to use it, but I could very well be completely wrong about this as well.
If you already have a video on this, just let me know; otherwise this would make a very good video imo.
I absolutely understand your query! Check out this video for a detailed understanding:
ruclips.net/video/IEYMVIbPbQQ/видео.html
However, if that feels too advanced, I recommend starting with this one:
ruclips.net/video/SHikMK39Q30/видео.html
I'm also currently working on a comprehensive guide on prompting. Stay tuned!
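In the meantime, here's a quick cheat sheet of the attention syntax as the A1111 documentation describes it (the multipliers are approximate emphasis weights):

(purple hair)      increases attention by a factor of 1.1
((purple hair))    nested brackets multiply: 1.1 x 1.1 = 1.21
(purple hair:1.2)  sets the weight explicitly to 1.2
[purple hair]      decreases attention by a factor of 1.1
\(purple hair\)    escaped brackets are treated as literal text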
Thank you@@AIKnowledge2Go
Do you have some source to guide our research? A lot of the parameters I've seen in this video weren't explained anywhere; I assume part of it is empirical knowledge. Still, if you can link us to a great in-depth tutorial explanation, it would save me a lot of hard work! I learned a ton from this video, and it helped me fix some issues I've run into many times!
Hi there! I'm thrilled to hear that my video was informative and helpful for you. Regarding the settings you're curious about, could you specify which parameters or aspects you're looking into? This will help me guide you better. Also, I have two basic videos on prompting and basic settings on my channel which might be just what you're looking for. I'll drop the links here for easy access:
ruclips.net/video/MftRapF4AaU/видео.html
ruclips.net/video/SHikMK39Q30/видео.html
Feel free to check these out. I also have a whole tutorial series on Stable Diffusion.
My problem with using img2img for upscaling is that I work with models trained on specific faces and when I do it this way the likeness is lost. So for me Hires Fix works better. But nice video and many great tips! I didn’t quite understand the reason for decreasing the resolution on the face though.
Thank you for sharing your experience with img2img for upscaling. If the likeness is getting lost, it might be due to the denoising strength being set too high. However, if Hires Fix works better for you, that's absolutely fine. I often inpaint faces later and then use a detailer, which I've found to be quite helpful. Thanks for your feedback and for watching the video!
Awesome videos!! I would love to see your workflow for hyper and photo realistic images. You have some great techniques!
Thank you so much for your kind words and feedback! I'm thrilled to hear that you're finding my techniques helpful. As for your suggestion, I absolutely plan on sharing my workflow for hyperrealistic and photorealistic images in a future video. While I can't specify a date at the moment, I promise it's on my list. Stay tuned, and thanks again for your support!
Thank you so much 🙏
You're welcome 😊
Hello @AIknowledge2Go, this is a really helpful video. Thank you for coming up with such great content. I have a quick question and would appreciate it if you could provide some valuable suggestions. I have always had challenges working with image2image generation; it seems like the final result is nowhere close to my input image. For example, if I want to make minor edits to my normal human images, like changing the clothes color, hairstyle, or hair length, while keeping the rest of the details intact, should I go for the image2image option in SD or should I use some other method? I am using the absolutereality checkpoint to ensure the pics are realistic. Any advice or suggestion would be greatly appreciated.
Thank you for your kind words! If you want to make minor edits to your normal human images while keeping the rest of the details intact, using the inpaint option of Image2Image in Automatic 1111 is a good choice. However, for specific modifications like changing clothes color, hairstyle, or hair length, you may need to experiment with different prompts and parameters to achieve the desired results.
In addition to Image2Image, you can also explore the ControlNet model for inpainting, as it can be effective in preserving the overall details while making specific modifications. Remember to adjust your prompt accordingly to focus on the areas you want to edit.
It's important to experiment and iterate with different prompts, models, and parameters to achieve the desired outcome.
@@AIKnowledge2Go Thank you so much for your response. I totally agree, a detailed prompt + config details are the key here, which I haven't mastered yet; still learning 🙂. Btw, if you don't mind me asking, I would like to know if there is a way to have a model trained to simply take an input image and output the same pic with a changed hairstyle. Considering I have to work on multiple images, it would be difficult to keep writing a prompt for each image to get the desired result. Thank you so much for looking into this 🙂
I would prefer the whole screen to be visible; that would make the videos much more efficient to learn from.
Thank you for the feedback. I always aim to provide the most clarity in my tutorials. I'll keep your suggestion in mind for future videos. Happy creating.
Useful, thanks.
I'm glad you found the video useful! Stay tuned for more AI Art tutorials!
Great tutorial! How do I change my Stable Diffusion to this dark theme?
Thank you, I'm glad you enjoyed the tutorial! To change to a dark theme, there are a few options you can try:
Add /?__theme=dark at the end of your browser's URL when you're on the Automatic 1111/Stable Diffusion page.
Try the Dark Reader plugin for your browser.
Open your webui-user.bat file and add set COMMANDLINE_ARGS=--theme dark. Please remember to make a copy of your webui-user.bat before making any changes!
Hope one of these solutions works for you!
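For the first option, assuming the web UI is running at the default local address (the port may differ on your setup), the full URL would look like this:

http://127.0.0.1:7860/?__theme=dark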
@@AIKnowledge2Go Thank you so much... adding this line to the bat file worked!
I never considered sending an initial image I like to img2img. I'll have to try that. I normally copy and paste the seed of an image I like and tweak settings from there.
I found it a bit more intuitive compared to the high res fix workflow and in my experience, it often led to better results. But the beauty of these tools is the flexibility they offer. Definitely give it a try and see how it works for you!
@@AIKnowledge2Go I have been trying this method, and I am amazed how much better this technique is. I get so much more detailed and hi-res results. Thank you for sharing this.
Just one question: in Automatic1111, does the refiner model not work for image upscales?
When you refer to the refiner, you mean SDXL models, right?
Actually, I haven't done upscaling with it. My system crashes when I go higher than 1024x1024 with SDXL models. I still use SD 1.5 a lot, because as of now, in my opinion, you can get better results with 1.5 if you know what you are doing.
So there isn't a way to inpaint and give it new prompts that it knows to apply only to the masked area?
Yes, there is. Just change the prompt. In fact, that's the reason I have this little "spider thingy" on her leg: I did not change the prompt. Happy creating.
Very nice video
Thank you for your kind words! I'm glad you enjoyed the video. Stay tuned for more content.
Took hours to generate the first set of 8 images... My graphics card is "MSI Gaming GeForce RTX 3070 LHR 8GB GDRR6 256-Bit HDMI/DP Nvlink Torx Fan 4 RGB Ampere Architecture OC Graphics Card (RTX 3070 Gaming Z Trio 8G LHR)" Is this normal/expected? Every other setting I think I got to match yours in 1111. Thanks for the great video!
Oh I actually had the wrong model loaded. It went faster once I got the right one in, about 10 minutes. Thanks again!
10 minutes is still very long with an RTX 3070. I needed about 1.5 minutes for 8 images with my old 2080 Super. Do you have xformers installed? In your stable diffusion web UI folder, find the webui-user.bat file. Open it with a text editor like Notepad. Add --xformers if your "set COMMANDLINE_ARGS=" line does not have it, so it should look like this: set COMMANDLINE_ARGS= --xformers or similar.
Also, between set COMMANDLINE_ARGS= and call webui.bat, write "git pull" on a new line. This keeps Automatic 1111 up to date.
Installing xformers sometimes takes 3-4 restarts of the Automatic 1111 server. It's strange. After you save the file, you have to start A1111 via webui-user.bat instead of the webui.bat file. Hope that helps.
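Putting both tips together, an edited webui-user.bat would look roughly like this; the surrounding lines are the stock defaults, and only --xformers and git pull are the additions:

@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--xformers

git pull

call webui.bat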
@@AIKnowledge2Go I did what you said to install the xformers and I do think it's working a bit faster now, thank you again!
Great video. I'm really sorry, but I had to laugh so often because of the accent. Still very helpful though :)
Thanks, yeah, I can't hide the accent 😂 I like to think it's gotten better in my newer videos 😂
Hey, this really is a very good tutorial, thanks for that. I still have one problem, maybe you can help: when I've masked a certain area in inpaint, say the legs for example, and then press generate, it doesn't give me new legs; instead it looks like it recreates the entire image inside the masked area. I used the settings from the video and also experimented with different noise strengths.
Thanks for the feedback. If it generates that image (or a similar one) again inside the masked area, your denoising strength is too high. What I don't show in the video (because I didn't know better back then) is that you should/can/must adapt your prompt.
If you want to inpaint a face, for example, write something like "image of a face of...". What you should leave in the prompt is everything that concerns the rendering (HDR, 4K, cinematic shot).
Inpainting always involves a bit of trial and error.
Hope that helps.
@@AIKnowledge2Go Thanks for your reply, it was the prompt. I figured that out myself later on 😅
The inpainting part on the legs... it gives me random, weird, deformed output. Can we somehow have greater control over what it renders?
To have more control, I suggest you use ControlNet. I have a newer version of this video right here: ruclips.net/video/wyDRHRuHbAU/видео.html
Great tutorials! Thanks to your videos, my results have gotten many times better! I just have one problem with the inpainting. I followed your video step by step, but for me almost nothing changes. No matter which setting I change, even if I turn the denoising strength all the way up or set the seed to -1, I get four nearly identical results after generating. Do you happen to have any idea what could be causing this?
Never mind, solved it! After adding "--no-half-vae --no-half" to the COMMANDLINE and disabling my browser's ad blocker, it works now.
That's great to hear! Sometimes it's the little things that make the difference. Have fun creating your projects! 😊
Hi, is it possible to remove specific objects? Sometimes the picture has only one bad thing about it and I can't remove it.
Hi, yes it is. You may want to use ControlNet for this. Here is a newer version of this video: ruclips.net/video/wyDRHRuHbAU/видео.html
To remove objects, just write what you want to have there instead (usually the background) as your prompt when inpainting. You need to experiment with the denoising strength.
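As a purely illustrative starting point (these values and the example prompt are assumptions, not from the video), an object-removal inpaint could look like this:

Mask:               paint over the unwanted object
Prompt:             "plain brick wall, soft evening light"  (describe the background, not the object)
Masked content:     fill
Denoising strength: start around 0.6 and adjust from there
Inpaint area:       whole picture, so the filled area blends with its surroundings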
Nice workflow! I tried it and like it. +1
Thank you for your positive feedback! I'm glad to hear that you tried and liked the workflow. Your support is appreciated! If you have any questions or if there's anything else you'd like to see in future videos, please feel free to share.
For me, during the img2img inpainting, if I render with "restore faces" on, the eyes always come out blurred and wonky. If I turn it off, the eyes are fine, but of course they are not "upgraded" like they are (in theory) when "restore faces" is turned on.
That's indeed an interesting behaviour. The image resolution could play a role in what you are experiencing, or you have too many LoRAs, textual inversions etc. active. What you can try: I have a very handy video about After Detailer. It's an extension that can automate inpainting of faces and other body parts and produces great results. I didn't mention it in this video as I wanted to keep things simple. Here is the link to the After Detailer tutorial video: ruclips.net/video/y3DxX9s0NhQ/видео.html. It might give you some insights on how to achieve better results with face restoration.
I don't understand what I am doing wrong. When I am inpainting the face I follow your steps precisely and all my settings are the same as yours but I never get a new face on the image. I have tried inpainting other parts of the image and get the same results. I have tried changing all the settings individually just to get some change but nothing ever changes. All the images are output precisely the same as the original. What am I missing?
I'm sorry to hear that you're experiencing difficulties with generating new faces using the inpainting process. It's possible that there may be some issues with your installation or settings. Here are a few suggestions to troubleshoot the problem:
Make sure that your Automatic 1111 software is up to date. The latest version as of now is 1.4.0. If you're using an older version, consider updating to the latest release.
Double-check that all your settings match the ones shown in the tutorial video. Pay close attention to any specific prompts or parameters that are mentioned. Even a small difference in settings can affect the output.
If possible, try generating images with different prompts or inputs to see if the issue persists. This can help determine whether the problem is specific to the face inpainting process or a more general issue.
Consider performing a fresh install of Automatic 1111. Uninstall the current version, then download and install the latest version from the official source. This can help resolve any potential installation issues or conflicts.
If you continue to experience difficulties, it may be helpful to seek assistance from the Automatic 1111 community or support channels. They may be able to provide more specific guidance based on your specific setup and issue.
First of all, greetings and thanks for the video; a beautiful image. I wanted to ask why you didn't use the Hires fix. From what I understand, it does the same as what you did but saves you a step, i.e. starting an img2img process. Why do you advise against its use?
The Hires fix can change many details compared to the original image.
Thank you for your comment and your kind words about the video. @hoasiai is spot on: the Hires fix does indeed have the potential to significantly alter the image, as does changing the sampler. My preference is for a straightforward workflow. I start with prompt engineering, and once I'm satisfied with the composition, I move to Image2Image to boost the quality while changing the sampler, without affecting the composition. I hope this clarifies my approach, and thanks again for your question!