Wow!! This is Amazing in many ways. Five prompts at the same time with the appropriate batch size. Boom!
Glad you like it!
Excellent walk through and tutorial. Thank you for the great videos, it's very much appreciated!
Glad it was helpful!
Fantastic video, thank you for the in-depth tutorial.
You're very welcome!
Great video!
Thanks!
Can this be used in batch mode, on a sequence of image files extracted from a video?
I'm not entirely sure what you mean by that.
Can you please, please tell us what the extension is that lets you have the Clip Skip and VAE chooser at the top of your webui? I searched and couldn't find anything, but I'm very new to this.
So it's not an extension. Go to Settings, then User Interface, and scroll down to the option called Quicksettings list. The two entries you want to add for Clip Skip and the SD VAE are 'CLIP_stop_at_last_layers' and 'sd_vae'. Then just apply the settings and reload the UI.
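For reference, after adding both entries the Quicksettings list field would look something like this (a minimal example; in older webui versions it's a comma-separated text box, in newer ones a multi-select dropdown):

```
CLIP_stop_at_last_layers, sd_vae
```

Once the UI reloads, both controls appear in the top bar next to the checkpoint selector.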
9:24 Erm, so with this extension, you replace "masking it properly by hand" with "mask it automatically, then manually paint a blocker mask to keep the automatic masking from messing up"? I'd rather mask it manually. Manual masking also lets you add mask buffers only in the areas where you need them, not all the way around.
Good to know.
How did you get ChatGPT to generate Stable Diffusion prompts?
I developed and built my own prompt generator. It's a sophisticated series of commands, instructions, and other material, about three or four pages long, that tells ChatGPT how to generate Stable Diffusion prompts. I make it available for those who want to buy it. I have a Discord channel where you can ask the people who have bought it how they like it: discord.gg/45hKjRNC
Any tips for how I might use this to switch out a person from a photo with an image of myself from a trained model?
You'd probably want to use a combination of inpainting and ControlNet. Mask out the area you want to replace in the photo, then use ControlNet to get the OpenPose skeletal structure of the person in the photo. Then switch to your model and generate the masked area only. I would have to try it, but I think this should work.
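For anyone who'd rather script this workflow than click through the webui, here's a rough sketch of the same idea using the diffusers library; the model IDs and file names are placeholders, and the video itself does all of this in the AUTOMATIC1111 UI, not in code:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

# OpenPose ControlNet preserves the original person's pose while the
# masked region is regenerated by your own trained model.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",  # placeholder: swap in your trained model
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

photo = load_image("photo.png")             # original photo
mask = load_image("person_mask.png")        # white where the person gets replaced
pose = load_image("openpose_skeleton.png")  # pose extracted from the photo beforehand

result = pipe(
    prompt="photo of <your trained subject>",
    image=photo,
    mask_image=mask,
    control_image=pose,
    num_inference_steps=30,
).images[0]
result.save("swapped.png")
```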
@@AIchemywithXerophayze-jt1gg Interesting. So I assume I would still use Inpaint Anything to get the outline, then do the other stuff?
Not necessarily, unless your body type, size, and structure are the same as the destination character's. I would just use inpainting and ControlNet. Using Inpaint Anything may put too much of a restriction on how the AI can conform you to that space.
@@AIchemywithXerophayze-jt1gg Any idea how to reduce blur/bad background after doing this? Lots of artifacts and seams.
Yeah, once you get yourself in there you'll obviously have a seam that doesn't match the rest of the background. Send that image back into inpainting, mask out just the seam, and then regenerate using an inpainting model; that'll blend it in nicely. If you need to, turn the inpaint mask blur up a little, from 4 pixels to around 12 pixels.
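If you drive the webui through its API rather than the UI, the same seam fix maps onto a single img2img call with only the seam masked. A minimal sketch, assuming a local AUTOMATIC1111 instance launched with --api, and using placeholder file names:

```python
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("result_with_seam.png")],
    "mask": b64("seam_only_mask.png"),  # white only along the seam itself
    "denoising_strength": 0.4,
    "mask_blur": 12,                    # raised from the default 4 to feather the blend
    "inpainting_fill": 1,               # 1 = "original": keep the surrounding content
    "prompt": "same scene, seamless background",
}
resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
resp.raise_for_status()
```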
I'm trying to transfer a set of sunglasses from the Internet to a face I've already generated in Stable Diffusion. I'm no computer wizard or digital artist, but I can crop the glasses and, at the very least, 'align' them with the PNG in Paint 3D. I've tried using the 'Reference' and 'Canny' tools to overlay the cropped glasses into the generated image in Stable Diffusion in-paint, but all of the results are messy. Any help on a reliable process or setting configuration for this sort of thing?
What you could try: get the sunglasses you want and, if they're positioned right, overlay them on the face in a Photoshop-type program. Take that into Stable Diffusion inpainting, mask out only the edges of the sunglasses, and then, using a low denoise strength and multiple renders, gradually get it to blend the edges so it looks more natural.
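The overlay step doesn't strictly need Photoshop; pasting a transparent PNG with Pillow does the same job. A quick sketch with made-up file names and coordinates:

```python
from PIL import Image

face = Image.open("generated_face.png").convert("RGBA")
glasses = Image.open("sunglasses_cropped.png").convert("RGBA")  # needs an alpha channel

# Rough placement over the eyes; tune size and position by eye.
glasses = glasses.resize((300, 110))
face.paste(glasses, (180, 240), mask=glasses)  # alpha channel doubles as the paste mask
face.convert("RGB").save("face_with_glasses.png")
```

From there, the masked-edge inpainting passes described above blend the hard edges away.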
Nice new video
Could you please share the negative prompts?
Yeah, just go to my Google share at share.Xerophayze.com and you'll find a Google Sheet there that you can open, then copy and paste the contents into your styles.csv file.
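For anyone new to styles.csv: it's a plain CSV in the webui's root folder with name, prompt, and negative_prompt columns, so pasted rows need to follow that shape. An illustrative row (not the actual shared content):

```
name,prompt,negative_prompt
"Portrait, clean","{prompt}, 85mm, soft lighting, detailed skin","blurry, lowres, bad anatomy, watermark"
```

The {prompt} token, if present, is replaced by whatever is currently in the prompt box when the style is applied.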
You said you'd link your styles.csv file somewhere, but I was unable to find it. Could you point me in the right direction? I've already got my own, but more styles, more wildcards, and more prompts are always welcome!
Yeah, share.xerophayze.com
I keep getting this message when I try to run Segment Anything: "SAM generate failed". Can you help me?
I'm not entirely sure. I would double-check that the models have actually been downloaded, and try uninstalling the extension by deleting its folder from the extensions directory, then reinstalling it.
Me too. I've downloaded the models, deleted and reinstalled the extension, then re-downloaded a model, but I get the same "SAM generate failed" message.
So, if you replace just the environment, won't the lighting on the character no longer match the environment? Any ideas about how to keep the lighting consistent between the character and environment when making large changes like this?
I think a lot of it depends on how much context you give it on the character. When you're changing the entire environment, sometimes you want the mask to overlap the edges of the character, which gives the AI a little bit of information about the lighting. Outside of that, once you've changed the environment, you could always do an img2img pass with a low denoise strength to see if it can correct the lighting.
@@AIchemywithXerophayze-jt1gg Yeah, I've heard that mentioned before. Also, I think increasing the "mask padding" value might also help give it more context, or so I've heard. I'm going to do some experimentation. Thanks for the reply!
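That low-denoise pass is easy to experiment with. As a rough sketch of the idea in diffusers (the model ID and strength value are plausible defaults, not settings taken from the video):

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = load_image("character_in_new_environment.png")

# Low strength keeps composition and identity intact while giving the
# model just enough freedom to re-harmonize subject and background lighting.
result = pipe(
    prompt="consistent ambient lighting, photorealistic",
    image=image,
    strength=0.25,
).images[0]
result.save("relit.png")
```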
What is this "g53ptv" in the prompt?
It's just a unique label for filing purposes. The next version of my prompt generator will actually have that removed, along with the quality indicators at the beginning of the prompt, because with SDXL the attention paid to the beginning of the prompt is a lot more acute.
Good tutorial. What graphics card do you have, and how many GB of VRAM does it have?
Nvidia RTX 3080 Ti, 12 GB VRAM
ChatGPT doesn't work for me; it won't give me prompts for Stable Diffusion the way you showed it. How do you word your question to get the proper answer?
I've created a seed prompt, which I sell on my shop, that gets ChatGPT to work that way: Shop.xerophayze.com
Awesome! I subbed. Could you do a step-by-step on inpainting multiple characters into a background? I would love to see that, as I'm trying to figure out how to do it.
Are you talking about inpainting existing characters to get them to look better, or adding new characters to a scene?
@@AIchemywithXerophayze-jt1gg Adding characters to a scene.
I've been thinking of multiple solutions:
The obvious one is using inpainting. I found this to be very difficult for some reason.
Another would be to pose all the characters with OpenPose; I haven't tried this one yet, as I'm not very skilled yet.
Yet another would be to create each character separately, then add all images into one image and blend the border transitions. Have not tried this one.
What is the best way to add more characters to a scene?
YouTuber: *uses a miracle of technology absolutely for free*
Also YouTuber: My prompt generator is up for sale, you can pay with credit card
Generator: 'exuding', 'this is currently trending', 'daring aesthetic' 😊
Interesting response.
What's the name of the vae extension?
Never mind, solved it. It's not an extension.
Settings -> Show all pages (in the left panel) -> Ctrl+F for "quicksettings", and in that box type, or select from the dropdown, "sd_vae".
Vae-ft-mse-840000-ema-pruned
The tutorial does not work for me. I masked, but the generated result is the same image overlapped and cloned, like a very bad config; impossible to use. Segment Anything is messing up and merging all the colours with the background.
Not sure why your results aren't turning out. Double check all your settings.
@@AIchemywithXerophayze-jt1gg Thanks. I checked my image and used the original size, not a rescaled one, and it turned out well. Thanks a lot!
It was the old AMD Radeon R7 graphics card that was the problem. I switched to Nvidia and it works great, thanks!
Good stuff, but it's really annoying that you don't have a visible mouse pointer!! I waste so much time going back and forth trying to see where you clicked and what's going on in the dark mode you're using.
Oh, not sure why I didn't notice that. Thank you for letting me know. The software I use has an option, I think, to turn the pointer on or off; maybe I accidentally shut it off. For my next video I'll make sure it's turned on.
@@AIchemywithXerophayze-jt1gg Excellent! I like the stuff you're showing and teaching. I tried Inpaint Anything, and yes, it worked, but the way the masking works there is a nightmare. Do you think it's better to make the mask in Photoshop (or equivalent) and use inpaint upload instead?
your "chat gpt prompt generator" is the saddest thing i ever seen, literally nothing and you are charging money for it.
Interesting point of view. Thanks for the feedback.
I understand the frustration with people charging for stuff like that, but couldn't you go about it in a more civilized way? Something like: "Hey, I don't think your GPT prompt provides enough value for you to be charging that much for it, or at all."
It's okay. Sometimes people are frustrated that not everything is free, and so they feel like they have to put it down. I have hundreds of other people who would say otherwise, and I'm constantly working on improving it, adding functionality and features.
I don't normally look at YouTube comments because of dummies like this.