- Videos: 9
- Views: 105,669
Intelligent Image
United States
Joined: 6 Mar 2024
Welcome to Intelligent Image, where I explore the intersection of AI art and digital painting. My goal is to fuse the AI generation process with traditional 2D and 3D image creation methods. I want to solve the major issues that prevent AI from being integrated into a creative workflow, mainly issues of creative control and image authorship.
(KRITA AI) REGIONAL PROMPTS Solve a Big Problem
Hey everybody! Today I'm going to show Regional Prompting within Krita with the Stable Diffusion Generative AI plugin. I'll cover Regions, Live Mode, LoRAs, and editing images.
Resources:
Krita: krita.org/
Plugin: github.com/Acly/krita-ai-diffusion
0:00 Intro
0:25 Regional Prompting
2:58 Adding LoRAs to Regions
3:48 Editing Regions
4:44 Regions in Live Mode
7:25 Editing Images with Regions
10:48 End Screen
Views: 2,733
Videos
Struggling with PONY DIFFUSION? Here's Why
9K views · 1 month ago
Today, we will be looking at how to get the best quality images from models based on Pony Diffusion V6 XL. I am demonstrating in ComfyUI, but these tips apply to all Stable Diffusion interfaces. Pony Diffusion V6 XL: civitai.com/models/257749/pony-diffusion-v6-xl Scores: Enter this entire string of text for every model based on Pony Diffusion V6 XL: score_9, score_8_up, score_7_up, score_6_up, ...
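As a side note, prepending those quality tags to every prompt can be scripted; a minimal Python sketch (the tag list below is copied from the truncated description above, so it is illustrative rather than the complete recommended string):

```python
# Quality ("score") tags recommended for models based on Pony Diffusion V6 XL.
# Only the tags visible in the description are included; the full string is
# truncated there, so treat this list as illustrative.
PONY_QUALITY_TAGS = "score_9, score_8_up, score_7_up, score_6_up"

def pony_prompt(subject: str) -> str:
    """Prepend the Pony quality tags to a subject prompt."""
    return f"{PONY_QUALITY_TAGS}, {subject}"

print(pony_prompt("1girl, knight, detailed armor"))
```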
(KRITA AI) BEGINNERS guide to KRITA
9K views · 2 months ago
Hey everybody! Today I'm going to show you the tools and options within Krita you are most likely to use when creating images with the Stable Diffusion Generative AI plugin. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Music by CreatorMix.com 0:00 Intro 0:25 Interface 8:25 Inpainting/Making Selections 14:27 Upscale 15:55 Face Refine/Transform/Transparency Masks 23:21 ...
(KRITA AI) STEP-BY-STEP Using Stable Diffusion in Krita
10K views · 3 months ago
Hey everybody! Today I am going to go over the process of actually creating something with the Generative AI plugin for Krita. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Models: VXP: civitai.com/models/311157/vxp-xl-hyper Music by CreatorMix.com
(KRITA AI) The NEW CONTROLNETS are a BIG DEAL!
7K views · 4 months ago
Update to the Stable Diffusion Plugin for Krita. New ControlNets and settings. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Update Instructions: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#how-do-i-update-to-a-new-version-of-the-plugin Installation Instructions: www.interstice.cloud/plugin Samplers Documentation: github.com/Acly/krita-ai-diffusion/wiki/Sampl...
(KRITA AI) INTRO to Stable Diffusion for Krita PART 2
16K views · 4 months ago
This is part two of my complete introduction to the Generative AI for Krita plugin where I go over the tools and features. Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Installation Instructions: www.interstice.cloud/plugin Required Models and Nodes: github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup CivitAI: civitai.com/ Music by CreatorMix.com
(KRITA AI) INTRO to Stable Diffusion for Krita PART 1
51K views · 4 months ago
This is part one of my complete introduction to the Generative AI for Krita plugin. PART 2 HERE: ruclips.net/video/ziXTE6mC_38/видео.html Resources: Krita: krita.org/ Plugin: github.com/Acly/krita-ai-diffusion Installation Instructions: www.interstice.cloud/plugin Required Models and Nodes: github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup CivitAI: civitai.com/ Music by CreatorMix.com
Painting with Stable Diffusion (Speed Painting in Krita)
739 views · 6 months ago
Welcome to Intelligent Image where I am exploring the intersection between AI art and digital painting. This is a speedpaint combining traditional digital painting techniques and render passes using Stable Diffusion with ComfyUI. Please see my other videos for a more in-depth look into the process of rendering your digital paintings with AI and incorporating AI into your digital painting workfl...
Painting with Stable Diffusion (Speed Painting in Krita)
1K views · 6 months ago
Welcome to Intelligent Image where I am exploring the intersection between AI art and digital painting. This is a speedpaint combining traditional digital painting techniques and render passes using Stable Diffusion with ComfyUI. Please see my other videos for a more in-depth look into the process of rendering your digital paintings with AI and incorporating AI into your digital painting workfl...
Great video, please keep producing great content. I have switched over to primarily using Krita instead of Comfy because of your videos. PSA: For anyone struggling with Regions, it took me some time to realize that Regions don't fully work with SD 1.5. You can use "generate active region only" in 1.5 via the alpha button (next to the generate button), but when I tried to follow the instructions in this video with a 1.5 model (I tried several) and generate all regions at the same time, it completely ignored the regions. When I switch to an SDXL model, it works perfectly. There may be a workaround for 1.5, but I haven't figured it out yet; SDXL seems to work fast enough in Krita that it's probably not necessary.
PSA Update: The Regions seem to work MUCH better in 1.5 if you DON'T include anything in the "common text prompt" field or in the background field.
Thanks! I was still using 1.5 almost exclusively up until a couple of months ago. Pony Diffusion models are my go to now, so I haven't tested the 1.5 models with Krita's newer features.
I'm using A1111's web UI. Kind of curious what you are using? Looks like some kind of flow-based programming IDE.
nvm it's ComfyUI
Yes, ComfyUI is shown in the video. I mainly use the plugin for the Krita paint program which connects to ComfyUI for my generations. You can see it in my other videos.
Thanks man
Short and to the point, and hilarious.
Thanks!😁
Hey, I have a question, but I don't know where to contact you to ask it.
You can DM me on x/twitter. There is also an email link in my channel bio. x.com/intellimageai?t=Z8azcVj42vbhaM4m9zz5PA&s=09
I love how you demonstrate what you're describing as you're doing it. One thing that I'm curious about is if your methodology will allow for incorporation of OpenPose wireframe models to get the posing right?
I feel like I underutilize ControlNets. I haven't actually tried them with regions, but I would expect it to work. I would make sure the ControlNet doesn't overlap different regions.
"Men only man, if man beat animal with stick. Use stick to hunt, use stick to make fire, use stick to build hut." This is what all of you hating on AI sound like.
Are you dumb?
Let's see if you're still saying this kind of stuff once AI is taking over and hunting every last human on this planet.
@@JuhoSprite We cant build a phone to last a day what makes you think ai will have enough juice to hunt us ? :D
@@RichardKincses What? We obviously have the technology to build a phone that lasts for days 💀 look at most Nokia phones. We could also have phones last for weeks; it's just not practical and would increase the size of the design as well as the cost. There are already cars operating by themselves, using AI to detect humans. It's clearly already a reality that robots can specifically target humans, so a bunch of robots getting hacked to attack us all isn't a far stretch.
Great work!
Thanks!
Aaaah, Pony ... I'm struggling with ControlNet and there is no fixing it. Making the control map for a character reference sheet, even with 3 data maps, gives much weaker results. So a hybrid workflow it is.
Addressing Pony XL issues: IT'S NOT KRITA'S FAULT. It's purely Pony that is pure jank. Pony-based models are so broken at their core that they bring a lot of problems, so here is the list:
1. Broken LoRA compatibility.
2. Much weaker ControlNet effects, and the Union ControlNet is so random.
3. The Fooocus inpaint patch is now useless, and it's pretty much mandatory to avoid needing an inpainting model.
4. By extension of 1), Live mode won't work since it relies on a Hyper LoRA.
5. By extension of 1), IP-Adapter is very unpredictable.
This is problematic since I found that Pony trains on new character data much more easily, which is great for original characters. That makes it my favorite model to train and use, but it doesn't address such glaring flaws.
I would try composing images with non-Pony models and then doing image-to-image with a Pony model to get the Pony style; that should get around most of these problems.
This technique is amazing!
Thanks for your tutorials!
Glad you like them!
Have you tested Flux dev with Krita AI? It seems to me that Flux doesn't follow regions as well as SDXL, but I might be wrong. I need to test this more
I haven't tried flux yet. I wouldn't be surprised if it didn't respond to regions in the same way.
05:36 In ComfyUI, that happens when we try to use an SD 1.5 model in an SDXL workflow or the other way around.
Interesting. I suspected it may be a general problem of sizes getting misaligned somewhere in the workflow.
PONY!!!
More on that next time!
A question: can the LoRAs only be used through a connection, or can I download them and use them locally? I have a GTX 1080 Ti GPU; do you think it can run on my PC?
You would have to download them and use them locally. It should run on your PC, but I'm not sure about the speeds you will get.
LoRAs don't seem to show up when I type?
Do they show up where you can apply them in the settings? If there is only one Lora missing and it is one you just installed, you may need to restart Krita to get it to show up.
@@IntelligentImage-sl7uf Yes, they show up in settings but not when I type <lora:
Where are you trying to type it? It only works in the prompt box of the docker itself, not in the style prompts in the settings.
@@IntelligentImage-sl7uf Yeah, in the docker itself, just as in the video.
I'm really not sure then. It should work the same way in the text prompt node in ComfyUI. You could see if it works there and at least determine if it's a Krita issue.
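For reference, the LoRA syntax discussed in this thread, typed into the docker's prompt box, follows the common `<lora:name:weight>` convention; the LoRA name and weight below are hypothetical examples:

```
masterpiece, 1girl, forest at dusk <lora:exampleStyleLora:0.8>
```

The trailing number is the LoRA strength.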
I'll probably just roll the random dice most of the time because I'm lazy. But I'll have to try integrating it with Krita someday. Thanks for the nice vid.
AI can only do females lmao
Aaaah, I was waiting for someone to cover it. Let's watch.
Thanks for sharing all this information. You're definitely the master of using the Krita plugin to generate truly great art. 🙂
Thanks! It takes a lot of tries to get things to turn out right though 😅
@@IntelligentImage-sl7uf I learned that when removing objects from pictures. You have to keep trying, changing the prompt and the amount, until it finally gets rid of the object and creates a nice new blend.
Is it possible to add a filter for making pixel art with this?
You could try using a pixel art style checkpoint such as this one: civitai.com/models/277680/pixel-art-diffusion-xl
This looks great! Maybe I'll reinstall Krita and the plugin, huh. I have ComfyUI and models already installed and set up. IIRC, the Krita plugin installs an older ComfyUI version, but is it possible to link models from the original ComfyUI folder? I don't know if my question is clear..?
You want to share models between two ComfyUI installations, right? You should be able to do it in the way described here: github.com/Acly/krita-ai-diffusion/wiki/common-issues#how-to-share-models-from-another-folder-with-krita-ai-diffusion-plugin
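For anyone reading along, ComfyUI looks for an `extra_model_paths.yaml` file in its root folder to pull models from other locations; a minimal sketch, assuming Windows-style paths (all names and paths here are hypothetical):

```
# extra_model_paths.yaml — placed in the root of the ComfyUI install
# the Krita plugin manages. Paths below are hypothetical examples.
my_existing_comfyui:
  base_path: D:/AI/ComfyUI/models
  checkpoints: checkpoints
  loras: loras
  controlnet: controlnet
  vae: vae
```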
Ah, it's always exciting to see your videos pop up! Do you post your work somewhere, or have a Discord to share stuff, etc.? Definitely trying this myself; it's so awesome!!!!
Thanks! I think I'll start posting stuff on X (Twitter) for now. I made an account and haven't done much with it. x.com/intellimageai?t=5XAAddlTYLPWefAm4ef3WA&s=09 I need to look into what Discord actually is. I've looked at it a few times and couldn't figure it out. Maybe I'm too old.
@@IntelligentImage-sl7uf😂😂😂
@@IntelligentImage-sl7uf , please let me know which model you are using on this example. I'm using Pony Diffusion V6 XL and I don't get fullbody characters as you do, I only get a very close-up of faces.
@@Lunarsong. Don't use the base Pony; it's busted in many ways. Use AutismMix instead. It's a Pony-based model and LoRA compatible while being more predictable. Also, Pony-based models have issues with inpaint and ControlNet. For a better experience, try some other proper SDXL model that doesn't break controls when you aren't doing anatomy or NSFW work (WildcardXL, Juggernaut, AAMXL, to name a few).
I'm using Pony Pencil XL with a lot of LoRAs. My next video I'm going to go more in depth into setting up Pony Diffusion in Krita. civitai.com/models/432249/ponypencil-xl
I still haven't tried it out. Is there any way to create likeness images, like for a specific character or a person?
Some models have certain characters baked in. For example, AnimagineXL has the Hololive VTubers, but PonyXL does not. When a character is absent, a LoRA can be used on the mentioned checkpoint. For backgrounds, it's good to have them as a separate layer. For a consistent style, use a LoRA. For likeness overall, use either the reference, style, or composition ControlNet.
You can also try the Composition and Face ControlNets to reference a specific image.
@@IntelligentImage-sl7uf I will investigate, thank you; will let you know about my results <3
THANK YOU <3
You got that Hollywood actor or model face. Besides that, thank you for the knowledge.
Thanks 😅
Fan-tas-tic! I've been holding back from moving to the latest version of Krita AI, worried it's going to be another learning curve. You're helping reduce those fears. Btw, what version of Krita AI are you using? Also, have you updated to the newest version of Krita itself? Any issues? Thank you!
Thanks! Great to hear! I have been using version 1.22 of the plugin. It looks like there is already an update I haven't installed yet. I'm pretty sure I have the latest version of Krita installed. The only issues I have had are that the newer versions of the plugin require additional models and custom nodes for ComfyUI. If you are not using the managed server, you will have to set those up manually.
comment for the algorithm gods
Hello buddy, can you help me? How do I increase the text size in that AI prompt box?
I've looked at the settings again, and I don't think there is a way to increase the text size, unfortunately. You could try downloading a screen-magnifying application. I've been meaning to look for one so I don't have to do so much manual zooming during video editing.
Is there any way to link it to, say, Sticky Notes in Windows? Also, I would want you to interact in a Discord where people talk about AI stuff, more ComfyUI and Forge UI; it's Pixorama's.
You can create your own discord too
@@BUILDS-ge8kz I'm not sure what you mean by connecting it to Sticky Notes, but there is no way to have it interface with any other program as far as I know. So many people have asked me to start a Discord that I will definitely look into it. I have looked at Discord before and don't really understand what it's supposed to be. Maybe I can get ChatGPT to explain it to me like I'm five 😅
@@IntelligentImage-sl7uf It's very much a community-of-people thing, where everyone is very, very near to you, like you're all in a virtual room together.
There are just so many hoops to jump through to make this work.
I agree. I wish there was a more straightforward way to get it set up at least.
Does someone have a video to show me how to use SDXL? I'm struggling; I thought it would be similar to SD 1.5, but it isn't. I downloaded Pony V6 and assumed I just put it in the stable diffusion folder in the webui folder. What is the difference between SDXL and "Pony"? I'm trying to use prompts similar to what I did in SD 1.5, but it's giving me ugly results.
1) DON'T USE NEGATIVES
2) DON'T USE NEGATIVES
3) Do not use (((big breasts))) or (huge breasts:1.6); only go up to :1.2
4) Just ask simply in natural language and don't repeat the same words
5) Use 1024x1024 resolution and NEVER go below 832x832 pixels
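A quick before/after of the prompt advice above (both prompts are invented examples):

```
# Avoid: stacked parentheses, weights above 1.2, repeated words
(((dramatic lighting))), (ornate armor:1.6), masterpiece, masterpiece

# Prefer: plain natural language, weights capped at 1.2
a knight in ornate armor standing in a ruined cathedral, (dramatic lighting:1.2)
```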
I'm still getting oversaturated dreck with these settings, and I'm using Krita as well. LoRAs don't seem to even be recognized, or if they are, the horrible art style is drowning them out.
The checkpoint you choose makes a big difference. I used Pony Pencil to make the "good" images for the video. civitai.com/models/432249/ponypencil-xl
@@IntelligentImage-sl7uf Unfortunately, it doesn't matter which I use; any Pony checkpoint does it to me (with the exception of EtherRealMix). It defaults to some amateurish style that it can't escape. Even the distorted signatures that sometimes appear in a corner are always the same. I tried adding a couple of style LoRAs at 100% to push the output in a vastly different direction, but the ugly default style still just mixes in like poison. The only thing I can think to do is jack the LoRAs up past 100%, but it's been my experience that that can harshly affect an image, and I've never seen anyone else say they had to do it.
Make sure there is nothing in your prompt that is locking you into a certain image style. You said you're using Krita, so don't forget about any style prompts that might be in the appended prompts in the settings. Sometimes a certain style or subject can confine the model to a small region of the data set.
I need help please! I have a Legion 5 with a decent 1660 Ti and lots of storage. I've downloaded like 5 models and some more. I've tried the same process as yours on a line art, using the line art model, but it's so slow; it took like half an hour or even an hour for the first result. The dimensions are 1000x1000 and everything is the same, nothing extra. What can I do to make it faster? I feel there is a problem I can't figure out.
There is a section on the wiki that might help: github.com/Acly/krita-ai-diffusion/wiki/common-issues#image-generation-is-really-slow
Hey, sorry for asking, but I'm not sure where else I can find help. Upscaling doesn't work for me whenever I use an SDXL model. I always get an error saying I should download xinsir promax, but Krita already downloaded it when I installed the AI Diffusion plugin. Do you, or anybody reading this, have a solution?
Sorry, I don't know what might be causing this error. Maybe someone else who reads this can help 🤞
On MimicPC, I can't select a ControlNet model for preprocessing (it shows "none"). Can you explain why?
Sorry, I don't know anything about Mimic PC.
I tried to find that cat girl image on your civitai page but... couldn't. Is it hidden?
For some reason, Civit marked it as R rated. It is probably hidden because of your content settings. I thought it was gone once too 😅 civitai.com/images/21471122
Fixing hands in Krita pls
I'll try and incorporate it in a video soon!
@@IntelligentImage-sl7uf Thanks! I found that even if I draw the hands and use the Btw, is there any way to generate the same face? I tried to use the Face controller but I get an "onnx_cpp2py_export DLL load error"
The Face controlnet uses Face ID for the IP Adapter. I don't know what would be causing that error though.
@@IntelligentImage-sl7uf I got it working… well, more or less. The results are blurred faces with artifacts.
Thank you for the help! Now I understand why you really need the VAE XD
Glad it was helpful!
Thank you for the video! The first minute literally described my situation and confusion :D
The same thing happened to me, so I knew it was probably a common experience 😆
Brilliant tutorial, much indebted!
Thanks! Glad it was helpful!
exactly what i needed, thanks good content
Glad it helped!
I haven't commented on a youtube video in ages... but your video was both informative and got me to laugh a few times. Thanks :)
Thanks! I really appreciate that!
If some people leave crappy comments, don't listen to them. We definitely need MORE vids, if possible in real time <3
Thanks for coming to my defense! I have actually learned to enjoy the haters 😆 They just sound silly to me. I have a lot more videos in the works including more process videos!
@@IntelligentImage-sl7uf <3 I'm sorry, what do you mean "in the works"? :D I've checked the links; no videos :DD
Sorry, I mean I am working on them and will post them soon :)
@@IntelligentImage-sl7uf Cool, I'm waiting for them. Dunno if an RTX 3060 will be good for AI Krita. I had an AMD card, but it's hard to configure, so I ordered a budget one <3 I've been trying to learn CG drawing for a long time, but it's hard. Maybe this will help me :DD
Hi, good job, congrats!!! I have a question about creating an animation with a Pony model in ComfyUI using AnimateDiff. No problem creating good images, but it seems that Pony models are not well suited to producing video or animation; results are poor, often blurry, using CLIP skip 2, CFG of at least 7 or 8, and DPM++ or Euler A. Do you have any solution or workflow to suggest? Results seem better with realistic models than anime. Many thanks in advance for your answer.
Thanks! I'm not sure about the answer to your question. Each Pony based model will be different. Some might work better than others. I haven't done much with Animatediff, but thinking about your question has made me want to try it again. I have a few ideas on how to make it work. I'll see if I can work in making a tutorial about it if I can figure it out.
I confirm. Some models like idéalpony or cinematicpony give correct results, but others, like autism for example, are bad. I'll try with other models and add LoRAs. Thanks again.
Your channel is amazing, mate! Keep up the good work! I'm loving the way you are integrating AI into the workflow!
Thanks! I really appreciate it!
Hey, mine's not greyed out.. and I checked "AI Image Diffusion" in [Settings > Configure Krita > Python Plugin Manager], yet I still couldn't find the AI in the dockers. Could you please help me?
nevermind, I got it done.. haha