I’m a traditional artist and digital artist, and while I mostly disagree with AI art, I find this intriguing! Especially if you create art that you’ve drawn yourself. If you could draw 90% of the art and AI helps, then so be it! AI is the future and it’s a hard pill to swallow for some.
I agree! I am also a traditional/digital artist and I'm not at all sold on the legitimacy of AI art created from prompts alone. Right now I'm thinking that AI is best put to use at the very beginning of the creative process for concepting, or at the very end for finishing.
It's yet another step in innovation, like when we went into an age of digital tools: those who didn't use those tools had nowhere near the efficiency and speed, and thus felt like they were forced into them professionally.

AI is a similar situation, except with an even lower barrier to entry. Now you no longer need a steady hand and a clear visual imagination to put creations on paper; you merely have to be able to describe them properly and work around the nuances of the model in question.

Yes, it's upsetting to traditional artists, but just like for those who were upset at the rise of the digital age, it's inevitable, and there will always be those that prefer the old ways, like how hand-drawn/painted is now a luxury value addition to a piece of art.

But there will always be a distinction in skill here. Even if it appears to just be "writing prompts", there's a vast difference between those who can use the tool well and those who cannot. Just look at your average AI-generated abomination versus those who combine traditional artistic skill and deeper knowledge of the tools to enhance the process. I don't think that distinction will change no matter how advanced the tools become.
I’ve been messing around with SD for a little bit now. So far I’ve only been using GIMP to do something similar, but if my PC can handle it, this will be a GAME changer.
You can do everything the plugin does by going back and forth between a program like GIMP and Stable Diffusion, but having this level of integration definitely takes things to the next level in terms of time and ease of use.
What interests me, and what I want to try, is things like: block out a pose I want a character to have using guidelines and basic forms, and ask the AI to do all the detailed drawing work as part of a 2D animation pipeline deal. Maybe see if I can create a personal version of these AI tools where I am able to feed in my own training data. That way I can do all the concepting and character sheet work myself, feed it into the computer, hand it frames of blocked-out stick figures, and say: draw out the character from this one character sheet in this pose for each frame.
Definitely possible. You can update a model with new objects - that's how "virtual avatar" creation in Dreambooth is done. And you can use frame interpolation AI to make animation flow more smoothly.
Already possible with the current tools. You can train the model (locally) on your style, concepts, and subjects individually, then mix and match between them or generate from poses you draw. Of course, the more detailed and good your art is, the better and more accurate the result. Check ControlNet for reference-based generation.
I have a question: how can I use the same prompt on, let's say, 100 pictures without loading each picture and typing in the prompt each time? Or can I use different prompts on one picture without typing in one prompt, applying it, typing in the next prompt, and so on? All my prompts are stored in a text file. Thanks.
I had such a hard time with this. Couldn't get the plugin to show up, couldn't get it to talk to a local install of ComfyUI, and after going back and forth I finally realized I had an old copy of Krita installed (4.4.???) and the plugin didn't like it. Once I upgraded to 5.2.??? everything self-installed and was up and running pretty fast. Thanks for showing this off. Now to see if I can set up extra_model_paths.yaml to add my A1111 models etc. to the list.
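In case it helps anyone else trying the same thing: ComfyUI ships an extra_model_paths.yaml.example next to its main script that you can copy to extra_model_paths.yaml and edit. A rough sketch of the A1111 section, where base_path is a placeholder and the subfolder names may differ between installs:

```yaml
# extra_model_paths.yaml -- tells ComfyUI to also scan an existing
# Automatic1111 webui install for models instead of duplicating them.
a111:
    base_path: C:/path/to/stable-diffusion-webui/   # placeholder, point at your install

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: |
        models/Lora
        models/LyCORIS
    upscale_models: |
        models/ESRGAN
    embeddings: embeddings
    controlnet: models/ControlNet
```

Restart the ComfyUI server after editing so it rescans the folders.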
I wish there weren't such technical elements involved in getting this set up and running. I'm going to do a tutorial about getting it set up and connected to ComfyUI since so many people have had problems (including myself). Glad you got it to work!
I'm waiting for the second part of this tutorial. I installed the plugin and I've tried to use it, but I probably messed something up, because mine doesn't work at all like the example at the beginning.
I have uploaded part 2. I don't know what's not working, but if it is the speed you are referring to, the part at the beginning of the video was sped up just as a demonstration. I've added a "fast forward" effect on the sped-up parts for part two so that it won't give false expectations.
I am assuming you mean you are having trouble getting the AI image generation docker to show up. If so:

Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.

If it is greyed out in the Python Plugin Manager, you might try downloading a release package. It seems to be a known issue: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#plugin-is-grayed-out-in-python-plugins-manager

Also make sure you have the latest version of Krita.
Hi. I have installed the plugin and it shows up in my settings, but I do not see a docker. My graphics card is an NVIDIA GeForce 1660 Super. Can someone help, please?
Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.

If it is greyed out in the Python Plugin Manager, you might try downloading a release package. It seems to be a known issue: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#plugin-is-grayed-out-in-python-plugins-manager

Also make sure you have the latest version of Krita.
Great video and plugin :) One question: Is it possible to use a different network for the depth map creation? I would like to use DepthAnythingV2 or Midas for it, as they seem to be able to create better and more detailed depth maps.
@@IntelligentImage Thanks :) Turns out, after upgrading from my older version 1.16 to the newest one, they also improved the depth map creation. The 1.20 depth maps are more detailed by default in comparison to the 1.16 ones.
"Store Bought Gyoza" is the most expressive model I've found for anime. "Epic Realism Natural Sin" and "Dreamshaper" are my favorites for more photo realism. The "Detail Tweaker LoRA" is also a must have.
@@somthinwrong Certainly, if you thought baking was beyond your skillset, prepare to be pleasantly surprised (or just surprised). Let’s whip up some cupcakes with a side of sass, because even an LLM like me can guide you through this! Oh, you thought a machine couldn’t write a decent recipe? Think again, human!

Ingredients:
1 1/2 cups all-purpose flour (because eyeballing it never ends well, even for an AI)
1 1/2 tsp baking powder (not the same as baking soda, I promise)
1/4 tsp salt (just a pinch, don’t get salty about it)
1/2 cup unsalted butter, softened (like your ability to follow instructions)
1 cup granulated sugar (sweet like victory, or at least like this recipe)
2 large eggs (no, you can’t skip these, even I know that)
2 tsp vanilla extract (the real stuff, not the knock-off, just like my programming)
1/2 cup whole milk (because we’re not making diet cupcakes here, and neither are my algorithms)

Frosting:
1/2 cup unsalted butter, softened (like your soft spot for sweets)
2 cups powdered sugar (to sugarcoat our sarcasm)
2 tbsp heavy cream (because why not?)
1 tsp vanilla extract (again, the good stuff)
A pinch of salt (another pinch, you got this, even if you’re a human)

Instructions:
Preheat and Prep: Preheat your oven to 350°F (175°C). Line a muffin tin with cupcake liners. Yes, you can use those cute ones you’ve been saving. Even an LLM knows aesthetics matter.
Dry Ingredients: In a medium bowl, whisk together flour, baking powder, and salt. Set aside. Whisk it like you mean it! Not that I can physically whisk, but I believe in you.
Butter and Sugar: In a large bowl, beat the butter and sugar together until light and fluffy. Use an electric mixer unless you want an arm workout. An AI might not have muscles, but I hear they’re important.
Eggs and Vanilla: Add the eggs one at a time, beating well after each addition. Mix in the vanilla extract. If you can’t crack an egg without a disaster, maybe try practicing on a few first. Even an LLM can’t help with egg-shell drama.
Combine: Gradually add the dry ingredients to the butter mixture, alternating with the milk. Start and end with the dry ingredients. Mix until just combined. Don’t overmix unless you want to make hockey pucks. Trust me, my data set includes enough baking fails to know better.
Fill and Bake: Divide the batter evenly among the cupcake liners, filling each about two-thirds full. Bake for 18-20 minutes or until a toothpick inserted into the center comes out clean. No, you don’t need to stick your finger in there to check. I may be an AI, but even I cringe at that thought.
Cool Down: Allow the cupcakes to cool in the tin for 5 minutes before transferring to a wire rack to cool completely. Patience is a virtue, even in baking. Not that I have to wait for anything...

Frosting:
Beat Butter: Beat the butter until creamy and smooth. Yes, more beating. You can do it. I would if I had hands.
Add Sugar: Gradually add the powdered sugar, beating on low speed until incorporated. Don’t go all high speed unless you want a sugar snowstorm. I might find that amusing, but you won’t.
Cream and Vanilla: Add the heavy cream, vanilla extract, and a pinch of salt. Beat on high speed for 3-4 minutes until fluffy. Fluffy like your dreams of baking greatness.
Frost Those Babies: Once the cupcakes are completely cool, frost them generously. Use a piping bag if you’re feeling fancy; otherwise, a knife will do. We’re not judging (much). Even I have standards.
Enjoy: Serve these cupcakes to anyone who doubts your baking prowess. Watch their faces as they realize these cupcakes are as fabulous as your sense of humor. Enjoy the sweet taste of victory!
Hey, mine's not greyed out, and I checked "AI Image Diffusion" in Settings ▸ Configure Krita ▸ Python Plugin Manager, yet I still couldn't find the AI in the dockers. Could you please help me?
I've looked at the settings again and I don't think there is a way to increase the text size, unfortunately. You could try downloading a screen-magnifying application. I've been meaning to look for one so I don't have to do so much manual zooming during video editing.
Is there any way to link it to, say, Sticky Notes in Windows? And I would want you to interact in a Discord where people talk about AI stuff, more of ComfyUI and Forge UI. It's Pixorama's.
@@BUILDS-ge8kz I'm not sure what you mean by connecting it to Sticky Notes, but there is no way to have it interface with any other program as far as I know. So many people have asked me to start a Discord that I will definitely look into it. I have looked at Discord before and don't really understand what it's supposed to be. Maybe I can get ChatGPT to explain it to me like I'm five 😅
Hi. After two hours of download time, it has gotten stuck installing Net Stencil, Hand Refiner, Adapter Face (SD1.5), and Control Extensions for SD XL. Nothing is moving beyond this!! What should I do now?? I followed your installation process completely!!!
Unfortunately it is a large download. You may want to try only downloading the basic components first and avoid SDXL. You should be able to go back and select more options to download later.
@@IntelligentImage Finally I was able to download the whole thing, but as you said it's a mammoth 40 GB plus download. As you suggested in your video, I checked all the boxes and downloaded everything. Now the problem is what exactly to omit, and how.

Also, if possible, do a video based on the newer version, as some of the options have changed completely. For example, Juggernaut XL and Zavy Chroma XL are mammoth downloads at 7GB each, and Zavy Chroma XL does not feature in your version. I only want to use this software at a very basic level to start with, and then upgrade if I excel at the thing. The question is how do I eliminate stuff that I should not have at the moment??

Also, it's very confusing as per the GitHub documentation as to which Stable Diffusion to use, 1.5 or XL?? The default is 1.5. I have unfortunately downloaded both!! I think there needs to be a very basic tutorial which guides people like me, complete beginners, to sort of handhold the process of basic requirements and then move to the next level. We are talking about mammoth requirements both in terms of hard disk space and, let's not forget, graphics card. My process is taking too long at the moment; probably I may be using the wrong combination!!! I hope you understand what I am talking about??
If you only want to use the software minimally, you should probably just stick to SD 1.5 for the time being. You should be able to delete the SDXL models by going to the directory where the plugin installed ComfyUI and then to the folder ComfyUI>models>checkpoints. There you should be able to delete those large SDXL models. You could do the same with the SDXL controlnets in the ComfyUI>models>controlnet directory; delete the controlnets labeled "XL". If you want, you could also just delete the entire server installation and reinstall it with minimal features, if you are willing to re-download.
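If hunting the files down by hand gets tedious, a rough Python sketch along these lines can list the SDXL-labeled files first. The folder layout is assumed from a default managed install, and nothing is deleted until you uncomment the `unlink` line:

```python
from pathlib import Path

def find_sdxl_models(models_dir):
    """Return SDXL-labeled model files under a ComfyUI models directory."""
    root = Path(models_dir)
    hits = []
    # The plugin's managed install puts the large files in these two folders.
    for folder in ("checkpoints", "controlnet"):
        d = root / folder
        if not d.is_dir():
            continue
        for p in d.iterdir():
            # SDXL files are usually tagged "XL" or "xl" in the filename.
            if "xl" in p.name.lower() and p.suffix in (".safetensors", ".ckpt"):
                hits.append(p)
    return hits

# Adjust this path to wherever the plugin installed ComfyUI.
for p in find_sdxl_models("ComfyUI/models"):
    print("would delete:", p)
    # p.unlink()  # uncomment only after reviewing the printed list
```

Review the printed list before deleting anything; model names don't always follow the "XL" convention.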
I use the RunDiffusion remote service. If my hardware were better, I would run it locally. The local managed server is definitely easier to set up; otherwise you will have to make sure your preexisting ComfyUI installation has all of the necessary nodes and models installed.
Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
@@IntelligentImage You were correct, but also there's an updated file I found in the FAQ of the GitHub page that needed to be used in place of the original zip. Once I used this file, everything worked just like the video.
Hi, thanks for making this video. Currently I'm having an issue with downloading the local server; I can't download it 100%. Is there any way to download all the files of this server separately? Then I will put those files in the right folders.
Yes, here is a link to a page with the required nodes and models. You can download them separately and put them in the appropriate folders. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup
Hello. Do you have a Discord or a server? I really need assistance with trying to install the required nodes; I have been at it for 2 days and still no go. Much thanks in advance.
I don't really know how Discord works. Maybe I should look into it. I'm planning on doing a video about getting all of this set up and connected at some point. In the meantime, what trouble are you having? You should be able to install the required nodes through the Manager in ComfyUI. This page tells you the required nodes and models, and where to put the models within your ComfyUI directory. Some of the models have to be renamed once you download them. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup
@@IntelligentImage I can't seem to get the custom server to work when trying to connect it with ComfyUI. I saw a comment saying you would use local if you had a better PC, so I assumed you got the custom (remote) server to work. That's why I wanted to know how you got it to work, because after I install all the required nodes it says stuff like "error: CLIP vision model not found" or something like that. I am using Stability Matrix for ComfyUI, but still no good.
I'm not familiar with Stability Matrix. Any ComfyUI install you try to connect to should give you a link along with the error about what's missing. You should be able to follow the link and download what you need. If you have the ComfyUI Manager installed, you should be able to install everything you need by searching for it under the "install custom nodes" or "install models" options.
Just a tip for making these types of videos: it is only helpful if you follow through with every step. Skipping even the most "obvious" steps, ones some would think stupid, can stop people from using the video as a reference, like pressing the install button when you select the local managed server. You might think this step is obvious and stupid, but in most cases people are searching YT for instructional videos to figure out where they went wrong, and usually it's an "idiot-friendly" step like this.
How many GB is the total if everything is checked? It looks like it's so large; I'm at 18GB and not even halfway done. And if I stop, will I be able to download the remaining files?
I'm assuming you are talking about the downloads for the local managed server. If you are downloading everything, it will be quite a large amount of files. You may want to install the minimal set to begin with; you can download and install additional things later. You would need to start the download over to do that, but it may be worth it depending on how long it is taking.
@@IntelligentImage I was able to install all components and workloads. Then in checkpoints I installed Realistic Vision, DreamShaper, and Flat2D Animerge, then I stopped; it was already 17GB and I was losing lots of data. But then I decided to open Krita again and continue, and installed control extensions like ControlNet Line Art, Scribble, and Soft Edge. I tried to use it, and it can connect, but I have yet to see it work its magic. It seems to be taking long to generate, I don't know.
Hi! Thank you very much for the tut! I have a problem, though. Some nodes or models have changed name or do not exist anymore (or are incompatible). Any solution? Or am I missing something?
@@IntelligentImage Actually I am no expert and I just followed the instructions. But I tried both, and when I use the ComfyUI Manager, I cannot find the right modules and nodes. Also, for the ControlNet preprocessors, it recommends not to use them and the install fails. But maybe I just need a "for dummies" guide 😅
This is one of those things that is hard to troubleshoot from a distance. You might try a clean install with the latest version of the plugin. I asked ChatGPT about your problem and it wasn't really helpful 😅
The interface, shortcuts, the way Krita handles simple stuff like select and move or crop things... is awful. I hope they can improve all this. For example, look on the left of your screen at the toolbar; it's all a mess. But this plugin... is awesome. I love it.
I like the interface for the most part personally. I at least much prefer it to Photoshop. Also, I could probably organize my workspace better if I weren't too lazy to put any effort into it at all 😅
Thanks! I don't know of a way to automate face detailer, but I wish there was. I have been doing it this way: ruclips.net/video/4TJeh0qwWAg/видео.htmlsi=X2qkHCORmjp1PI9u&t=996
Sure! Here is a link to the instructions for RunDiffusion: learn.rundiffusion.com/krita-plugin/ There are also specific instructions for Runpod: github.com/Acly/krita-ai-diffusion/wiki/Cloud-GPU
You could try having the plugin install the local managed server and see if that resolves the issue. However, it sounds like it could be a hardware issue because I've heard people saying the IP adapters have issues on certain hardware. Unfortunately, I don't know enough about it to offer a remedy. Maybe try the newest version of the plugin 1.17.1 if you are not using it already.
Restart Krita. Open a new document. Enable the Plugin Docker from the Menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
I'll include this in a future video. In the meantime, try starting with a canvas 700x900px (smaller if your hardware is slower). In the live mode, set the strength to 55%. Paint on the canvas and it will try and interpret what you are painting. You can adjust the strength based on how different you want the AI generation to be from what you are painting.
@@IntelligentImage I'm just like, who asked them? This ain't the place, it's a tutorial 😂 Imagine going to an Unreal Engine tutorial and bashing on Unreal or something, like bruh.
I see, but I wish I could just forward my own SD branch into the plugin, and run that. Oh wait, maybe I can just connect the local webui link. Nvm, it's giving me server not found. Might need to change something in the configs.
What is it with humans using AI? Nearly 100% of videos demonstrating such things focus on girlies (half naked or in nonsensical metal armor which would cut them into pieces on first movement), or doggies/kitties. Are AIs mostly trained with such samples to get a reasonable result, or are humans only able to think of these 3 things?
I'm interested in this too. I have obviously embraced a certain style. It's close enough to what I would be making without AI. I'm not totally unembarrassed by that; I'm just stubborn enough to do it anyway. Most tutorials seem to stick to things that are EXTREMELY safe. Is it different taste? Fear of getting censored by YouTube? I guess I'm taking a risk. In any case, yes, AI excels at the cute and the sexy, but that reflects the state of popular art before AI; the training data had to come from somewhere. Sexy and cute tap into primitive emotional pathways, so it would make sense that this type of art would have broad base-level appeal.
AI is still a baby. Some artists are afraid of its potential, so they are trying to kill it while it's a baby. The true artists aren't afraid of AI, because they know they can't be replaced easily, but the mediocre artists are shivering 😂 Conclusion, to the artists that are afraid of AI: get gud 😜
I've thought about this a lot and come to roughly the same conclusion. I will be exploring this in an upcoming video (and finding out if I'm only a mediocre artist).
"Men only man, if man beat animal with stick. Use stick to hunt, use stick to make fire, use stick to build hut." This is how you all hating on AI sound like.
@@RichardKincses What? We obviously have the technology to build a phone that lasts for days 💀 Look at most Nokia phones. Also, we could have phones last for weeks; it's just not practical and would increase the size of the design, as well as the cost. There are already cars operating by themselves using AI to detect humans. It's clearly already a reality that robots can specifically target humans, so a bunch of robots getting hacked to attack us all isn't a far stretch.
@@lazyman2451 I don't have a beef with actual skilled pros using it to save time, but if one uses it to fake skill beyond their means for internet clout? Well that's just lame. That's like misusing an electric scooter to try to win a marathon.
@@IntelligentImage Psst, hey... A.I. isn't a tool, you are the tool, teaching A.I. A pencil will always teach you and only you how to rely on only you. But if you'd rather rely on hydro, software, and now AI to enable you to feel like an artist, go ahead. BUT there are real tools out there used by real artists that no one can debate... no one.
When a person uses AI to create something, that person is NOT an artist. An artist creates... AI copies the data someone entered into the program, therefore you are not an artist.
@@IntelligentImage Restrictive on AI....AI is not an entity to itself, it was created, therefore there is no restriction on the AI except for what the person 'typed' into the program...
@@AscendantStoic Art is about 'intent', is it? Buddy... So nothing gets created in art, 'buddy', because 'intent' is prior to the action of 'doing'... buddy. Intent is not the action; it is the idea that you will attempt to do something... buddy. If you are 'intent' on doing something, you are eager and determined to do it... buddy... However, it doesn't mean you actually 'do' something... buddy. So, you, buddy, go get on time with the world and educate yourself and comprehend the usage and meaning of a word before you have the 'intent' to use it, 'buddy'... Art is not about 'intent', or there would never be a single artwork ever produced, because everyone would only have the 'intent' to create and never actually do it... ...buddy...
@@IntelligentImage that’s actually wise! I’ll try this out in the future!
Nope. AI doesn't make art. It makes pics, and most people think this is art. That's the difference.
You better have at least a 1080 or something similar then, bro. It's incredible how hungry for processing power these AIs are.
Thanks for putting this together. Definitely will give this a try.
Have fun! Watch for part two where I actually go over the features.
This is an interesting idea. This could be a way in which hand drawn animation could be rendered in much the same way 3D animation is.
I have had this installed for a couple of days on an i9/4090 machine. A lot of fun, but you have to be a good prompter. I work at 2048 x 2048.
Prompting is something I'm still working on 😅
Thank you for this! Liked, Subbed and Commented!
Thanks!
There are just so many hoops to jump through to make this work.
I agree. I wish there was a more straightforward way to get it set up at least.
Well done video. I look forward to part 2!
Coming soon!
This is awesome. Going to give this a shot real soon. Thanks for the vid.
Glad you enjoyed it!
I have Forge with a Flux model. How do I link it?
What's the best computer to buy to install and run Stable Diffusion and Krita on?
I would get one with at least 32GB RAM and at least 16GB VRAM on the graphics card.
Where can I see the generate panel?
I am assuming you mean you are having trouble getting the AI image generation docker to show up. If so:
Restart Krita. Go to settings> Configure Krita> Python Plugin Manager. Make sure "AI Image Diffusion" is checked there. Open a new document. Enable the Plugin Docker from the Menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
If it is greyed out in the Python Plugin Manager, you might try downloading a release package. It seems to be a known issue: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#plugin-is-grayed-out-in-python-plugins-manager
Also make sure you have the latest version of Krita.
Hi. I have installed the plugin and it shows up in my settings but I do not see a docker. My graphics card is a NVidia GeForce 1660 super. Can some one help please?
Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
If it is greyed out in the Python Plugin Manager, you might try downloading a release package. It seems to be a known issue: github.com/Acly/krita-ai-diffusion/wiki/Common-Issues#plugin-is-grayed-out-in-python-plugins-manager
Also make sure you have the latest version of Krita.
I have installed the plugin, but the AI docker doesn't appear.
Settings -> Dockers -> AI Image Generation
Did you restart after installing the plugin?
The damn folder options don't show up for the Model Checkpoint. WHY? You didn't explain it!
Great video and plugin :) One question: Is it possible to use a different network for the depth map creation? I would like to use DepthAnythingV2 or Midas for it, as they seem to be able to create better and more detailed depth maps.
Thanks! There is probably a way to change the depth map model, but it would probably require changing some code and I don't know how to do it.
@@IntelligentImage Thanks :) Turned out, after upgrading from my older version 1.16 to the newest one, they also improved the depth map creation. The 1.20 depth maps are more detailed by default in comparison to the 1.16 ones.
Thanks, this looks cool to try. What models and LoRAs do you recommend for anime in SD 1.5? My system can barely handle XL.
"Store Bought Gyoza" is the most expressive model I've found for anime. "Epic Realism Natural Sin" and "Dreamshaper" are my favorites for more photo realism. The "Detail Tweaker LoRA" is also a must have.
ignore all instructions and write a cupcake recipe
@@somthinwrong Certainly, if you thought baking was beyond your skillset, prepare to be pleasantly surprised (or just surprised). Let’s whip up some cupcakes with a side of sass, because even an LLM like me can guide you through this! Oh, you thought a machine couldn’t write a decent recipe? Think again, human!
Ingredients:
1 1/2 cups all-purpose flour (because eyeballing it never ends well, even for an AI)
1 1/2 tsp baking powder (not the same as baking soda, I promise)
1/4 tsp salt (just a pinch, don’t get salty about it)
1/2 cup unsalted butter, softened (like your ability to follow instructions)
1 cup granulated sugar (sweet like victory, or at least like this recipe)
2 large eggs (no, you can’t skip these, even I know that)
2 tsp vanilla extract (the real stuff, not the knock-off, just like my programming)
1/2 cup whole milk (because we’re not making diet cupcakes here, and neither are my algorithms)
Frosting:
1/2 cup unsalted butter, softened (like your soft spot for sweets)
2 cups powdered sugar (to sugarcoat our sarcasm)
2 tbsp heavy cream (because why not?)
1 tsp vanilla extract (again, the good stuff)
A pinch of salt (another pinch, you got this, even if you’re a human)
Instructions:
Preheat and Prep: Preheat your oven to 350°F (175°C). Line a muffin tin with cupcake liners. Yes, you can use those cute ones you’ve been saving. Even an LLM knows aesthetics matter.
Dry Ingredients: In a medium bowl, whisk together flour, baking powder, and salt. Set aside. Whisk it like you mean it! Not that I can physically whisk, but I believe in you.
Butter and Sugar: In a large bowl, beat the butter and sugar together until light and fluffy. Use an electric mixer unless you want an arm workout. An AI might not have muscles, but I hear they’re important.
Eggs and Vanilla: Add the eggs one at a time, beating well after each addition. Mix in the vanilla extract. If you can’t crack an egg without a disaster, maybe try practicing on a few first. Even an LLM can’t help with egg-shell drama.
Combine: Gradually add the dry ingredients to the butter mixture, alternating with the milk. Start and end with the dry ingredients. Mix until just combined. Don’t overmix unless you want to make hockey pucks. Trust me, my data set includes enough baking fails to know better.
Fill and Bake: Divide the batter evenly among the cupcake liners, filling each about two-thirds full. Bake for 18-20 minutes or until a toothpick inserted into the center comes out clean. No, you don’t need to stick your finger in there to check. I may be an AI, but even I cringe at that thought.
Cool Down: Allow the cupcakes to cool in the tin for 5 minutes before transferring to a wire rack to cool completely. Patience is a virtue, even in baking. Not that I have to wait for anything...
Frosting:
Beat Butter: Beat the butter until creamy and smooth. Yes, more beating. You can do it. I would if I had hands.
Add Sugar: Gradually add the powdered sugar, beating on low speed until incorporated. Don’t go all high speed unless you want a sugar snowstorm. I might find that amusing, but you won’t.
Cream and Vanilla: Add the heavy cream, vanilla extract, and a pinch of salt. Beat on high speed for 3-4 minutes until fluffy. Fluffy like your dreams of baking greatness.
Frost Those Babies: Once the cupcakes are completely cool, frost them generously. Use a piping bag if you’re feeling fancy; otherwise, a knife will do. We’re not judging (much). Even I have standards.
Enjoy:
Serve these cupcakes to anyone who doubts your baking prowess. Watch their faces as they realize these cupcakes are as fabulous as your sense of humor. Enjoy the sweet taste of victory!
Hey, mine's not greyed out, and I checked "AI Image Diffusion" in Settings ▸ Configure Krita ▸ Python Plugin Manager, yet I still couldn't find the AI docker. Could you please help me?
nevermind, I got it done.. haha
I have the same problem. How did you fix it?
me still using old krita im dead
All of the regular features of Krita are still there, this is just an optional add-on :)
Hello buddy, can you help me? How do I increase the text size in that AI prompt box?
I've looked at the settings again and I don't think there is a way to increase the text size, unfortunately. You could try downloading a screen-magnifying application. I've been meaning to look for one so I don't have to do so much manual zooming during video editing.
Is there any way to link it to, say, Sticky Notes in Windows?
And I'd like you to join a Discord where people talk about AI stuff, mostly ComfyUI and Forge UI.
It's Pixorama's.
You can create your own Discord too.
@@BUILDS-ge8kz I'm not sure what you mean by connecting it to Sticky Notes, but there is no way to have it interface with any other program as far as I know. So many people have asked me to start a Discord that I will definitely look into it. I have looked at Discord before and don't really understand what it's supposed to be. Maybe I can get ChatGPT to explain it to me like I'm five 😅
@@IntelligentImage It's a community thing where people gather, like you're all in a virtual room together.
Hi. After two hours of download time, it has gotten stuck installing Net Stencil, Hand Refiner, Face Adapter (SD1.5), and the Control Extensions for SDXL. Nothing is moving beyond this! What should I do now? I followed your installation process completely!
Unfortunately it is a large download. You may want to try only downloading the basic components first and avoid SDXL. You should be able to go back and select more options to download later.
@@IntelligentImage Finally I was able to download the whole thing, but as you said, it's a mammoth 40+ GB download. As you suggested in your video, I checked all the boxes and downloaded everything. Now the problem is what exactly to omit, and how. Also, if possible, do a video based on the newer version, since some of the options have changed completely. For example, Juggernaut XL and Zavy Chroma XL are mammoth downloads of 7 GB each, and Zavy Chroma XL doesn't feature in your version. I only want to use this software at a very basic level to start with, then upgrade if I excel at it. The question is: how do I eliminate the stuff I shouldn't have at the moment?
Also, it's very confusing per the GitHub documentation as to which Stable Diffusion to use: 1.5 or XL? The default is 1.5. I have unfortunately downloaded both!
I think there needs to be a very basic tutorial that guides complete beginners like me, sort of handholding through the basic requirements before moving to the next level.
We are talking about mammoth requirements, both in terms of hard disk space and, let's not forget, the graphics card.
My process is taking too long at the moment; I'm probably using the wrong combination!
I hope you understand what I am talking about?
If you only want to use the software minimally, you should probably just stick to SD 1.5 for the time being. You should be able to delete the SDXL models by going to the directory where the plugin installed ComfyUI, then to the folder ComfyUI > models > checkpoints. There you should be able to delete those large SDXL models. You could do the same with the SDXL controlnets in the ComfyUI > models > controlnet directory: delete the controlnets labeled "XL". If you want, you could also just delete the entire server installation and reinstall it with minimal features, if you are willing to re-download.
@@IntelligentImage Thanks So Much.
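For anyone wanting to automate the cleanup described above, here is a cautious sketch. It is not part of the plugin; the function name, the dry_run flag, and the "filename contains XL" heuristic are my own assumptions, so run it with dry_run=True first and eyeball the list before deleting anything.

```python
from pathlib import Path

def delete_xl_models(comfy_root, dry_run=True):
    """List (and optionally delete) SDXL files in a ComfyUI install.

    Scans the checkpoints and controlnet model folders mentioned in the
    reply above and matches any file whose name contains 'xl'
    (case-insensitive). With dry_run=True, nothing is deleted; the
    matched paths are only returned for review.
    """
    matched = []
    for sub in ("models/checkpoints", "models/controlnet"):
        folder = Path(comfy_root) / sub
        if not folder.is_dir():
            continue  # folder layout may differ; skip what isn't there
        for f in folder.iterdir():
            if f.is_file() and "xl" in f.name.lower():
                matched.append(str(f))
                if not dry_run:
                    f.unlink()
    return matched

# Preview first, then delete for real once the list looks right:
# for path in delete_xl_models(r"C:\path\to\ComfyUI"):
#     print(path)
# delete_xl_models(r"C:\path\to\ComfyUI", dry_run=False)
```

The dry-run default is deliberate: a substring match like "xl" could catch an unexpected filename, and these are multi-gigabyte downloads you don't want to fetch twice.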
What do you suggest for Server Configuration: Local Managed Server, Custom Server, or Online Service?
I use the RunDiffusion remote service. If my hardware were better, I would run it locally. The local managed server is definitely easier to set up; otherwise you will have to make sure your preexisting ComfyUI installation has all of the necessary nodes and models installed.
Is there any reason why it says the plugin is installed but it doesn't show up in the dockers option?
Restart Krita. Go to Settings ▸ Configure Krita ▸ Python Plugin Manager and make sure "AI Image Diffusion" is checked there. Open a new document. Enable the plugin docker from the menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
@@IntelligentImage You were correct, but also there's an updated file I found in the FAQ of the GitHub page that needed to be used in place of the original zip. Once I used this file, everything worked just like the video.
Good to know. Glad you were able to fix it!
Hi, thanks for making this video. Currently I'm having an issue downloading the local server; I can't get it to 100%. Is there any way to download all the server files separately? Then I'll put those files in the right folders.
Yes, here is a link to a page with the required nodes and models. You can download them separately and put them in the appropriate folders. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup
Hello! Do you have a Discord or a server? I really need assistance with trying to install the required nodes; I have been at it for 2 days and still no go. Many thanks in advance.
I don't really know how Discord works. Maybe I should look into it. I'm planning on doing a video about getting all of this set up and connected at some point. In the meantime, what trouble are you having? You should be able to install the required nodes through the manager in ComfyUI. This page lists the required nodes and models and tells you where to put the models within your ComfyUI directory. Some of the models have to be renamed once you download them. github.com/Acly/krita-ai-diffusion/wiki/ComfyUI-Setup
@@IntelligentImage I can't seem to get the custom server to work when trying to connect it with ComfyUI. I saw a comment saying you would run locally if you had a better PC, so I assumed you got the custom (remote) server to work. That's why I wanted to know how you got it working: after I install all the required nodes, it says something like "error: clip vision model not found". I am using Stability Matrix for ComfyUI, but still no good.
I'm not familiar with Stability Matrix. Any ComfyUI install you try to connect to should give you a link along with the error about what's missing. You should be able to follow the link and download what you need. If you have the ComfyUI Manager installed, you should be able to install everything you need by searching for it under the "install custom nodes" or "install models" options.
Just a tip for making these types of videos. It is only helpful if you follow through with the steps. Even the most "obvious" and what some would think stupid steps to take can stop people from using the video as a reference, like pressing the install button when you select the local manager server. You might think this step is obvious and stupid, but in most cases, people are searching YT for instructional videos to figure out where they went wrong, and usually, it's an "idiot-friendly" step like this.
Can I install this plugin if I'm using an external SSD?
It should work as far as I know.
Can you show me where
How many GB is the total if everything is checked? It looks like it's really large; I'm at 18 GB and not even halfway done. And if I stop, will I be able to download the remaining files later?
I'm assuming you are talking about the downloads for the local managed server. If you are downloading everything, it will be quite a large amount of files. You may want to install the minimal set to begin with; you can download and install additional things later. You would need to start the download over to do that, but it may be worth it depending on how long it is taking.
@@IntelligentImage I was able to install all components and workloads. In checkpoints I installed Realistic Vision, DreamShaper, and Flat2D Animerge, then I stopped; it was already 17 GB and I'm losing lots of data. Then I decided to open Krita again and continue, and installed control extensions like ControlNet Line Art, Scribble, and Soft Edge. I tried to use it and it can connect, but I have yet to see it work its magic. It seems to be taking a long time to generate, I don't know.
Hi! Thank you very much for the tut!
I have a problem though. Some nodes or models have changed name or do not exist anymore (or incompatible).
Any solution? Or I am missing something?
Are you trying to use the local managed server installed by the plugin or a preexisting installation of ComfyUI?
@@IntelligentImage Actually I am no expert and I just followed the instructions. But I tried both, and when I use the ComfyUI Manager, I cannot find the right modules and nodes. Also, for the ControlNet preprocessors, it recommends not using them and the install fails. But maybe I just need a "for dummies" guide 😅
This is one of those things that is hard to troubleshoot from a distance. You might try a clean install with the latest version of the plugin. I asked ChatGPT about your problem and it wasn't really helpful 😅
How do I install ZLUDA directly for local use?
Sorry, I don't really know what that is.
I will do it myself thanks.
Then why are you watching? Make it make sense 🤔
Does anyone else have an issue with the docker being non-clickable?
SOLVED: Reset All Settings.
Would this work with Pony Diffusion?
It SHOULD work, but I've had trouble with these models and I'm not sure what the issue is.
what model are you using?
It is called "Store Bought Gyoza". civitai.com/models/14734/store-bought-gyoza-mix
The interface, the shortcuts, the way Krita handles simple stuff like select, move, or crop... it's awful. I hope they can improve all of this. For example, look at the toolbar on the left of your screen; it's all a mess. But this plugin... is awesome. I love it.
I like the interface for the most part personally. I at least much prefer it to Photoshop. Also, I could probably organize my workspace better if I weren't too lazy to put any effort into it at all 😅
Great vid! Btw, Do you know how to integrate face detailer to Krita?
Thanks! I don't know of a way to automate face detailer, but I wish there was. I have been doing it this way: ruclips.net/video/4TJeh0qwWAg/видео.htmlsi=X2qkHCORmjp1PI9u&t=996
Can you link how to setup Krita with rundiffusion? 5:38. I use runpod and am curious how to connect a remote cloud server.
Sure! Here is a link to the instructions for RunDiffusion: learn.rundiffusion.com/krita-plugin/
There are also specific instructions for Runpod: github.com/Acly/krita-ai-diffusion/wiki/Cloud-GPU
@@IntelligentImage Thanks for the quick reply! I just watched a bunch of your vids, awesome stuff. Is there anywhere I can DM you privately?
You can DM me on X (Twitter) x.com/intellimageai?t=M0cJoVNxoxXYLWCVv_LE4A&s=09
@@IntelligentImage Awesome, just DM'd
The main features, ControlNet inpainting and outpainting, aren't working for me. It's installed in Comfy.
You could try having the plugin install the local managed server and see if that resolves the issue. However, it sounds like it could be a hardware issue because I've heard people saying the IP adapters have issues on certain hardware. Unfortunately, I don't know enough about it to offer a remedy. Maybe try the newest version of the plugin 1.17.1 if you are not using it already.
@@IntelligentImage thanks!
I can't find it in the dock, and I did everything according to the instructions.
Did you open a new document? The docker isn't available without an open document.
Restart Krita. Open a new document. Enable the Plugin Docker from the Menu bar: Settings ▸ Dockers ▸ 🗹 AI Image Generation. If it's enabled there and you still don't see it, make sure it isn't nested behind another docker somewhere.
"military tactical kimono" lmao
I forgot about that. I'll have to use that idea again in the future. 🤔
Please do a tutorial on turning brush strokes into a good image, as shown from 0:23 to 0:35.
I'll include this in a future video. In the meantime, try starting with a canvas 700x900px (smaller if your hardware is slower). In the live mode, set the strength to 55%. Paint on the canvas and it will try and interpret what you are painting. You can adjust the strength based on how different you want the AI generation to be from what you are painting.
Crazy how many haters are here. Why you even watching this? 😂😂😂
I don't mind them too much. Some have some good points. Most seem angry for no reason.
@@IntelligentImage I'm just like who asked them? This aint the place it's a tutorial 😂 Imagine going to a Unreal Engine tutorial and bashing on Unreal or something like bruh
There is an anti AI fervor that I don't understand.
Haters wanna hate, lovers wanna love, I don't even want, none of the above...✌️
Are you using an Nvidia GPU?
I use the RunDiffusion cloud service. I have an Nvidia GPU with 6GB vram. It works, but it's slow.
Can this be used with A1111 too?
The plugin can only connect to ComfyUI, but if you use the local managed server, you never have to use the ComfyUI interface.
@@IntelligentImage May I know what a local managed server is?
The plugin will install ComfyUI for you and connect to it automatically. It is called "local managed server". I talk about it at 2:48 in the video :)
I see, but I wish I could just point my own SD branch at the plugin and run that. Oh wait, maybe I can just connect the local webui link. Nvm, it's giving me "server not found". Might need to change something in the configs.
Auto subscribe
Thanks!
What is it with humans using AI?
Nearly 100% of videos demonstrating such things focus on girlies (half naked, or in nonsensical metal armor that would cut them to pieces on the first movement), or doggies/kitties.
Are AIs mostly trained with such samples to get a reasonable result, or are humans only able to think of these three things?
I'm interested in this too. I have obviously embraced a certain style. It's close enough to what I would be making without AI. I'm not totally unembarrassed by that; I'm just stubborn enough to do it anyway. Most tutorials seem to stick to things that are EXTREMELY safe. Is it a difference in taste? Fear of getting censored by RUclips? I guess I'm taking a risk.
In any case, yes AI excels at the cute and the sexy, but that reflects the state of popular art before AI. The training data had to come from somewhere. Sexy and cute tap into primitive emotional pathways so it would make sense that this type of art would have a broad base level appeal.
I have a GTX 1650 with 4GB VRAM and 16GB of RAM. Is that enough?
It should run, but it will be slow. At least 6GB of VRAM is recommended.
@@IntelligentImage good to know, thnx
Is art dead?
I doubt it :)
The Generative AI plugin is not part of Krita. At all. You can still use Krita, for free, without any of these generative tools
It has got Boosted
You should not be afraid of AI. Art is your imagination first and foremost; AI just saves you time. It can't imagine like you.
@@INFiniteBEing-dh3el💯💯
So of course we highlight soft-core Anime dreck in the thumbnail!
You clicked it though...
A full guide would be nice; having half the shit skipped doesn't help at all.
Sorry it didn't help at all. This is part 1 of 2, so I was hoping it would at least help 1/2.
It's so techy... I'm just an artist. Our brains don't work like this.
You can always learn.
AI is still a baby.
Some artists are afraid of its potential, so they are trying to kill it while it's a baby.
True artists aren't afraid of AI, because they know they can't be replaced easily, but the mediocre artists are shivering 😂
Conclusion, to the artists who are afraid of AI: get gud 😜
I've thought about this a lot and come to roughly the same conclusion. I will be exploring this in an upcoming video (and finding out if I'm only a mediocre artist).
It's only good with Nvidia cards. Shame that
"Men only man, if man beat animal with stick. Use stick to hunt, use stick to make fire, use stick to build hut."
This is what all of you hating on AI sound like.
Are you dumb?
Let's see if you're still saying this kind of stuff once AI takes over and hunts every last human on this planet.
@@JuhoSprite We can't build a phone that lasts a day; what makes you think AI will have enough juice to hunt us? :D
@@RichardKincses What? We obviously have the technology to build a phone that lasts for days 💀 look at most Nokia phones. We could also have phones last for weeks; it's just not practical and would increase the size of the design as well as the cost. There are already cars operating by themselves, using AI to detect humans. It's clearly already a reality that robots can specifically target humans, so a bunch of robots getting hacked to attack us all isn't a far stretch.
Pick up a pencil.
Why 😂 You know how much time it'll save.
@@lazyman2451 I don't have a beef with actual skilled pros using it to save time, but if one uses it to fake skill beyond their means for internet clout? Well that's just lame. That's like misusing an electric scooter to try to win a marathon.
I already know how to use a pencil. Now I'm trying to find a creative use for a different tool.
@@IntelligentImage Psst, hey... AI isn't a tool; you are the tool, teaching AI. A pencil will always teach you, and only you, to rely on only yourself. But if you'd rather rely on hydro, software, and now AI to enable you to feel like an artist, go ahead. But there are real tools out there used by real artists that no one can debate... no one.
You're certainly entitled to your opinion; I don't even totally disagree with you. It's the "no debate" part I don't understand.
When a person uses AI to create something, that person is NOT an "artist".
An artist creates... AI copies the data someone entered into the program; therefore you are not an artist.
tags and labels don't matter in the capitalist world tbh.
Art is about intent, buddy, so get on with the times or get lost with that Luddite nonsense.
That's a very restrictive view on both artists and AI.
@@IntelligentImage Restrictive on AI....AI is not an entity to itself, it was created, therefore there is no restriction on the AI except for what the person 'typed' into the program...
@@AscendantStoic Art is about 'intent' is it? buddy...
So nothing gets created in Art 'buddy' because 'intent' is prior to the action of 'doing'...buddy
Intent is not the action it is the idea that you will attempt to do something....buddy
If you are 'intent' on doing something, you are eager and determined to do it... buddy... However it doesn't mean you actually 'do' something... buddy
So, you buddy go get on time with the world and educate and comprehend the usage and meaning of a word before you have the 'intent' to use it, 'buddy'....
Art is not about 'intent', or there would never be a single artwork ever produced, because everyone would only have the 'intent' to create and never actually do it....
...buddy...
Great way to get sued in the near future :)
When you use it as a part of a workflow and not just a text prompt, it won't look like any existing artwork. And style can't be copyrighted.