Reasons I love this plugin and Stable Diffusion:
I'll cover a few things.
First, Adobe trained their model on their stock library, which includes copyrighted images, so if you want something you can control, this is better: you can train and use your own model, and you can train it at any resolution you like. I trained mine at 1500, and that was just due to hardware limitations.
The misconception I keep hearing is that Stable Diffusion uses copyrighted images; you can train and use your own model, or other people's models, to avoid art stealing if you feel that way.
Though I feel the real issue is a fear of being made redundant. I don't feel that will ever be the case,
for many, many reasons.
You're also not limited to one model, so you can download models from CivitAI that don't endorse copyrighted images in training.
Do note that watermarks appear on some images that are royalty free; even Adobe has that issue. And on CivitAI there are models trained at 4K, so if you have a nice big system and an RTX 4090 you can use a model at resolutions up to 8K.
The plugin allows you to link to and use RENTED hardware, so even on a low-end system you can pay for that option and have far greater flexibility than Adobe and their model, with models trained at higher resolutions.
You can make and use your own models, which also means you can be unique in what you make. I love that above all, as I trained a model on my art.
MASSIVE NOTE: despite training on my art, the renders are completely different, as the core AI seems to be trained not to copy and to be unique in what it makes. So while the model is fantastic, it's very different from my digital paintings, which are more basic.
Adobe Benefits
Easy to use with fewer issues; you don't have to be smart with computers. Though less powerful. Has Adobe support.
Adobe, IMO, is trained on Stability AI and uses Stability AI for the AI, just like Stable Diffusion, which is made by Stability AI. This means you can expect Adobe to go to 4K and 8K soon enough. Adobe currently has a better language engine trained into their model, so you get more accurate responses, though Adobe runs an AI that checks prompts, which affects what you can do and causes confusion, e.g. "fix skin": skin is seen as being used for something lewd, at least when I was using it to fix photos around November 2023.
Adobe's AI remover IS FANTASTIC, like amazing.
Adobe's edge blending is better, though visible; the Krita AI plugin does sometimes have ghosting at the blend, and shadow artifacts.
Both have their pros and cons; for me this is better, as I control it and it has way more features.
But note again, with resolutions you do need the hardware or a rented cloud GPU, though as I mentioned, the cost of an Adobe subscription can very easily get you that.
Krita plugin big win:
FREE.
If, like me, you are poor, free is very appreciated.
I know this is long-winded, covering a few things that I feel are important to understand.
NOTE
On my hardware I don't mind sacrificing resolution by using 512x512 models. The reason is that I have an old purchase of Topaz Gigapixel AI upscaler, from before I was broke, so once I have done the restoration work etc. I run the results through the AI upscaler to 4K, 8K, etc. I had to do this with Adobe also.
There are free AI upscalers available, and I recommend looking at them and trying a few.
I have to lower the resolution in this video due to using just a GTX 1070 and, of course, I'm encoding through my GPU at the same time as using it for this program; that's a lot for the poor old system. Otherwise I can do a 1024x1024 image size and SDXL models fine, just with longer waits.
Great job, very helpful
I bet if you use different noise patterns/types you get different results, or you could even use an image that is not considered a pattern. Cool stuff. :O)
I found that with just a grey static I still get the same result; the colour only seems better on more vibrant images. The grey worked just as well on the 4x4 in the bush.
I think the AI works better as long as there's no reference image; if there is a reference, the AI tries to use that image as a reference.
I have not tested noise patterns of large blocks or worm static; you bring up an interesting idea I'll have to play with.
you explain this stuff slowly, clearly and in detail, but I have the attention span of a squirrel and feel totally lost...
It's not you; he just adds a lot of gibberish that nobody needs.
Krita = "crayon" in Swedish. :) "Kreeta" is closer to the correct pronunciation than "Kritta".
I'm working on it; the neurological dyslexia keeps throwing me to the "i" sound, especially as Mum is Rita, so my brain does that with the K. I do a video and then realise my brain has made the error.
Things perpetuate, like the Jif/Gif confusion. I know the ways people pronounce things are different all over the world. I feel like if my name was "Kreeta", but unfortunately I spelled it "Krita", I would want you to call me "Kreeta". At first I didn't know it was Swedish, and without knowing, I thought it was "Krita". Oh well. I have really enjoyed all the in-depth information you have provided in your videos. I have gotten so much farther than I would have on my own. You showed me things that are fantastic.
Thank you. @@streamtabulous
@@ZiggyDaMoe hopefully I'll get another little video up tomorrow.
I'm enjoying doing the series on it. Be sure to check out the new video on the snap selection tool. So, so handy.
An RTX 3060 8 GB generates as fast as the Adobe cloud, faster in some cases. My mate's RTX 3060 12 GB is even faster than that, and an RTX 4080 takes 2 seconds, which is much, much faster than Adobe. AND again, for what you pay per year you could easily buy one of these cards and have more control than what they offer, and again, NO limitations on generations.
Plus you can play high-end games and more. It's a no-brainer to put your money into your own system rather than give it to Adobe for a limited service.
Their secret sauce is a better auto-selection in the background that results in better blending, and an auto static in the background so the AI doesn't concentrate on the information in the selection. Plus their model is very large, over 400 TB of training images. BUT their model is still extremely limited, and the ability to use your own models etc. is far, far, far better.
Not sure if this feature was added later, but you can just simply select a region and generate, with the same result, without using a noise map.
That removal feature was brought in later, yes.
But this still works when the AI won't adhere to what you want and just makes other cars etc. Overall, with the newer version, this is needed less. I had to use it today, with the new version, on a stubborn item that would not vanish till I did this.
Can you show how to set up a local ComfyUI installation and connect it to Krita? When Krita crashes, Python is still running and prevents the server from being relaunched. (Killing Python in Task Manager fixes the problem.)
I have tried; I can get it sort of working, but many features are missing. If I manage to work it out I'll do a video, but it's definitely complicated.
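One practical check for the stale-Python problem mentioned above: before relaunching, see whether the server's port is still held by the crashed session. This is a minimal sketch, assuming a default local ComfyUI install listening on 127.0.0.1:8188 (adjust to your setup):

```python
import socket

def port_in_use(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something is already listening on host:port.

    If a crashed Krita session left a Python process holding the port,
    a relaunched ComfyUI server can't bind it; kill the stale process
    (e.g. via Task Manager) before trying again.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising
        return s.connect_ex((host, port)) == 0

if __name__ == "__main__":
    # ComfyUI's usual default address; if this reports busy right after
    # a crash, a leftover Python process is still holding the server.
    print("ComfyUI port busy:", port_in_use("127.0.0.1", 8188))
```

If the port reports busy after a crash, ending the leftover Python process frees it and the server will relaunch normally.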
Well done. Please, how did you get so many styles? I have just four of them.
Models give styles and unique looks.
ruclips.net/video/xh-4YUr1F4U/видео.htmlsi=6cM-039A8NZCaYt7
Can you make a video which shows how to download and deploy different models...
Like this one, on how to put them into Krita AI Diffusion?
Or how to download and find models on civitai.com?
I've been asked to do a video on how to use civitai.com, so I will be doing that very soon.
ruclips.net/video/xh-4YUr1F4U/видео.htmlsi=auHfuYqUapyyVU8X
Does using an inpainting model work better?
It works the same as SD in Comfy, just a different UI, so I'd say yes. In the 10.1 plugin update you can use LoRA codes too, and weighting has been fixed, so you can now prompt the same as you would in Comfy.
I'm downloading some inpaint models to test on the Australian bush to see if they have better results. Will give feedback. They're still trained, normally for certain tasks, i.e. smiles, glasses, eyes, clothing, so it will be an interesting test.
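For anyone wondering what the LoRA prompting mentioned above looks like: assuming the plugin follows the common A1111-style tag (the LoRA name here is just a placeholder, not a real file), it's along the lines of:

```
a watercolour landscape, eucalyptus trees <lora:myBushStyle:0.8>
```

where the number after the second colon is the weight (strength), typically between 0 and 1.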
Tested with various inpainting models: the checkpoint inpainting models are not detected as checkpoints, but they are detected in the LoRA file, so I used the best model and linked the inpaint models with it.
The results were the same: I needed static to correctly remove the vehicle and get the most accurate look for the whole image.
There seems to be no difference in the look of the render with or without an inpaint model.
How about resolution? Is it better than Generative Fill?
A few reasons; I'll cover a little more than what you are asking, for others. First, Adobe trained their model on their stock library, which has copyrighted images, so if you want something you can control, this is better: you can train and use your own model, and you can train it at any resolution you like. I trained mine at 1500, and that was just due to hardware limitations.
You are not limited to one model, so you can download models from CivitAI that don't endorse copyrighted images in training. Do note that watermarks appear on some images that are royalty free; even Adobe has that issue. And on CivitAI there are models trained at 4K, so if you have a nice big system and an RTX 4090 you can use a model at resolutions up to 8K.
Side note: the plugin allows you to link to and use rented hardware, so even on a low-end system you can pay for that option and have far greater flexibility than Adobe, their model, and its trained sizes.
Also, as you can make your own models and use them, you can be unique in what's made. I love that above all, as I trained on my art.
Adobe has benefits too, however: it's easy to use, with fewer issues when anything goes wrong, though less powerful. It has Adobe support, and is also trained on Stability AI and uses Stability AI for the AI, just like Stable Diffusion, which is made by Stability AI. This means you can expect Adobe to go to 4K and 8K soon enough. Adobe currently has a better language engine trained into their model, so you get more accurate responses, though Adobe runs an AI that checks prompts, which affects what you can do and causes confusion, e.g. "fix skin": skin is seen as being used for something lewd, at least when I was using it to fix photos in November.
Both have their pros and cons; for me this is better, as I control it.
But note again, with resolutions you do need the hardware or a rented cloud GPU, though as I mentioned, the cost of an Adobe subscription can very easily get you that.
I know this is long-winded, covering a few things that I feel are important to understand.
PS: on my hardware I don't mind sacrificing resolution down to 512x512 models. The reason is I have an old purchase of Topaz Gigapixel AI upscaler, so once I have done the restoration work etc. I run the results through the AI upscaler to 4K, 8K, etc. I had to do this with Adobe too.
There are free AI upscalers available, and I recommend looking at them.
PS:
I have to lower the resolution due to using just a GTX 1070 and, of course, I'm encoding through my GPU at the same time as using it for this program; that's a lot for the poor old system.
Short answer: if you don't have the hardware for the larger resolutions, save your money and use a free AI image upscaler. Try it and you'll be amazed.
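To make the "work small, upscale after" idea concrete: the crudest possible upscaler just repeats pixels (nearest neighbour), which is the step that AI upscalers replace with invented, plausible detail. A toy sketch of that baseline on a plain grid of pixel values (not any particular tool's API):

```python
def upscale_nearest(pixels, factor=2):
    """Nearest-neighbour upscale of a 2D grid of pixel values.

    Repeats each pixel `factor` times horizontally and each row
    `factor` times vertically. AI upscalers (Topaz Gigapixel, free
    alternatives) do the same enlargement job far better by
    synthesising detail instead of duplicating pixels.
    """
    out = []
    for row in pixels:
        # widen the row, then repeat it vertically
        wide = [p for p in row for _ in range(factor)]
        out.extend(list(wide) for _ in range(factor))
    return out
```

For example, `upscale_nearest([[1, 2], [3, 4]])` turns a 2x2 grid into a blocky 4x4 one; an AI upscaler would smooth and re-detail those blocks instead.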
Why is the link to the plugin leading to just a PNG image, and also giving a virus warning from Avast?
It's standard for most virus scanners to warn about anything from a cloud. It is a PNG image: right-click and Save Image As; there should also be a download link for the image. Otherwise, any colour static will do the job: just use Google Images, search for colour static, save one, crop it to a square and place it in the mentioned directory.
Any mess of colour seems to work; static seems best.
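If you'd rather generate the colour static yourself than hunt one down on Google Images, here's a minimal stdlib-only sketch that writes a square PNG of random-coloured pixels. The filename and size are just examples; save it to whatever directory the plugin expects:

```python
import random
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length + tag + data + CRC32 of tag+data."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def make_colour_static(path: str, size: int = 512) -> None:
    """Write a size x size PNG of random RGB pixels (colour noise)."""
    # Each scanline starts with filter byte 0, then size RGB triples.
    raw = b"".join(
        b"\x00" + bytes(random.randrange(256) for _ in range(size * 3))
        for _ in range(size)
    )
    # IHDR: width, height, bit depth 8, colour type 2 (RGB), then
    # compression, filter and interlace all 0.
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 2, 0, 0, 0)
    png = (b"\x89PNG\r\n\x1a\n"
           + _chunk(b"IHDR", ihdr)
           + _chunk(b"IDAT", zlib.compress(raw))
           + _chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

if __name__ == "__main__":
    make_colour_static("colour_static.png", size=512)
```

Any noisy mess of colour works, as noted above; this just guarantees a clean square one at whatever resolution you want.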
I've installed Krita, but my GPU lacks sufficient power to run it. Is there a way to leverage Google Colab to utilize the Krita AI plugin? 🤔
There was, I think on the 1.10 version, an option for a rental GPU, but I don't see the option any more; there might have been issues. What GPU do you have?
Wow, so Filmora like!!!!
My editing skills suck. I use Movavi video editing, cheap for my poor budget and basic to use, not Filmora. But it's very basic, and with dyslexia, which is affected by complicated drop-down menus, sub-menus and complicated UIs, I need simplicity. I'm definitely no pro 😢😂
It's not "Krayta", it's "Kreeta"
Neurological dyslexia affects speech, grammar and spelling. It's not deliberate; I know it triggers people, and I'm sorry for that. It's why one second I pronounce a word differently, seconds after I have just said it. Frontal-temporal lobe damage, likely from a car accident when I was 5 years old, when my seatbelt ripped and I went through the car window; it's also why part of my face and mouth doesn't move fully.
I often apologise for this issue throughout many videos.
Can you show how to install it on macOS?
I don't own a Mac to do that, I'm afraid, but on the Reddit there is a person that has done that; they linked it to ComfyUI with this:
github.com/Acly/krita-ai-diffusion/blob/main/doc/comfy-requirements.md
LOL, how will you render on a Mac without a video card?
@@mhavock if you go to the plugin's Git page you will see Mac support has been added. There is something called MLC, machine learning code, that interfaces differently so it doesn't need CUDA. I wonder when there'll be a tensor option for Nvidia RTX users, as those cards are made for AI use.
@@mhavock then why does Krita have a macOS version shown on their website?
@@BRUTALKING Krita is a painting app; the plugin he is showing is from separate developers. The plugin may run on Mac, but without hardware acceleration (like a video card) it will be A LOT slower.
The "secret sauce" of Adobe's speed is simply that the processing is done in the cloud, which is why you need to pay credits for it.
No it's not... An RTX 3060 8 GB generates as fast as their cloud. My mate's RTX 3060 12 GB is even faster than that, and an RTX 4080 takes 2 seconds, which is much, much faster than Adobe. AND again, for what you pay per year you could easily buy one of these cards and have more control than what they offer, and again, NO limitations on generations. Plus you can play high-end games and more. It's a no-brainer to put your money into your own system rather than give it to them for a limited service.
Their secret sauce is a better auto-selection in the background that results in better blending, and an auto static in the background so the AI doesn't concentrate on the information in the selection. Plus their model is very large, over 400 TB of training images. BUT their model is still extremely limited, and the ability to use your own models etc. is far, far, far better.
PS: read the pinned post for more.
@@streamtabulous well yeah.. I didn't mean to single out speed per se as the main point, rather the overall fact that it's done in the cloud using Firefly and a huge training set, and from what I've seen, it tends to produce better results with less effort in many situations, as I think they use a fairly large number of steps for the generations. (Which, for most average users, does make it quicker in the long run.)
The speed locally can vary a lot depending on what models you're using though. I have tested some on my 3080, and it can take up to 2 mins to generate. Some will do it in seconds. I just found out about SDXL Turbo today though, and it can generate images locally in about 200ms! 😂
With regard to the selection stuff, I haven't tried it myself yet, but watched friends do it, and they were working with much cleaner images, so selection issues weren't really apparent.
@@streamtabulous totally agree that having to pay per gen is shit. Especially given that to even use it, you have to pay them a monthly sub as well!
@@welbot 2 min, ouch; are they high-resolution images? The GTX 1070 8 GB did not take much longer than the cuts in the video; as soon as I stop recording it flies much faster, it's just OBS encoding through the GPU while I'm using it. Make sure you have Microsoft Visual Studio and the CUDA toolkit installed. I mentioned those in the video on speeding up A1111 for Nvidia users, but it works across the board.
Yeah, definitely faster models; some are slow. The one I trained in this video is massively slow because I used an old AI training set, but I have newer SDXL models that are much faster, and if you set LCM they work faster, but used on, say, a model with built-in LCM it will be a mess.
I'll have to do a video on models, as they're so complicated.
My favourite models are:
Absolute reality
Epic realism
Robot chillout
RPG
Uber Realistic, that's a not-safe one, but the training set spans multiple styles
OpenDalle, yes that's the DALL-E one; you need to get it from Hugging Face, it's a very fast SDXL
Playground AI, again from Hugging Face, the online AI Playground core model
Colorful
Reality check
Sdxxl
What sampler and settings you use plays a massive part in speed. Again, I'll have to do a video on that.
Adobe is handy, especially if the feature is on iOS. But overall I like these tools more, and putting the money into my own system. That's my goal for the end of next year: upgrade the GPU to an RTX 3060. I have one in an HTPC for gaming on the TV and have used and tested all these programs on it; they're so much faster it's insane.
Again, with CUDA and VS installed.