INSTANT LORA - No Training Required - ComfyUI
- Published: 10 Jun 2024
- AloeVera's Instant-LoRA is a workflow that can create an instant LoRA from any 6 images. This install guide shows you everything you need to know. This is really nice for styles, but can also work for faces. It doesn't require any extra training time, so you can experiment with all kinds of styles instantly. Another benefit is that you don't have to keep all the LoRAs around. Just 6 images as input and you are ready to go.
#### Links from my Video ####
ComfyUI Install: • ComfyUI - Node Based S...
AloeVera's - Instant-LoRA civitai.com/articles/2345
FreePik: www.freepik.com/free-vector/c...
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas
Thank you for showcasing this great tool! I rarely use ComfyUI, but workflows like this make me want to use it more. I'll for sure be checking this one out! Thank you!
Thank you for explaining everything in detail. I think I'll be switching to ComfyUI and it's very very helpful!
Always amazing content!
This is super useful!!!!!
That cmd in address bar is such a cool, hidden feature.
It's definitely a useful trick. There are a lot of little shortcuts and lifehacks across Windows (and other tools) - someone has surely made a video showcasing them all; it's worth learning them!
Thanks for such a nice and informative video
IPAdapter is amazing. It basically copies the style and some of the detail from an image. Way more fun than just instant LoRA. You can basically recreate images using a prompt with it. Definitely worth having a play if you haven't used it before.
@eucharistenjoyer yeah sorry. I didn't mean it to sound like I was against the video. I meant that there is much more than instant LoRA. It's great fun.
Been meaning to try it in A1111 for a while now, just did and it is amazing! Thanks for the kick in the butt! :)
@jiml5166 let's hope it gets added to Automatic1111
👍👍😀 Thank you for your lessons, I watch you from Russia and don't miss a single video.
I was skeptical. I tried. And MY GOD, it does work! It's essential, because LoRAs, while good, take a long time to train properly. InstaLoRA is a much faster process. It's a must if you need a LoRA for a specific item, like a special gun for your character.
Looks brilliant! Is it possible to use it in combination with controlNet?
Extremely useful and wonderful tutorial!!!🙏🏻🙏🏻🙏🏻😀❤️
I really need to step up my Hawaiian shirt game.
Particularly since I live in Hawaii and you're shaming me.
You can get a good output with just 1 single source image and playing with the Lora strength. The developer says not to put in extra images if they don't add any new concepts to the clip model.
There's also an SDXL version; it's in beta and a bit more capricious, but it gives good results. You can load fewer images or more - this pack includes a Load Image From Dir node, so you just need to point it at the folder where your images are stored.
Name? Can't find it
@xlegija everything's there, with some PNG images loading the process directly in Comfy: ruclips.net/video/dGL02W4QatI/видео.htmlsi=67kcZ1zdnRf6SWWV
Very nice as usual! The install has been updated since then and it's not working for me... IPAdapter nodes are not found even when I ask ComfyUI to install missing nodes. Is there an updated explainer video for this cool tool?
Wish someone could make an A1111 version of this.
I heard that this is basically ipadapter, I haven't been able to use auto1111 recently so you'll have to look into it.
Would definitely be a big plus!
Absolutely agree!!!
Can't agree more, A1111 is my go to program when it comes to image generations.
its called Kohya
Hello, great explanation as always...
A few days ago I created some workflows on the topic of Instant Lora and IP adapters.
I just uploaded it to CivitAI and linked your video there. I hope that is ok?
Best regards, Murphy
I figured out how to specify the models on the nodes that were 'minimized', but now I get an exception in the IPAdapter node: Currently, AutocastCPU only support Bfloat16 as the autocast_cpu_dtype.
Oh no! Now you too are using my favorite ComfyUI :D :D :D (but... still using old outdated models!)
I couldn't get this working with SDXL models, if that's what you mean?
New updated workflow just came out!
Aside from the age-old question of Midjourney vs Un/Stable Diffusion , and now vs Stability AI's Dall-E 3 / Bing Image Creator (or even Open AI's Dall-E1)...
Any thoughts on using Fooocus/SDXL? You don't have total control (ex. CFG values) but it does seem to help in learning how some controls affect the image?
Also, with so many tools
(Detail Tweaker, ideogram ai , BlueWillow, Playground ai, Leonardo ai, ControlNet Tile,Realistic Vision V5.1, After Detailer, Automatic1111, DreamBooth, LowRA, Vlad Diffusion, Invoke),
any thoughts on mixing tools together? For example, using LyCORIS (LoCon and LoHa) with LoRA?
There's a 'moonride edition' of Fooocus called fooocus mre that has more advanced things like canny, depth, cfg and other sampling stuff while remaining dirt simple. It even has freeu. Not sure if the original does.
How do you save it as a LoRa to use in other workflows? I might've missed that part.
I changed my comment because I already tested the API application and corrected the problem. It is working very fast and works like a LoRA. For those who have a memory problem: lower the batch, and enter the correct resolution of the images; later there is an option to change the size of the output image.
I tried with very good quality images and they are generated realistically, and 0.40 for the LoRA weight seems correct to me because it did not respect the positive prompt 100%. This is when applied to people.
When I try to do the update, it doesn't actually add all the folders to the custom nodes like it did in the video?
Can I just install this or would I need to have something else installed locally? Still trying to figure out how to use GitHub.
Where do we save the workflow JSONs from AloeVera? Great video btw, everything else was very informative.
You can save them anywhere on your system. The JSON files can be dragged into the ComfyUI window or you can use "Load" from the Queue Prompt window. You can also drag any PNGs created with ComfyUI into your window and it'll pull up a workflow as the PNG has the JSON embedded in it.
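To illustrate the embedded-workflow point above: ComfyUI writes the graph JSON into the PNG's metadata, as an uncompressed tEXt chunk keyed "workflow" in current builds (that key is an assumption for older versions). A minimal stdlib-only sketch that pulls it back out:

```python
import json
import struct

def extract_workflow(png_bytes):
    """Scan PNG chunks for a tEXt chunk keyed 'workflow' (ComfyUI embed)."""
    assert png_bytes[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    pos = 8
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            if key == b"workflow":
                # tEXt payloads are latin-1 per the PNG spec
                return json.loads(value.decode("latin-1"))
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return None
```

With Pillow installed you can get the same string more simply via `Image.open(path).info.get("workflow")`.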
Hey, it's Oz! Great to see you here doing AI Art.
It keeps asking me for an IP model but I can't figure out how to specify that in this workflow; I've got IPAdapter installed.
Love your videos Olivio
Always amazing content, just wish I could get it to work. Never seems to work for me. Lucky I guess
Strange. Make sure you don't get any errors. Join my Discord to ask people there, with screenshots of your workflow and results.
The AloeVera workflows are so confusing. How do you combine this AloeVera instant LoRA with your double model workflow from the ComfyUI beginner video, @Olivio Sarikas? Can you make a .json workflow file for that, or make a video tutorial on how to do it? Right now AV is only using 1 checkpoint, and it's not using efficient nodes and a LoRA stacker...
I’m curious how I could integrate this into an Img2Img workflow, using my own image as input but conforming it to an “instant Lora” style.
It works great, and it's simple because IPAdapter works on the conditioning part, not the latent, so you can do img2img and play with the denoise value easily.
Help! Can anyone share the Aloe Vera Instant LORA ImgDrop workflow
Error occurred when executing KSampler:
The size of tensor a (1024) must match the size of tensor b (1280) at non-singleton dimension 1
I am having this error in a Colab notebook. Does anybody know what to do?
The input image must be 1024px, if you want to generate at 1024px, in the case of 768px you should have an image at 768px.
@canaljoseg0172 I tried to input all images at 1024 x 1024 and set the dimensions to 1024 x 1024 in the Empty Latent Image, but the same error is occurring. Is it because I am using an SDXL model?
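For context on the error in this thread: the sampler is adding two tensors whose sizes (1024 vs 1280) don't match, which typically happens when SD 1.5 and SDXL components, or mismatched image/latent resolutions, are mixed in one graph. The shapes below are hypothetical stand-ins, not the exact tensors ComfyUI builds internally - just a minimal NumPy sketch of the same failure mode:

```python
import numpy as np

# Illustrative sizes: 1024 stands in for one component's tensor width,
# 1280 for the other's. Mismatched non-singleton dimensions cannot be
# broadcast, which is what the KSampler error is reporting.
a = np.zeros((1, 1024))
b = np.zeros((1, 1280))

try:
    a + b  # elementwise add requires matching (or broadcastable) shapes
except ValueError as err:
    print("shape mismatch:", err)
```

The fix, per the replies, is to keep everything from one family: an SD 1.5 checkpoint with SD 1.5 support models, or SDXL with SDXL, plus matching image and latent sizes.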
So they create a UI where everything can be installed through its manager (allegedly), here is an awesome thing but you need to go to folder x, and folder z, and install this, install that...
Please... Noob question... how can I use controlnet or some way to inpaint the style onto an image I want? Thank you.
Thank you.
This method of the instant lora is great for style transfer. so that should work fine for you.
This seems to have much of the same effect as using a low weighted tile controlnet with text2img?...
It uses ip-adapters. You can combine ip-adapters with lora trained on same image dataset to get great results. With controlnet extension you can combine ip-adapters with controlnet and get amazing images.
Too bad - meanwhile, the installation procedure changed completely, so the method shown here does not work anymore. But to make it worse, the NEW install method doesn't work either. So no way to install this at the moment 🤔
Anyone else having issues running this? I've downloaded all the required files but when I hit queue prompt it doesn't run. I tried another workflow and it was fine...
Did you put the model files in the correct paths? I got them reversed initially, but I think I got an error when that happened. Also, are you using an SD 1.5 model or SDXL?
@jibcot8541 I'm using XL so that might be the reason...
Is it possible to import things intended for ComfyUI into InvokeAI's node UI generator?
No, they are different systems and nodes coded for comfyui won’t work for invoke
You should look at Stability Matrix. It is like Automatic's UI but with engines powered by ComfyUI. I'm in no way affiliated with them. 😂
In my opinion, the interface and the ease of use combined with very great features is the best part of Invoke... and it improves fast... multi-image prompt... instant LoRA is already on the roadmap.
I mean, with 6 images, you could train a lora in about 5 minutes, if that, with way more control over your final outputs.
Why waste 5 mins on every style you want to experiment with when this is just as good?
What is your pc setup?
@godpunisher I have a 12 GB 3060
It takes half an hour to train a lora with 6GB VRAM. Besides, as someone else cleverly pointed out, your five minutes are five minutes PER LORA. Time waste stacks up.
I also don't get your point that a real lora is more flexible. Everything I did with my trained LoRAs, I can do with InstaLoRA.
404 The page you are looking for doesn't exist
So this creates an "Instant Lora" and doesn't let you create a Lora instantly... sigh, tasty bait though
No need to output a Lora, as the image itself can be mixed and matched.
Nothing stopping you from batch loading in entire folders of images and using them to generate.
Would prefer to output a Lora instantly so I am not stuck in just Comfy. @@_SimpleSam
this is the end of the design
Not working anymore
...but the bad news is: only on ComfyUI :-/
It will also soon be available in InvokeAI.
It's already on the roadmap... and... it can be used in the linear interface and also in nodes...
Only getting lots of errors.....
Yeah, yeah... All you need is just download 3-5 GB of stuff...
I installed the model but I get this error when trying to generate- Error occurred when executing IPAdapter: 'NoneType' object has no attribute 'patcher'. I tried google but nothing relevant came up. Anyone know what might have gone wrong?
What do I need to copy to where?
He is calling this a LoRa lol
A little Clickbait my friend!
Interesting workflow concept, but ShittyUI is a no no at least for me.
Most professional tools like Blender, DaVinci Resolve or Unreal Engine use a node-based UI because it can do way, way, way more than what A1111 can do. It's really not that hard. Just give it a try.
@OlivioSarikas The problem with this is that it is only for total specialists... each node tree looks different, and even if identical, if you import it somewhere else and only rearrange the nodes, your colleague who opens the workflow will be totally lost.
If ComfyUI were your only tool and you are a one-man show, it's great... but in bigger companies a linear workflow is much more important, because everyone can rely on it not changing from user to user.
Now... why not InvokeAI? It has improved incredibly and still has the most intuitive interface and... also involves nodes for the nerds among us...
Sorry, still won't use comfy. It's clunky and easy to screw up. Troubleshooting it as a novice is extremely frustrating. There is a reason its less popular than the others.
I get this error:
Prompt outputs failed validation
VAELoader:
- Value not in list: vae_name: 'v2-1_768-ema-pruned-0869.vae.pt' not in ['sdxl_vae.safetensors']
UpscaleModelLoader:
- Value not in list: model_name: '4x_NMKD-Superscale-SP_178000_G.pth' not in ['ESRGAN_4x.pth', 'RealESRGAN_x4plus.pth', 'RealESRGAN_x4plus_anime_6B.pth']
Can anybody guide me in the right direction?
You don't have the VAE or the upscaler model that the person who created the workflow was using; either change the models used in the nodes, or download them and name the files exactly the same.
He should have mentioned this in the video... I am struggling to get it to work, with the same errors 😡
Olivio please only use Comfy from now 😭😭 A1111 is so basic