Keep up the great work, Olivio.
Thank you for your support. really appreciate it :)
#### Links from the Video ####
JOIN the Contest: contest.openart.ai/
Download the WORKFLOWS: drive.google.com/file/d/1EhEOpQmxStEChqzg3Qfp_phyLyQK43Bx/view?usp=sharing
Matt3o Channel: www.youtube.com/@latentvision
Deliberate Models: huggingface.co/XpucT/Deliberate/tree/main
IP Adapter and Encoder: github.com/cubiq/ComfyUI_IPAdapter_plus
MM_SD Models: github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
ControlNet Tile (control_v11f1e_sd15_tile.pth): huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth
I was never the artist type, but I was always a nerd. I love creating but I don't like drawing, and this new form of art is incredible.
Matteo is the real deal... every video tutorial is pure gold!! Not like other idea-stealing YT channels.
🎯 Key Takeaways for quick navigation:
00:00 🎨 *Introduction to IP Adapter Workflows*
- Overview of workflows by Matteo using the IP Adapter.
- Open art contest details with a prize pool of over $13,000.
- Invitation to explore and enter multiple workflows for free.
01:09 🖼️ *IP Adapter in Multi-Style Image Composition*
- IP adapter usage for combining three different art styles in an image.
- Importance of using a rough mask with specific colors.
- Explanation of IP adapter model inputs and locations for model files.
03:50 💻 *Setting up IP Adapter Models and Files*
- Detailed guide on downloading and organizing IP adapter models.
- Different versions (normal, plus, plus face, full face) and their use cases.
- Instructions for saving the models in the appropriate folders (see the folder sketch after this summary).
06:14 🔄 *Multi-Image Composition Workflow*
- Demonstration of combining multiple images using IP adapter iteratively.
- Importance of using the correct mask channel for each image.
- Upscaling process for achieving high-resolution and detailed results.
07:48 🎭 *Conditioning Masks for Image Manipulation*
- Utilizing Conditioning (Set Mask) nodes to apply prompts to specific image regions.
- Example of changing hair color using conditioning on different mask parts.
- Highlighting the flexibility of conditioning for various image modifications.
10:19 🎞️ *Creating Blinking Animation*
- Generating a blinking animation using a clever image rendering technique.
- Importance of using specific checkpoint models and the SD 1.5 version of the AnimateDiff loader.
- Tips for updating extensions and ensuring smooth workflow execution.
13:28 🌐 *Blending Between Two Images*
- Creating an animation blending between two images using masks.
- Distinction between 16-frame and 32-frame workflows, considering CPU and GPU usage.
- Special attention to control net models and their versions for different workflows.
Made with HARPA AI
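A quick aside on the 03:50 setup step summarized above: a hedged sketch of the folder layout the IPAdapter Plus custom node expects. Paths assume a default ComfyUI install and the locations named in the video and the node's README; adjust `comfy_root` to your own installation.

```python
from pathlib import Path

comfy_root = Path("ComfyUI")  # adjust to your installation

# Typical locations: IP Adapter model files go in models/ipadapter,
# the CLIP Vision image encoder(s) go in models/clip_vision.
expected = {
    "IP Adapter models": comfy_root / "models" / "ipadapter",
    "CLIP Vision encoders": comfy_root / "models" / "clip_vision",
}

for label, folder in expected.items():
    files = sorted(p.name for p in folder.glob("*")) if folder.exists() else []
    print(f"{label}: {folder} -> {files or 'missing'}")
```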
Thanks!
These are perfect, thank you for the in-depth analysis! Bravo, Olivio!
Olivio, give Automatic1111 some love. Many of us are not ready to switch to Comfy, as A1111 is easier to use. I use the nodes workflow to create 3D textures, and I know it's a powerful tool, but sometimes I just want to load a model and work on it without having to fiddle around with hundreds of nodes and parameters.
One of Olivio's videos shows how to add a node to ComfyUI that makes it interoperate with A1111 installed on the same machine. It helps bridge the gap. InvokeAI also has a nice node interface, but as far as I know it still doesn't connect up with ComfyUI or A1111 just yet. While nodes are fun for building a workflow like the one described in this video, we'll keep getting better apps and user interfaces that make the process more fluid, and that's what I look forward to most.
It amuses me that some like to frame this as some kind of AI art arms race. It is ever-evolving, but no one knows into what.😊
@@LTE18 Nodes have been a thing for a while in various apps, but I have never considered being something akin to a glorified telephone exchange operator to be the pinnacle of art creation. While AI is amazing, I'm sure most people won't favor nodes as the input.
I agree with you, but the problem is not the love, it's the evolution. A1111 evolves too slowly: ComfyUI is almost always the first to make new things work despite its horrible interface (it's not complicated to understand, but it's annoying to use: you have to prepare a workflow before creating, and that's not how I work. I need to improvise and I can't with ComfyUI). So it's normal to make videos about ComfyUI when your videos are about news in AI.
@@lennoyl Well, Comfy is pushed hard because it's now owned by StabilityAI. A1111 is still developed by the community. StabilityAI can also help the most popular open-source platform for its models.
Keep doing your job, you are the best. This is the least I can do to contribute to your excellent education content.
I thought he was the guy behind the nodes, not the technology. His videos are amazing and well explained, and now I feel even more respect for the guy.
That image blending is awesome! I need to play around with this... I want to blend a wintery scene into my cousin's air conditioning company logo!
I watched the original video.
And I'm definitely going to need an AI to do that for me.
But that should be around sooner than later.
Just a few months ago, all of these were still impossible to do. The updates are really fast and exciting.
Really great video, very helpful tips and workflows, thank you!
Iterative Latent Upscaler gives the best results from my tests
If you don't mind, how did you get the image @ 9:30 at the top left, with the really cool Final Fantasy-styled concept art of a female character? I see you loaded it, but if you created it and can provide the prompt/model to recreate it (and similar neat concepts in that style) or a related resource, that would be appreciated.
Awesome! You got me some really great stuff with the logo animation, that was awesome.
Nice Summary! Such an amazing node to use with animations.
This is getting really interesting, a bit like VST plugins or synths for audio, or filters or plugins in Photoshop or Premiere, only much more powerful!
Really love your ComfyUI videos. Please do more of them; ComfyUI seems to have a lot of haters in the community, and they don't realize how much potential this thing has.
I don't hate ComfyUI, but I'm never going to use it. It's like trying to read a wiring diagram and I have no desire to do that. I see a ComfyUI video and I just don't watch.
I didn't like it before but I got past that and started getting used to it. Now I have 3 workflows and use it constantly.
Another great video. Thanks
I don't even know how you manage to stay calm. These technologies drive me crazy! Every day we have something new, something that doesn't work and has to be fixed. And you are always at peace! I'd like to pay homage to your patience. Thank you for that! 😅
Nice one Olivio.
He isn't the creator of IP-Adapter, he created the custom node for ComfyUI that uses IP-Adapter.
Correct, he's the developer of the IPAdapter Plus custom node, still a total MVP though!
What will be interesting is text-to-video output from something like Pika 1.0 put through a ComfyUI workflow to overlay styles and upscale.
The workflow for putting two figures into a central image is a lot of fun. Sometimes, though, I have found that one of the input images gets ignored entirely (so only one of the two figures is used) and I can't figure out what causes that. Is it just seed randomness? I did check the IP Adapter to ensure I didn't accidentally disable it there. I figure it may have to do with the CLIP Vision crop setting, but I haven't figured it out. Interestingly, if I switch sides of the RGB mask (so the missing figure is on the right side if they were missing from the left), that seems to work. The figure in the input image was centered, though, so I would assume clip vision crop = center is correct?
Hi Olivio, can you do a video about how to add a prompt styler to any workflow? I looked on YouTube and no one has made a proper tutorial. Thank you.
Trying to get started with ComfyUI. I can't get the blinking example to work. I tried it with my own two pictures, but I have a feeling it just ignores my reference pictures. I tried it with the example pictures and it just skips half the nodes. After that when I refresh and queue the prompt again it just starts the last node ignoring the rest. What am I doing wrong?
Hey Olivio, I gotta say, your channel is always my go-to for everything about SD. Really appreciate you keeping everyone in the loop with all the new stuff in AI generative art. Thanks a bunch and keep it up, friend!
Hi, where can I get the IP Adapter encoder 1.5 safetensors file for Load CLIP Vision? I cannot find it.
Props, I also recently subbed to them, they are a wizard.
Great Guide
now you are frying my brain - but I love it!
@Olivio, upscale question: in the first workflow from Matteo, the upscale comes from the first KSampler into the upscale path and makes its way into the second KSampler's latent input. The second KSampler's model input comes from the path of the original model and IP Adapters, and the conditioning from the original prompts. My question is, as the image is upscaled in this scenario, is it taking any information from the first KSampler's output? What exactly is being sent from the first KSampler to the second via the latent? Is it image information or just the dimensions of the image? I hope this makes sense. I wish someone would go deep into the path of the data and image pixels as they go from the first-gen KSampler up the upscale path.
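For readers puzzling over the same question: what travels along the LATENT wire is an actual tensor of image information in latent space, not just the dimensions. A minimal sketch of what a latent-upscale step between the two samplers roughly amounts to (this is not ComfyUI's source code; the shape assumes an SD 1.5 latent):

```python
import torch
import torch.nn.functional as F

# Stand-in for the first KSampler's output: an SD 1.5 latent is roughly
# [batch, 4, height/8, width/8]; this one corresponds to a 768x512 image (W x H).
latent = torch.randn(1, 4, 64, 96)

# An "Upscale Latent" style node essentially resizes that tensor, so real image
# content carries over; the second KSampler then refines the enlarged latent
# at a lower denoise setting instead of starting from scratch.
upscaled = F.interpolate(latent, scale_factor=1.5, mode="nearest")
print(upscaled.shape)  # torch.Size([1, 4, 96, 144])
```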
How do you get this "blueprint" view in Stable Diffusion?
If you use Colab notebooks, how do you achieve a similar level of control to having an elaborate GUI?
Yes, you can download them now in the extension. I found them today, but you need to do an update.
OK, now it's mostly a ComfyUI channel.
Yea, they all sold out.
LOL, yes, totally selling out on a FREE tool - You got me bro! Cancel Culture Rage to the Max please
Does this workflow only work for ComfyUI? Can it work for the standard Stable Diffusion web UI?
Did you figure this out?
I couldn't manage to make the conditioning through the prompt work in the second example with SDXL, is this possible?
Thanks for the video. Would you mind making a tutorial for SDXL AnimateDiff workflow? ...I just couldn't get it to work. My output is all black unless I adjust the size to 256*256.
great workflow
Is it possible to use multiple IP Adapters on one video clip? So if a person turns around, how does it know to keep the stylization of the person?
Hi Olivio, I have a problem with loading the IP Adapter. The Load IPAdapter Model node shows "ipadapter_file: null". I've made a folder in the models folder called ipadapter, changed the model .bin file name to ip-adapter-plus_sd15.safetensors, and updated Comfy. But it says: Prompt outputs failed validation, IPAdapterLoader: Value not in list: ipadapter_file: 'None' not in []. Could you please guide me?
OLIVIO, Is it possible to create img2img workflows using SDXL Turbo in ComfyUI???
Probably rename the video to ComfyUI
Is the future
@@Bikini_Beats what a joke
@@jevinlownardo8784 any suggestions for a competitive alternative?
@@jevinlownardo8784 comfy gang gang
Yes, I saw his video, it's incredible stuff. Question for you: can you use the canvas node you introduced me to to make these rudimentary RGB masks? I'm trying it now. You can use its mask, but I don't see how to separate the RGB in the same way the image load node does it.
Great video and workflow ideas. Thanks. As an A1111 user I am just trying to explore Comfy UI. I think in the end it could have some sort of macro interface above this piping (like a lot of software e.g. some synths in the audio world). Then casual users can create more easily using just the macro controls, whilst still allowing others to do a deep dive and customise in detail to their needs.
Workflows are the macros: you can save a finished picture in the workflows folder, and it will import the workflow it was created with when you use "Load".
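That works because ComfyUI embeds the whole node graph in the PNGs it saves, so dragging a saved image back in restores the workflow. A hedged sketch of reading that metadata yourself (the filename is a placeholder; "workflow" and "prompt" are the text keys ComfyUI writes):

```python
import json
from PIL import Image

img = Image.open("ComfyUI_00001_.png")   # placeholder: any image saved by ComfyUI
workflow = img.info.get("workflow")       # node graph as JSON text
prompt = img.info.get("prompt")           # executable prompt/graph as JSON text

if workflow:
    graph = json.loads(workflow)
    print(f"{len(graph['nodes'])} nodes in the embedded workflow")
```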
wow! I'm going deeper underground
When I copy your workflow it does not work. It says I am missing all the nodes, and then when I tell the manager to install them it can't find them. Something is weird about the new ComfyUI.
You need to update to the latest version of ComfyUI.
This is actually a basic setup; you can do much, much more with bbox + auto masking + segmentation and IP Adapter.
Yes, of course. This is only to show an idea of how to use it. :)
Love your content, was reviewing your product placement video; however, I really want to try to place products in the hands of AI models/people. Any workflow for this?
My question is probably silly but I will ask it anyway :) Where can I get the RGB.png picture? I don't find it inside the workflow.
Is this possible in Forge?
This is the IP-Adapter node creator :) (IP-Adapter has been created by lllyasviel, the creator of ControlNet & Fooocus)
Can this be accomplished in A1111 with Segment Anything? Thanks
Can something like this be run on an RTX 4070 Ti?
Yes... easily.
@@mirek190 even for video generation?
@@fimbulInvierno yes
For video generation you need 12 GB VRAM
Hello sir! I'm wondering if there is something like a tile upscaler in ComfyUI? Or something similar that would add detail while upscaling.
yes
@@mirek190 I didn't know. Any tips on this type of workflow, sir?
Are you serious?
Find some workflow for it... @@ultimategolfarchives4746
Where can I get these rgb.png masks?
You paint them yourself in any paint program or online paint app.
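For anyone who prefers to script it instead of painting by hand, here is a minimal sketch (sizes, shapes and regions are arbitrary placeholders) that produces the kind of rough red/green/blue mask the Load Image node can then split into its R, G and B channels:

```python
from PIL import Image, ImageDraw

# Blue = background region, red and green = the two subject regions.
mask = Image.new("RGB", (1024, 768), "blue")
draw = ImageDraw.Draw(mask)
draw.ellipse((80, 160, 440, 720), fill="red")     # first subject, roughly on the left
draw.ellipse((580, 160, 940, 720), fill="green")  # second subject, roughly on the right
mask.save("rgb_mask.png")
```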
Wow, extremely quick response, thanks a lot. Hope you don't mind that I created them through a screenshot, but I was too impatient testing the workflow... the result is amazing @@OlivioSarikas
How do you create the masks?
7:21
What's the browser theme that you're using? The tabs look a little more rounded than usual.
Thanks for the video, as always.
So you just do ComfyUI now? I am on an AMD GPU so I can't use it. If that's all you use, then I know whether I should watch the videos or not.
Wow, Olivio has more than 200k subscribers!
Does someone know if I can have 2 different graphics cards working together?
I have a 3090 Ti and a 2070 Super. If I could have 32 GB for image AI...
nope, sorry
Your RTX 3090 has 24 GB of VRAM, that is more than enough to work with literally everything...
@@mirek190 Not by a long shot. These AI programs really need a lot of GPU power. The more the better.
@@effehell7593
At the moment, a cheap RTX 3090 (after the mining hype) is the best solution.
I bought my RTX 3090 with 24 GB VRAM for 700 euros.
For AI work the RTX 3090 is as fast as the RTX 4080 but has more VRAM: 24 GB vs 16 GB on the RTX 4080.
So it has a bit longer future than 16 GB cards.
Right now, to generate pictures with the most advanced SDXL versions you need 8-12 GB VRAM.
To generate video you need 12 GB.
To work with LLMs you can fully fit a model on your RTX 3090 up to 34B size (all 65 layers of a q4_K_M GGML quant), and you get around 40 tokens/s.
Bigger GGML models like 70B you can put half on the GPU and the rest in RAM; a 70B model will then get around 3 tokens/s.
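A side note on the LLM offloading point above: with llama-cpp-python the split between VRAM and system RAM is controlled by the n_gpu_layers setting. A hedged sketch under those assumptions (the model path is a placeholder, and the right layer count depends on your model and card):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="models/34b-q4_K_M.gguf",  # placeholder path to a quantized model file
    n_gpu_layers=-1,   # -1 offloads every layer to the GPU; lower this for a 70B on 24 GB
    n_ctx=4096,
)

out = llm("Q: What does an IP Adapter do?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```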
This seems like an overly complicated way of avoiding the use of Photoshop. You could end up with the same result by making 3 individual pictures (background, girl 1, girl 2), also using IP-Adapter but in Auto1111. Then just open Photoshop and make 3 layers. Either blend everything manually if you know how, or make a rough version and improve it with img2img and/or inpainting in Auto1111. I'm sure ComfyUI has its unique qualities, I just think it's always better to combine the use of several programs to produce art. None can do everything perfectly.
Photoshop couldn't make the lighting coherent on all the characters, or have them interact like stable diffusion can do
@@yoyo1poe sure it could, it's not one click and you have to know what you're doing, I'll give you that. But you could totally do it I assure you.
Nodes are the death of passion.....
Getting better and better woohoooo last few videos are just...... umaaaah
That is the same brunette that features in pretty much every image generation I make
Has a1111 fallen so far behind that this stuff can't be used with it?
All the geniuses prefer this type of interface that's all. If you're a genius and try to talk about anything but Comfy you get blackballed.
Wonderful. Personally I find ComfyUI a real pain to use. I understand the versatility.
I still don't really understand just WHAT IPAdapter actually is/does.
I don't think anybody knows😂
It replicates a style in the picture, mostly used for replicating faces.
And the results can vary wildly depending on the whims of Stable Diffusion.
This seems to work as a prompt in the form of an image; basically transforming the image into a text description. Very useful, since you can describe more with a picture and perform image processing such as masking, adjustment, etc., which are hard to do and describe with words... Well, I'm still learning too 😅
@@bigbeng9511 I don't think that's it. That's how Midjourney's image reference works, or at least how it used to work, not sure if it still is. But IPAdapter is far too precise for that to be the case.
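For the thread above: the IP-Adapter paper describes the technique as decoupled cross-attention. The reference image is encoded with a CLIP vision model, projected into extra tokens, and attended to alongside the text tokens with its own weight. A purely conceptual, single-head sketch (made-up names and shapes; this is neither Matteo's node nor the official implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledCrossAttention(nn.Module):
    """Toy illustration: text cross-attention plus a separately weighted image branch."""
    def __init__(self, dim: int, ctx_dim: int):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        # separate key/value projections for text tokens and image tokens
        self.to_k_text = nn.Linear(ctx_dim, dim)
        self.to_v_text = nn.Linear(ctx_dim, dim)
        self.to_k_img = nn.Linear(ctx_dim, dim)
        self.to_v_img = nn.Linear(ctx_dim, dim)

    def forward(self, x, text_tokens, image_tokens, weight: float = 0.8):
        q = self.to_q(x)
        # the usual text cross-attention
        out = F.scaled_dot_product_attention(q, self.to_k_text(text_tokens), self.to_v_text(text_tokens))
        # plus an image cross-attention branch, blended in by the IP-Adapter "weight"
        out = out + weight * F.scaled_dot_product_attention(q, self.to_k_img(image_tokens), self.to_v_img(image_tokens))
        return out

attn = DecoupledCrossAttention(dim=320, ctx_dim=768)
x = torch.randn(1, 64, 320)      # latent "pixels" acting as queries
text = torch.randn(1, 77, 768)   # CLIP text tokens
image = torch.randn(1, 4, 768)   # projected CLIP image tokens
print(attn(x, text, image, weight=0.8).shape)  # torch.Size([1, 64, 320])
```

So it is not literally an image-to-text conversion; the image embedding conditions the sampling directly, which is why the results are more precise than a caption could be.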
A1111 can only dream of such functionality. Some things like ControlNet or posing also work in A1111, but in separate tabs, and it's very clunky to change things, imo. I'm just doing a bit of AI images, but after I saw Comfy I never went back to A1111, and now Comfy has all the functionality, and even more, that A1111 used to have as an advantage when Comfy was still pretty new.
And the manager helps a lot as well. I saw the IP Adapter in another video, that is very powerful stuff, like an automatic inpaint to a degree, but smart and for whole pictures. Kind of like a smart ControlNet, really.
Olivio, please, the "end screen cat" has got to go. The cat is a stock character from an automatic clip making site, so it's not unique to you and other people could use it freely. Plus, you now know so many great ways to make custom animation with all the workflows from the last few months. We need a new outro that's uniquely Olivio!
... also the cat wasn't even pointing at video links at the end of this one, it's kinda awkward. >_>
Unlike your usual videos, lots of interesting information here. Thumb up.
For the Automatic1111 users: I think you can achieve the same result as in the first example with image-to-image. It will just take three renders instead of one, but it will still probably be simpler to work with than doing the triple masking + spaghetti wiring in Comfy.
I wouldn't know how to do the second example in Automatic, but AnimateDiff doesn't work in Automatic for me for some reason. And tbh, I think the blinking-girl example could be done more easily in animation software, because IPAdapter plus AnimateDiff will just make the character fixed if the strength is too strong, or the style will drift too much when you lower the strength. So in this case it's just alternating two almost-still images.
"Unlike your usual videos, " too true
Really hate comfyui
I see ChaosUI, I leave, no upvote.
That's the most unintuitive thing I can imagine.
a11111 >>>>??????
ok boomer
@@mirek190 You're not cool cause you use Comfy.
Whether boomer or not, we're still enjoying life!😊😊@@mirek190
like 33
Only 1% of people are using that UI, so why do you keep showing things from that UI?
lol ... go dreaming.
Most people have moved to ComfyUI nowadays.
Stable Diffusion? And ComfyUI? Why say Stable Diffusion when it's ComfyUI? 😅
booo