How the IP-Adapter Creator uses IPA in Stable Diffusion!!!

  • Published: 10 Dec 2024

Comments • 132

  • @sazarod
    @sazarod 1 year ago +16

    Keep up the great work, Olivio.

    • @OlivioSarikas
      @OlivioSarikas 1 year ago +2

      Thank you for your support. Really appreciate it :)

  • @OlivioSarikas
    @OlivioSarikas 1 year ago +2

    #### Links from the Video ####
    JOIN the Contest: contest.openart.ai/
    Download the WORKFLOWS: drive.google.com/file/d/1EhEOpQmxStEChqzg3Qfp_phyLyQK43Bx/view?usp=sharing
    Matt3o Channel: www.youtube.com/@latentvision
    Deliberate Models: huggingface.co/XpucT/Deliberate/tree/main
    IP Adapter and Encoder: github.com/cubiq/ComfyUI_IPAdapter_plus
    MM_SD Models: github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved
    control_v11f1e_sd15_tile.pth: huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11f1e_sd15_tile.pth

  • @Smashachu
    @Smashachu 1 year ago +18

    I was never the artist type, but I was always a nerd. I love creating, but I don't like drawing, and this new form of art is incredible.

  • @WhySoBroke
    @WhySoBroke 1 year ago +4

    Matteo is the real deal... every video tutorial is pure gold!! Not like other idea-stealing YT channels.

  • @alan_yong
    @alan_yong 1 year ago +2

    🎯 Key Takeaways for quick navigation:
    00:00 🎨 *Introduction to IP Adapter Workflows*
    - Overview of workflows by Matteo using the IP Adapter.
    - OpenArt contest details with a prize pool of over $13,000.
    - Invitation to explore and enter multiple workflows for free.
    01:09 🖼️ *IP Adapter in Multi-Style Image Composition*
    - IP adapter usage for combining three different art styles in an image.
    - Importance of using a rough mask with specific colors.
    - Explanation of IP adapter model inputs and locations for model files.
    03:50 💻 *Setting up IP Adapter Models and Files*
    - Detailed guide on downloading and organizing IP adapter models.
    - Different versions (normal, plus, plus face, full face) and their use cases.
    - Instructions for saving models in the appropriate folders.
    06:14 🔄 *Multi-Image Composition Workflow*
    - Demonstration of combining multiple images using IP adapter iteratively.
    - Importance of using the correct mask channel for each image.
    - Upscaling process for achieving high-resolution and detailed results.
    07:48 🎭 *Conditioning Masks for Image Manipulation*
    - Utilizing conditioning set mask nodes to apply prompts to specific image regions.
    - Example of changing hair color using conditioning on different mask parts.
    - Highlighting the flexibility of conditioning for various image modifications.
    10:19 🎞️ *Creating Blinking Animation*
    - Generating a blinking animation using a clever image rendering technique.
    - Importance of using specific checkpoint models and version 1.5 of the AnimateDiff loader.
    - Tips for updating extensions and ensuring smooth workflow execution.
    13:28 🌐 *Blending Between Two Images*
    - Creating an animation blending between two images using masks.
    - Distinction between 16-frame and 32-frame workflows, considering CPU and GPU usage.
    - Special attention to control net models and their versions for different workflows.
    Made with HARPA AI
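
For readers following along with the model setup described in the takeaways above, here is a minimal sketch of the folder layout recent versions of the IPAdapter Plus extension expect (older versions kept models inside the custom node's own folder instead). The install root is an assumption; adjust COMFY_ROOT to your setup:

    import os

    # Assumed default install location -- adjust to your own setup.
    COMFY_ROOT = os.path.expanduser("~/ComfyUI")

    # Where the files linked in the pinned comment are expected to live.
    expected = {
        "IPAdapter models (normal/plus/plus face/full face)": "models/ipadapter",
        "CLIP Vision encoders": "models/clip_vision",
        "Checkpoints (e.g. Deliberate)": "models/checkpoints",
        "ControlNet (e.g. control_v11f1e_sd15_tile.pth)": "models/controlnet",
    }

    for label, rel in expected.items():
        path = os.path.join(COMFY_ROOT, rel)
        files = sorted(os.listdir(path)) if os.path.isdir(path) else []
        status = ", ".join(files) if files else "missing or empty"
        print(f"{label}\n  {path}: {status}")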

  • @tristanwalling1388
    @tristanwalling1388 1 year ago

    Thanks!

  • @risinghigherthen
    @risinghigherthen 11 months ago

    These are perfect, thank you for the in-depth analysis! Bravo, Olivio!

  • @abdelhakkhalil7684
    @abdelhakkhalil7684 1 year ago +52

    Olivio, give Automatic1111 some love. Many of us are not ready to switch to Comfy, as A1111 is easier to use. I use the nodes workflow to create 3D textures, and I know that it's a powerful tool, but sometimes I just want to load a model and work on it without having to fiddle around with hundreds of nodes and parameters.

    • @ScottLahteine
      @ScottLahteine 1 year ago +4

      One of Olivio's videos shows how to add a node to ComfyUI that makes it interoperate with A1111 installed on the same machine. It helps to bridge the gap. InvokeAI also has a nice node interface, but as far as I know it still doesn't connect up with ComfyUI or A1111 just yet. While nodes are fun for building a workflow like the one described in this video, we'll keep getting better apps and user interfaces that make the process more fluid, and that's what I look forward to most.

    • @garrulousskeptic6616
      @garrulousskeptic6616 1 year ago

      It amuses me that some like to frame this as some kind of AI art arms race. It is ever evolving, but no one knows toward what. 😊

    • @Marian87
      @Marian87 1 year ago +4

      @@LTE18 nodes have been a thing for a while in various apps, but I have never thought that being something akin to a glorified telephone exchange operator was the pinnacle of art creation. While AI is amazing, I'm sure most people won't favor nodes as the input.

    • @lennoyl
      @lennoyl 1 year ago +1

      I agree with you, but the problem is not the love, it's the pace of evolution. A1111 evolves too slowly; ComfyUI is almost always the first to make new things work despite its horrible interface. (It's not complicated to understand, but it's annoying to use: you have to prepare a workflow before creating, and that's not how I work. I need to improvise, and I can't with ComfyUI.) So it's normal to make videos about ComfyUI when your videos are about news in AI.

    • @abdelhakkhalil7684
      @abdelhakkhalil7684 1 year ago

      @@lennoyl Well, Comfy is pushed hard because it's now owned by StabilityAI. A1111 is still developed by the community. StabilityAI can also help the most popular open-source platform for its models.

  • @miguelgargallo
    @miguelgargallo 1 year ago +1

    Keep doing your job, you are the best. This is the least I can do to contribute to your excellent education content.

  • @eucharistenjoyer
    @eucharistenjoyer 1 year ago +1

    I thought he was the guy behind the nodes, not the technology. His videos are amazing and well explained, and now I feel even more respect for the guy.

  • @brandonopolis
    @brandonopolis 1 year ago +2

    That image blending is awesome! I need to play around with this... I want to blend a wintery scene into my cousin's air conditioning company logo!

    • @jtjames79
      @jtjames79 1 year ago

      I watched the original video.
      And I'm definitely going to need an AI to do that for me.
      But that should be around sooner rather than later.

  • @vincedodge321
    @vincedodge321 1 year ago +6

    Just a few months ago, all of these were still impossible to do. The updates are really fast and exciting.

  • @tristanwalling1388
    @tristanwalling1388 1 year ago

    Really great video, very helpful tips and workflows, thank you!

  • @Shingo_AI_Art
    @Shingo_AI_Art 1 year ago +1

    Iterative Latent Upscaler gives the best results from my tests

  • @Zerod-rn3ye
    @Zerod-rn3ye 11 months ago +1

    If you don't mind, how did you get the image at 9:30 at the top left, with the really cool Final Fantasy-styled concept art of a female character? I see you loaded it, but if you created it, could you provide the prompt/model to recreate it (and similar concepts in that style), or a related resource? It would be appreciated.

  • @wuetsby5448
    @wuetsby5448 11 months ago

    Awesome! You got me with the logo animation, that was really great stuff.

  • @Inner-Reflections-AI
    @Inner-Reflections-AI 1 year ago +1

    Nice Summary! Such an amazing node to use with animations.

  • @bastienfrancois9180
    @bastienfrancois9180 1 year ago +1

    This is getting really interesting, a bit like VST plugins or synths for audio, or filters and plugins in Photoshop or Premiere, only much more powerful!

  • @musicandhappinessbyjo795
    @musicandhappinessbyjo795 1 year ago +7

    Really love your ComfyUI videos. Please do more of them; ComfyUI seems to have a lot of haters in the community, and they don't realize how much potential this thing has.

    • @joeterzio7175
      @joeterzio7175 1 year ago +6

      I don't hate ComfyUI, but I'm never going to use it. It's like trying to read a wiring diagram, and I have no desire to do that. I see a ComfyUI video and I just don't watch.

    • @kkryptokayden4653
      @kkryptokayden4653 1 year ago +3

      I didn't like it before but I got past that and started getting used to it. Now I have 3 workflows and use it constantly.

  • @Bikini_Beats
    @Bikini_Beats 1 year ago +1

    Another great video. Thanks

  • @AlterMax24-YouTube
    @AlterMax24-YouTube 1 year ago

    I don't even know how you manage to stay calm. These technologies drive me crazy! Every day we have something new, something that doesn't work and has to be fixed. And you are always at peace! I'd like to pay homage to your patience. Thank you for that! 😅

  • @alreadythunkit
    @alreadythunkit 1 year ago +1

    Nice one Olivio.

  • @draken5379
    @draken5379 1 year ago +3

    He isn't the creator of IP-Adapter; he created the custom node for ComfyUI that uses IP-Adapter.

    • @AB-wf8ek
      @AB-wf8ek 1 year ago +3

      Correct, he's the developer of the IPAdapter Plus custom node, still a total MVP though!

  • @aindmix
    @aindmix 11 months ago

    What will be interesting is text2video outputs from something like Pika 1.0 put through a ComfyUI workflow to overlay styles and upscale.

  • @zoemorn
    @zoemorn 9 months ago

    The workflow for putting two figures into a central image is a lot of fun. Sometimes, though, I have found that one of the input images gets ignored entirely (so only one of the two figures is used), and I can't figure out what causes that. Is it just seed randomness? I did check the IPAdapter to make sure I didn't accidentally disable it there. I figure it might have to do with the CLIP Vision crop setting, but I haven't worked it out. Interestingly, if I switch sides of the RGB mask (so the missing figure is on the right side if they were missing from the left), that seems to work. The input figure was centered in the image, though, so I would assume clipvision crop = center is correct?
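
One quick way to rule out a bad mask in cases like the one above is to split the RGB mask into its channels and check that each channel actually covers a region, since each IPAdapter in the workflow reads a single channel. A minimal sketch with Pillow (the filename is hypothetical):

    from PIL import Image

    # Hypothetical filename -- use your own rough RGB mask.
    mask = Image.open("rgb_mask.png").convert("RGB")
    total = mask.width * mask.height

    # Each channel drives one IPAdapter, so every channel the workflow
    # uses should contain a clearly visible region.
    for name, channel in zip("RGB", mask.split()):
        covered = sum(1 for px in channel.getdata() if px > 128)
        print(f"channel {name}: {covered / total:.1%} of the image is masked")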

  • @wisdombox4
    @wisdombox4 1 year ago +1

    Hi Olivio, can you do a video about how to add a prompt styler to any workflow? I looked on YouTube and no one has made a proper tutorial. Thank you!

  • @NeoIntelGore
    @NeoIntelGore 1 year ago

    Trying to get started with ComfyUI. I can't get the blinking example to work. I tried it with my own two pictures, but I have a feeling it just ignores my reference pictures. I tried it with the example pictures, and it just skips half the nodes. After that, when I refresh and queue the prompt again, it just runs the last node, ignoring the rest. What am I doing wrong?

  • @clumsymoe
    @clumsymoe 1 year ago +1

    Hey Olivio, I gotta say, your channel is always my go-to for everything about SD. Really appreciate you keeping everyone in the loop with all the new stuff in AI generative art. Thanks a bunch and keep it up, friend!

  • @kargulo
    @kargulo 1 year ago +1

    Hi, where can I get the IP Adapter encoder 1-5.safetensors for Load CLIP Vision? I cannot find it.

  • @veritas7010
    @veritas7010 1 year ago +1

    Props! I also recently subbed to them, they are a wizard.

  • @Clupea101
    @Clupea101 1 year ago

    Great Guide

  • @NotThatOlivia
    @NotThatOlivia 1 year ago

    now you are frying my brain - but I love it!

  • @dkamhaji
    @dkamhaji 1 year ago

    @Olivio, upscale question: in the first workflow from Matteo, the upscale comes from the first KSampler into the upscale path and makes its way into the second KSampler's latent input. The second KSampler's model input comes from the path of the original model and IPAdapters, and its conditioning from the original prompts. My question: as the image is upscaled in this scenario, is it taking any information from the first KSampler's output? What exactly is being sent from the first KSampler to the second via the latent: image information, or just the dimensions of the image? I hope this makes sense. I wish someone would go deep into the path of the data and image pixels as they travel from the first-gen KSampler up the upscale path.
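
For what it's worth, the latent passed between the two KSamplers carries the actual denoised image representation from the first sampler, not just its dimensions; an upscale node resizes that tensor before the second sampler refines it. A simplified sketch of the idea in PyTorch (not ComfyUI's exact code):

    import torch
    import torch.nn.functional as F

    # Stand-in for the first KSampler's output: an SD1.5 latent has
    # 4 channels at 1/8 the pixel resolution, and it encodes the image
    # content itself, not merely the canvas size.
    latent = torch.randn(1, 4, 64, 64)  # latent for a 512x512 image

    # A latent upscale resizes that tensor, so the second KSampler
    # starts its refinement from the first sampler's image information.
    upscaled = F.interpolate(latent, scale_factor=2.0, mode="bicubic",
                             align_corners=False)
    print(upscaled.shape)  # torch.Size([1, 4, 128, 128]) -> 1024x1024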

  • @pawozakwa
    @pawozakwa 7 months ago

    How do you get this "blueprint" view in Stable Diffusion?

  • @keylanoslokj1806
    @keylanoslokj1806 10 months ago

    If you use Colab notebooks, how do you achieve a similar level of control to having an elaborate GUI?

  • @johndebattista-q3e
    @johndebattista-q3e 1 year ago

    Yes, you can download them now in the extension. I found them today, but you need to do an update.

  • @shtorm7267
    @shtorm7267 1 year ago +2

    OK, now it's mostly a ComfyUI channel.

    • @carlingo3191
      @carlingo3191 1 year ago

      Yea, they all sold out.

    • @OlivioSarikas
      @OlivioSarikas 1 year ago +1

      LOL, yes, totally selling out on a FREE tool - You got me bro! Cancel Culture Rage to the Max please

  • @luciusblackheart
    @luciusblackheart 1 year ago +1

    Does this workflow only work in ComfyUI? Can it work in the standard Stable Diffusion web UI?

    • @JJ-vp3bd
      @JJ-vp3bd 1 year ago

      Did you figure this out?

  • @LuiNogueira
    @LuiNogueira 11 months ago

    I couldn't manage to make the conditioning through the prompt work in the second example with SDXL. Is this possible?

  • @kedixia
    @kedixia 1 year ago

    Thanks for the video. Would you mind making a tutorial for an SDXL AnimateDiff workflow? ...I just couldn't get it to work. My output is all black unless I reduce the size to 256x256.

  • @FusionDraw9527
    @FusionDraw9527 1 year ago

    great workflow

  • @MisterWealth
    @MisterWealth 10 months ago

    Is it possible to use multiple IPAdapters on one video clip? So if a person turns around, how does it know to keep the stylization of the person?

  • @ufukzayim6689
    @ufukzayim6689 1 year ago

    Hi Olivio, I have a problem with loading the IPAdapter. The Load IPAdapter Model node shows "ipadapter_file null". I've made a folder in the models folder called ipadapter, changed the model.bin file name to ip-adapter-plus_sd15.safetensors, and updated Comfy. But it says: Prompt outputs failed validation. IPAdapterLoader: Value not in list: ipadapter_file: 'None' not in []. Could you please guide me?
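
A note on errors like the one above: the Load IPAdapter Model dropdown is built from the files found in the models/ipadapter folder, so "'None' not in []" means the extension sees an empty or missing folder. Also, simply renaming a .bin file to .safetensors does not convert it; keep the original extension. A quick hedged check (the path assumes a default install):

    import os

    # Assumption: default ComfyUI layout; adjust the root to your install.
    ipadapter_dir = os.path.expanduser("~/ComfyUI/models/ipadapter")

    if not os.path.isdir(ipadapter_dir):
        print("Folder missing -- create it, add the model files, restart ComfyUI:")
        print(" ", ipadapter_dir)
    else:
        # These names populate the node's ipadapter_file dropdown.
        print("Files the loader should list:", sorted(os.listdir(ipadapter_dir)))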

  • @Steve.Jobless
    @Steve.Jobless 1 year ago +1

    Olivio, is it possible to create img2img workflows using SDXL Turbo in ComfyUI?

  • @KingQuantShi
    @KingQuantShi 1 year ago +39

    Probably rename the video to ComfyUI.

    • @Bikini_Beats
      @Bikini_Beats 1 year ago +11

      It's the future.

    • @jevinlownardo8784
      @jevinlownardo8784 1 year ago +5

      @@Bikini_Beats what a joke

    • @Senti_Q
      @Senti_Q 11 months ago

      @@jevinlownardo8784 Any suggestions for a competitive alternative?

    • @J3R3MI6
      @J3R3MI6 10 months ago +1

      @@jevinlownardo8784 comfy gang gang

  • @dkamhaji
    @dkamhaji 1 year ago

    Yes, I saw his video, it's incredible stuff. Question for you: can you use the canvas node you introduced me to to make these rudimentary RGB masks? I'm trying it now. You can use its mask, but I don't see how to separate the RGB channels the same way the image load node does.

  • @art3112
    @art3112 1 year ago +2

    Great video and workflow ideas, thanks. As an A1111 user, I am just starting to explore ComfyUI. I think in the end it could have some sort of macro interface above this piping (like a lot of software, e.g. some synths in the audio world). Then casual users could create more easily using just the macro controls, while still allowing others to do a deep dive and customize in detail to their needs.

    • @yoyo1poe
      @yoyo1poe 11 months ago +1

      Workflows are the macros: you can save a finished picture in the workflows folder, and it will import the workflow it was created with when you use "Load".
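
Context for the reply above: ComfyUI embeds the whole graph as JSON in the PNG metadata of every image it renders, which is what lets "Load" on a finished picture restore the workflow. A minimal sketch of reading it back (the filename is hypothetical):

    import json
    from PIL import Image

    # Hypothetical filename -- any image rendered by ComfyUI works.
    img = Image.open("ComfyUI_00001_.png")

    # ComfyUI writes PNG text chunks named "workflow" (the editable graph)
    # and "prompt" (the executed node inputs).
    workflow = json.loads(img.info["workflow"])
    print(f"{len(workflow['nodes'])} nodes in the embedded workflow")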

  • @nelvero
    @nelvero 1 year ago +1

    wow! I'm going deeper underground

  • @FunwithBlender
    @FunwithBlender 1 year ago

    When I copy your workflow, it does not work: it says I am missing all the nodes, and then when I tell the Manager to install them, it can't find them. Something is weird about the new ComfyUI.

    • @OlivioSarikas
      @OlivioSarikas 1 year ago +1

      You need to update to the latest version of ComfyUI.

  • @blender_wiki
    @blender_wiki 1 year ago

    This is actually a basic setup; you can do much, much more with bbox + auto masking + segmentation and IP Adapter.

    • @OlivioSarikas
      @OlivioSarikas 1 year ago

      Yes, of course. This is only to show an idea of how to use it. :)

  • @SeanieinLombok
    @SeanieinLombok 1 year ago

    Love your content. I was reviewing your product placement video; however, I really want to try to place products in the hands of AI models/people. Any workflow for this?

  • @NicolasLeroy-g8h
    @NicolasLeroy-g8h 11 months ago

    My question is probably silly, but I will ask it anyway :) Where can I get the RGB.png picture? I can't find it inside the workflow.

  • @forfreeiran8749
    @forfreeiran8749 8 months ago

    Is this possible in Forge?

  • @cyril1111
    @cyril1111 1 year ago

    This is the IP-Adapter node creator :) (IP-Adapter itself was created by researchers at Tencent; lllyasviel is the creator of ControlNet & Fooocus.)

  • @johnmenezes2031
    @johnmenezes2031 1 year ago

    Can this be accomplished in A1111 with Segment Anything? Danke!

  • @fimbulInvierno
    @fimbulInvierno 1 year ago +1

    Can something like this be run on an RTX 4070 Ti?

    • @mirek190
      @mirek190 1 year ago

      yes ... easily

    • @fimbulInvierno
      @fimbulInvierno 1 year ago

      @@mirek190 even for video generation?

    • @mirek190
      @mirek190 1 year ago

      @@fimbulInvierno Yes.
      For video generation you need 12 GB of VRAM.

  • @ultimategolfarchives4746
    @ultimategolfarchives4746 1 year ago

    Hello sir! I'm wondering if there is something like a tile upscaler in ComfyUI? Or something similar that would add detail while upscaling.

    • @mirek190
      @mirek190 1 year ago +1

      yes

    • @ultimategolfarchives4746
      @ultimategolfarchives4746 1 year ago

      @@mirek190 I didn't know. Any tips on this type of workflow, sir?

    • @mirek190
      @mirek190 1 year ago

      Are you serious?
      Find some workflow for it... @@ultimategolfarchives4746

  • @zappazack
    @zappazack 9 months ago

    Where can I get these rgb.png masks?

    • @OlivioSarikas
      @OlivioSarikas 9 months ago

      You paint them yourself in any paint program or online paint app.

    • @zappazack
      @zappazack 9 months ago

      Wow, extremely quick response, thanks a lot. Hope you don't mind that I created them from a screenshot, but I was too impatient testing the workflow... the result is amazing @@OlivioSarikas

  • @toonleap
    @toonleap 1 year ago

    How do you create the masks?

  • @Concepts_Space
    @Concepts_Space 1 year ago

    What's the browser theme that you're using? The tabs look a little more rounded than usual.
    Thanks for the video, as always.

  • @op12studio
    @op12studio 1 year ago

    So you just do ComfyUI now? I am on an AMD GPU, so I can't use it. If that's all you use, then I know whether I should watch the videos or not.

  • @DJVARAO
    @DJVARAO 1 year ago

    Wow, Olivio has more than 200k subscribers!

  • @aceathor
    @aceathor 1 year ago

    Does anyone know if I can have 2 different graphics cards working together?
    I have a 3090 Ti and a 2070 Super. If I could, I'd have 32 GB for image AI...

    • @2PeteShakur
      @2PeteShakur 1 year ago

      nope, sorry

    • @mirek190
      @mirek190 1 year ago +2

      Your RTX 3090 has 24 GB of VRAM; that is more than enough to work with literally everything...

    • @effehell7593
      @effehell7593 1 year ago +1

      @@mirek190 Not by a long shot. These AI programs really need a lot of GPU power. The more the better.

    • @mirek190
      @mirek190 1 year ago

      @@effehell7593
      At the moment a cheap RTX 3090 (after the mining hype) is the best solution. I bought my RTX 3090 with 24 GB of VRAM for 700 euros.
      For AI work the RTX 3090 is about as fast as the RTX 4080 but has more VRAM (24 GB vs 16 GB), so it has a somewhat longer future than 16 GB cards.
      Right now, to generate pictures with the most advanced SDXL versions you need 8-12 GB of VRAM; to generate video you need 12 GB.
      For LLMs, you can fully fit a model of up to 34B on the RTX 3090 (all 65 layers of a q4k_m GGML version) and get around 40 tokens/s.
      Bigger GGML models like 70B you can put half on the GPU and the rest in RAM; a 70B model will then get around 3 tokens/s.
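
For anyone weighing their own hardware against the numbers above, a quick way to see what PyTorch reports for each card:

    import torch

    # Prints every CUDA device PyTorch can see and its total VRAM.
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"GPU {i}: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")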

  • @SPT1
    @SPT1 1 year ago +1

    This seems like an overly complicated way of avoiding the use of Photoshop. You could end up with the same result by making 3 individual pictures (background, girl 1, girl 2), also using IP-Adapter but in Auto1111. Then just open Photoshop and make 3 layers: either blend everything manually if you know how, or make a rough version and improve it with img2img and/or inpainting in Auto1111. I'm sure ComfyUI has its unique qualities; I just think it's always better to combine several programs to produce art. None can do everything perfectly.

    • @yoyo1poe
      @yoyo1poe 11 months ago

      Photoshop couldn't make the lighting coherent across all the characters, or have them interact, like Stable Diffusion can.

    • @SPT1
      @SPT1 11 months ago

      @@yoyo1poe Sure it could; it's not one click, and you have to know what you're doing, I'll give you that. But you could totally do it, I assure you.

  • @Marian87
    @Marian87 1 year ago +5

    Nodes are the death of passion.....

  • @sidheart8905
    @sidheart8905 1 year ago

    Getting better and better, woohoooo! The last few videos are just...... umaaaah

  • @bigdaddy5303
    @bigdaddy5303 11 months ago

    That is the same brunette that features in pretty much every image generation I make

  • @CaptainKokomoGaming
    @CaptainKokomoGaming 1 year ago

    Has A1111 fallen so far behind that this stuff can't be used with it?

    • @carlingo3191
      @carlingo3191 1 year ago +1

      All the geniuses prefer this type of interface, that's all. If you're a genius and try to talk about anything but Comfy, you get blackballed.

  • @giochi4
    @giochi4 1 year ago

    Wonderful. Personally, I find ComfyUI a real pain to use, though I understand the versatility.

  • @JohnVanderbeck
    @JohnVanderbeck 1 year ago

    I still don't really understand just WHAT IPAdapter actually is/does.

    • @yoyo1poe
      @yoyo1poe 11 months ago

      I don't think anybody knows 😂
      It replicates a style in the picture, mostly used for replicating faces.
      And the results can vary wildly depending on the whims of Stable Diffusion.

    • @bigbeng9511
      @bigbeng9511 1 month ago

      This seems to work as a prompt in the form of an image, basically transforming the image into a text description. Very useful, since you can describe more with a picture, and perform image processing such as masking, adjustment, etc., which are hard to do and describe with words... Well, I'm still learning too 😅

    • @JohnVanderbeck
      @JohnVanderbeck 1 month ago

      @@bigbeng9511 I don't think that's it. That's how Midjourney's image reference works, or at least how it used to work; not sure if it still does. But IPAdapter is far too precise for that to be the case.
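
For what it's worth, IP-Adapter does not turn the image into text: a CLIP vision encoder turns the reference image into embeddings, and a small trained projection feeds those into the UNet's cross-attention alongside the text conditioning. A toy sketch of the idea (simplified; the real adapter uses a separate "decoupled" cross-attention for the image tokens rather than plain concatenation):

    import torch
    import torch.nn as nn

    # Toy stand-ins: CLIP text embeddings and CLIP vision embeddings.
    text_tokens = torch.randn(1, 77, 768)    # from the text prompt
    image_embed = torch.randn(1, 257, 1024)  # from the reference image

    # The adapter's learned projection maps image features into the
    # same space the UNet's cross-attention consumes.
    project = nn.Linear(1024, 768)
    image_tokens = project(image_embed)

    # Conceptually, attention then conditions on both token sets.
    conditioning = torch.cat([text_tokens, image_tokens], dim=1)
    print(conditioning.shape)  # torch.Size([1, 334, 768])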

  • @YouCountSheep
    @YouCountSheep 11 months ago

    A1111 can only dream of such functionality. Some things like ControlNet or posing also work in A1111, but in separate tabs, and changing things is very clunky, in my opinion. I only do a bit of AI images, but after I saw Comfy I never went back to A1111, and now Comfy has all the functionality, and more, that A1111 used to have as an advantage when Comfy was still pretty new.
    The Manager helps a lot as well. I saw the IP Adapter in another video; that is very powerful stuff, like an automatic inpaint to a degree, but smart and for whole pictures. Kind of like a smart ControlNet, really.

  • @CrystalBreakfast
    @CrystalBreakfast 1 year ago

    Olivio, please, the "end screen cat" has got to go. The cat is a stock character from an automatic clip making site, so it's not unique to you and other people could use it freely. Plus, you now know so many great ways to make custom animation with all the workflows from the last few months. We need a new outro that's uniquely Olivio!
    ... also the cat wasn't even pointing at video links at the end of this one, it's kinda awkward. >_>

  • @yoyo1poe
    @yoyo1poe 11 months ago

    Unlike your usual videos, lots of interesting information here. Thumbs up.
    For the Automatic1111 users: I think you can achieve the same results as in the first example with img2img. It will just take three renders instead of one, but it will still probably be simpler than doing the triple masking + spaghetti wiring in Comfy.
    I wouldn't know how to do the second example in Automatic, and AnimateDiff doesn't work in Automatic for me for some reason. To be honest, I think the blinking-girl example would be easier to do in animation software, because IPAdapter plus AnimateDiff will just freeze the character if the strength is too high, or the style will drift too much when you lower it. So in this case it's just alternating two almost-still images.

    • @boogieman7233
      @boogieman7233 11 months ago

      "Unlike your usual videos, " too true

  • @jevinlownardo8784
    @jevinlownardo8784 1 year ago +3

    Really hate ComfyUI.

  • @nermal93
    @nermal93 1 year ago +1

    I see ChaosUI, I leave, no upvote.

  • @therookiesplaybook
    @therookiesplaybook 1 year ago +3

    That's the most unintuitive thing I can imagine.

  • @Mohammed-oo5cj
    @Mohammed-oo5cj 1 year ago +5

    A1111 >>>>??????

    • @mirek190
      @mirek190 1 year ago +2

      ok boomer

    • @carlingo3191
      @carlingo3191 1 year ago

      @@mirek190 You're not cool cause you use Comfy.

    • @Mohammed-oo5cj
      @Mohammed-oo5cj 1 year ago

      Whether boomer or not, we're still enjoying life! 😊😊 @@mirek190

  • @miguelgargallo
    @miguelgargallo 1 year ago

    like 33

  • @ardapasa2118
    @ardapasa2118 1 year ago +2

    Only 1% of people are using that UI, so why do you keep showing things from it?

    • @mirek190
      @mirek190 1 year ago +3

      lol ... keep dreaming.
      Most people have moved to ComfyUI nowadays.

  • @Explorewithajwise
    @Explorewithajwise 1 year ago

    Stable Diffusion? Or ComfyUI? Why say Stable Diffusion when it's ComfyUI? 😅

  • @Jaysunn
    @Jaysunn 1 year ago

    booo