ComfyUI: Area Composition, Multi Prompt Workflow Tutorial

  • Published: 31 Jul 2024
  • This is a comprehensive tutorial on how to use Area Composition, Multi Prompt, and ControlNet all together in ComfyUI for Stable Diffusion. Area Composition allows you to define prompts within different areas of an image, giving you full control over the composition of the image. In this tutorial, I go through 4 workflow examples explaining how to use Area Composition effectively.
    ------------------------
    Relevant Links:
    JSON File (RUclips Membership): www.youtube.com/@controlaltai...
    Comfy UI Manager GitHub: github.com/ltdrdata/ComfyUI-M...
    DaveMane42 Custom Node: github.com/Davemane42/ComfyUI...
    UltimateSDUpscale: github.com/ssitu/ComfyUI_Ulti...
    ComfyUI 101 Tutorial: • ComfyUI for Stable Dif...
    ------------------------
    TimeStamps:
    0:00 Intro.
    1:00 Custom Nodes.
    1:51 Workflow 1.
    21:07 Workflow 2.
    25:11 Workflow 3.
    27:18 Workflow 4.
  • Hobby

Comments • 178

  • @enriqueicm7341
    @enriqueicm7341 7 месяцев назад +4

    OMG! This was the best tutorial! thanks a lot!

    • @controlaltai
      @controlaltai  7 месяцев назад

      Welcome. Thank you for the support!!

  • @monkeypanda-ib5cz
    @monkeypanda-ib5cz 6 месяцев назад

    This was super helpful. Thanks 🙏

  • @gabrielmoro3d
    @gabrielmoro3d 7 месяцев назад +6

    Omggggg!!! This is a masterclass. Mind blowing. Joining the members area right now, your content is absolute gold.

  • @hakandurgut
    @hakandurgut 8 месяцев назад +3

    Great tutorial, appreciate your time... I learned a lot. The only thing is, the node placement could follow the process order to make more sense, instead of packing them into a compact area.

    • @controlaltai
      @controlaltai  8 месяцев назад +4

      Ohh thank you! I will keep that in mind and try to be more organized in the next video. I am still learning the ropes with Comfy; I should probably make a habit of creating colored groups and separating them as per the flow for easy understanding. Good feedback, appreciate it.

  • @tigerfox68
    @tigerfox68 7 месяцев назад +2

    Just amazing. Thank you so much for this!

  • @Mypstips
    @Mypstips 29 дней назад

    Amazing tutorial! Thanks a lot!

  • @interfactorama
    @interfactorama 5 месяцев назад +1

    Unbelievably great tutorial! Blown Away!

  • @TouchSomeGrassOnce
    @TouchSomeGrassOnce 7 месяцев назад +2

    Such great content 👏❤.. this was very helpful.. Thank you so much for creating this tutorial 😊... Looking forward to more such videos

  • @krio_gen
    @krio_gen 2 месяца назад

    Thank you very much!

  • @ImAlecPonce
    @ImAlecPonce 8 месяцев назад

    Thanks !!! I'm going to try it out now .

    • @controlaltai
      @controlaltai  8 месяцев назад

      Great!! If at any time you need help let me know. Happy to help.

  • @jjhon8089
    @jjhon8089 7 месяцев назад +1

    great tutorial

  • @ignacionaveran154
    @ignacionaveran154 2 месяца назад

    Thank you very much for your video

  • @hamid2688
    @hamid2688 7 месяцев назад

    Proud of you, you really nailed it. I can say this is one of the best quality videos on YouTube on how to achieve a quality picture with AI!

  • @CAPSLOCK_USER
    @CAPSLOCK_USER 2 месяца назад

    Great tutorial!

  • @screenrec
    @screenrec 8 месяцев назад +1

    Thank you. ❤

  • @godpunisher
    @godpunisher 3 месяца назад

    Thank you so much for such a nice tutorial 🤩

  • @M4Pxls
    @M4Pxls 7 месяцев назад +2

    Really love your workflow! Subscribed ;) Would love to use the Track Anything model to mask out characters, use ControlNet to modify the background, then resample the whole image/sequence.

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Hi, thank you!! Track Anything would work with a video workflow, correct? With AnimateDiff?

  • @ImmacHn
    @ImmacHn 8 месяцев назад +3

    It's funny how this method is basically what you do when doing this by hand.

    • @controlaltai
      @controlaltai  8 месяцев назад +2

      Lol exactly 💯. When I used to draw I used to do the same. I was finding a way to do this with AI. So basically I created my own workflow technique.

  • @manolomaru
    @manolomaru 3 месяца назад

    +1 Super video!
    ✨👌😎🙂😎👍✨

  • @freshlesh3019754
    @freshlesh3019754 2 месяца назад

    This is great, I would love to see this with Stable Cascade and Ipadapter. Being able to have regional control, global style based on an image, and then minute control over a specific area with ipadapter as well would be about everything that I would need in a workflow. (Maybe the addition of an upscaler). But that would be powerful.

    • @controlaltai
      @controlaltai  2 месяца назад +1

      Hi, This is clip text conditioning and cannot be combined with IP adapter. You can use this with cascade however.

  • @agftun8088
    @agftun8088 4 месяца назад +1

    love the voice ai , how did you set it up ? want to use it to hear poetry

  • @britonx
    @britonx 5 месяцев назад

    Thanks Sister !!

  • @Douchebagus
    @Douchebagus 6 месяцев назад +2

    Does this not work with SDXL? It popped off for 1.5, but it doesn't seem to work for the newer models of SD. Edit: I figured it out, the sdxl model I work with is trained on clip skip -2, and setting the clip skip to that breaks the entire node.

    • @LaughterOnWater
      @LaughterOnWater 6 месяцев назад +1

      Wow. This was holding me back. I had it at -1 and it looked like a dog's breakfast. I put it to -2 and suddenly the stars aligned. Thanks for posting this!

  • @jasonkaehler4582
    @jasonkaehler4582 8 месяцев назад +1

    thanks for this! But when i add my second image, it renders garbage where the character is supposed to be (the background 0 renders fine). Any idea how to fix?

    • @controlaltai
      @controlaltai  8 месяцев назад

      Hi, no problem. I need some details for troubleshooting. There are multiple workflows in the video; which one are you trying out exactly? What is the checkpoint? Please confirm the latent resolution and whether a LoRA or ControlNet is being used.
      Or are you just doing one background and one character, meaning two prompts?

    • @jasonkaehler4582
      @jasonkaehler4582 8 месяцев назад +1

      hi, thanks for reply! I did figure out my problem.

  • @eucharistenjoyer
    @eucharistenjoyer 5 месяцев назад

    Your videos are great, really in depth and clear.
    One question though: Is this MultiAreaConditioning similar to Gligen?

    • @controlaltai
      @controlaltai  5 месяцев назад +1

      Thanks! Yes, it’s similar. In some cases gligen may be superior as it understands the entirety of the image. Although I couldn’t find sdxl compatibility, hence multi area composition.

    • @eucharistenjoyer
      @eucharistenjoyer 5 месяцев назад

      @@controlaltai Thank you for the answer. I still use 1.5 most of the time (4 GB VRAM), but I wish there was a ComfyUI node with a GUI for GLIGEN similar to MultiAreaConditioning. Doing everything by numbers is really cumbersome.

    • @controlaltai
      @controlaltai  5 месяцев назад

      It is pre-built into Comfy:
      comfyanonymous.github.io/ComfyUI_examples/gligen/

  • @aybo836
    @aybo836 7 месяцев назад +1

    Great tutorial but a little bit challenging technique as you are using an SD1.5 model to generate 1024x1024 image. As a result, with every pass we can see that new artifacts are being added to the input image. If you want to increase the detail while remaining loyal to the input image, a better way of doing this is either using a model trained to produce 1024x1024 images or do tile upscale. Informative video overall tho, thanks!

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Thank You!! And you are absolutely spot on. The only reason for using the SD 1.5 model in the video is that the elements in the composition are of lower resolution. An SDXL model will give artifacts (checkpoint dependent), for example if you choose a very low-resolution box for the sun. Furthermore, you can pass the result through an SDXL model for upscaling.
      If the elements are near SDXL resolution, you can use SDXL for the area composition.

    • @aybo836
      @aybo836 6 месяцев назад +1

      Oh sorry my bad, I was specifically talking about the upscaling technique you used, not the area composition😊

  • @8561
    @8561 6 месяцев назад

    Great video! Random question: at 32:19, how did you queue the prompt such that the seed changed quickly and the prompt instantly stopped queuing? Also, how would you go about re-applying a FaceID or Reactor face after it changes in the upscales? Do you face swap at the upscaled pixels, or is that not advisable?

    • @controlaltai
      @controlaltai  6 месяцев назад +1

      Thank You!! I change the "control after generate" to randomize. When you do that and hit queue prompt, it randomizes the seed instantly. Then it will stop; you have to press queue prompt again for it to use the random seed.
      Basically it's like this: seed 1234 (fixed), change "control after generate" to random, queue prompt, it changes the seed to "4567" (random), then change it back to fixed so the seed does not change at queue prompt. Now when I queue prompt again, it will re-generate with 4567 and keep that seed, as "control after generate" is fixed.
      For applying the face: first make all the necessary changes, like adding details etc., without upscaling. This should be done at 1024 or SD 1.5 resolution.
      Apply the face swap, then just upscale image to image without denoising too much.
      Face swap will work up to a certain resolution, after which it won't be that clear. It's advisable to do the face swap, then upscale only (no further adding of details).
      You can do that upscale via Ultimate SD Upscale; change the mode from linear to none. Upscale 1.5x to 2x at a time only. This will upscale without adding any details.
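      A minimal sketch of that same randomize-once-then-reuse pattern for anyone driving ComfyUI from a script instead of the UI, assuming a local install listening on port 8188 and an API-format export named workflow_api.json whose KSampler node id is "3" (both placeholders to adapt):
      ```python
      # Sketch: "randomize once, then reuse the seed" via ComfyUI's /prompt HTTP endpoint.
      # Assumptions: ComfyUI runs at 127.0.0.1:8188; "workflow_api.json" is an API-format
      # export; node id "3" is the KSampler -- check your own export for the real id.
      import json
      import random
      import urllib.request

      COMFY_URL = "http://127.0.0.1:8188/prompt"
      KSAMPLER_ID = "3"  # hypothetical node id

      def queue_prompt(workflow: dict) -> None:
          data = json.dumps({"prompt": workflow}).encode("utf-8")
          req = urllib.request.Request(COMFY_URL, data=data,
                                       headers={"Content-Type": "application/json"})
          urllib.request.urlopen(req)

      with open("workflow_api.json", "r", encoding="utf-8") as f:
          workflow = json.load(f)

      # "Randomize": pick a fresh seed once...
      workflow[KSAMPLER_ID]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
      queue_prompt(workflow)  # first pass (base generation / face swap)

      # ...then "fix" it: queue again with the exact same seed, so only the settings
      # you deliberately changed (e.g. the upscale-only pass) differ between the runs.
      queue_prompt(workflow)
      ```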

    • @8561
      @8561 6 месяцев назад

      Thanks for the clear response! Looking forward to more tuts @@controlaltai

  • @ai_gene
    @ai_gene Месяц назад +2

    Thank you for the tutorial! I would like to know how to do the same thing as in the fourth workflow but using IPAdapter FaceID to be able to place a specific person in the frame. I tried, but the problem is that the inputs to MultiAreaConditioning are Conditioning, while the outputs from IPAdapter FaceID are Model. How can I solve this problem? I would appreciate any help.

    • @controlaltai
      @controlaltai  Месяц назад

      Okay, but the area conditioning in this tutorial is not designed to work with IP adapter. That's a very different workflow. To place a specific person in the frame, we have not covered that in a tutorial, but it involves masking and adding the person, then using IC-Light and a bunch of other steps to adjust the lighting to the scene, processing it through sampling, and then changing the face again.

    • @ai_gene
      @ai_gene Месяц назад

      @@controlaltai Thank you for your response! 😊 It would be great if you could create a tutorial on this topic. I'm trying to develop a workflow for generating thumbnails for videos. The main issue is that SD places the person's face in the center, but I would like to see the face on the side to leave space for other information on the thumbnail. Your tutorial was very helpful for composition, but now I need to figure out how to integrate a specific face. 😅

    • @controlaltai
      @controlaltai  Месяц назад +1

      Unfortunately, due to an agreement with the company that owns the InsightFace copyright/tech, I cannot publicly create any face-swapping tutorial for RUclips. Just search for Reactor and you should find plenty on RUclips. I am just restricted for public education, not paid consultations or workflows which are private. (For this specific topic)

    • @controlaltai
      @controlaltai  Месяц назад

      ​@@ai_geneHi, okay so to have the face on the left it is very very easy. You can do this via using 2 control nets. Dw pose and depth. Make sure the image resolution is same as the image generated and ensure the ControlNet image the person is on the left.

  • @AlyValley
    @AlyValley 29 дней назад

    Well described and explained,
    but can this be mixed with InstantID to insert a consistent character into the image, like the portrait workflow, but using InstantID to have the same face and such?

    • @controlaltai
      @controlaltai  29 дней назад +1

      This is not meant for that. For consistent characters the workflow is very different. This is basically only used to define the composition: what element goes where. Consistency is very different and comes after the composition. Same face is also different and requires tools like face swap, IP adapter etc., or even InstantID as you mentioned.
      You can create your composition with this, then have the character replaced and use IC-Light to re-light the whole scene. So technically yes, but it requires multiple workflows.

    • @AlyValley
      @AlyValley 28 дней назад

      @@controlaltai thank you so much for the brain brightening

  • @TheRMartz12
    @TheRMartz12 7 месяцев назад

    I couldn't find 4x_foolhardy_remacri upscaler, It seems it might be an old or discontinued upscaler and I couldn't find it on safetensors. Do you know where to find it or a safer alternative?

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Hi, it's there on Hugging Face. Here is the Link: huggingface.co/uwg/upscaler/tree/main/ESRGAN

    • @TheRMartz12
      @TheRMartz12 7 месяцев назад +1

      Thank you so much. I was so focused trying to find the upscaler, because it was preventing me from moving forward on the guide, that I forgot to mention that you created an amazing tutorial and I'm very grateful for this content! I took a look and noticed that all the upscalers come in .pth format, both on Hugging Face and OpenModelDB. I'm new to the world of generative AI, but I do worry that using pickles might leave me at risk, since I don't know how to detect whether any of them are infected. Do you have any thoughts on this? Is Hugging Face generally a site to be trusted, even for files that are not safetensors? @@controlaltai I would really appreciate your insight on this. Thank you again! 🙏

    • @controlaltai
      @controlaltai  7 месяцев назад

      Welcome and thank you! Basically, when I started I had the same doubts. Normally, safetensors and .pth are safe; .bin files can be risky. For example, when I was doing the A1111 tutorial for IPAdapter, I verified from 2 to 3 places and GitHub, as all IPAdapter files at that time were .bin; now there are safetensors. Hugging Face is a big site but safe. Don't download any .bin unless some trusted source recommends it. .pth and safetensors should be fine.
      Also, it is not necessary to use the same upscaler. Try this:
      Open Comfy Manager, go to Install Models, search for "upscaler", install any one and use that. If you are happy with the result, keep using it. Whatever you find within Comfy Manager is safe and should not cause any issues.
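      On the pickle concern above, a small hedged sketch: recent PyTorch versions let you load a .pth with weights_only=True, which refuses arbitrary code execution; the file path below is only an assumed location inside a ComfyUI install.
      ```python
      # Sketch: load an ESRGAN-style .pth upscaler defensively.
      import torch

      def load_upscaler_state_dict(path: str):
          try:
              # Safe path on newer PyTorch: only tensors/plain containers are unpickled.
              return torch.load(path, map_location="cpu", weights_only=True)
          except TypeError:
              # Older PyTorch without weights_only -- only do this for files from trusted sources.
              return torch.load(path, map_location="cpu")

      # Assumed path; adjust to wherever your ComfyUI keeps upscale models.
      state = load_upscaler_state_dict("ComfyUI/models/upscale_models/4x_foolhardy_Remacri.pth")
      print(type(state))
      ```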

  • @Unstable_Stories
    @Unstable_Stories 3 месяца назад +3

    does the visual area conditioning custom node no longer exist? I can't find it while searching in my manager...

    • @controlaltai
      @controlaltai  3 месяца назад

      I can still find it. Try searching the author name dave

    • @Unstable_Stories
      @Unstable_Stories 3 месяца назад

      @@controlaltai sooo weird. I can literally find any other author and custom node besides this one. It's either something weirdly wrong with mine, or maybe since you already have it downloaded if it was removed for some reason you can still have access to it...

    • @controlaltai
      @controlaltai  3 месяца назад +1

      @@Unstable_Stories No, it's still there; I always double check before answering any queries. You can check the GitHub page and manually install it. Also, you don't need this node to use multi area composition; I used it because it's easier to explain and easier for end users. ComfyUI itself has area conditioning and multi latent conditioning, but there is no visual representation (see the sketch below).
      Here is the GitHub link:
      github.com/Davemane42/ComfyUI_Dave_CustomNode
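      For reference, a rough sketch (not the tutorial's JSON) of what the built-in route looks like in an API-format workflow: each prompt's conditioning is boxed with ConditioningSetArea and the boxes are merged with ConditioningCombine. The node ids and the upstream CLIPTextEncode ids here are placeholders.
      ```python
      # Sketch of ComfyUI's built-in area conditioning, API-format fragment as a Python dict.
      area_fragment = {
          "10": {  # background prompt over the full 1024x1024 canvas
              "class_type": "ConditioningSetArea",
              "inputs": {"conditioning": ["6", 0],  # ["<CLIPTextEncode id>", output index]
                         "width": 1024, "height": 1024, "x": 0, "y": 0, "strength": 1.0},
          },
          "11": {  # character prompt confined to a 512x768 box on the left
              "class_type": "ConditioningSetArea",
              "inputs": {"conditioning": ["7", 0],
                         "width": 512, "height": 768, "x": 64, "y": 256, "strength": 1.2},
          },
          "12": {  # merge both areas into one positive conditioning for the KSampler
              "class_type": "ConditioningCombine",
              "inputs": {"conditioning_1": ["10", 0], "conditioning_2": ["11", 0]},
          },
      }
      ```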

    • @Unstable_Stories
      @Unstable_Stories 3 месяца назад

      @@controlaltai Thanks for much! Not sure what the issue is then on my end. I will just manually install it and maybe install a clean version of manager too. Your tutorials are really good. Thanks so much :)

    • @YuusufSallahuddinYSCreations
      @YuusufSallahuddinYSCreations 3 месяца назад +1

      I also could not find it in the Manager, however a manual install from git link above worked fine.

  • @Gabriecielo
    @Gabriecielo 7 месяцев назад

    Thanks for the tutorial first, but after I installed Dave's nodes, I still could not find it from "Add nodes" menu, neither by search. Any possible issues?

    • @controlaltai
      @controlaltai  7 месяцев назад

      Right click - Add Nodes - Davemane42. If it's not there, close Comfy and restart it (browser and command prompt both). If it's still not there, update everything (Comfy and the custom nodes), then close and restart again.

    • @Gabriecielo
      @Gabriecielo 7 месяцев назад +1

      @@controlaltai Thanks, tried but it's still not available. Then I went to GitHub and found someone who reported the same issue; you need to download and overwrite the folder instead of git cloning. It works now.

  • @stijnfastenaekels4035
    @stijnfastenaekels4035 Месяц назад

    Awesome tutorial, thanks! But i'm unable to find the visual area composition custom node when i try to install it. Was it removed?

    • @controlaltai
      @controlaltai  Месяц назад +1

      Thanks, and no, you can find it here: github.com/Davemane42/ComfyUI_Dave_CustomNode

  • @RompinDonkey-bv8qe
    @RompinDonkey-bv8qe 5 месяцев назад +1

    Great workflow and great video.
    Although, has this process stopped working now? When I add the multi area conditioning node, it doesn't have the grid and can't seem to add extra inputs. I saw that its been abandoned. Anyone else having this issue?

    • @controlaltai
      @controlaltai  5 месяцев назад +1

      Thank you! I just checked the workflow and everything is working as intended. Right click the node and "insert input above" for 1, 2, 3, etc. You then have to connect it, select the appropriate index, and define the width and height for you to see the multi-colored grid. If you are not getting this, then something else is probably wrong; maybe it did not install correctly or there is some other Comfy conflict.

    • @RompinDonkey-bv8qe
      @RompinDonkey-bv8qe 5 месяцев назад

      @@controlaltai Thank you so much for taking the time to reply. I'm still not sure what the issue was; I was running ComfyUI on Google Colab at the time. It had the Manager installed and the Davemane nodes were available, it just didn't look the same (no grid) and I wasn't able to add more inputs. I have now tried a local desktop install and it works fine. Thought I'd let you know in case anyone else asks.
      Thanks again ❤

  • @RonnieMirands
    @RonnieMirands 5 месяцев назад

    Just a question, can i use this Area composition for img2img? Cause i am searching this and cant find. By the way, thanks a lot for share this wonderful tutorial!

    • @controlaltai
      @controlaltai  5 месяцев назад

      Not possible with image to image. However you can use latent composite and blend a character (text2image) into an existing image.

  • @othoapproto9603
    @othoapproto9603 7 месяцев назад +1

    Thanks, that was wonderful. Q: can you replace the props with images as a source?

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Thanks. Probably, but the workflow would be different, I guess. I have to work on it to see how it can be integrated.

    • @othoapproto9603
      @othoapproto9603 7 месяцев назад +1

      @@controlaltai Cool, I really like the pace and explanation throughout your videos. So much to learn, thanks again; subscribing is a no-brainer.

  • @harishsuresh8707
    @harishsuresh8707 7 месяцев назад

    Thanks for making such a detailed workflow with a clear explanation. I've been trying to use this workflow with SDXL models, but it doesn't seem to work. Do the models / workflows have any architecture specific limitations? For example - SD 1.5 only, SD 2.0+ only, SDXL only, Turbo Models only?

    • @controlaltai
      @controlaltai  7 месяцев назад

      Thank you! No, the workflows don't have any architecture limitations. The concept should work with any, as long as the elements used within the architecture are compatible with each other. For example, when using SDXL, use an SDXL ControlNet.

    • @controlaltai
      @controlaltai  7 месяцев назад

      Send me your workflow via email and mention the checkpoint you are using; I will have a look at what is wrong. When you say it does not work, what exactly is not working? Please elaborate.

  • @calumyuill
    @calumyuill 8 месяцев назад +1

    Thanks for tutorial, can you tell me which custom node enables 'Anime Lineart' node? @22:40

    • @controlaltai
      @controlaltai  8 месяцев назад +1

      Hi, welcome and thank you. In comfy manager search for controlnet and install the ComfyUI's ControlNet Auxiliary Preprocessors by Fannovel16. Manual install link is: github.com/Fannovel16/comfyui_controlnet_aux

    • @calumyuill
      @calumyuill 8 месяцев назад

      @@controlaltai thank you for your reply. I did already install this using the manager but I still don't get the option to create the 'Anime Lineart' node, any ideas?

    • @controlaltai
      @controlaltai  8 месяцев назад

      @calumyuill have you restarted comfy from command prompt (close and reopen), search for preprocessor and you should see all the nodes for it.

    • @calumyuill
      @calumyuill 8 месяцев назад

      @@controlaltai thanks again, yes I have restarted from command prompt but when I search for preprocessor I get zero results 😞

    • @controlaltai
      @controlaltai  8 месяцев назад

      @calumyuill do one thing, click on update comfy and check if there is any comfy update. Close and restart, then try update all and check if the latest version of the controlnet custom node is installed, close and restart again.

  • @NWO_ILLUMINATUS
    @NWO_ILLUMINATUS 6 месяцев назад

    Even following your exact process, model, sampler, scheduler, etc... my pic ALWAYS has too much noise once I add the conditioner. I have to lower the strength in each index, each strength for each index different. any thoughts?

    • @controlaltai
      @controlaltai  6 месяцев назад +1

      There must be some setting within the nodes that was overlooked, just guessing though. Can you email me the workflow you made? I can have a look at it and figure out what's going on: mail @ controlaltai. com (without spaces)

    • @NWO_ILLUMINATUS
      @NWO_ILLUMINATUS 6 месяцев назад

      I definitely will, as soon as I have a chance (single daddy, work full time, blah blah blah...).
      Though I may have figured out my issue... 4 GB VRAM, lol. You mentioned that we CAN use a tiled decode, but that it may leave artifacts. I usually get the message, "Switching to tiled because only 4 GB VRAM or less" (paraphrased).
      Safe to assume it's my old @55 GTX 1050 Ti? @@controlaltai

    • @controlaltai
      @controlaltai  6 месяцев назад +1

      @1111Scorpitarius For 4 GB, yes; 6 to 8 GB VRAM is recommended. Try a lower resolution. Also, first try it without the upscale. That should help.

    • @NWO_ILLUMINATUS
      @NWO_ILLUMINATUS 6 месяцев назад

      Thank you for the prompt replies, by the way. Earned you a sub, and I'll watch through all your Comfy vids.

    • @RompinDonkey-bv8qe
      @RompinDonkey-bv8qe 5 месяцев назад

      @@NWO_ILLUMINATUS Just weighing in here, sorry if you're already aware, but there's software called Fooocus (yup, 3 o's) that's really good for those with lower specs. I ran it perfectly on a 1060 Ti (I think 4 GB VRAM lol). It's not quite as much fun as Comfy imo (I just like the node-based format), but I still had a lot of fun with it once I got my head around the settings. There are a good few more videos out there for it now too.

  • @lenny_Videos
    @lenny_Videos 8 месяцев назад +2

    Thanks for the great tutorial. Where can i find the json files for the workflow?

    • @controlaltai
      @controlaltai  8 месяцев назад +2

      Hi, they are posted for RUclips channel Memberships.

    • @lenny_Videos
      @lenny_Videos 8 месяцев назад

      @@controlaltai I see no option for youtube membership on your channel...

    • @lenny_Videos
      @lenny_Videos 8 месяцев назад

      @@controlaltai Hi there, i see no option for youtube membership on your channel...

    • @controlaltai
      @controlaltai  8 месяцев назад

      Hi, Here: www.youtube.com/@controlaltai/join

  • @user-kh8lh4dv7u
    @user-kh8lh4dv7u 6 месяцев назад

    Great, Malihe. Do you do custom workflows? How can we reach out?

    • @controlaltai
      @controlaltai  6 месяцев назад

      Thank You! The team does take up custom workflow projects. You can reach out to my colleague "Gaurav Seth" via Discord ("g.seth") or mail "mail @ controlaltai . com" (without spaces). He handles all custom workflow projects.

  • @ambtehrani
    @ambtehrani 28 дней назад

    this node's been abandoned and doesn't work anymore. Would you please suggest a suitable replacement ??

    • @controlaltai
      @controlaltai  28 дней назад

      There is no replacement. You can still do the area conditioning in ComfyUI without using that node; you just won't have the visual view of the x and y coordinates. And the node still works. You should know how to manually install from GitHub: you can download the zip or git clone and do a manual install.

  • @asmedeus448
    @asmedeus448 4 дня назад

    I tried to use MultiAreaConditioning with a LoRA on each prompt. The output image generates independent crops, which is not really great.

    • @controlaltai
      @controlaltai  3 дня назад

      Can you share your workflow and lora via email. I can have a look, email me at mail @ controlaltai. com without spaces

    • @asmedeus448
      @asmedeus448 3 дня назад

      @@controlaltai hi,
      thanks for the fast reply. I'll send it to you tomorrow with the workflow and the prompt.

  • @NgocNguyen-ze5yj
    @NgocNguyen-ze5yj 2 месяца назад

    Hi there, do you have a video for a workflow to change backgrounds and combine subjects and backgrounds?
    Thanks

    • @controlaltai
      @controlaltai  2 месяца назад

      Not exactly, but a custom workflow can be designed to do just that. There are multiple ways to do it accurately, depending on how perfect you want the blend; if you want the lighting on the subject to match the bg etc., the workflow becomes complicated. Maybe this helps, as some elements from it could be used to select and remove subjects: ComfyUI: Yolo World, Inpainting, Outpainting (Workflow Tutorial)
      ruclips.net/video/wEd1wPlCBaQ/видео.html

    • @NgocNguyen-ze5yj
      @NgocNguyen-ze5yj 2 месяца назад

      @@controlaltai Could you please make a series about that? I need it for work: remove the background, combine the subject with another background, adjust lighting on subjects and bg... to make it realistic... Many thanks

    • @controlaltai
      @controlaltai  2 месяца назад

      @NgocNguyen-ze5yj Can't promise, but will try. Some things are already in the pipeline; will add this to it.

    • @NgocNguyen-ze5yj
      @NgocNguyen-ze5yj 2 месяца назад

      @@controlaltai Yes, thank you in advance, love your tutorials (detailed and lovely)

  • @TheRMartz12
    @TheRMartz12 7 месяцев назад

    Ahhh, sadly workflow number two didn't work for me. No matter how I tried, the image traced from ControlNet wouldn't be adapted to the generated image :( Maybe I was asking for something too complicated and abstract on my first try, but for the second attempt I just tried adding a steampunk tower to my composition, and still it would blend into the background almost completely in the first generation, and in the upscales it was completely gone :( The only thing I had different from the tutorial is that my anime lineart preprocessor is in .pth format and the ControlNet model came in safetensors directly from the Comfy Manager, but I did check and they are named the same as yours. 😢

    • @TheRMartz12
      @TheRMartz12 7 месяцев назад

      Also, at 24:03, if I make those changes to KSampler 2 and the Ultimate SD Upscale, the last two images look sooooooooo bad; why do yours look good? In the first workflow I remember your images looking bad as well, and you had to change them back to what you had before making those tweaks at 24:03, and that's what made the images look good for me.

    • @controlaltai
      @controlaltai  7 месяцев назад

      Hi, the second workflow can be tricky because of the complexity. This might help you:
      1. The Multi Area Conditioning strength (of the selected control area subject only). If the current number doesn't work, try increasing it a further 0.5-1.0 at a time. Keep ControlNet at 1 only.
      2. In Ultimate SD Upscale, workflow 2, the mode type is set to none, which means image-to-image upscale only, without adding anything. This mode will ignore the denoise value set here. What "none" does is take the image output in pixel space and double it. No denoising.
      3. If the first KSampler image is giving you proper results after increasing the Multi Area strength and the result is becoming worse in the second preview, do not pass it through the second KSampler. Delete/disable the Upscale Latent node, second KSampler, and second VAE Decode (Tiled) node.
      4. Try replacing the VAE Decode (Tiled) node with the simple VAE Decode node.
      5. Make sure to set clip skip as per the checkpoint specifications (see the sketch below). If the trained checkpoint has no mention of a recommended clip skip, then delete the node and connect everything directly to the loaded model. Normally they have clip skip -1/-2 or none.
      6. Make sure the ControlNet model and preprocessor match.
      Lastly, this is highly checkpoint/prompt dependent. Save the seed and try random seeds to see if different results pop up. Some checkpoints are trained with a specific sampler, scheduler, and negative prompts for optimal results.
      Let me know if it helps.
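      For point 5, a minimal sketch of what the clip-skip setting looks like in an API-format export, with placeholder node ids: CLIPSetLastLayer is ComfyUI's built-in equivalent of clip skip, and if the checkpoint documents no value you simply remove it and wire CLIPTextEncode straight to the checkpoint loader's CLIP output.
      ```python
      # Sketch: clip skip via ComfyUI's CLIPSetLastLayer, API-format fragment.
      clip_skip_fragment = {
          "20": {
              "class_type": "CLIPSetLastLayer",
              "inputs": {
                  "clip": ["4", 1],          # ["<CheckpointLoaderSimple id>", CLIP output index]
                  "stop_at_clip_layer": -2,  # -1 or -2, per the checkpoint's model card
              },
          },
      }
      ```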

  • @TailspinMedia
    @TailspinMedia 5 месяцев назад

    does this work for SDXL models or just 1.5?

  • @headscout
    @headscout 3 месяца назад

    Can I use the same method to generate 3 different human x animal characters? Say, one for a fox girl, a second for a cat girl, and the last for a demon girl in the same frame?

    • @controlaltai
      @controlaltai  3 месяца назад

      Since the prompts are isolated, yes. However, the subjects would be non-overlapping and non-interactive. To have them overlap or interact, a multi latent composite has to be used, which is totally different.

    • @rei6477
      @rei6477 3 месяца назад

      @@controlaltai Hi! (*^▽^*)
      so, if I understand correctly, this workflow wouldn't allow me to create a scene where, for example, 『Character 1 is in the foreground with their back turned, while Character 2 is in the background, partially overlapped by Character 1, so that only Character 2's face is visible and their body is hidden. 』Is that right?
      I tried to research more about multi latent composite, but I couldn't find a clear explanation. Could you possibly point me to any articles or videos that explain multi latent composite technique in more detail?

    • @controlaltai
      @controlaltai  3 месяца назад

      Hi, yes, you understood correctly. Latent composite is the same idea as this, but we do it via k-sampling. Unfortunately there are no articles for it. If you go to the ComfyUI GitHub page for workflow examples, you will find a workflow sample from Comfy itself for latent composition.
      I have not done a tutorial for this, but I have used multi latent composition in both of the Stable Video Diffusion videos.
      The concept is similar; that's for video, and here you have to do it for images.
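      A rough sketch of that idea, assuming ComfyUI's built-in LatentComposite node and placeholder node ids: two latents are generated separately, one is pasted into the other, and the combined latent is then passed to a KSampler so the seam gets re-sampled together.
      ```python
      # Sketch: multi latent composition with ComfyUI's LatentComposite, API-format fragment.
      latent_composite_fragment = {
          "40": {
              "class_type": "LatentComposite",
              "inputs": {
                  "samples_to": ["15", 0],    # background latent (placeholder id)
                  "samples_from": ["16", 0],  # character latent (placeholder id)
                  "x": 128, "y": 320,         # paste offset in pixels
                  "feather": 32,              # soften the paste edge before re-sampling
              },
          },
      }
      ```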

    • @rei6477
      @rei6477 3 месяца назад

      @@controlaltai Thank you so much for your reply! I think I should be able to put together a simple workflow using latent composition and K-samplers on my own.
      I'm currently super busy with creating Loras and stuff, but I'm definitely planning to give it a try once things settle down a bit. And of course, I'll also experiment with the workflow from your video. Thanks again!

  • @aloto1240
    @aloto1240 7 месяцев назад +1

    Can you do this but loading images instead of just prompts?

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Nice idea, I think so. But the whole workflow will be different. Based on what you want exactly, I have to explore it further.

    • @aloto1240
      @aloto1240 7 месяцев назад +1

      @@controlaltai thanks for the reply. It would be interesting if it can work. I’m new to Comfy UI and stable diffusion so been watching lots of content looking for the perfect workflow, this looks great! Using images or combining images with prompt in this manner would be awesome. Thanks for all the great contents

    • @controlaltai
      @controlaltai  7 месяцев назад +1

      Will put it in the todo list.

  • @spraygospel5539
    @spraygospel5539 5 месяцев назад

    where can I download the controlnet for the anime lineart?

    • @spraygospel5539
      @spraygospel5539 5 месяцев назад

      it'd be great if someone can give me the controlnet library

    • @controlaltai
      @controlaltai  5 месяцев назад

      Go to Comfy Manager - custom nodes - install the ControlNet auxiliary preprocessors node. Then, whenever you use any ControlNet preprocessor, it will download the preprocessor model.
      Models can be downloaded from here:
      ControlNet Control LoRA models: huggingface.co/stabilityai/control-lora/tree/main
      ControlNet models SD1.5: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main
      ControlNet models SDXL: huggingface.co/lllyasviel/sd_control_collection/tree/main
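      If you prefer fetching a model from a script, here is a hedged sketch using huggingface_hub (pip install huggingface_hub); the repo id comes from the links above, while the exact filename and target folder are assumptions to verify against the repo page and your install.
      ```python
      # Sketch: download one SD 1.5 ControlNet model into ComfyUI's controlnet folder.
      from huggingface_hub import hf_hub_download

      path = hf_hub_download(
          repo_id="lllyasviel/ControlNet-v1-1",       # from the SD1.5 link above
          filename="control_v11p_sd15_lineart.pth",   # assumed filename -- check the repo
          local_dir="ComfyUI/models/controlnet",      # adjust to your install location
      )
      print("saved to", path)
      ```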

  • @tailongjin-yx3ki
    @tailongjin-yx3ki 3 месяца назад

    Can I add multiple LoRAs to this workflow?

    • @controlaltai
      @controlaltai  3 месяца назад

      Yeah there are no issues with that.

    • @tailongjin-yx3ki
      @tailongjin-yx3ki 3 месяца назад

      @@controlaltai I mean parallel LoRAs, not series LoRAs, because I've trained different roles with LoRAs; I wonder whether I can fuse them in one photo.

    • @controlaltai
      @controlaltai  3 месяца назад

      @@tailongjin-yx3ki Well, that's a different workflow and has nothing to do with area composition. To answer your question: yes, you can blend in multiple LoRAs. Use the LoRA loader node and daisy-chain them, or use the Lora Stacker custom node from rgthree, where in one node you can add multiple LoRAs with different weights.
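      A small sketch of the daisy-chain option in API-format terms, with placeholder node ids and LoRA filenames: each LoraLoader takes the MODEL and CLIP outputs of the previous one, so both LoRAs apply to the same generation with independent weights.
      ```python
      # Sketch: two chained LoraLoader nodes, API-format fragment.
      lora_chain_fragment = {
          "30": {
              "class_type": "LoraLoader",
              "inputs": {"model": ["4", 0], "clip": ["4", 1],   # from the checkpoint loader
                         "lora_name": "character_A.safetensors",
                         "strength_model": 0.8, "strength_clip": 0.8},
          },
          "31": {
              "class_type": "LoraLoader",
              "inputs": {"model": ["30", 0], "clip": ["30", 1],  # chained from the first LoRA
                         "lora_name": "character_B.safetensors",
                         "strength_model": 0.6, "strength_clip": 0.6},
          },
      }
      ```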

  • @LucasMiranda2711
    @LucasMiranda2711 5 дней назад

    Doesn't seem to work on new versions of comfyui unfortunately

    • @controlaltai
      @controlaltai  5 дней назад

      I have just tested this after your comment, like 2 minutes back, on the latest version of Comfy; as a matter of fact my Comfy uses the nightly PyTorch version, and everything works as intended. I just ran all the workflows to double check.

    • @LucasMiranda2711
      @LucasMiranda2711 3 дня назад

      @@controlaltai The project from Davemane is archived on his GitHub and the node doesn't even appear in the filter in ComfyUI. Trying to install it from GitHub, the screen freezes with exceptions when creating the node.

    • @controlaltai
      @controlaltai  3 дня назад

      The one that freezes is multi latent composite, not area conditioning. Check which node you are adding; he has two nodes, one always had problems, and the workflow doesn't use latent composition.

  • @icedzinnia
    @icedzinnia 3 месяца назад +1

    I am coming back to this and the nodes aren't available anymore, i.e. "Davemane". Perhaps it's because of the massive IPAdapter change; otherwise, it's just not there anymore. :( sad face.

    • @controlaltai
      @controlaltai  3 месяца назад

      Give me some time. Will fix it and post an update.

    • @controlaltai
      @controlaltai  3 месяца назад

      Hi, Just checked and am confused now. Which workflow are you referring to. The Multi Area Composition does not have an IP Adapter in it.

    • @qkrxodls3377
      @qkrxodls3377 3 месяца назад

      Assuming there is some issue with the Manager: just download the repo from the link provided above in the description into the custom_nodes folder, then restart Comfy. For me the git clone did not work either, so I just downloaded and pasted it manually.
      Hope it helps!

  • @user-xi7rz9ty3i
    @user-xi7rz9ty3i 7 месяцев назад

    Followed the tutorial, but the result is not quite the same. The red riding hood and the house are not sharp even when the strength is increased...

    • @controlaltai
      @controlaltai  7 месяцев назад

      Hi, this is highly checkpoint/seed dependent. If everything is the same as per the tutorial, try random seeds. Alternatively, you can send me your workflow, and I can have a look at why you are not getting the desired results.

  • @yurigrishov3333
    @yurigrishov3333 6 месяцев назад

    It just freezes the system when I make the index more than 1.

    • @controlaltai
      @controlaltai  6 месяцев назад

      Make sure you are using the correct node. The freeze happens on the multi latent composite node; the correct node is the multi area conditioning node. That one works fine.

    • @yurigrishov3333
      @yurigrishov3333 6 месяцев назад

      @@controlaltai Yes, it's two different nodes. But it can only be index 0 or 1.

    • @controlaltai
      @controlaltai  6 месяцев назад +1

      @yurigrishov3333 If you have more than two connected the index should go higher than 0-1.

    • @yurigrishov3333
      @yurigrishov3333 6 месяцев назад +1

      @@controlaltai Oh, I had skipped the "insert inputs" step. Now it's fine.

  • @Mehdi0montahw
    @Mehdi0montahw 8 месяцев назад +2

    We require a professional episode on converting images to lineart while completely removing the black and gray parts

    • @controlaltai
      @controlaltai  8 месяцев назад +2

      I will try and use images to line art example in one of the future tutorials I make. I have to learn how to do that effectively myself first. If I include it in a video will notify you via replying to this comment. Give me some time please. Thanks.

    • @Mehdi0montahw
      @Mehdi0montahw 8 месяцев назад +1

      This is the strongest video I have found so far that explains what I want. If you want to use it and complete the part about removing the gray and black areas and getting more precise lines, here is the link: Perfect Line Art For Comic Books: Stable Diffusion Tutorial: ruclips.net/video/URoLTXDGSig/видео.html&ab_channel=SebastianTorres

    • @controlaltai
      @controlaltai  8 месяцев назад +1

      I think I have something better but not for A1111. Will let you know shortly. Am working on it.

    • @Mehdi0montahw
      @Mehdi0montahw 8 месяцев назад +1

      ​@@controlaltai Anything that achieves the goal, but I hope it is free

    • @controlaltai
      @controlaltai  8 месяцев назад

      Hi, Please check this. Let me know if they are fine or not. The workflow is complicated but all free and rock solid. You just have to adjust 3-4 settings values to get the desired image. Like darker lines, lesser details, more details etc. Video should be online by Tomorrow.
      drive.google.com/file/d/1AMo9rUWRraTZgsYiCnmURQXMfglO1DbJ/view?usp=sharing

  • @user-jx7bh1lx4q
    @user-jx7bh1lx4q 21 день назад

    need an alternative for multiareaconditioning

    • @controlaltai
      @controlaltai  21 день назад

      There is none; do a manual install. It still works.

    • @user-jx7bh1lx4q
      @user-jx7bh1lx4q 21 день назад

      @@controlaltai I haven't seen a manual installation in the openart cloud

    • @controlaltai
      @controlaltai  21 день назад

      Ermm... I don't know what OpenArt cloud is. But if you have a local install you can git clone the repository. If you are using a cloud service, then it would be best to ask them how to get a GitHub repository installed in the Comfy folder.

  • @cXrisp
    @cXrisp 14 дней назад

    Thanks for the info, but that music clip looping over and over and over and over for the entire video gets too annoying.

  • @B4zing4
    @B4zing4 4 месяца назад

    Where are the workflows?

    • @controlaltai
      @controlaltai  4 месяца назад

      Mentioned it in the description.

    • @B4zing4
      @B4zing4 4 месяца назад

      @@controlaltai Thanks, will these workflows also work with other models?

    • @controlaltai
      @controlaltai  4 месяца назад

      If you are referring to the area composition, then you can use any checkpoint model.

  • @CORExSAM
    @CORExSAM 5 дней назад

    you sound like some character from game of thrones fr

    • @controlaltai
      @controlaltai  5 дней назад

      It's an AI voice. And no, it's no one from that show; I just randomly picked it and stayed with it. It works better for presentation-style video tutorials.

    • @CORExSAM
      @CORExSAM 5 дней назад

      @@controlaltai oh sorry nevermind, i commented on wrong video LOL

  • @Pyugles
    @Pyugles 18 дней назад

    Great tutorial, but it looks like the creator of the visual area conditioning/latent composition hasn't updated their node, and its completely unusable now.

    • @controlaltai
      @controlaltai  18 дней назад

      It still works, you have to do a manual install. No issues with the latest version of comfy portable.

    • @Pyugles
      @Pyugles 17 дней назад

      @@controlaltai Oh! I just did the manual install and it works, tyvm!

  • @claylgruber7994
    @claylgruber7994 3 месяца назад

    Sister, your name is Turkish; are you Turkish?

    • @controlaltai
      @controlaltai  3 месяца назад

      I translated your message. Answer is no. Here is the SDXL + Refiner Workflow: drive.google.com/file/d/18zQNC-ejeJ021xcTGJcF3YB-lC7DbO6f/view?usp=sharing

    • @claylgruber7994
      @claylgruber7994 3 месяца назад

      @@controlaltai thanks

  • @jerryTang
    @jerryTang 8 месяцев назад

    Looks like the IPAdapter attention mask has more control than this.

    • @controlaltai
      @controlaltai  8 месяцев назад +5

      Hi, the IPAdapter mask was a recent update, not in the original one. However, that control is image-to-image; I need to make a video on the IPAdapter mask for Comfy. Area composition, on the other hand, is text-to-image. What would be interesting is combining both: get a controlled text-to-image composition, then take a reference image and add it to this using IPAdapter masking. Let me know if you are interested in something like that; will give it a shot.

    • @onewiththefreaks3664
      @onewiththefreaks3664 7 месяцев назад +1

      @@controlaltai I found your very helpful and interesting video because I am trying to build this exact workflow you're mentioning here. I did not know about the visual area conditioning node, so thank you very much for all your time and effort!