- 97 videos
- 468,208 views
Dr.Lt.Data's ComfyUI Extension
Joined Jun 7, 2023
This channel is dedicated to uploading videos that introduce the usage of developed ComfyUI extension nodes.
ComfyUI-Manager - Double Click Node Title Feature [AUDIO]
This video explains the Double Click feature of the node title provided by ComfyUI-Manager.
github.com/ltdrdata/ComfyUI-Manager
CREDIT: MaraScott (remake/narration)
Views: 1,010
Videos
ComfyUI-Manager: Node badge feature [AUDIO]
754 views · 5 months ago
This video explains the Node badge feature in ComfyUI-Manager. github.com/ltdrdata/ComfyUI-Manager CREDIT: MaraScott (remake/narration)
ComfyUI-Manager - Node info and Workflow galleries. [AUDIO]
876 views · 5 months ago
This video introduces the Node Info feature and Workflow Galleries. github.com/ltdrdata/ComfyUI-Manager CREDIT: MaraScott (remake/narration)
ComfyUI-Manager - Multiple selection/Channels [AUDIO]
783 views · 5 months ago
This video explains the features of multiple selection and channels. github.com/ltdrdata/ComfyUI-Manager CREDIT: MaraScott (remake/narration)
ComfyUI Manager - How To Enable The Preview? [AUDIO]
1.6K views · 7 months ago
In this video, I demonstrate the feature, introduced in version V0.17, of easily adjusting the preview method settings through ComfyUI Manager. CREDIT: MaraScott (narration)
ComfyUI-Manager - Update(V2.15): New columns
1.2K views · 8 months ago
Stars and Last Update columns have been added, along with a sort feature. github.com/ltdrdata/ComfyUI-Manager
Experimental: KSampler Advanced - step skipping & lowering cfg
3.6K views · 9 months ago
In this video, I experimented with a simple "KSampler Advanced" trick.
ComfyUI-Impact-Pack: Update - Detailer with Differential Diffusion
8K views · 9 months ago
The "noise_mask_feather" feature in the Detailer function of the Impact Pack has been improved. "noise_mask_feather" applies feather to the noise_mask used in the i2i process. The issue is that it wasn't very effective. github.com/ltdrdata/ComfyUI-Impact-Pack
ComfyUI-Impact-Pack/ComfyUI-Inspire-Pack - Workflow: MakeTileSEGS+IPAdapter+SDXL-Lightning
4.6K views · 9 months ago
In this video, the method of applying SDXL-Lightning to the "Make Tile SEGS" upscale and improving the upscale context using IPAdapter is demonstrated. github.com/ltdrdata/ComfyUI-Impact-Pack github.com/ltdrdata/ComfyUI-Inspire-Pack
ComfyUI-Inspire-Pack: Stable Cascade Checkpoint Loader
1.5K views · 9 months ago
This video introduces a node that can easily load the Stable Cascade checkpoint, which is newly added to the Inspire Pack. github.com/ltdrdata/ComfyUI-Inspire-Pack
ComfyUI Impact Pack: Tutorial #14 - sigma_factor in Regional Prompt
5K views · 10 months ago
Introducing a feature through the updated "Regional Sampler" that allows adjusting the denoise level for each region. github.com/ltdrdata/ComfyUI-Impact-Pack github.com/ltdrdata/ComfyUI-Inspire-Pack
ComfyUI-Impact-Pack: Upscale Video with Make Tile SEGS, ControlNetApply (SEGS)
3.6K views · 10 months ago
This video showcases upscaling video using AnimateDiff. github.com/ltdrdata/ComfyUI-Impact-Pack
ComfyUI-Impact-Pack: PreviewDetailerHookProvider
2.4K views · 10 months ago
PreviewDetailerHookProvider is connected to Detailers to monitor intermediate processes. github.com/ltdrdata/ComfyUI-Impact-Pack
[LEGACY VIDEO] ComfyUI-Manager: Double-Click Node Title
1.5K views · 10 months ago
This video explains the convenience feature of double-clicking the title of a node. github.com/ltdrdata/ComfyUI-Manager
ComfyUI-Inspire-Pack: IPAdapter Model Helper
2.2K views · 10 months ago
This video introduces the IPAdapter Model Helper node, which allows for easy management of the IPAdapter model. github.com/ltdrdata/ComfyUI-Inspire-Pack github.com/cubiq/ComfyUI_IPAdapter_plus
ComfyUI-Manager: Component based on Group Node (V2.0)
2.1K views · 11 months ago
ComfyUI-Impact-Pack - Workflow: Upscaling with Make Tile SEGS
9K views · 11 months ago
ComfyUI-Inspire-Pack: Batch Splitter
2.2K views · 11 months ago
ComfyUI Manager - How To Fix An Outdated Node
1.9K views · 11 months ago
ComfyUI-Impact-Pack: Tutorial #13 MASK to SEGS
6K views · 11 months ago
ComfyUI - Cheatsheet: Mask Editor
3.7K views · 11 months ago
ComfyUI-Impact-Pack: Preview Bridge (latent)
4.3K views · 11 months ago
ComfyUI-Impact-Pack: Cycle and Detailer Hook
12K views · 1 year ago
ComfyUI-Manager: Update(V1.6.3) - Summary
2.1K views · 1 year ago
ComfyUI Workflow: Openpose To Region Map (OpenArt.ai)
4.2K views · 1 year ago
ComfyUI-Inspire-Pack - Regional IPAdapter
9K views · 1 year ago
ComfyUI-Impact-Pack/ComfyUI-Inspire-Pack - Workflow: Cinemagraph
5K views · 1 year ago
ComfyUI-Inspire-Pack - Tutorial #12: SEGSDetailer for Animate Diff
9K views · 1 year ago
ComfyUI-Inspire-Pack - Tutorial #11: Regional Seed
3.4K views · 1 year ago
ComfyUI-Inspire-Pack - Tutorial #10: Seed Explorer
3.6K views · 1 year ago
Could you please make another workflow for flux?
This has now been of course moved to the native Settings dialog > search for "badge" and press Enter > "Node ID badge mode" setting. It will surely be moved and renamed again by the time you read this comment.
Hi there, I am a bit confused about where the Image Receiver is getting its image from. I downloaded the workflow from the GitHub page; however, even when running, the "Image Receiver" is not loading any image. From my understanding, it should be coupled with an "Image Sender" node, but that isn't present in the workflow. The only thing I can think of is that it's somehow receiving the image from the "LoadImage" node, but mine doesn't behave like that, and I'm not aware of any setting for enabling that behavior.
In addition to receiving images sent by the Image Sender, the Image Receiver can also independently store images within the workflow in base64 format.
@@drltdata Oh, I see! Is there a tutorial or workflow on how to use the node that way? I've only seen it used in conjunction with an Image Sender node.
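The base64 behavior the author mentions can be illustrated with a minimal sketch (plain Python, not the extension's actual code): storing an image "within the workflow" simply means encoding its bytes as a base64 string on save and decoding that string back to bytes on load.

```python
import base64


def image_to_base64(data: bytes) -> str:
    """Encode raw image bytes as an ASCII base64 string (e.g. for a JSON workflow)."""
    return base64.b64encode(data).decode("ascii")


def base64_to_image(encoded: str) -> bytes:
    """Decode a base64 string back into the original image bytes."""
    return base64.b64decode(encoded)


# Round-trip a few bytes (the PNG magic header, used here as stand-in image data).
png_header = b"\x89PNG\r\n\x1a\n"
encoded = image_to_base64(png_header)
assert base64_to_image(encoded) == png_header
```

Because base64 is plain text, the encoded image can be embedded directly in the workflow's JSON, at the cost of roughly a 33% size increase over the raw bytes.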
How do I connect two "sdxl prompt stylers" to use them at the same time?
get a mic
why no voice?
If I have a set of segments (Seg1, Seg2, Seg3) and I want a different prompt to apply to each of them, do I understand correctly that the Wildcard will be {prompt_for_seg1 | prompt_for_seg2 | prompt_for_seg3}, i.e. the wildcard pieces will be processed sequentially like the segments? And if not and the Wildcard pieces will be taken randomly, is there any way to assign each segment its own specific prompt?
{A|B|C} means that one option among A, B, C will be randomly selected. If you want to process them sequentially, please refer to the following syntax. github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/tutorial/ImpactWildcard.md#special-syntax-for-detailer-wildcard
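The random-selection semantics of `{A|B|C}` described above can be sketched in a few lines of Python. This is an illustration only, not the Impact Pack's implementation, and it does not cover the sequential Detailer syntax linked above:

```python
import random
import re


def expand_wildcards(text: str, rng=random) -> str:
    """Replace each {A|B|C} group with one randomly chosen option."""
    pattern = re.compile(r"\{([^{}]+)\}")  # innermost brace group, no nesting inside
    while pattern.search(text):
        text = pattern.sub(lambda m: rng.choice(m.group(1).split("|")), text)
    return text


print(expand_wildcards("a {red|green|blue} ball"))
# prints one of: "a red ball", "a green ball", "a blue ball"
```

Each queue run would therefore pick one option per group at random, which is why per-segment prompts need the sequential syntax instead.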
Hello @drltdata, I miss your great videos... Just a question, please: is there a new or better way to let the Detailer for AnimateDiff accept more than 30 images? I am using this workflow, but with Load Video (Path) — the video is 120 images.
If you want to process long videos, set the context_length in Context Options provided by comfyui-animatediff-evolved. The recommended value for context_length is 16. github.com/Kosinkadink/ComfyUI-AnimateDiff-Evolved#context-options-and-view-options
it's not working for me for some reason, it downloaded the model and now it's just not displaying anything. Am i too impatient? how long should this take?
While the initial download takes quite a bit of time, once it is complete, the results come out almost immediately. You can ask here: github.com/pythongosssss/ComfyUI-WD14-Tagger/issues
thank you man you saved me
For some reason, the queue stops on the second region and loads infinitely.
In my testing, no issues were found. Update both ComfyUI and custom nodes to their latest versions, disable everything except Impact Pack and Inspire Pack, and try again.
At 2:39, how did you show the preview image on the KSampler node? Did you have to install any packages for that?
ruclips.net/video/hzx1yw2x9Ls/видео.html
I'm unable to find the MediaPipe-FaceMeshPreprocessor node anywhere, I did install the addon packs.
That is renamed to `MediaPipe Face Mesh`.
Everything goes well until it reaches toladapterpipe, then it stops with "Prompt executed". I really don't know what to do.
Sorry, it stopped at this node: Apply Regional IPAdapters (Inspire).
@@37hp72 Please check if both Inspire Pack and IPAdapter Plus are the latest versions.
certificate error
thanks.
Seems like it is blocked; I get this error when I try to install the nodes: 'With the current security level configuration, only custom nodes from the "default channel" can be installed.' I am on Mac and Comfy is the most current one; I don't even know what 'security level' means here. Is this the only way to make and use a colored mask?
When it's set up to allow external access, such as with --listen 0.0.0.0, it's blocked at the default security level. github.com/ltdrdata/ComfyUI-Manager#security-policy
@@drltdata Oh okay, so I looked into the folder, then... I still can't find this 'config.ini' anywhere. From the file name I assume that's a Windows file. What would be the equivalent on a Mac system? (BTW, I didn't use --listen; it runs solely isolated on my MacBook.)
@@TheCrazyscale You can find that file in the ComfyUI-Manager dir.
Thank you so much, I found it: security_level = normal. So you want me to set it to 'low'? As I mentioned, I am not using the --listen option.
@@drltdata I temporarily set it to 'weak' and it installed; I returned it to 'normal' and it still loads and is usable. Thanks for the nice tip — now I can use a color mask like you did!
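For reference, the setting discussed in this thread lives in ComfyUI-Manager's `config.ini` (in the `custom_nodes/ComfyUI-Manager` directory). A minimal sketch of the relevant entry, with the two values actually used in the thread above (other levels exist; see the linked security-policy section of the README):

```ini
[default]
; "normal" blocks installs outside the default channel when external access
; (e.g. --listen 0.0.0.0) is enabled; the thread above temporarily switched
; to "weak" to install, then restored "normal".
security_level = normal
```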
Is it possible to get XY coordinates of each seg?
Let's say I put several small point masks, and I want to get their coordinates in order to transpose PNG images and then send them to the KSampler.
I can't see "Badge" :/ Everything has been updated.
Don't have it either, might be a special extension? I don't know .. yet :)
DOES NOT WORK FOR NESTED WILDCARDS. Aside from zero proof and a claim on their GitHub, it does not. I've tested it in the latest iteration of ComfyUI. DOES NOT WORK!
Does the Impact-Pack support nested wildcards?
thanks for the video! is there a way to do this without inpainting? just to control attention mask before ksampler? (noob here :P)
Is there a way to use combined mode, but bring the masks as close to each other as possible for the Detailer? In my case, I have masks of the head, left hand, right hand, and two legs. I want to inpaint those areas, but the masks are very far away from each other, making the image very large. I want to bring all those masks close together, combined into one image, upscale/inpaint, then use a special mapping to paste the refined parts back onto the original image.
I added `SEGS Merge` node to Impact Pack.
So you are responsible for this amazing workflow — thanks! Could you tell us where it is located on the drive, to delete or share with people? Thank you for your amazing work, subbed.
You can find saved components in custom_nodes/ComfyUI-Manager/components
@@drltdata thanks a lot, and thank you for your work
I've learned that waving my mouse cursor around the comment entry line does nothing to tell you how much I loathe silent videos, and that if you have an aversion to microphones, it isn't difficult to use an AI voice generator to explain things like a normal person. If all you do is wave your mouse cursor around without explanation, in a video where most of the words aren't terribly legible most of the time, you may as well make a 5-second video that's nothing more than a web address pointing to a large, readable image of your workflow.
Thanks! Have to say that without comfy manager the whole user experience in ComfyUI wouldn't be so nice as it is now.
The Comfy Org team has been formed and significant progress is being made on the front-end side, so we can expect major improvements. :)
@@drltdata Thank you for all the work you do on this, mate! I think what he was saying is that it really makes the end user's life so much easier. Thanks again for all the hard work you put into this ecosystem!
why do my final characters look nothing like my inputs?
IPAdapter Plus has undergone many option changes over time, so various IPAdapter settings need to be adjusted. To understand the IPAdapter settings, please refer to Latent Vision, the channel run by the developer of that custom node. www.youtube.com/@latentvision
hi. i just want to know if you are going to add more hairstyles. can we add more ourselves?
This is simply to demonstrate an application of a workflow that uses detection-based mask operations to inpaint only specific areas.
Why didn't you use the actual lora, while checking it with XY Input: Lora Block Weight? I can't see it loaded in the Efficient Loader.
I don't understand what you mean by "actual lora".
@@drltdata At 1:28, for example, if you look at the Efficient Loader, there is no LoRA loaded.
@@j18040-u That feature is for creating a table while modifying the block weights of LoRA. This is a function that cannot be performed with a typical LoRA loader, which is why a dedicated node is provided.
@@drltdata Yeah, I got this. But shouldn't the Lora be loaded in the main Model stream for this to work?
please talk friend, your videos could be amazing
Awesome! Thanks!
basically crop_factor & bbox_fill both provide more context to the detailer, correct?
They are different concepts. crop_factor provides wider surrounding context for the inpainting area, while bbox_fill simply determines whether the mask, the target of inpainting, should be fully filled in as a bbox shape.
What is the model name ?
It's fantexi v0.7. This model was one I frequently used for simple image generation during the SD1.5 era because it produced decent anatomy.
Where I can find this workflow?
@@josemariagala2588 You can find it in my comment.
@@drltdata thanks a lot!!! ☺️
@@drltdata How can I control the refinement parameters? I mean, if I ask to create for example an Asian woman, it makes her Caucasian.
@@josemariagala2588 You have to control via model and prompt.
@@drltdata Where do I put the model?
My name is Joseph Jerry. I just finished watching your video, and it was very insightful. However, I'm having trouble downloading ComfyUI from the direct link on GitHub. Could you please help me understand what the problem might be?
You rarely see anyone cover this.. Thank you.
I have tried everything, but there's no image preview after Queue Prompt on my RTX 2070. (I have no issue with my RTX 3090.)
Nothing compares to your extension! Thank you immensely for all you do!!!
thanks
Hi the two github links in the description aren't working are you able to reupload them?
Very clean workflow. I find it strange that some connections seem to not have what I'd call a "conclusion", like the LoRA clip output. But I'm still wondering: how are you able to watch the image generate like that?
Thank you for sharing. How do I bring in photos from outside?
Doesn't seem to work with Inspire Pack version_code = [0, 82]. I loaded your workflow, tried increasing the strength a lot, changed the seed, made the mask a lot bigger — nothing changed. Can you take a look? Thanks for all the custom nodes, they are very useful. Edit: did more testing and found that if noise mode is set to CPU for all nodes, Seed Explorer By Mask doesn't work.
This is fixed (v0.86.1)
@@drltdata tested, working correctly now, thanks.
It gives me an error while installing the wd-v1-4 moat tagger. Any solutions?
I have no problem; that seems to be a network issue on your side. Directly download the following model into the ComfyUI-WD14-Tagger/models path: huggingface.co/lllyasviel/misc/blob/71f7a66a7affe631c64af469fe647217d422cac0/wd-v1-4-moat-tagger-v2.onnx
@@drltdata It keeps uninstalling it and then gives me the same error.
Could you give a link for this workflow? I still don't understand how it work but I think this can solve my problem.
I recommend Regional Sampler instead of this. TwoSamplers was a node created initially for experimental purposes, and it's recommended to use Regional Sampler instead. github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Impact-Pack/workflow/regionalsampling.png github.com/ltdrdata/ComfyUI-extension-tutorials/blob/Main/ComfyUI-Inspire-Pack/workflow/regional-sampling.png
it's not a tutorial, it's you moving your screen around.
Incredibly helpful for getting things up and running from an Auto1111 mindset. Thank you!
Thanks for sharing, but I'm getting a KSampler error. How do I fix it? Thanks a lot.
Make sure Impact Pack and Inspire Pack are up to date.
Thanks for sharing 👍
I downloaded this workflow. While "2sampler for mask" works as shown in the demo, "2 advanced sampler for mask" is not working; after queueing the prompt, all it gives is noise. Where am I going wrong? Any clue?
This is fixed in latest version.