- 11 videos
- 71,447 views
JanRT
Added Nov 29, 2023
ComfyUI Stable Cascade workflow
2/20: Models updated for ComfyUI; you can switch to the usual Load Checkpoint node using the models below:
huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints
2/18: CascadeSampling node added to ComfyUI; workflow updated.
ComfyUI just added native support for Stable Cascade. This is an example workflow.
Stable Cascade Github
github.com/Stability-AI/StableCascade/tree/master?tab=readme-ov-file
Cascade model huggingface
huggingface.co/stabilityai/stable-cascade/tree/main
ComfyUI-Easy-Use
github.com/yolain/ComfyUI-Easy-Use
Workflow:
drive.google.com/file/d/1FwhBn6KQuKXMU18CMs8rRpnG9kQAhx7T/view?usp=sharing
00:00 Introduction
00:44 Workflow walkthrough
01:12 examples with differen...
Views: 4,905
Videos
AnimateDiff SparseCtrl RGB w/ single image and Scribble control
6K views · 8 months ago
AnimateDiff v3 SparseCtrl RGB w/ single image and Scribble control for smooth, flicker-free animation generation. This is an update of the previous ComfyUI SparseCtrl workflow to generate animation from just one image, used as the starting or ending frame. Also covers OpenPose guidance, SparseCtrl scribble redraw, and the effect of total frames. Updated AnimateDiff Evolved to the Gen2 node sets. A...
ComfyUI workflow for RAVE: Temporally Consistent Video Editing
2.4K views · 8 months ago
ComfyUI workflow for RAVE (Randomized Noise Shuffling for Fast and Consistent Video Editing). Lightweight, but works quite well for style transfer and object replacement, producing flicker-free results. A step-by-step workflow is provided at the end. Added SDXL FaceID and ReActor. RAVE Github: github.com/rehg-lab/RAVE ComfyUI RAVE: github.com/spacepxl/ComfyUI-RAVE ComfyUI noise: github.com/BlenderNeko...
ComfyUI Hand Correction Workflow - HandRefiner
10K views · 9 months ago
ComfyUI workflow w/ HandRefiner for easy and convenient hand correction. HandRefiner Github: github.com/wenquanlu/HandRefiner ControlNet inpaint depth hand model: huggingface.co/hr16/ControlNet-HandRefiner-pruned/tree/main comfyui_controlnet_aux: github.com/Fannovel16/comfyui_controlnet_aux Mesh Graphormer (FYI): github.com/microsoft/MeshGraphormer Workflow: drive.google.com/file/d/11...
AnimateDiffv3 faceID and ReActor
3.1K views · 9 months ago
Happy New Year everyone! This video covers AnimateDiff v3 w/ IPAdapter FaceID and ReActor for creating animations from a reference face picture, face swap, and face analysis. Both ReActor and FaceID use insightface; please install it first. IP-Adapter-FaceID model: huggingface.co/h94/IP-Adapter-FaceID ComfyUI_IPAdapter_plus: github.com/cubiq/ComfyUI_IPAdapter_plus comfyui-reactor-node: githu...
Comfyui AnimateDiffv3 RGB image Sparse Control
7K views · 9 months ago
AnimateDiff v3 RGB image SparseCtrl example — a ComfyUI workflow w/ OpenPose, IPAdapter, and Face Detailer. SparseCtrl Github: guoyww.github.io/projects/SparseCtrl/ AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main ComfyUI-Advanced-ControlNet: github.com/Kosinkadink/ComfyUI-Advanced-ControlNet Workflow: drive.google.com/file/d/1X5DqLOOYUvcM5z9Wsx_MJJm_-sdEtEff/view?usp=sharing 00:0...
Comfyui AnimateDiff v3 + LCM Video to Video
18K views · 9 months ago
AnimateDiff v3 released — here is one ComfyUI workflow integrating LCM (latent consistency model), ControlNet, IPAdapter, Face Detailer, and an auto folder-name parser, with animation and realistic demos. AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main IPAdapter models: huggingface.co/h94/IP-Adapter ViT-H model (SD1.5, I renamed): huggingface.co/h94/IP-Adapter/resolve/main/models/image_enco...
LucidDreamer: 3D scene generation based on Stable Diffusion + Gaussian Splatting
3.3K views · 9 months ago
LucidDreamer, a project for 3D scene generation from just one picture and a single text prompt, through Stable Diffusion inpainting/outpainting, monocular depth estimation, and Gaussian splatting — local installation and visualization. LucidDreamer: github.com/luciddreamer-cvlab/LucidDreamer 00:00 Introduction of LucidDreamer & Gaussian splatting - Workflow & demos: 01:03 sd-webui to generate ...
ComfyUI MagicAnimate + DensePose - Local Install
7K views · 9 months ago
ComfyUI workflow for MagicAnimate and an SDXL Turbo Face Detailer, plus a local install tutorial for MagicAnimate and Vid2DensePose to convert customized DensePose videos. 00:00 Introduction 00:44 ComfyUI MagicAnimate workflow SDXL Turbo Face Detailer 03:06 Installation of MagicAnimate 04:14 Installation of Vid2DensePose (Detectron2) Workflow: drive.google.com/file/d/10cDn0w89CiXidhMkLW0XCefsn8eyMZS...
ComfyUI AnimateDiff Flicker-Free Inpainting
5K views · 10 months ago
ComfyUI workflow with AnimateDiff, Face Detailer (Impact Pack), and inpainting to generate flicker-free animation — blinking as the example in this video. Workflow: drive.google.com/file/d/1TRlhp1oLQwqwFGWZ-_gXplRIp3Uo7WXX/ Batch prompt example: "0" :"1girl, dynamic angle, octane render, Velvia, official art, unity 8k wallpaper, ultra detailed, aesthetic, (masterpiece:1.1), (best quality:1.1), Gh...
Comfyui Realtime LCM with Photoshop, Blender, C4D, Zbrush, Maya...
3.3K views · 10 months ago
A workflow integrating Latent Consistency Models (LCM) with a screen-share custom node to generate realtime pictures from paintings in Photoshop and 3D models in Blender, C4D, ZBrush, Maya, etc. — enjoy! Workflow: drive.google.com/file/d/1RaUSzTz4pg4f3pxDDfmzH78GHzALnmyR/ comfyui-mixlab-nodes: github.com/shadowcz007/comfyui-mixlab-nodes or install through the ComfyUI Manager
How do you work it with Maya?
the most talentless and stupid workflow I've ever seen
This workflow leaves me with so many questions that your video didn't cover. In the CR Prompt Text box, what am I supposed to put? Positive prompts? Because right now it's linking back to your computer — it says D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P07/. What about the next CR Prompt Text box, "imgs, openpose," — do I leave that as is? What about the Output_Folder text box? The Prefix text box? I put an image in the Load Image box, but where do you load the video that you want the face applied to? I hit Queue Prompt and it said "Prompt executed in 35.51 seconds", but it didn't produce anything.
Can you update your workflow? When I opened it, it told me I didn't have InsightFaceLoader, IPAdapterApplyFace, and IPAdapterApply. And I was like, that's impossible!!!! There's no way I'm missing IPAdapterApply! And I'm almost certain I've used FaceLoader recently, so how is this possible?! So I double-clicked in the empty space to bring up the search bar, typed in IPAdapter FaceID, and was able to load that node box from scratch. So I manually swapped out IPAdapter InsightFace Loader and IPAdapter FaceID, but I'm not sure where to put IPAdapter Apply, because the red node boxes didn't look like IPAdapter Apply yet told me I was missing it (when I wasn't). That's the first time I realized a workflow could falsely tell me I'm missing something. Maybe it's a deprecated version?
Fantastic! Thank you for the epic tutorial. This is a game changer. Liked! Subscribed!! I wonder, could some form of this workflow be used to fix hands on images that have previously been rendered?
Just to make sure, the HandRefiner Github isn't needed to use this, is it? I'm assuming that's just the repo for the white paper, but I want to make sure before I try this.
Hello, I followed your instructions for installation, but the textures in the generated scene's PLY file are incorrect. Could you help me troubleshoot this issue?
Hi! I’m interested in a business collaboration. Could you please share your email? Thanks
Is this still working for anyone? I did a fresh install of Comfy + Python 3.10.10 and it still cannot load.
I keep getting the below error despite having installed YACS already:

No module named 'yacs.config'
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\mesh_graphormer.py", line 66, in execute
    from controlnet_aux.mesh_graphormer import MeshGraphormerDetector
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\__init__.py", line 5, in <module>
    from controlnet_aux.mesh_graphormer.pipeline import MeshGraphormerMediapipe, args
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\pipeline.py", line 12, in <module>
    from custom_mesh_graphormer.modeling.hrnet.config import config as hrnet_config
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\__init__.py", line 7, in <module>
    from .default import _C as config
  File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\default.py", line 15, in <module>
    from yacs.config import CfgNode as CN
Thank you man, great job! But on my PC the IPAdapterApply node is missing and shows up red. When I try to replace it with the IPAdapterAdvanced node, I'm missing the 'insightface' input. Do you know how to solve this problem?
Getting a runtime error: "mat1 and mat2 shapes cannot be multiplied". Any idea as to what could be causing this?
This does not work on resting hands, only raised hands.
What can you put in the prompt that goes into unsampler?
Still trying to make a mesh from it
When loading the graph, the following node types were not found: IPAdapterApply I have fully installed the nodes and still get this error
This is too hard to follow. You should build it from scratch with every step shown.
OK, I will consider it in the next video, thanks for the feedback
This channel is underrated. You're the goat @JanRTstudio
Thank you, so glad to hear that!
On mine it appears in the Install Custom Nodes tab; a red band shows the conflict. It does not appear in the ComfyUI workspace.
Hi, I am getting this error, could you help please?

  File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
Hi, you are using the CUDA version of ComfyUI without having installed torch-cuXXX (e.g. the cu118 version). You can try running "your_ComfyUI_folder\update\update_comfyui_and_python_dependencies.bat". Are you using the portable version?
Can't find a practical use for it
I was trying to use it as a background in Blender, but the add-on has import issues.
thx
This workflow CREATES an image, but in most cases you'd want to load an EXISTING image and refine it 😐
Yeah, similar suggestions in other comments — I will test and upload an img2img workflow.
Thank you, it works great — just something is wrong with the colors. They are too bright and the image is overexposed. Do you have an idea what's wrong?
Sure! It might be the Color Match node; you can try a different reference picture for the color match.
Hi there! Great video, though I've tried to follow it to install the MagicAnimate nodes and failed... Maybe you could help. Although everything is downloaded and the Manager shows MagicAnimate as installed, I am not able to find those nodes in ComfyUI. I even tried to use your workflow, but those nodes still appear as red boxes. I found that my terminal shows this under the MagicAnimate node: "cannot import name 'PositionNet' from 'diffusers.models.embeddings'". I'd appreciate any ideas on what's wrong :)
I receive an error while trying the ScrenShare node: " Error accessing screen stream: NotAllowedError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Access to the feature "display-capture" is disallowed by permission policy." Do you know what might cause the error? Where can I enable display-capture? thanks
Hi, what are your system and browser? It seems the browser doesn't allow screen share. Are you using the server version or sd-webui-comfyui?
When I run the workflow, it stops and shows me: Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)
It seems the models/VAE don't match — can you please double check the models are in the same nodes as in the video?
Thanks, I checked the models and vae and it looks like that's the problem. It works now@@JanRTstudio
you are a pro
Does it work on Mac M2?
I think so — ComfyUI states support for M2 with any recent macOS version, and since this is native support it should work, though I don't have a Mac to test it right now.
Hi, I'm getting a ComfyUI-Easy-Use "import failed" error and am not able to run the style selector node.
Can you copy the error message from the command-line window regarding the import failure? That's strange if it was installed from ComfyUI Manager.
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5) @@JanRTstudio
@@kuka7466 Hi can you check "Install Missing Custom Nodes" from comfyui-manager menu? generally it's missing nodes
Hi, and thank you for this vid. I have a very long error, can you help me please? Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 24, 24] to have 4 channels, but got 16 channels instead and ...
Sure! Very similar to the issue mentioned in another comment — can you double check the model names in the 4 black loader nodes (2 UNets, 1 VAE, and 1 CLIP) are correct? Sometimes ComfyUI just substitutes a default value if your model file location (0:31) is not correct.
@@JanRTstudio thank you so much , it fixed 😍
@@nickchalion Awesome!
The following node types were not found: StableCascade_StageB_Conditioning, StableCascade_EmptyLatentImage. Unfortunately, they aren't available in the Manager 😐
Did you update ComfyUI itself? The Cascade nodes are native, not custom nodes, I believe.
@@ischeka After updating ComfyUI, the nodes were available BUT the workflow stops with an error "Given groups=1, weight of size [320, 16, 1, 1], expected input[2, 64, 12, 12] to have 16 channels, but got 64 channels instead"
@@MikevomMars Can you double check that stable_cascade is selected as the type in Load CLIP, and reselect all the models in those 4 black loader nodes? Or just drag the downloaded workflow into ComfyUI again to reload it. I just updated ComfyUI but can't replicate your error — it seems something is wrong with the model loading.
@@ischeka yep, thank you!
@@ischeka Finally, it works - thanks for helping 😊👍 The issue was as follows: ComfyUI automatically filled the UNET, CLIP, and VAE loaders, but for some strange reason it inserted the stage_a safetensor into the top UNET loader instead of stage_b. I had a hard time figuring out which safetensors go in which loader because they are so tiny in the video that they're hard to see. But it works now.
Are people able to train on this yet?
Yes, the training code has been released for lora, controlnet and Stages B & C. You can find it at the hyperlink "training" on their github webpage.
Lol i was just banging my head against a wall trying to fix this. Thank you
😄No problem my friend
Please upload your videos in 4K so node names are easier to read.
I will try 1440p next time, limited by my monitor res 😅thanks for the suggestion.
Thanks so much for your great tutorials. Is there any render-time limit in ComfyUI? I want to use a 32-second video at 30 fps (1000 PNGs) for video2video, but I got this error on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
It seems 1000 frames might be a bit too many — I haven't had a chance to try that yet, but you can try a lower fps, like 30 fps -> 10 fps, and do frame interpolation afterwards (the ComfyUI-Frame-Interpolation VFI node), so you only need to generate ~320 images at a time. I have a VFI node example in my RAVE video.
@@JanRTstudio numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32 i got this error
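A back-of-the-envelope sketch (plain Python, nothing ComfyUI-specific) of why lowering the fps helps: the failing allocation in the error above is exactly the size of the full float32 frame stack, so cutting 30 fps to roughly 10 fps shrinks it by a factor of three.

```python
# A float32 frame stack of shape (frames, height, width, channels)
# needs frames * h * w * c * 4 bytes when held as one array.

def stack_bytes(frames, h=1024, w=576, c=3, dtype_bytes=4):
    """Memory needed for a contiguous float32 frame stack."""
    return frames * h * w * c * dtype_bytes

gib = 1024 ** 3
full = stack_bytes(976)       # the failing ~30 fps stack from the error
reduced = stack_bytes(320)    # ~10 fps for the same 32 s clip
print(round(full / gib, 2))     # 6.43 — matches the reported 6.43 GiB
print(round(reduced / gib, 2))  # 2.11 — after dropping fps
```

The VFI interpolation step then fills the missing frames back in after generation, so the full array never has to exist at once.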
Nice. Could you use Automatic1111 to train a LoRA for monkey hands using this as a base model? By "can you", I mean: do you think it's possible?
I believe yes, A1111 not sure, but you can find training using python here: github.com/microsoft/MeshGraphormer/blob/main/docs/EXP.md
@@JanRTstudio - thanks
no problem! @@jeffg4686
@@JanRTstudio - Nice. I might have to take a trip to the zoo, or even do some gens with dalle or something.
Sounds good 😄@@jeffg4686
Another amazing video my friend ❤
😀
Thank you Thank you Thank you.
😀
You probably know this, but you could just use IPAdapter for the clothes, at 1/0/1.0 — it has a solid grasp on the image, and given that only a few frames are generated with very little movement, a simple mask will do (though COCO segmenting can also be implemented). Thank you!
Right, I bypassed IPA in the video, mask/segmenting is a good way to try, thanks for the suggestion!
@@JanRTstudio Thank you for the cool videos! Do you have an X account so we can follow you on?
Sure! Just created one, JanRT111, will update there! @@stephantual
I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need for video-to-video SD? I have 32 GB.
That's pretty cool! 24 GB is a lot — it's mainly GPU RAM that matters for rendering; if work shifts to CPU RAM the speed slows down dramatically, so you don't want that, and 32 GB of RAM is enough. With 24 GB of VRAM you can try latent upscale to get high-resolution animation. I have just 12 GB of VRAM and can do 512x768 at 100+ frames in one go. You are good to go!
What is an unsampler?
Sampling is a denoising process; an unsampler does the reverse — it recovers the noise pattern from an image, which can then be used to reconstruct the image with modified prompts.
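A toy sketch of the idea (not ComfyUI's actual implementation): a deterministic DDIM-style step maps a noisy latent toward the image, and "unsampling" runs the same update in reverse to recover the noise. Here the noise estimate `eps` is held constant so the inversion is exact; a real unsampler re-queries the model at every step.

```python
import math

def ddim_step(x_t, a_t, a_prev, eps):
    # Denoising direction: predict the clean value, then
    # re-noise it to the previous (lower) noise level.
    x0 = (x_t - math.sqrt(1 - a_t) * eps) / math.sqrt(a_t)
    return math.sqrt(a_prev) * x0 + math.sqrt(1 - a_prev) * eps

def ddim_unsample(x_prev, a_t, a_prev, eps):
    # The inverse update: recover the noisier x_t from x_{t-1}.
    x0 = (x_prev - math.sqrt(1 - a_prev) * eps) / math.sqrt(a_prev)
    return math.sqrt(a_t) * x0 + math.sqrt(1 - a_t) * eps

pixel, eps = 0.8, -0.3        # one "pixel" and a fixed noise estimate
a_t, a_prev = 0.5, 0.9        # illustrative alpha-cumprod values

noisier = ddim_unsample(pixel, a_t, a_prev, eps)  # image -> noise pattern
restored = ddim_step(noisier, a_t, a_prev, eps)   # noise -> image again
print(abs(restored - pixel) < 1e-9)               # True: exact round trip
```

Because the recovered noise deterministically reconstructs the original image, re-running the denoising pass from it with a modified prompt keeps the composition while changing the content — which is what the unsampler node is used for.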
nice work
Thank you!
Fantastic work! The only problem is that after running this workflow, the background is always a flat color, even when I've added background info to the prompt. Could you share a fix?
Thanks for the feedback! First, try bypassing the "AnimateDiff Loader" and setting "Input_Img_Cap" to 1, then run some single pictures to check whether the background is generated as you wish; change to different models if not. Or you can add the depth ControlNet with a very low strength, 0.2 for example. If you just want to restyle the source video, you can decrease the denoise value in the first KSampler, to around 0.6 - 0.8.
Error occurred when executing VHS_LoadImagesPath: directory is not valid: D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P05/

  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py", line 143, in load_images
    raise Exception("directory is not valid: " + directory)

I've tried many times but I really don't know how to fix it.
You can change the first CR Prompt Text to your folder path "D:/ComfyUI_windows_portable/ComfyUI/output/ADiff/" and rerun from the beginning. Loading images from a folder other than the "output" folder inside ComfyUI usually causes this error — I think that's the reason.
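The error above boils down to a path that doesn't exist on the user's machine. A hypothetical pre-flight check mirroring what VHS_LoadImagesPath validates (the folder names below are illustrative, not from the original workflow):

```python
import tempfile
from pathlib import Path

def check_images_dir(directory: str) -> Path:
    """Raise the same kind of complaint VHS_LoadImagesPath does
    when the configured folder doesn't exist."""
    path = Path(directory)
    if not path.is_dir():
        raise ValueError(f"directory is not valid: {directory}")
    return path

# Demo with a throwaway folder standing in for .../ComfyUI/output/ADiff/
demo = Path(tempfile.mkdtemp()) / "ADiff"
demo.mkdir()
print(check_images_dir(str(demo)).is_dir())  # True: the folder exists
```

Running a check like this on the path pasted into the CR Prompt Text node makes it obvious whether the failure is the path itself or something later in the graph.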
My brain always shuts down when I see ComfyUI. I started a week ago on a1111 so yeah, very early days!
Lol yeah, get familiar with a1111 — you'll find ComfyUI is just those same options separated out into nodes.
Good video, thanks
Thank you!
It's unfortunate that crossed fingers are interpreted as a single hand at many angles — I wish they could fix this. Some training should probably be done, with a signal to detect when one hand is under the other, because it deforms really badly, as if the person has only one hand! Gloves have also been a problem for me. It can still get better.
Exactly — they mention this limitation. It works well for general poses, but you still need to fix things manually for crossed, overlapping, or partial hands, etc.
Great content! Is there a guide on how SparseCtrl RGB/scribble actually works? What I mean is: how does it know the images are placed as the first and last frames? Can you place RGB SparseCtrl on frames 1, 5, 7, 9, 15, 20? How?
Thank you! Yes, that's controlled by the "Sparse method" — I am making another video that will cover these methods.
For some reason, even though I've successfully downloaded the ComfyUI Impact Pack, ComfyUI still says it's missing, so the node above SAMLoader in the Face Detailer section is red. Have you run into this issue?
That's strange — did you use ComfyUI Manager to install Impact? Check the CMD window: during loading it will show "Import Failed" for the Impact Pack, and just before that it prints the error and the cause of the failure.