JanRT
  • Videos: 11
  • Views: 71,447
ComfyUI Stable Cascade workflow
2/20: Models updated for ComfyUI; you can switch to the usual Load Checkpoint node using the models below:
huggingface.co/stabilityai/stable-cascade/tree/main/comfyui_checkpoints
2/18: CascadeSampling node added by ComfyUI; workflow updated.
ComfyUI just added native support for Stable Cascade. This is an example workflow.
Stable Cascade Github
github.com/Stability-AI/StableCascade/tree/master?tab=readme-ov-file
Cascade model huggingface
huggingface.co/stabilityai/stable-cascade/tree/main
ComfyUI-Easy-Use
github.com/yolain/ComfyUI-Easy-Use
Workflow:
drive.google.com/file/d/1FwhBn6KQuKXMU18CMs8rRpnG9kQAhx7T/view?usp=sharing
00:00 Introduction
00:44 Workflow walkthrough
01:12 examples with differen...
Views: 4,905

Videos

AnimateDiff SparseCtrl RGB w/ single image and Scribble control
6K views · 8 months ago
AnimateDiffv3 SparseCtrl RGB w/ a single image and Scribble control for smooth, flicker-free animation generation. This is an update of the previous ComfyUI SparseCtrl workflow to generate animation from just one image, as the starting or ending frame. Also covered: the OpenPose guidance, SparseCtrl scribble redraw, and the effect of total frames. Updated AnimateDiff Evolved to the Gen2 node sets. A...
ComfyUI workflow for RAVE: Temporally Consistent Video Editing
2.4K views · 8 months ago
ComfyUI workflow for RAVE (Randomized Noise Shuffling for Fast and Consistent Video Editing). Lightweight, but works quite well for style transfer and object replacement, producing flicker-free results. A step-by-step walkthrough of the workflow is provided at the end. Added SDXL FaceID and ReActor. RAVE Github: github.com/rehg-lab/RAVE Comfyui RAVE: github.com/spacepxl/ComfyUI-RAVE Comfyui noise: github.com/BlenderNeko...
ComfyUI Hand Correction Workflow - HandRefiner
10K views · 9 months ago
ComfyUI workflow w/ HandRefiner for easy and convenient hand correction (hand fix). HandRefiner Github: github.com/wenquanlu/HandRefiner Controlnet inpaint depth hand model: huggingface.co/hr16/ControlNet-HandRefiner-pruned/tree/main comfyui_controlnet_aux: github.com/Fannovel16/comfyui_controlnet_aux Mesh Graphormer (FYI): github.com/microsoft/MeshGraphormer Workflow: drive.google.com/file/d/11...
AnimateDiffv3 faceID and ReActor
3.1K views · 9 months ago
Happy new year everyone! This video covers AnimateDiff v3 w/ IPAdapter FaceID and ReActor for creating animations from a reference face picture, plus face swap and face analysis. Both ReActor and FaceID use insightface, so please install it first. IP-Adapter-FaceID model: huggingface.co/h94/IP-Adapter-FaceID ComfyUI_IPAdapter_plus: github.com/cubiq/ComfyUI_IPAdapter_plus comfyui-reactor-node: githu...
Comfyui AnimateDiffv3 RGB image Sparse Control
7K views · 9 months ago
AnimateDiffv3 RGB image SparseCtrl example, comfyui workflow w/ Open pose, IPAdapter, and face detailer. SparseCtrl Github: guoyww.github.io/projects/SparseCtrl/ AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main ComfyUI-Advanced-ControlNet: github.com/Kosinkadink/ComfyUI-Advanced-ControlNet Workflow: drive.google.com/file/d/1X5DqLOOYUvcM5z9Wsx_MJJm_-sdEtEff/view?usp=sharing 00:0...
Comfyui AnimateDiff v3 + LCM Video to Video
18K views · 9 months ago
AnimateDiffv3 released; here is one ComfyUI workflow integrating LCM (latent consistency model), ControlNet, IPAdapter, Face Detailer, and an auto folder-name parser, with animation and realistic demos. AnimateDiff v3 model: huggingface.co/guoyww/animatediff/tree/main IPadapter models: huggingface.co/h94/IP-Adapter Vit-H model (SD1.5, I renamed): huggingface.co/h94/IP-Adapter/resolve/main/models/image_enco...
LucidDreamer: 3D scene generation based on Stable Diffusion + Gaussian Splatting
3.3K views · 9 months ago
LucidDreamer, a project for 3D scene generation from just one picture and a single text prompt, via Stable Diffusion inpainting/outpainting, monocular depth estimation, and Gaussian splatting; local installation and visualization. LucidDreamer: github.com/luciddreamer-cvlab/LucidDreamer 00:00 Introduction of LucidDreamer & Gaussian splatting - Workflow & demos: 01:03 sd-webui to generate ...
ComfyUI MagicAnimate + DensePose - Local Install
7K views · 9 months ago
ComfyUI workflow for MagicAnimate and SDXL turbo Face detailer, and local install tutorial for MagicAnimate and Vid2Densepose to convert customized Dense Pose videos. 00:00 Introduction 00:44 ComfyUI MagicAnimate workflow SDXL turbo Face Detailer 03:06 Installation of MagicAnimate 04:14 Installation of Vid2Densepose (Detectron2) Workflow: drive.google.com/file/d/10cDn0w89CiXidhMkLW0XCefsn8eyMZS...
ComfyUI AnimateDiff Flicker-Free Inpainting
5K views · 10 months ago
ComfyUI workflow with AnimateDiff, Face Detailer (Impact Pack), and inpainting to generate flicker-free animation, blinking as an example in this video. Workflow: drive.google.com/file/d/1TRlhp1oLQwqwFGWZ-_gXplRIp3Uo7WXX/ Batch Prompt example: "0" :"1girl, dynamic angle, octane render, Velvia, official art, unity 8k wallpaper, ultra detailed, aesthetic, (masterpiece:1.1), (best quality:1.1), Gh...
Comfyui Realtime LCM with Photoshop, Blender, C4D, Zbrush, Maya...
3.3K views · 10 months ago
A workflow integrating Latent Consistency Models (LCM) with screen share custom node, to generate realtime pictures w/ paints in photoshop, and 3D models in Blender, C4D, Zbrush, Maya, etc., enjoy! Workflow: drive.google.com/file/d/1RaUSzTz4pg4f3pxDDfmzH78GHzALnmyR/ comfyui-mixlab-nodes: github.com/shadowcz007/comfyui-mixlab-nodes or installed through the Comfyui Manager

Comments

  • @warlord8106
    @warlord8106 34 minutes ago

    how do you work it with maya

  • @NinzyaCat
    @NinzyaCat 1 month ago

    the most talentless and stupid workflow I've ever seen

  • @Rachelcenter1
    @Rachelcenter1 1 month ago

    this workflow leaves me with so many questions that your video didn't cover. In the CR prompt text box, what am I supposed to put? Positive prompts? Because right now it's linking back to your computer: it says D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P07/. What about the next CR prompt text box, "imgs, openpose," do I leave that as is? What about the Output_Folder text box? The Prefix text box? I put an image in the load image box, but where do you load the video you want the face applied to? I hit queue prompt and it said "Prompt executed in 35.51 seconds", but it didn't produce anything.

  • @Rachelcenter1
    @Rachelcenter1 1 month ago

    can you update your workflow? When I opened it, it told me I didn't have InsightFaceLoader, IpAdapterApplyFace, and IPAdapterApply. And I was like, that's impossible! There's no way I'm missing IPAdapterApply! And I'm almost certain I've used FaceLoader recently, so how is this possible? So I double-clicked in the empty space to bring up the search bar, typed in IPAdapter FaceID, and was able to load that node box from scratch. So I manually swapped out IpAdapter InsightFace Loader and IPAdapter FaceID, but I'm not sure where to put IPAdapter Apply, because the red node boxes didn't look like IpAdapter Apply yet told me I was missing it (when I wasn't). That's the first time I realized a workflow could falsely tell me I'm missing something I'm not. Maybe it's a deprecated version?

  • @Byrdfl3wsNest
    @Byrdfl3wsNest 1 month ago

    Fantastic! Thank you for the epic tutorial. This is a game changer. Liked! Subscribed!! I wonder, could some form of this workflow be used to fix hands on images that have previously been rendered?

  • @sigmareaver680
    @sigmareaver680 2 months ago

    Just to make sure, the HandRefiner Github isn't needed to use this, is it? I'm assuming that's just the repo for the white paper, but I want to make sure before I try this.

  • @fengxu6967
    @fengxu6967 2 months ago

    Hello, I followed your instructions for installation, but the textures in the generated scene's PLY file are incorrect. Could you help me troubleshoot this issue?

  • @ShengzhuPeng
    @ShengzhuPeng 3 months ago

    Hi! I’m interested in a business collaboration. Could you please share your email? Thanks

  • @RhapsHayden
    @RhapsHayden 4 months ago

    Is this still working for anyone? I did a fresh install of Comfy + Python 3.10.10 and it still cannot load.

  • @MrMertall
    @MrMertall 5 months ago

    I keep getting the error below despite having already installed yacs: No module named 'yacs.config'

    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\mesh_graphormer.py", line 66, in execute
      from controlnet_aux.mesh_graphormer import MeshGraphormerDetector
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\__init__.py", line 5, in <module>
      from controlnet_aux.mesh_graphormer.pipeline import MeshGraphormerMediapipe, args
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\controlnet_aux\mesh_graphormer\pipeline.py", line 12, in <module>
      from custom_mesh_graphormer.modeling.hrnet.config import config as hrnet_config
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\__init__.py", line 7, in <module>
      from .default import _C as config
    File "C:\Users\xpare\Desktop\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui_controlnet_aux\src\custom_mesh_graphormer\modeling\hrnet\config\default.py", line 15, in <module>
      from yacs.config import CfgNode as CN
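A quick way to diagnose this kind of import error (my own sketch, not from the thread; `yacs_status` is a hypothetical helper name) is to ask the exact interpreter ComfyUI runs on whether it can see yacs. The portable build ships its own python_embeded interpreter, so installing yacs into a system Python does not help:

```python
# Hedged sketch: check whether 'yacs' is importable by *this* interpreter.
# With the portable ComfyUI build, run it with python_embeded\python.exe,
# since that is the interpreter the custom nodes actually import into.
import importlib.util
import sys

def yacs_status() -> str:
    """Return 'ok' if yacs is importable, else a hint for installing it."""
    if importlib.util.find_spec("yacs") is None:
        # Install into the same interpreter, not the system Python:
        return f"missing: run  {sys.executable} -m pip install yacs"
    return "ok"

print(yacs_status())
```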

  • @ahmadzaini
    @ahmadzaini 5 months ago

    Thank you man, great job! But on my PC the IPAdapterApply node is missing and turns red; when I try to replace it with the IPAdapterAdvance node, I'm missing the 'insightface' input. Do you know how to solve this problem?

  • @_gr1nchh
    @_gr1nchh 5 months ago

    Getting runtime error "mat1 and mat2 shapes cannot be multiplied". Any idea as to what could be causing this?

  • @ryuktimo6517
    @ryuktimo6517 5 months ago

    this does not work on resting hands, only raised hands

  • @qus123
    @qus123 5 months ago

    What can you put in the prompt that goes into unsampler?

  • @voxyloids8723
    @voxyloids8723 6 months ago

    Still trying to make a mesh from it

  • @caoonghoang5060
    @caoonghoang5060 6 months ago

    When loading the graph, the following node types were not found: IPAdapterApply. I have fully installed the nodes and still get this error.

  • @cinematic_monkey
    @cinematic_monkey 6 months ago

    This is too hard to follow. You should build it from scratch with every step shown.

    • @JanRTstudio
      @JanRTstudio 6 months ago

      OK, I will consider it in the next video, thanks for the feedback

  • @Distop-IA
    @Distop-IA 6 months ago

    This channel is underrated. You're the goat @JanRTstudio

    • @JanRTstudio
      @JanRTstudio 6 months ago

      Thank you, so glad to hear that!

  • @renanarchviz
    @renanarchviz 6 months ago

    On mine it appears in the Install Custom Nodes tab; a red band shows the conflict. It does not appear on the desktop in ComfyUI.

  • @bradyee227
    @bradyee227 6 months ago

    hi, i am getting this error, could you help plz:

    File "D:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 293, in _lazy_init
      raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled

    • @JanRTstudio
      @JanRTstudio 6 months ago

      Hi, you are using the CUDA version of ComfyUI without installing torch-cuXXX (like the cu118 version). You can try running "your_ComfyUI_folder\update\update_comfyui_and_python_dependencies.bat". Are you using the portable version?
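A small diagnostic along the same lines (my own sketch; `torch_cuda_status` is a hypothetical helper name): it distinguishes "torch is missing" from "torch is installed but built without CUDA", which is what the assertion above means:

```python
# Hedged sketch: report whether this Python has a CUDA-enabled torch.
# "Torch not compiled with CUDA enabled" means torch imports fine but was
# installed as the CPU-only build rather than a +cuXXX build.
import importlib.util

def torch_cuda_status() -> str:
    if importlib.util.find_spec("torch") is None:
        return "torch not installed"
    import torch  # safe: we just confirmed the module exists
    if torch.cuda.is_available():
        return f"cuda ok ({torch.version.cuda})"
    return "cpu-only torch build"

print(torch_cuda_status())
```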

  • @voxyloids8723
    @voxyloids8723 6 months ago

    Can't find a practical usage

    • @JanRTstudio
      @JanRTstudio 6 months ago

      I was trying to use it as a background in Blender, but the addon has issues with importing.

  • @hanygh2240
    @hanygh2240 6 months ago

    thx

  • @MikevomMars
    @MikevomMars 6 months ago

    This workflow CREATES an image, but in most cases you'd want to load an EXISTING image and refine it 😐

    • @JanRTstudio
      @JanRTstudio 6 months ago

      Yeah, there are similar suggestions in other comments; I will test and upload an img2img workflow

  • @epelfeld
    @epelfeld 6 months ago

    Thank you, it works great; there's just something wrong with the colors. They are too bright and the image is overexposed. Do you have an idea what's wrong?

    • @JanRTstudio
      @JanRTstudio 6 months ago

      Sure! It might be the color match node; you can try a different reference picture for the color match

  • @mkrl89
    @mkrl89 6 months ago

    Hi there! Great video though. I've tried following it to install the Magic Animate nodes, but I failed... Maybe you could help. Despite everything being downloaded, and the Manager showing Magic Animate as installed, I can't find those nodes in ComfyUI. I even tried your workflow, but those nodes still appear as red boxes. I found that my terminal shows this under the Magic Animate node: "cannot import name 'PositionNet' from 'diffusers.models.embeddings'". I'd appreciate any ideas what's wrong :)

  • @mfb-ur7kz
    @mfb-ur7kz 7 months ago

    I receive an error when trying the ScreenShare node: "Error accessing screen stream: NotAllowedError: Failed to execute 'getDisplayMedia' on 'MediaDevices': Access to the feature "display-capture" is disallowed by permission policy." Do you know what might cause the error? Where can I enable display-capture? Thanks

    • @JanRTstudio
      @JanRTstudio 6 months ago

      Hi what's your system and browser? It seems the browser doesn't allow screen share. Are you using server version or sd-webui-comfyui?

  • @leetotti3064
    @leetotti3064 7 months ago

    When I run the workflow, it stops and shows me: Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)

    • @JanRTstudio
      @JanRTstudio 7 months ago

      It seems the models/VAE don't match; can you please double-check that the models are in the same nodes as in the video?

    • @leetotti3064
      @leetotti3064 7 months ago

      Thanks, I checked the models and VAE, and it looks like that was the problem. It works now @@JanRTstudio
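For anyone curious what that error actually means, here is a toy reproduction (my own sketch, not ComfyUI's code; `matmul_shape` is a hypothetical helper): matrix multiplication requires mat1's column count to equal mat2's row count, and a 768-wide SD1.5-style text embedding fed into a layer expecting larger (1280/2048) inputs fails exactly that check:

```python
# Toy shape check that raises the same message torch prints when the
# text-encoder output width doesn't match what the model expects.
def matmul_shape(mat1: tuple, mat2: tuple) -> tuple:
    """Return the result shape of mat1 @ mat2, or raise on a mismatch."""
    r1, c1 = mat1
    r2, c2 = mat2
    if c1 != r2:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied ({r1}x{c1} and {r2}x{c2})"
        )
    return (r1, c2)

try:
    matmul_shape((154, 768), (1280, 2048))  # SD1.5-sized cond vs larger layer
except ValueError as err:
    print(err)  # mat1 and mat2 shapes cannot be multiplied (154x768 and 1280x2048)
```

This is why re-checking that the checkpoint, CLIP, and VAE picks all belong to the same model family fixes it.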

  • @kshabana_YT
    @kshabana_YT 7 months ago

    you are a pro

  • @fabiotgarcia2
    @fabiotgarcia2 7 months ago

    Does it work on Mac M2?

    • @JanRTstudio
      @JanRTstudio 7 months ago

      I think so; ComfyUI states support for M2 with any recent macOS version, and since this is native support it should work, though I don't have a Mac to test it right now

  • @VFXMinds
    @VFXMinds 7 months ago

    hi, I'm getting a "comfyui easy use import failed" error. Not able to run the style selector node.

    • @JanRTstudio
      @JanRTstudio 7 months ago

      Can you copy the error message from the command-line window about the import failure? That's strange if it was installed from the ComfyUI Manager

    • @kuka7466
      @kuka7466 6 months ago

      SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5) @@JanRTstudio

    • @JanRTstudio
      @JanRTstudio 6 months ago

      @@kuka7466 Hi, can you check "Install Missing Custom Nodes" in the ComfyUI-Manager menu? Generally it's missing nodes

  • @nickchalion
    @nickchalion 7 months ago

    Hi, and thank you for this vid. I have a very large error, can u help me plz? Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 16, 24, 24] to have 4 channels, but got 16 channels instead and ...

    • @JanRTstudio
      @JanRTstudio 7 months ago

      Sure! Very similar to the issue mentioned in another comment. Can you double-check that the model names in the 4 black loader nodes (2 UNets, 1 VAE, and 1 CLIP) are correct? Sometimes ComfyUI just inserts a default value if your model file location (0:31) is not correct.

    • @nickchalion
      @nickchalion 7 months ago

      @@JanRTstudio thank you so much , it fixed 😍

    • @JanRTstudio
      @JanRTstudio 7 months ago

      @@nickchalion Awesome!

  • @MikevomMars
    @MikevomMars 7 months ago

    The following node types were not found: StableCascade_StageB_Conditioning, StableCascade_EmptyLatentImage. Unfortunately, they aren't available in the manager 😐

    • @ischeka
      @ischeka 7 months ago

      did you update ComfyUI itself? The Cascade nodes are native, not custom nodes, I believe

    • @MikevomMars
      @MikevomMars 7 months ago

      @@ischeka After updating ComfyUI, the nodes were available, BUT the workflow stops with the error "Given groups=1, weight of size [320, 16, 1, 1], expected input[2, 64, 12, 12] to have 16 channels, but got 64 channels instead"

    • @JanRTstudio
      @JanRTstudio 7 months ago

      @@MikevomMars Can you double-check that stable_cascade is selected as the Load CLIP type, and reselect all the models in those 4 black loader nodes? Or just drag the downloaded workflow into ComfyUI again to reload it. I just updated ComfyUI but can't replicate your error. It seems something is wrong with the model loading.

    • @JanRTstudio
      @JanRTstudio 7 months ago

      @@ischeka yep, thank you!

    • @MikevomMars
      @MikevomMars 7 months ago

      @@ischeka Finally, it works - thanks for helping 😊👍 The issue was as follows: ComfyUI automatically filled the UNET, CLIP, and VAE loaders, but for some strange reason it inserted the stage A safetensor in the top UNET loader instead of stage B. I had a hard time figuring out which safetensors go in which loader because they are so tiny in the video that it's hard to see. But it works now.

  • @Foolsjoker
    @Foolsjoker 7 months ago

    Are people able to train on this yet?

    • @JanRTstudio
      @JanRTstudio 7 months ago

      Yes, the training code has been released for LoRA, ControlNet, and Stages B & C. You can find it via the "training" hyperlink on their GitHub page.

  • @sudabadri7051
    @sudabadri7051 7 months ago

    Lol i was just banging my head against a wall trying to fix this. Thank you

    • @JanRTstudio
      @JanRTstudio 7 months ago

      😄No problem my friend

  • @meywu
    @meywu 8 months ago

    Please upload your videos in 4K so node names are easier to read.

    • @JanRTstudio
      @JanRTstudio 8 months ago

      I will try 1440p next time; I'm limited by my monitor res 😅 Thanks for the suggestion.

  • @hitmanehsan
    @hitmanehsan 8 months ago

    thanks so much for your great tutorials. Is there any render-time limit in ComfyUI? I want to use a 32-second video at 30 fps (1000 PNGs) for video2video, but I got this error on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32

    • @JanRTstudio
      @JanRTstudio 8 months ago

      It seems 1000 might be a little too many; I haven't had a chance to try that yet, but you can try a lower fps, like 30 fps -> 10 fps, and do frame interpolation afterwards (ComfyUI-Frame-Interpolation VFI node), so you only need to generate about 320 images at a time. I have a VFI node example in my RAVE video

    • @hitmanehsan
      @hitmanehsan 8 months ago

      @@JanRTstudio numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32 i got this error
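The 6.43 GiB figure in that error, and the benefit of the fps-reduction suggestion, can be sanity-checked with quick arithmetic (my own sketch, not from the video):

```python
# A 976-frame batch at 1024x576 RGB in float32 as one contiguous array:
frames, height, width, channels = 976, 1024, 576, 3
bytes_needed = frames * height * width * channels * 4  # float32 = 4 bytes
print(f"{bytes_needed / 2**30:.2f} GiB")  # -> 6.43 GiB, matching the error

# Dropping 30 fps to 10 fps cuts the frame count (and this buffer) to a
# third; a VFI node can interpolate back to full frame rate afterwards.
reduced = frames // 3
print(f"{reduced * height * width * channels * 4 / 2**30:.2f} GiB")
```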

  • @jeffg4686
    @jeffg4686 8 months ago

    Nice. Could you use Automatic1111 to train a LoRA for monkey hands using this as a base model? By "could you", I mean: do you think it's possible?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      I believe yes; A1111 I'm not sure, but you can find training with Python here: github.com/microsoft/MeshGraphormer/blob/main/docs/EXP.md

    • @jeffg4686
      @jeffg4686 8 months ago

      @@JanRTstudio - thanks

    • @JanRTstudio
      @JanRTstudio 8 months ago

      no problem! @@jeffg4686

    • @jeffg4686
      @jeffg4686 8 months ago

      @@JanRTstudio - Nice. I might have to take a trip to the zoo, or even do some gens with dalle or something.

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Sounds good 😄@@jeffg4686

  • @sudabadri7051
    @sudabadri7051 8 months ago

    Another amazing video my friend ❤

  • @GggggQqqqqq1234
    @GggggQqqqqq1234 8 months ago

    Thank you Thank you Thank you.

  • @stephantual
    @stephantual 8 months ago

    You probably know this, but you could just use IPAdapter for the clothes, at 1/0/1.0; it has a solid grasp of the image, and given that only a few frames are generated with very little movement, a simple mask will do (but COCO segmenting can also be implemented). Thank you!

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Right, I bypassed the IPA in the video; mask/segmenting is a good way to try, thanks for the suggestion!

    • @stephantual
      @stephantual 8 months ago

      @@JanRTstudio Thank you for the cool videos! Do you have an X account so we can follow you on?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Sure! Just created one, JanRT111, will update there! @@stephantual

  • @hitmanehsan
    @hitmanehsan 8 months ago

    I upgraded to a 3090 Ti with 24 GB. How much CPU RAM do I need for video-to-video SD? I have 32 GB

    • @JanRTstudio
      @JanRTstudio 8 months ago

      That's pretty cool! 24 GB is a lot; it's mainly GPU RAM that's used to render, and if it spills over to CPU RAM the speed slows down dramatically, so 32 GB of RAM is good enough since you don't want to rely on it. With 24 GB you can try latent upscale to get high-resolution animation. I have just 12 GB VRAM and can do 512x768, 100+ frames at a time. You are good to go!

  • @anoubhav
    @anoubhav 8 months ago

    What is an unsampler?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Sampling is a denoising process; an unsampler does the reverse, creating the noise pattern from an image, which can then be used to reconstruct the image with modified prompts.
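As a toy illustration of that idea (deliberately simplified; a real unsampler inverts the sampler's many ODE steps, not a single subtraction): if each denoising step is deterministic, running the update in reverse recovers the noise you started from, which is what lets you re-run sampling with a modified prompt:

```python
# Minimal invertible "sampler": one deterministic denoise step and its
# exact inverse. Real diffusion unsamplers invert many such steps.
def denoise_step(x: float, eps: float) -> float:
    return x - eps  # sampler: remove the predicted noise

def unsample_step(x: float, eps: float) -> float:
    return x + eps  # unsampler: add the noise back

image, eps = 5.0, 1.25
noisy = unsample_step(image, eps)         # image -> its noise pattern
assert denoise_step(noisy, eps) == image  # sampling reconstructs the image
print("round trip ok")
```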

  • @tianxiangxu2288
    @tianxiangxu2288 8 months ago

    nice work

  • @RaymondGuo-c8z
    @RaymondGuo-c8z 8 months ago

    Fantastic work! The only problem I found is that after running this workflow the background is always a pure color, even if I add background info to the prompt; it doesn't work. Could you share a fix?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Thanks for the feedback! First, you can try bypassing the "AnimateDiff Loader" and setting "Input_Img_Cap" to 1, then run some single pictures to check whether the background is generated as you wish; change to different models if not. Or you can add the depth ControlNet with a very low strength, 0.2 for example. If you just want to restyle the source video, you can decrease the denoise value in the first KSampler, e.g. 0.6 - 0.8.

  • @typho0n5
    @typho0n5 8 months ago

    Error occurred when executing VHS_LoadImagesPath: directory is not valid: D:/Program Files/ComfyUI_windows_portable/ComfyUI/output/ADiff/JanRT_P05/

    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute
      output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data
      return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list
      results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-VideoHelperSuite\videohelpersuite\load_images_nodes.py", line 143, in load_images
      raise Exception("directory is not valid: " + directory)

    I've tried many times, but I really don't know how to fix it

    • @JanRTstudio
      @JanRTstudio 8 months ago

      You can change the first CR prompt text to your folder path "D:/ComfyUI_windows_portable/ComfyUI/output/ADiff/" and rerun from the beginning. Loading images from a folder other than the "output" folder inside ComfyUI usually causes errors; I think that's the reason.
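The node's check can be reproduced locally (a hedged sketch modeled on the traceback above, not the exact VideoHelperSuite code; `load_image_paths` is a hypothetical helper): it raises the moment the path isn't an existing directory, so validating the path from the CR prompt text before queueing saves a failed run:

```python
# Mimic VHS_LoadImagesPath's fail-fast validation, then list image files.
import os

def load_image_paths(directory: str) -> list:
    if not os.path.isdir(directory):
        # Same message the node raises in the error above.
        raise Exception("directory is not valid: " + directory)
    return sorted(
        name for name in os.listdir(directory)
        if name.lower().endswith((".png", ".jpg", ".jpeg"))
    )

# Point this at a folder under ComfyUI's own 'output' directory, e.g.
# "D:/ComfyUI_windows_portable/ComfyUI/output/ADiff/".
```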

  • @mick7727
    @mick7727 8 months ago

    My brain always shuts down when I see ComfyUI. I started a week ago on a1111 so yeah, very early days!

    • @JanRTstudio
      @JanRTstudio 8 months ago

      lol, yeah, get familiar with A1111 and you'll find ComfyUI is just those same options separated into nodes

  • @ronnysempai
    @ronnysempai 8 months ago

    Good video, thanks

  • @GfcgamerOrgon
    @GfcgamerOrgon 8 months ago

    It's unfortunate that crossed fingers are interpreted as a single hand at many angles; I wish they could fix this. Some training should probably be done, with a signal to detect when one hand is under the other, because it deforms really badly, as if the person has only one hand! Gloves have also been a problem for me. It can still get better.

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Exactly, they mention this limitation. It works well for general poses, but you still need to fix things manually for crossed, overlapping, or partial hands, etc.

  • @sureshotmv8255
    @sureshotmv8255 8 months ago

    Great content! Is there a guide on how SparseCtrl RGB/scribble actually works? What I mean is: how do you know it places the first and last image? Can you place RGB SparseCtrl on frames 1, 5, 7, 9, 15, 20? How?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      Thank you! Yes, that's controlled by the "Sparse method"; I am making another video that will cover these methods

  • @risewithgrace
    @risewithgrace 8 months ago

    For some reason, even though I've successfully downloaded the ComfyUI Impact Pack, ComfyUI still says it's missing, so the node above SAMLoader in the Face Detailer section is red. Have you run into this issue?

    • @JanRTstudio
      @JanRTstudio 8 months ago

      That's strange. Did you use ComfyUI Manager to install Impact? Check the CMD window: during loading it will show "Import Failed" for the Impact Pack, and before that it actually gives you the error and the cause of the failure.