This guy and Latent Vision are a godsend for the community
true 👍
Agree 💯
true bro
Bringing us such cool stuff like always, keep it up J!
Can't believe this tutorial is out, thanks a lot!!
You're welcome!
@@jerrydavos Thanks, I tested it today and it worked!! Do you know where I can find an OpenPose library?
I can't find the animation legacy file anywhere in that link...
When I use your workflow, an error occurs: missing node types: LatenGaussianNoise. How can I solve this problem?
Has it been resolved now? I encountered the same problem
@@WeiLI-g5j no
Looks very interesting as always. Curious about your face fix workflow, which looks a bit different than in previous versions? Version 3 gave me some problems with slightly blurry results, and faces went a bit too dark. Dunno what I did wrong. Have you ever tried text-to-video combined with ControlNet? I really like what I get out of it, but I struggle to get a stable background :(
Hey, the previous face fixer ran on LCM, which can cause blurry faces. I'll be working on a tutorial for a background changer next.
This is a life saving tutorial, thanks man!!!
May I know if it's possible to change OpenPose to a LineArt ControlNet? And how can I do it? Thanks a lot
You can ignore the "OpenPose" groups and:
1) Connect the LineArt preprocessor to the images you want (with a Load Image node or so) and feed it into the ControlNet node's "image" input.
2) Change the Load ControlNet Model node to a LineArt model as well.
Then just play around with strength and end_percent till you get your results (see the sketch below for the same swap done on an exported workflow).
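For reference, here's a rough sketch of that swap scripted against an exported API-format workflow JSON. It assumes the usual comfyui_controlnet_aux class names ("OpenposePreprocessor", "LineArtPreprocessor") and the core ControlNetLoader node; your export's node ids and input names may differ, so treat this as illustrative:

```python
import json

# Load an API-format export of the workflow (Export (API) in ComfyUI).
with open("workflow_api.json") as f:
    wf = json.load(f)

for node in wf.values():
    # Swap the pose preprocessor for the LineArt one, keeping the image link.
    if node.get("class_type") == "OpenposePreprocessor":
        node["class_type"] = "LineArtPreprocessor"
        node["inputs"] = {
            "image": node["inputs"]["image"],  # reuse the same upstream image
            "resolution": 512,
            "coarse": "disable",
        }
    # Point the ControlNet loader at a LineArt checkpoint instead.
    if node.get("class_type") == "ControlNetLoader":
        node["inputs"]["control_net_name"] = "control_v11p_sd15_lineart.pth"

with open("workflow_api_lineart.json", "w") as f:
    json.dump(wf, f, indent=2)
```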
Thank you!
Thanks for the lesson, I did it! Just one question: in the "OpenPose_Directory" group, you have the value 170 in the start_index parameter of the OpenPose column; what does this mean?
Ummm... I must have forgotten to change it. The value should be back to 0.
Btw, I skipped 170 frames so it would also skip some dialogue in the reference video, so that only face motion is captured... not lips.
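If it helps, start_index just offsets where the directory loader begins reading. Roughly like this (an illustration of the behaviour, not the node's actual code):

```python
import os

def load_openpose_frames(directory, start_index=0, batch_size=None):
    """Sketch of a Load-Images-from-Directory style node: skip the first
    start_index frames, then take up to batch_size of the rest."""
    frames = sorted(os.listdir(directory))
    frames = frames[start_index:]        # start_index=170 drops frames 0-169
    if batch_size is not None:
        frames = frames[:batch_size]     # cap at the requested batch size
    return frames

# With start_index=170, the skipped dialogue frames of the reference
# video never reach the ControlNet.
print(load_openpose_frames("./openpose_frames", start_index=170))
```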
Error occurred when executing ACN_AdvancedControlNetApply:
'NoneType' object is not iterable
Check that prompt travel is not enabled...
Can you provide the OpenPose pictures? Thanks
Here you go:
drive.google.com/drive/folders/17qs6fVPk6HgxJDEhH-dmF3YxXvkWik_U?usp=sharing
@@jerrydavos Thanks, bro! I have succeeded.
Hey, thanks for sharing. Do you use your own computer or a cloud solution? I am using Runpod and these nodes are not installing :( Crystools and Allor
I am using both: my PC, and Runcomfy.com for cloud. You can contact Runpod support; they will help you out if there are permission errors during install.
Where can I download "clip_vision_old.safetensors"?
The model has been renamed to CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors by the author.
You can download it from here:
github.com/cubiq/ComfyUI_IPAdapter_plus
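It goes in ComfyUI's models/clip_vision folder. A quick way to check it's in place (assuming a standard Windows portable install; adjust the root path to your setup):

```python
from pathlib import Path

# Assumed portable-install layout; change the root path to match your setup.
clip_vision_dir = Path("D:/ComfyUI_windows_portable/ComfyUI/models/clip_vision")
model = clip_vision_dir / "CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors"

if model.exists():
    print(f"Found {model.name} ({model.stat().st_size / 1e9:.2f} GB)")
else:
    print("Model missing - download it from the ComfyUI_IPAdapter_plus repo "
          "and place it in models/clip_vision/")
```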
Thanks for the tutorial! Can we animate a real image, using a real image as input?
For OpenPose you can input real images... but you can't animate a real image like this..... surely the face can be swapped later
I'm getting rough transition moves. Do you know how to get smooth, steady consistency like yours? In the Saving group I already put frame_rate 8. Another thing: my OpenPose sequence has 170 images; does that mean my batch size is going to be 170 if I want to use the whole OpenPose sequence?
1) I rendered like 20-30 small videos first, then picked one of the best. It's a trial-and-error game.
2) You can play with AnimateDiff's motion scale setting to get less motion.
3) Yes, you are correct: put 170 in the batch size, the same as the number of OpenPose images (quick check below).
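The arithmetic behind point 3 (plain math, nothing workflow-specific):

```python
# Batch size should match the OpenPose frame count, and
# frame count / frame_rate gives the clip length in seconds.
num_openpose_frames = 170
frame_rate = 8                                 # as set in the Saving group

batch_size = num_openpose_frames               # -> 170
duration_s = num_openpose_frames / frame_rate  # -> 21.25 s
print(f"batch_size={batch_size}, clip length ~{duration_s:.2f} s")
```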
@@jerrydavos Hey thanks for the vid!
1) You mean test-generating like the first 10 frames, and noting down the best-performing seeds?
@@provi1085 Yes, I change seeds, motion models, motion LoRAs and their weights, see which variables give the best result, then render that as a longer video and upscale.
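The sweep part can even be queued automatically against the local ComfyUI server. A sketch, assuming the default API at 127.0.0.1:8188 and an API-format export; the KSampler node id "9" is hypothetical, so check your own export:

```python
import copy
import json
import urllib.request

with open("workflow_api.json") as f:
    base = json.load(f)

SAMPLER_ID = "9"                          # hypothetical KSampler node id
base[SAMPLER_ID]["inputs"]["steps"] = 12  # keep the test renders cheap

# Queue ten short test renders with different seeds, then eyeball the
# outputs and re-render the best seed as the full-length video.
for seed in range(10):
    wf = copy.deepcopy(base)
    wf[SAMPLER_ID]["inputs"]["seed"] = seed
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": wf}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(f"seed {seed}: queued as", json.loads(resp.read())["prompt_id"])
```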
Is it possible to use this workflow to animate a video? I mean, the animation would move according to the original video, not a moving photo. If so, what do I need to do?
Nice idea, but I don't think it can be done yet...
Hi, wanna ask: when I activate the OpenPose, I get this kind of error. Any advice? Thanks
Error occurred when executing ACN_AdvancedControlNetApply:
AdvancedControlNetApply.apply_controlnet() missing 1 required positional argument: 'image'
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
Make sure you are connected to the proper reroute nodes... Single or Directory; the reroute nodes should be linked appropriately.
@@jerrydavos Oh nooo, thanks for the clue! I had connected the wrong image to the ControlNet. It works now, thanks :D