Insane!
Thanks for sharing your knowledge 🙏🏻
Thank you for the kind words
Thank you very much for your workflow and careful explanation. I decided to give it a try.
I'm glad that you found the process and explanation to be helpful. You are very welcome!
Thank you so much for the tutorial! I thought it would take me weeks to figure this out. I wish you good luck in everything!
It was a pleasure to assist you; glad to hear from you.
Thanks for sharing and also giving us all the links in the description. Subscribed!
Thank you for your feedback. I am glad you're here, and I hope you learn a lot from the channel.
Thank you very much for your workflow
It's very nice of you.
Thanks man, you explained it very well. I was so confused before.
Thank you for the Feedback, I'm glad you found this helpful!
Thanks, your videos are great! Is it possible to apply a LoRA to video, including trigger words for style adaptation?
Thank you for the compliment.
Yes! You can check out my other video here on using a LoRA to style a video: ruclips.net/video/c8HI3vyVcso/видео.htmlsi=JdxvwaImqY5N7ElW
Hey, I followed all the steps and don't have any errors, but when I click Queue Prompt it runs through the process and in the end there is no final image.
Thank you for the video. Could you do a 20-30 second video?
I appreciate the suggestion very much. I will take that into account.
Thanks for the workflow.
Will adding an LCM LoRA to the workflow let me speed up the generation a lot?
Yes, that is possible! However, you might need to set up the LCM nodes with the right settings, which I have explained here: ruclips.net/video/c8HI3vyVcso/видео.htmlsi=rCt48aluof8GA30V
Thank you so much for the content.
Is there any way to change the theme of the video without changing the appearance of the face?
You are most welcome. You can set themes for your video using the Batch Prompt node. Similar to the prompt travel technique, you can include multiple frame numbers plus a prompt for each to modify your themes, but you might experience subtle changes in background elements.
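For illustration, a keyframed schedule in the prompt-travel style could look like this (this follows the BatchPromptSchedule convention from the FizzNodes pack; the exact syntax may differ depending on which batch prompt node you use):

```
"0"  : "portrait of a woman, spring garden, cherry blossoms",
"48" : "portrait of a woman, autumn forest, falling leaves",
"96" : "portrait of a woman, winter street, heavy snow"
```

Keeping the subject description identical across keyframes and changing only the scene words helps preserve the face while the theme shifts.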
Hi, thank you for your tutorial! Everything is clear, but I have an error with Ksampler. Can you help me with that?
I'm glad you found the tutorial helpful. The error could be due to memory constraints. Try reducing the batch size or image resolution to see if that helps.
Also, make sure you have the latest version of ComfyUI and that all the necessary nodes are updated.
Thank you so much! Sorry, I'm new to ComfyUI; for some reason my generation stops at the AnimateDiff node. It turns green and then it stops.
You are most welcome; we all start somewhere.
First, try checking your VRAM usage to ensure you have enough memory. If that's not the issue, double-check your node connections and settings to make sure everything is set up correctly. Sometimes, simply reloading the workflow or restarting ComfyUI can help too.
Thank you man!
You are most welcome.
Great explanation. Only the workflow link is not valid anymore. Could you please re-upload your workflow? Thanks a lot!
Thank you for your feedback; the link has been updated in the description.
Great, thanks for the quick response @@goshniiAI
How do I solve this problem?
Error occurred when executing VAEEncode:
'VAE' object has no attribute 'vae_dtype'
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 296, in encode
t = vae.encode(pixels[:,:,:,:3])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 331, in encode
memory_used = self.memory_used_encode(pixel_samples.shape, self.vae_dtype)
^^^^^^^^^^^^^^
Hello there, you might be missing the VAE model required for the VAE node. Also check that the nodes are properly connected to each other. Updating ComfyUI to the latest version can also clear attribute errors like this one.
I tried the method and I was amazed by its stability.
The main reason I stopped making videos was the flickering issue, so I guess I'm back again :)
But I guess this method works more like a filter than actual drawing. Are there any nodes that can be added so it can be used in more applications,
like, for example, changing a person's look and clothes, etc.?
I'm glad it's refreshed your passion for making videos.
You're right, this method can work like a filter, but with some tweaks you can definitely push it further to change appearances, clothes, and more by using segmentation nodes to isolate different parts of the video. Style transfer nodes can also help change the artistic style of the video.
Doesn't work for me. The workflow in the video shows the height/width node, which isn't in the downloaded workflow. Also, there are too many errors despite all resources/models being installed. ComfyUI is a pain.
do you have a tutorial about how to train stable diffusion to generate similar videos to the video you give it as a source? TY
Hello there, I do not currently have a tutorial, but I appreciate the question. It's certainly a topic worth exploring in future content.
@@goshniiAI Yes, that's a topic I still can't figure out how to jump into. I see people generating realistic videos, so it's likely possible to feed the AI a lot of videos and make it cook up similar videos just by changing the prompt.
Thanks. I ran it, but I keep getting this message "Error occurred when executing LoadImage". What should I do?
You are welcome.
Various causes, like an incorrect file path, an unsupported video format, or memory limitations, can trigger that error.
First, double-check the file path and confirm that the video format is supported. Also, try resizing or reducing the video's resolution to see if that helps with memory limitations.
Most importantly, ensure that there are no broken connections to any of the nodes. I hope some of these are helpful.
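As a quick sanity check outside ComfyUI, a small script like this (a hypothetical helper, not part of ComfyUI) can confirm the path exists and guess the file type before you point a Load node at it:

```python
import os
import mimetypes

def check_input(path: str) -> str:
    """Rough pre-flight check for a file you plan to feed to a Load node."""
    if not os.path.isfile(path):
        return "missing file"  # typo in the path, or wrong drive letter
    kind, _ = mimetypes.guess_type(path)
    if kind is None or not kind.startswith(("image/", "video/")):
        return f"unsupported type: {kind}"
    return "ok"
```

Anything other than "ok" points at the file rather than your workflow, which narrows the search quickly.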
When loading the graph, the following node types were not found:
ttN text
Hello there. The "ttN text" node comes from a custom node library (the ttN prefix belongs to the tinyterraNodes pack) that you might not have installed yet.
This node actually throws errors after installation: "import failed". Even with "git clone", same result.
Thanks for bringing this up! There are situations where nodes or modules may not work well together. Updating ComfyUI and the nodes to their most recent versions can resolve the problem.
@@goshniiAI It did actually. The problem was solved by updating ComfyUI and all extensions. The ttN pack doesn't have a requirements file for running "pip install -r requirements.txt", so updating the entire repository was the solution. The only new issue that might create is when certain workflows require older versions, like numpy 1.23 instead of 1.26. Then it may be best to keep several slightly different installations for different workflows.
What to do if there are red lines around a node? They appeared after I queued the prompt.
That indicates the node isn't connected correctly or the proper settings are missing. Please review the settings and connections for that specific node.
Hey, is it OK if I don't have the width and height node in ComfyUI? When I loaded the workflow, they were missing.
Before generating the video, I recommend manually adding the missing nodes to ensure that the video size is correct. Also, make sure to update both ComfyUI and all custom nodes. I hope any of these are helpful.
Thanks very much.
You are most welcome.
Hey, one more problem: I have a GTX 1650 graphics card and it's not enough for this; my process has been stuck at 0% for an hour. Please help me load images in a batch. This tutorial uses the video tab, but that's not enough for my PC, so how do I add an image batch loader node?
If your GPU isn't quite up to the task, consider splitting the workload into smaller batches. This prevents your GPU from being overwhelmed. If you are unsure how to do this, you should be able to find a helpful tutorial online to guide you.
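The idea of splitting the workload can be sketched in plain Python: divide the extracted frames into fixed-size groups and process one group at a time (the chunk size of 8 is just an illustration; pick whatever fits your VRAM):

```python
def chunked(frames, size):
    """Yield successive fixed-size groups of frames."""
    for start in range(0, len(frames), size):
        yield frames[start:start + size]

frames = list(range(100))           # stand-in for 100 extracted frames
batches = list(chunked(frames, 8))  # 13 groups: 12 full, 1 with the last 4 frames
```

Each group then becomes one generation run, so peak memory depends on the chunk size rather than the whole video length.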
@@goshniiAI My GPU does image-to-image in 30 seconds at most, so I can manage it differently, but I haven't found how to batch-process images in ComfyUI. Many people do it in tutorials but don't show how to add that image batch loader node.
@@Dwoz_Bgmi I fully understand and agree, and I'll keep that in mind for future videos.
@@goshniiAI Thanks, I'm waiting. No one has made a video on image-to-image batching, so it's a great topic for the next video.
How can I save the generated video?
Once your video has been entirely rendered in ComfyUI, it is usually located in the output folder within the ComfyUI directory.
Option 2: you can right-click on the final video node in ComfyUI and select 'Save'; the video will be saved to the folder you choose.
@@goshniiAI Thank you very much. I've found it now but was so clueless at first :D
@@adrianfels2985 You're welcome! I am glad you figured it out. Happy animating!
I have a Core i7-3770, 16 GB RAM, and a GT 1030. Will this work on my system? If not, which GPU is the minimum required for using ComfyUI? 🙂
Your setup should handle ComfyUI, but for smoother performance, particularly with larger projects, I'd consider upgrading your GPU to something like an NVIDIA GTX 1660 or higher, which would improve performance. I hope this helps!
Get a 3060 12 GB or a 4060 Ti 16 GB.
Hey man, how do I load a LoRA in this workflow?
This example shows you how to use a LoRA; use it as a guide: comfyanonymous.github.io/ComfyUI_examples/lora/
Hey, please help! The lineart model is not showing at the link.
Thank you for your observation; the link has been updated. Make use of the safetensors version.
Can you upload the workflow in a simpler way? The Google Doc is not easy to use, even though it probably is (I'm guessing click and drop).
You are correct: simply drag and drop the file into ComfyUI on an empty canvas after downloading. You can also make use of this alternative link: tinyurl.com/msbe8fb3
@@goshniiAI Thank you. I save nothing; I believe everything will be outdated in a couple of months. Even ComfyUI is destined to be overtaken by a new UI.
@@arkelss4 Thank you for your feedback! Being open to change and staying adaptable is essential.
Hey, my text box is undefined. Is it about ControlNet?
Hello there, can you further clarify the issue you're having? I'm having trouble understanding it.
@@goshniiAI When I load your .json file into the ComfyUI page, the text box is undefined. I fixed all the other missing boxes, but not the text box :(
@@goshniiAI And anyway, thanks. I watched 100 videos and you are the one teaching so simply and well. Best wishes.
@@zeta.lifestyles I appreciate your point of view and opinion; it means a lot.
May I know the rendering time and the required specs?
Using an NVIDIA RTX 3060, the rendering took a few hours.
I get an error with these nodes:
CheckpointLoaderSimpleWithNoiseSelect
ADE_AnimateDiffLoaderWithContext
ADE_AnimateDiffCombine
ADE_AnimateDiffUniformContextOptions
Hello there! You can re-select your models in each node that shows the error. It is possible that the workflow's model paths differ from yours.
Is there any way I can sort out my issue? 😢
Could you please provide more details about the issue you're facing?
@@goshniiAI It doesn't move to the last step. It gets through the second process of turning images into black sketches and stops there.
@@Trending_editzs Double-check your workflow settings and parameters to confirm they are accurate. A minor error may at times cause the entire operation to fail.
- Heavy processing operations can put a burden on resources, so monitor your GPU and your RAM.
- Resize your video to match the same batch-size output while keeping the frame size small; the default approach by Inner Reflections upscales the original video.
I hope any of these help. Don't give up!
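As a rough back-of-the-envelope check (an illustration only, not how ComfyUI actually allocates memory), you can estimate the raw pixel-tensor size of a batch before queueing it:

```python
def pixel_mb(frames: int, height: int, width: int, channels: int = 3,
             bytes_per_value: int = 2) -> float:
    """Approximate size in MiB of a frame batch stored as fp16 pixels."""
    return frames * height * width * channels * bytes_per_value / (1024 ** 2)

# 16 frames at 512x512 come to about 24 MiB of raw pixels; latents and
# model weights dominate real VRAM use, so treat this only as a lower bound.
```

Doubling the frame edge quadruples this figure, which is why keeping the frame size small matters so much on low-VRAM cards.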
What is the VAE and what should it be used for?
What is the Animate Path node doing? How do we know which one of the models "is necessary"??
This is unfollowable because you aren't covering the basics.
Hello there, I appreciate your feedback about covering the basics. I'll make sure to include more foundational explanations in future videos to help everyone follow along more easily.
The VAE helps compress and decompress image data, ensuring better quality and detail in the generated images.
The Animate Path node is used to create smooth transitions and animations between different frames.
How do we know which models are necessary? Some models are better suited for certain types of animations or image styles. You can refer to the model documentation for specific use cases, or try different models and see which one gives you the desired result.
I hope this was helpful, and keep experimenting!
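To make the VAE's "compress" role concrete: Stable Diffusion 1.5's VAE downsamples each image by a factor of 8 and stores it in 4 latent channels, so the latent size for a given frame can be computed like this (a sketch of the bookkeeping, not the actual encoder):

```python
def latent_shape(height: int, width: int, channels: int = 4, factor: int = 8):
    """Shape of the SD1.5 latent for an input image of the given size."""
    return (channels, height // factor, width // factor)

# A 512x512 frame is encoded to a 4x64x64 latent, 48x fewer values than
# the 3x512x512 pixel tensor the VAE later decodes it back into.
```

Diffusion runs in this small latent space, which is why the VAE matters for both speed and final image detail.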
Hold on bro, are you Nigerian?!
I am actually of Ghanaian descent, but I believe both countries have similarities.
Hey, the link points to thibaud/controlnet-sd21, but I see in the video you're using SD1.5. Is that why I'm getting "RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x768 and 1024x320)"?
Yes, that shape mismatch typically happens when an SD2.1 ControlNet is paired with an SD1.5 checkpoint. Please navigate to the Manager in ComfyUI and select Install Models, then search for the model name to find the SD1.5 versions.