This is quite amazing. Keep making more amazing videos on open-source image-to-video models. Amazing times!
Just did another one… Apache 2.0 this time 😍
This is something I have been saying from the start: pure learning from videos or images won't do. Modeling a 3D world, now that's where it's at.
hey man love your work, also your voice is amazing and calming.
Thanks for this one Nerdy! 😊
Everything from Flux to SD has worked on my 4GB Nvidia card since the early days using virtual memory; it's slow, but it works.
@@synthoelectro Hi, I need to use virtual memory, please, is there a tutorial?
@@ARMASPIRIT I'm using Windows 10. Just go to Control Panel, or search online with your search engine of choice, for how to change the swap/virtual memory in Windows 10 and earlier.
you always rock 🤘 thanks 🙏
I'm getting a speed of 423.54s/it with the default res and length at 20 steps. This is with 16GB VRAM. Why is it so terribly slow for me?
I've been looking for a solution where we can identify keyframes, automatically select the item/clothing we want to change, then inpaint, and finally interpolate between those frames.
The interpolation part is very interesting to me; I wonder what that would look like with similar keyframes.
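For anyone wondering what the interpolation step could look like in code: a minimal sketch, assuming the keyframes already exist as torch tensors (the function name, shapes, and frame count are made up for illustration).

    import torch

    def interpolate_keyframes(key_a: torch.Tensor, key_b: torch.Tensor, n_frames: int) -> torch.Tensor:
        # Linearly crossfade between two keyframe tensors (e.g. latents of shape [C, H, W])
        weights = torch.linspace(0.0, 1.0, n_frames)  # 0.0 -> key_a, 1.0 -> key_b
        frames = [torch.lerp(key_a, key_b, w.item()) for w in weights]
        return torch.stack(frames)  # [n_frames, C, H, W]

    # e.g. 16 in-between frames for two 4x64x64 latents
    frames = interpolate_keyframes(torch.randn(4, 64, 64), torch.randn(4, 64, 64), 16)
    print(frames.shape)  # torch.Size([16, 4, 64, 64])

Dedicated frame interpolators like RIFE or FILM predict motion rather than crossfading, so similar keyframes come out far smoother than this naive blend.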
Awesome video. The text-to-video worked right out of the gate, but the img-to-video is missing a node, CosmosImageToVideoLatent, that the node manager does not see.
Turns out that ComfyUI needed updating, but doing it via the Manager did not work. A git pull in the ComfyUI dir ("comfyuiportable/comfyui" in my case) works: open cmd there and type git pull in the terminal.
The new CaPa paper for mesh generation mentioned it will fit onto ControlNet pretty well. Wonder if that's going to go crazy with 3D printing or not. We only have the code right now, no demos, so it might be an "in a few months" thing.
Nice ending song!
Hi, I keep getting a KSampler error: "Expected size for first two dimensions of batch2 tensor to be: [154, 768] but got: [154, 1024]." I didn't alter the base workflow at all, so I'm not sure why this is happening.
You need the "oldt5" CLIP model.
@@OptimusRhymeCrew Thank you that worked!
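For anyone curious, that message is PyTorch's batched-matmul shape check: the text encoder was producing embeddings of a different width than the model expects, which is exactly what swapping in the right text encoder fixes. A toy reproduction, assuming PyTorch (the shapes are chosen only to mirror the error):

    import torch

    q = torch.randn(154, 64, 768)   # model side: expects 768-wide features
    k = torch.randn(154, 1024, 32)  # encoder side: produced 1024-wide features
    try:
        torch.bmm(q, k)  # batched matmul: inner dimensions must agree
    except RuntimeError as e:
        print(e)  # Expected size for first two dimensions of batch2 tensor to be: [154, 768] but got: [154, 1024].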
Good job, it works on my 3080 10GB very well (7B model). Nothing crashes due to lack of memory.
i2v 704x704@65 frames ~ 7min, 1024x1024@65 frames ~ 37min.
Hi. I ran the example from the Text 2 Video workflow on a 4070 Ti (12GB) and it took 37 minutes. Any tips on speeding that up?
@@BuntAsHell 1. When starting ComfyUI, make sure that xformers is enabled (read the console).
2. In the NVIDIA control panel, enable the CUDA system memory fallback in the global settings. (I have 64 GB of RAM.)
Maybe it will help.
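If reading the console is fiddly, tip 1 can also be checked by probing the Python environment ComfyUI runs in; a small sketch, assuming you run it with that environment's interpreter:

    import importlib.util

    # xformers is optional; ComfyUI only picks it up when the package is importable
    spec = importlib.util.find_spec("xformers")
    if spec is None:
        print("xformers not installed")
    else:
        import xformers
        print("xformers", xformers.__version__)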
Can confirm that it worked on my 3080 as well, though it took ~30 mins at 704x704@121 frames. Feels too slow to experiment with, but the result was surprisingly consistent.
7:45 I have been trying to figure out for 3 hours how you made this workflow. I wish I could download it, or maybe get a better explanation :(
He just rearranged the workflow from the author. As for the chain at the end, I haven't figured that out yet.
Hi there! Great video. This is exactly what I needed for a task I was asked to do, but I'm brand new to SD, and I was wondering: would this workflow run fine on a 16GB VRAM GPU, or would it need 24? Thanks in advance!
Yes, the requirements are provided in the video
35 minutes for a very artifact-y 5-second render on a 3090 Ti that screams bloody murder the whole time? No, thank you. As long as you're not using it locally, it might be ok.
3090 in 2025 bro, change job 😂
@@bazio5592 Wow, super relevant answer... Get yourself a life, no?
@@bazio5592 what an out of touch comment
@CHARiOTangler that doesn’t sound right as my 3090 is faster…
@@bazio5592 The 3090 isn't that bad of a card, bro. Not everybody needs to fund Jensen's jacket collection. The 3090 is still a $900 card, and still better than a 4060~4070 on VRAM...
Someone sounds like they're trying to justify purchasing their 40 series so close to the 50 series release, and/or compensating for something by bashing other people's hardware...
You replace a GPU like it's a glass of milk? Good for you, be proud of your superiority or w/e. But not everybody CAN, wants to, or needs to do that.
Unless you're rocking a rack of H100s, I'd get off my high horse. None of the gaming hardware, even the 5090, is worth a damn for more than a quarter anyway. So enjoy your 5090; it will be considered bargain-bin trash soon too.
I can't wait for consumer NPUs to become available 2-3 years from now, since GPUs are not scaling along with model capabilities, and I don't have much hope for optimizations that will make them viable for real-time local use (video game emulation).
NPUs are mostly monitoring hardware, not for user playtime.
Does it work with portable ComfyUI?
I just get the error "compute_indices_weights_linear" not implemented for 'Half'
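For reference, that error usually comes from PyTorch's CPU interpolation kernels, which on some builds aren't implemented for fp16, meaning part of the pipeline fell back to the CPU in half precision. A minimal reproduction and workaround sketch, assuming PyTorch:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 3, 8, 8, dtype=torch.half)  # fp16 tensor on the CPU
    try:
        F.interpolate(x, scale_factor=2, mode="bilinear")
    except RuntimeError as e:
        print(e)  # "compute_indices_weights_linear" not implemented for 'Half'

    # Workaround: upcast to fp32 for the op, then cast back to fp16
    y = F.interpolate(x.float(), scale_factor=2, mode="bilinear").half()
    print(y.shape)  # torch.Size([1, 3, 16, 16])

Inside ComfyUI the usual fixes are forcing full precision or keeping the model on the GPU, rather than patching tensors by hand.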
Where do the videos save? I'm getting single pngs in the default folder but no video. Tried adding the Video Save node but that doesn't work.
Bummer, for some reason it just crashes my ComfyUI... I have all the models downloaded and everything, using a 3090 Ti. ...Never mind... I forgot I was training a LoRA at the same time lol. Weird that I didn't see any kind of message about running out of memory.
Hey man, love your videos. However, I wanted to point out that the workflow you're using is not the same as the example workflow. Could you provide us with the workflow shown in your video? Thanks a lot! Much love
For me it is generating only a black video. 121 frames, just pure black. Anyone else had that problem, and how can I fix it? :)
Me and my Mac M2 are feeling like we're missing out.
Fun times! 🐭
Anyone know how it compares to Hunyuan?
It's data to produce data so data can be data! O_o;
Data will eat itself!
@@NerdyRodent MOOR
I can't wait to try this on a soon-to-be-mine RTX 5070. They say it's as fast as an RTX 4090.
The 5070 is equal to the 4090 only in games with AI frame generation. In raw tests the 5070 will be 15% faster than a 4070 Super.
@@dogvandog I'm pretty sure that comment is just bait. I'm hoping I'm not wrong. I'd be concerned for him if he wasn't joking
darn it - I wish it could work for my lowly 8GB GPU :(
ello, Empty Cosmos Latent Video seems to be missing, how can I fix it?
Check out the first few seconds of the video for more information!
Note to self: avoid drinking beverages while watching AI videos.
My nerdy friend🤘💜!!!!
🌳🦋💃🌍💃🦋🌳
Can you please make a video just for the loop? At minute 8:00, zoom in on all the nodes and explain how to connect them.
It's good. But hopefully ComfyUI will reach the level of the Hailuo Video Generator.
Oh, Nerdy Rodent! 🐭🎵
He really makes my day! ☀
Showing us AI, 🤖
in a really British way! ☕🎶
It says that CosmosImageToVideoLatent doesn't exist, any fix?
Check 0:00 😉
@@NerdyRodent ye dw I ended up restarting comfy and it worked lol
Hey, great info, but... the CosmosImageToVideoLatent node is outlined in red and gives an error:
"Cannot execute because a node is missing the class_type property: Node ID '#83'"
Updating did not solve that issue; cosmos is also missing in the LoadCLIP node.
Better to use git pull! ;)
Really need a new GPU
Extremely slow running locally. Not yet.
Can we just pause everything so my 4090 stays good enough to curb my obsession with this entire scene? Cause I kinda want that 5090 and it's a bit pricey, my four-legged g. I don't care if it's faster, this is Gollum-level shit, I just want the benchmark json... I'm concerned that I'll upload my face and it will somehow just Bad Egg me like Willy Wonka for being too unstable 4 stable. I will say that the engineer brains don't do well with marketing/design sometimes, and I love how this particular hobby kinda forces them together. Interesting to see non-artistic brains get in the Picasso Mech Suit and mess around in awkward ways. That's kind of AI in a nutshell to me. Not quite right, in that 'you seem a biiiiit too drunk to drive but it's nothing personal' way. The uncanny valley is deeper than I ever thought, but the parties down here are wicked.
Cool!
The voice track says the original is English (United States) lol
y2mate just popped this shit up
I just want LTX 1.0 with paid-tier video quality. Hunyuan and Cosmos are too heavy.
“The best model”... not really. Hunyuan far surpasses it in terms of rendering quality, speed, flexibility, LoRA support, etc. If you're using Kijai's wrapper you can go even further with optimizations. Not to mention the forthcoming arrival of their I2V version. I think this model will soon be forgotten unless they come out with a new, more accomplished version.
First and last image to video. 🎤
Slow @ 10 minutes on a 4090? Meanwhile, I've been generating an 85 frame video in Hunyuan for about an hour so far. Can't wait to try slow!
UPDATE: The Nvidia model isn't any faster and the quality is much worse than Hunyuan's. Tried with both models and neither produced worthwhile results. I don't know where they get the "50x faster" idea. Must be marketing by the guy at Nvidia saying the 5070 is the same as a 4090.
UPDATE2: While this model's txt2vid is bad, the img2vid is the best I've seen in a local gen. That's not saying it's "good" because, well, it's not. It has a tendency to create extreme motion with massive morphing no matter the prompt or input image. It has an absolute lack of understanding of basic physics, and even trying to make a character walk is a horror show. I'm actually pretty surprised Nvidia would put their name on something of this quality. But it is also actually pretty fast, so that's nice. Hopefully I can find some settings/prompts to make this model useful.
The workflow doesn't work because 2 nodes are missing in ComfyUI!
NERDSS!!!!
Another "free" product. We all know it will change into a really "expensive" one finally
Do you think SD will remain free and Comfy will become expensive eventually?
It's "we nerds can", not "us nerds can". You wouldn't say "us can". Why does literally everybody get this wrong?
no one cares
Says the person using 'literally' incorrectly.
@@Elwaves2925 Bro all y'all are some nerds lmao
Just like people using God instead of god... the internet does not care about grammar.
@ Educated dear boy/girl, educated and thank you for your compliment.
Hunyuan or LTX locally just smokes this; you can upscale vids with Topaz Video Upscaler ($299).
Watching with an RTX 3060
Getting this error saying CosmosImageToVideoLatent is missing, but I downloaded the diffusion models and everything. Missing custom nodes don't show anything. How can I fix this?
If you don’t have the built-in node, you're probably running a very old version rather than a current one. Older versions don't have the same functionality as current ones, so whenever something new comes out, your old version won't have it. Obviously this doesn't just apply here, but to software generally.