New Open Source Video Model - How to run Nvidia Cosmos in ComfyUI

  • Published: 30 Jan 2025

Comments • 112

  • @LightPillar
    @LightPillar 13 days ago +6

    The pace of AI is astounding. Rather than seeing models as this vs. that, I like to see them as tools in the toolbox, similar to you. Use 'em all depending on the needs of the task. Thanks for bringing this to our attention, plus the demonstration and guide.

    • @aegisgfx
      @aegisgfx 13 days ago

      Really?? We've been stuck at three-to-four-second video generation for about 2 years now, and we're still stuck there. I'd really like someone to explain to me how 3 seconds of video is supposed to really do anything for anybody.

    • @LightPillar
      @LightPillar 13 days ago

      Well, I was speaking more about the quality of the generation (in general, not this model exclusively), especially when using a LoRA, as opposed to Will Smith eating spaghetti from two or three years ago.

    • @heshanlahiru2120
      @heshanlahiru2120 12 days ago

      @aegisgfx Why do you think you can give a prompt and have AI generate minutes-long videos? AI can't do that. AIs can't think.

    • @autolykos9822
      @autolykos9822 12 days ago

      @heshanlahiru2120 You could probably get Claude or Llama to write the prompts and keep appending to the clips. If it can pass the bar exam and ace the Math Olympiad, it can probably think well enough to write a video script, given a few examples. It might even be able to tell whether the last frame degenerated too much and a clip needs to be redone. You just can't do it in a single model. Yet.
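
      A very rough sketch of that chaining idea, purely illustrative: generate_clip, looks_degenerated, last_frame, next_scene, and concatenate are all hypothetical stand-ins for whatever your video backend and LLM wrapper actually expose, not real APIs.

      ```python
      # Hypothetical clip-chaining loop: an LLM scripts each scene, the video
      # model generates a short clip seeded from the previous clip's last frame,
      # and a quality check retries clips whose final frame degenerated.
      def extend_video(llm, video_model, first_prompt, n_clips=10, max_retries=2):
          clips = []
          prompt = first_prompt
          seed_frame = None  # first clip is plain text-to-video
          for _ in range(n_clips):
              for _attempt in range(max_retries + 1):
                  clip = video_model.generate_clip(prompt, start_frame=seed_frame)
                  if not looks_degenerated(last_frame(clip)):  # hypothetical check
                      break  # final frame still clean; keep this clip
              clips.append(clip)
              seed_frame = last_frame(clip)  # chain: next clip starts here
              prompt = llm.next_scene(prompt)  # hypothetical: ask the LLM for the next beat
          return concatenate(clips)  # hypothetical join into one video
      ```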

    • @aegisgfx
      @aegisgfx 11 days ago

      @heshanlahiru2120 Hey, it can't really do anything as far as I can tell. Again I ask: what is 3 seconds of video good for???

  • @solomslls
    @solomslls 13 days ago +7

    Can't wait to see image-to-video workflows. Thanks for the video, looks good. I'll try it on an RTX 3060.

    • @sebastiankamph
      @sebastiankamph  13 days ago +2

      You and me both!

    • @jovanniagara
      @jovanniagara 13 days ago +2

      Tell me if it works on the 3060, buddy!

    • @thrasher7666
      @thrasher7666 12 days ago +1

      It worked on an RTX 3060 with 32GB of RAM at 3200MHz. It took 88 minutes to make the sample video that comes with the workflow shown in the video.

    • @solomslls
      @solomslls 12 days ago

      @thrasher7666 I think you used the wrong models, maybe fp16.

    • @thrasher7666
      @thrasher7666 12 days ago

      @solomslls No, I used the fp8 model.

  • @360sblulev
    @360sblulev 13 days ago

    Love your vids. Please talk more about comparisons to other systems, for example why someone would opt for this instead of competitors, its NSFW filters, etc.

  • @NicolasElsig
    @NicolasElsig 13 days ago +1

    Thanks, Sebastian!

  • @erdbeerbus
    @erdbeerbus 10 days ago

    GREAT explanation...

  • @Dj-Mccullough
    @Dj-Mccullough 13 days ago +2

    The "EmptyCosmosLatentVideo" and "CosmosImageToVideoLatent" nodes don't appear in ComfyUI Manager to download, rendering the workflow dead. Edit... Not sure what happened. I went back to the window and it was there. Does ComfyUI update while in use?

  • @hxhshow9
    @hxhshow9 13 days ago +2

    It's crazy to be first for the first time.

    • @sebastiankamph
      @sebastiankamph  13 days ago

      I should start giving out awards! 🌟

    • @tomaszwota1465
      @tomaszwota1465 13 days ago

      @sebastiankamph Don't encourage them! :D

    • @hxhshow9
      @hxhshow9 13 days ago

      @tomaszwota1465 We never die... I guess.

  • @curious_about_AI-we9lt
    @curious_about_AI-we9lt 13 days ago +2

    Thanks for another great video. I only have an 8GB VRAM card; is it still worth trying?

    • @Dj-Mccullough
      @Dj-Mccullough 13 days ago

      It might work, but I've had Hunyuan work on an 8GB card. If this doesn't work, use that one.

    • @fluffsquirrel
      @fluffsquirrel 13 days ago

      @Dj-Mccullough I love the idea, but is Hunyuan trustworthy?

    • @curious_about_AI-we9lt
      @curious_about_AI-we9lt 13 days ago

      @Dj-Mccullough Yeah, so far I don't seem to have had problems generally. It may be slower, but so what. Thanks for the reply.

  • @ralf.starke
    @ralf.starke 13 days ago

    Thank you!

  • @banzaipiegaming
    @banzaipiegaming 10 days ago

    Hi Sebastian, I keep getting a KSampler error: "Expected size for first two dimensions of batch2 tensor to be: [154, 768] but got: [154, 1024]." I didn't alter the base workflow at all, so I'm not sure why this is happening.

  • @philippeheritier9364
    @philippeheritier9364 12 days ago

    Thanks for this cool tutorial. In KSampler, under sampler name, you have "res_multistep". What is it? I don't have this sampler.

  • @FunwithBlender
    @FunwithBlender 13 days ago

    Also, might it make sense to lower the resolution and then have a flow to grab each frame and upscale?
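
    One way to sketch that post-processing idea outside ComfyUI, assuming the low-res clip was saved as an animated WebP and Pillow is installed; a dedicated upscaler model would look better than plain Lanczos, and the filenames are hypothetical:

    ```python
    # Frame-by-frame 2x Lanczos upscale of an animated WebP with Pillow.
    from PIL import Image, ImageSequence

    SCALE = 2  # e.g. 640x384 -> 1280x768

    with Image.open("lowres_clip.webp") as im:  # hypothetical input name
        duration = im.info.get("duration", 40)  # ms per frame, if stored
        frames = [
            f.convert("RGB").resize((f.width * SCALE, f.height * SCALE), Image.LANCZOS)
            for f in ImageSequence.Iterator(im)
        ]

    # Reassemble into an animated WebP with the original frame timing.
    frames[0].save(
        "upscaled_clip.webp",
        save_all=True,
        append_images=frames[1:],
        duration=duration,
    )
    ```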

  • @cxs001
    @cxs001 5 days ago

    How do you get the workflows? Opening them doesn't work, and dragging them into ComfyUI doesn't work either.

  • @thesolitaryowl
    @thesolitaryowl 12 days ago

    How exactly do I adjust the number of frames per second generated with Cosmos? The frame rate on the save-to-WEBP node, or elsewhere in the workflow?

  • @valorantacemiyimben
    @valorantacemiyimben 13 days ago +1

    Hello, Empty Cosmos Latent Video seems to be missing; how can I fix it?

    • @MongooseFab
      @MongooseFab 13 days ago

      Same issue here

    • @MongooseFab
      @MongooseFab 13 days ago

      I take that back; after a restart of the UI all looks OK.

    • @xevios.9336
      @xevios.9336 12 days ago

      Still missing for me, also LTXVideoConditioning 🤷🏾‍♂️

  • @MichaelFishler
    @MichaelFishler 6 days ago

    Any chance this will work in A1111?

  • @rauliss1
    @rauliss1 10 days ago

    Hello, Empty Cosmos Latent Video seems to be missing; how can I fix it?

  • @H_isonYoutube
    @H_isonYoutube 13 days ago

    Can't get this to work at all, and I have top-of-the-line hardware (4090, AMD Ryzen 9, etc.). I keep getting "VAE header too large", "CLIP header too large", or "UNet too large" errors.

  • @GAMINGGEEKzzz
    @GAMINGGEEKzzz 12 days ago

    Seb, can you do a LoRA training guide for Kohya SS?

  • @TheSeniorzone
    @TheSeniorzone 2 days ago

    Thanks! 👍 Half an hour with a 4060 16GB. Nice, but... how can I save and play the vid? It is a WebP file.

    • @TheSeniorzone
      @TheSeniorzone 2 days ago

      OK, I used an online WebP-to-MP4 converter 🤗

    • @sebastiankamph
      @sebastiankamph  53 minutes ago

      Glad you got it sorted. You can actually save MP4 straight from the Video Combine node from VHS (Video Helper Suite).
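
      For anyone who'd rather convert locally than rely on an online converter or the VHS nodes, a minimal sketch assuming Pillow and imageio (with its ffmpeg backend) are installed; the input filename is hypothetical:

      ```python
      # Convert an animated WebP (ComfyUI's default video output) to MP4.
      from PIL import Image, ImageSequence
      import imageio.v2 as imageio
      import numpy as np

      frames = []
      with Image.open("ComfyUI_00001_.webp") as im:  # hypothetical output name
          for frame in ImageSequence.Iterator(im):
              frames.append(np.array(frame.convert("RGB")))

      # fps should match what the Save Animated WEBP node used (often 24).
      imageio.mimwrite("output.mp4", frames, fps=24, codec="libx264")
      ```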

  • @YungSuz
    @YungSuz 9 days ago

    Hey, with an RTX 4090 it takes like 16 min per standard-settings generation. Is that normal? Feels so slow.

    • @sebastiankamph
      @sebastiankamph  9 days ago

      Took me almost exactly 10 minutes on my 4090

  • @o.b.1904
    @o.b.1904 11 days ago

    Tried the first video; it was done in about an hour. Tried another and left; turned out it took 7 hours. I don't think it is supposed to work like this. On a 3090.

  • @jriker1
    @jriker1 12 days ago

    Is it possible to load a Flux LoRA into the model and use that character's face?

  • @gurhankumus320
    @gurhankumus320 13 days ago

    Hi, error, help please:
    KSampler
    Expected size for first two dimensions of batch2 tensor to be: [256, 4096] but got: [256, 1024].

    • @gurhankumus320
      @gurhankumus320 13 days ago

      My system: RTX 4080 Super / AMD Ryzen 7 7800X3D / 32GB RAM at 7800MHz

  • @Puria_art
    @Puria_art 12 days ago

    So with 8GB VRAM, will it be too slow?

  • @usafshorts
    @usafshorts 13 days ago

    Hello, I ran it on Linux using an AMD GPU (7900 XTX 24GB). The sample prompt took me one hour to generate; it probably doesn't use my GPU, and I will need an Nvidia card for this.

    • @joefawcett2191
      @joefawcett2191 13 days ago +2

      Yeah, you need CUDA from an Nvidia GPU; AMD cards are slow with current software.

    • @krimpfugly
      @krimpfugly 12 days ago +1

      It says in the docs it takes about an hour even for a 24GB GPU.

    • @usafshorts
      @usafshorts 12 days ago

      Sad, because imagine waiting one hour for a video you don't like, then having to make corrections lol.

  • @GrocksterRox
    @GrocksterRox 11 days ago

    How long did it take to run locally for you?

    • @v3ucn
      @v3ucn 11 days ago

      At least 30 mins.

    • @sebastiankamph
      @sebastiankamph  10 days ago

      10 minutes on the dot, give or take a few seconds.

    • @GrocksterRox
      @GrocksterRox 10 days ago

      @sebastiankamph Hardware = 4090? Or did you do it with an online service with beefier hardware?

  • @professor-seba
    @professor-seba 13 days ago

    Hi, Seba! Is there any way to run this with an AMD 6900 XT?

  • @FunwithBlender
    @FunwithBlender 13 days ago

    Does it do image-to-video?

  • @Keith_Rothwell
    @Keith_Rothwell 12 days ago +1

    I am so sorry, everyone. One time, I told Sebastian in the comments that I liked his dad jokes, and he hasn't stopped since. I kind of blame myself.
    He should go back to talking about current events.
    Like, has he even said anything about that AI fine-tuned to be a stand-up comedian? It bombed so hard they had to rebuild half the data center.

  • @nicolasmarnic399
    @nicolasmarnic399 13 days ago

    You need to try some ControlNets, or some refiner, to try to fix your hair, Sebastian.

  • @purposefully.verbose
    @purposefully.verbose 13 days ago +1

    Hey guy, thanks! You doing OK?

    • @Kozitaju
      @Kozitaju 13 days ago

      Same concern here.

  • @Kirmm
    @Kirmm 13 days ago

    The end result is so MEH that it doesn't feel like it's worth it at all, though.

  • @havemoney
    @havemoney 13 days ago +1

    The model focuses on creating realistic data, not a cat in a hat; it has a problem with fantasy.

    • @fluffsquirrel
      @fluffsquirrel 13 days ago

      Interesting! Does it have good realistic detail then?

    • @havemoney
      @havemoney 13 days ago +1

      @fluffsquirrel Her specialty is teaching robots to interact with the real world.

    • @fluffsquirrel
      @fluffsquirrel 12 days ago

      @havemoney I'm sorry, I misunderstood. Who is "her", and why does the model need to interact with the world? It's not going to be run on physical robot hardware, is it?

  • @ZelpinAI
    @ZelpinAI 5 days ago

    So many new AI tools, so fast.

  • @ratside9485
    @ratside9485 12 days ago

    Surely it is better than Hunyuan Video 😂

  • @robertaopd2182
    @robertaopd2182 12 days ago

    Best? I think even LTX beats it.

  • @nuky_999
    @nuky_999 13 days ago

    Can you make a video explaining all the AI tools and the pros/cons of each, for example D-ID, Synthesia, Runway, etc.?

    • @sebastiankamph
      @sebastiankamph  13 days ago +3

      AAALLLLL the AI tools? That'll be a 3-year-long video.

    • @nuky_999
      @nuky_999 13 days ago

      @sebastiankamph Not all, at least the most popular ones, because I just got into AI and I feel overwhelmed and am not sure which ones to use and what they're good for.

  • @z1mt0n1x2
    @z1mt0n1x2 12 days ago

    Too big, too slow. We need faster models like LTX-Video to be improved.

  • @ronbere
    @ronbere 13 days ago +18

    Don't waste your time, the render is ugly. Hunyuan is still better.

    • @sebastiankamph
      @sebastiankamph  13 days ago +6

      I got some pretty good generations out of it, at much better speeds than Hunyuan.

    • @ronbere
      @ronbere 13 days ago +6

      @sebastiankamph I've tested both... Sorry, Hunyuan is better with a good workflow.

    • @kiransurwade3576
      @kiransurwade3576 13 days ago

      You should show it too! @sebastiankamph

    • @HistoryViper
      @HistoryViper 13 days ago +2

      If you want crappy-looking drawings, it excels at it. Even LTX is better.

    • @sebastiankamph
      @sebastiankamph  13 days ago +2

      @ronbere What do you mean by "with a good workflow"?