🌊 Depthflow in ComfyUI

  • Published: 30 Jan 2025

Comments • 38

  • @Proton555 26 days ago

    Love the effect this generates! Thanks for the walkthrough.

  • @miinyoo 28 days ago

    This is fantastic for embellishing certain types of photos. I have been looking for a tool like this since diffusion 1. Thanks Akatz.

  • @SouthbayCreations 3 months ago +2

    This workflow is fantastic!! It's surprisingly fast!! The render time is a lot less than I thought it would be. Thanks for sharing!

    • @akatz_ai 3 months ago +1

      @@SouthbayCreations Of course! We’re all used to SD render times so it’s a nice change when something renders so fast 😁

  • @meadow-maker 18 days ago

    Thank you.

  • @howto-notimewaste388 3 months ago +1

    Bro.... you are a total champ!!! I have known DP for a while, but it was hard to use with the terminal. Bravo!

    • @akatz_ai 3 months ago

      Thanks man! Glad I could help!

  • @ilyanemihin6029 3 months ago

    Amazing Nodes, thanks for sharing!

  • @kobe5113 3 months ago +1

    leaving comment for the algo, thanks for your work :) much appreciated

    • @akatz_ai 3 months ago

      Appreciate it brother!

  • @RamonGuthrie 3 months ago +2

    Hey, great nodes! A lot of those nodes for effects and movement could be rolled into just a few nodes with an option select or drop-down menu, instead of having loads of separate nodes. This would make workflows much easier to manage.

    • @akatz_ai 3 months ago

      Hi @RamonGuthrie, thanks for watching! Most of the presets and effects have a different set of parameters which is why I didn't standardize them into a single node with a dropdown. It's possible to have dynamic parameter lists which can change depending on a selected effect or motion, but that is non-standard and would require front-end work to pull off. You would end up using the same number of nodes in the workflow to chain motions or effects together, so it would mostly just help to clean up the number of nodes in the pack. I'll definitely keep it in mind for the future as a possible enhancement if enough folks would prefer that! Thanks for the recommendation!
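
      For illustration, a rough sketch of what such a combined preset node could look like, written against the standard ComfyUI custom-node interface. The class name, preset list, and the DEPTHFLOW_MOTION return type are hypothetical, not taken from ComfyUI-Depthflow-Nodes; it also shows why the drop-down approach is awkward, since INPUT_TYPES is a static schema:

      ```python
      # Hypothetical combined-node sketch; names are illustrative,
      # not part of ComfyUI-Depthflow-Nodes.
      class DepthflowMotionPreset:
          PRESETS = ["circle", "zoom", "dolly", "orbital"]  # illustrative preset names

          @classmethod
          def INPUT_TYPES(cls):
              return {
                  "required": {
                      # A list of strings renders as a drop-down in the ComfyUI front end.
                      "preset": (cls.PRESETS,),
                      # Only parameters shared by every preset can live here: this
                      # schema is static, so per-preset parameter lists would need
                      # the custom front-end work mentioned above.
                      "intensity": ("FLOAT", {"default": 1.0, "min": 0.0, "max": 10.0}),
                  }
              }

          RETURN_TYPES = ("DEPTHFLOW_MOTION",)  # hypothetical custom type
          FUNCTION = "build"
          CATEGORY = "Depthflow"

          def build(self, preset, intensity):
              # Dispatch to a per-preset builder here.
              return ({"preset": preset, "intensity": intensity},)

      # Register the node with ComfyUI.
      NODE_CLASS_MAPPINGS = {"DepthflowMotionPreset": DepthflowMotionPreset}
      ```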

  • @Ambeosounds 3 months ago

    Thanks for sharing! much appreciated

  • @bxeagle3932 26 days ago

    How do you get rid of that stretching? I'm new to this, but I'm sure you could make a custom node that cuts out the foreground, generative-fills the background, and then adds depth, to get rid of the stretching.

  • @peterxyz3541 3 months ago

    Nice, thanks!

  • @ryanontheinside 3 months ago +1

    hell yeeeeeeeeee

  • @ritikagrawal8454 3 months ago

    Thanks so much for creating this! Will it be compatible with low VRAM systems?

    • @akatz_ai 3 months ago +1

      @@ritikagrawal8454 Yes, this should run on systems with low VRAM; the largest VRAM requirement is for creating the depth map using Depth Anything V2. If you run out of memory at this step, try lowering the resolution of the image using the "upscale by" node (see the sketch after this thread).

    • @ritikagrawal8454 3 months ago

      @@akatz_ai Apologies for spamming, but I'm using an A100G to run this workflow (on Patreon) and it still takes about 3-4 minutes to generate the video. Am I doing something wrong? It seems you are able to generate the video in only a few seconds.

    • @akatz_ai 3 months ago

      @@ritikagrawal8454 No worries. Depthflow currently only runs with GPU acceleration on certain cloud providers, such as Runpod and Modal. If you are on Runpod, I would take a look at the following GitHub issue to help set up the Docker image correctly: github.com/akatz-ai/ComfyUI-Depthflow-Nodes/issues/8
      Otherwise you may be out of luck for now until a workaround is found. You can keep up to date with info about running Depthflow in the cloud here (will be updated soon): brokensrc.dev/get/cloud/

    • @jittthooce 3 months ago

      @@akatz_ai Thanks a lot man!! This helped resolve my problem on Runpod.
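
      A minimal sketch of the resolution-lowering idea from the low-VRAM reply above: downscaling before the depth estimator cuts memory roughly with the pixel count (halving each side quarters the pixels). It assumes a torch image tensor in ComfyUI's BHWC layout; the function name and the 0.5 factor are illustrative, and inside the actual workflow the "upscale by" node does the equivalent job:

      ```python
      # Illustrative helper, not part of the workflow.
      import torch
      import torch.nn.functional as F

      def downscale_for_depth(image: torch.Tensor, factor: float = 0.5) -> torch.Tensor:
          # ComfyUI images are (batch, height, width, channels); interpolate wants BCHW.
          bchw = image.permute(0, 3, 1, 2)
          scaled = F.interpolate(bchw, scale_factor=factor,
                                 mode="bilinear", align_corners=False)
          return scaled.permute(0, 2, 3, 1)
      ```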

  • @AlessandroVargas-l7l 13 days ago

    What can I do to make the video less cropped? Most presets cut off a lot of the image edges.

  • @vidokk77 29 days ago

    Your workflow is great, thank you!!! How do we switch to 25fps? You said that only the green nodes are changeable, and I see the default is 30fps?

    • @akatz_ai 29 days ago +1

      Thanks! To change the output video to 25fps, you can edit the value in the "Output FPS" node under the Depthflow group from 30.00 to 25.00. Input FPS is automatically set based on the image (the "length of video" value) or the input video (you can use "force rate" set to 25fps if needed). Hope this helps!

  • @creativephotographyclub 3 months ago

    I've been waiting so long for this... DaVinci Resolve can do it, but it's very meh... thank you so much!

  • @galileor713 13 days ago

    Great workflow! What GPU do you use? The generation seems very fast.

    • @akatz_ai 12 days ago

      Thanks! I have a 4090, but Depthflow does not require significant GPU resources to run. Generating the depth map actually takes the most VRAM in this workflow.

  • @glennm.9015 3 months ago +1

    I have a problem with the enable. When I press it on the other nodes, they simply don't enable. Does anyone else have this problem?

    • @akatz_ai 3 months ago +1

      Hi @glennm.9015, by "enable" do you mean "bypass"? With the way the workflow is set up, you need to ensure that only ONE motion group is un-bypassed (or "enabled") at a time. If there are 2 or more motion groups active, then only one of them will actually apply to the animation. Hope this helps!

  • @Ensaladitas 3 months ago +1

    Hello akatz!! Maybe I'm a little slow-minded haha, but how can I export the final video??

    • @Ensaladitas 3 months ago

      Okay, I already figured it out 😛 Thank you so much for this amazing work!!! ❤❤❤

  • @BillyNoMate 2 months ago

    Nice! Is there a way to start with an offset to the left or right? I'd like to turn this into a 3D SBS video by creating left/right focal points.

    • @akatz_ai 1 month ago

      For sure! I haven't built a workflow that does this yet, but it's definitely possible. You would just need to render 2 videos with the view offset shifted left and right, and then combine the result into an SBS video (a rough sketch of that combine step follows below). I might build a workflow for this some day, but if you'd like to try to build it yourself now, I'd recommend checking out the parameters section of the Depthflow docs: brokensrc.dev/depthflow/learn/parameters/
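
      A hedged sketch of that combine step only, assuming the two Depthflow renders already exist: ffmpeg's hstack filter stitches a left-eye and a right-eye video side by side. The file names and the helper function are placeholders, not part of any Depthflow tooling:

      ```python
      # Placeholder helper: stitch two finished renders into one SBS video.
      import subprocess

      def combine_sbs(left: str, right: str, out: str) -> None:
          subprocess.run(
              [
                  "ffmpeg", "-y",
                  "-i", left,    # left-eye render (view offset shifted left)
                  "-i", right,   # right-eye render (view offset shifted right)
                  "-filter_complex", "hstack=inputs=2",  # place the two eyes side by side
                  "-c:v", "libx264",
                  out,
              ],
              check=True,
          )

      combine_sbs("left.mp4", "right.mp4", "sbs.mp4")
      ```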

  • @eveekiviblog7361 3 months ago +1

    How can I modify the node?
    Is Visual Studio Code applicable for that?

    • @akatz_ai 3 months ago

      @@eveekiviblog7361 Hi, yes, you can modify the node in any text editor; I personally use Cursor 🙂

  • @RolePlayFanfic 3 months ago

    Can Depthflow be installed with Pinokio?