Mastering Video to Video in ComfyUI's Stable Diffusion (Without Node Skills)

  • Published: 17 Nov 2024

Comments • 92

  • @budygang9918
    @budygang9918 9 months ago +2

    Insane!
    Thanks for sharing your knowledge 🙏🏻

    • @goshniiAI
      @goshniiAI  9 months ago

      Thank you for the kind words

  • @zhiwei999
    @zhiwei999 10 months ago

    Thank you very much for your workflow and careful explanation. I decided to give it a try.

    • @goshniiAI
      @goshniiAI  10 months ago +1

      I'm glad that you found the process and explanation to be helpful. You are very welcome!

  • @IS-dq4fw
    @IS-dq4fw 5 months ago

    Thank you so much for the tutorial! I thought it would take me weeks to figure this out. I wish you good luck in everything!

    • @goshniiAI
      @goshniiAI  5 months ago

      It's a pleasure I could assist you; glad to hear from you.

  • @BlenderBob
    @BlenderBob 1 month ago

    Thanks for sharing and also giving us all the links in the description. Subscribed!

    • @goshniiAI
      @goshniiAI  1 month ago

      Thank you for your feedback. I am glad you're here, and I hope you can also learn a lot from my channel.

  • @samon29
    @samon29 9 months ago +1

    Thank you very much for your workflow

    • @goshniiAI
      @goshniiAI  9 months ago

      It's very nice of you.

  • @Dwoz_Bgmi
    @Dwoz_Bgmi 9 months ago

    Thanks man, you explained it very well; I was so confused.

    • @goshniiAI
      @goshniiAI  9 months ago

      Thank you for the Feedback, I'm glad you found this helpful!

  • @90boiler
    @90boiler 15 days ago

    Thanks, your videos are great! Is it possible to apply a LoRA to video, including trigger words, for style adaptation?

    • @goshniiAI
      @goshniiAI  15 days ago

      Thank you for the compliment.
      Yes! You can check out my other video using a LoRA to style a video: ruclips.net/video/c8HI3vyVcso/видео.htmlsi=JdxvwaImqY5N7ElW

  • @kon28912
    @kon28912 2 months ago +1

    Hey, I followed all the steps and don't have any errors. When I click Queue Prompt it runs its process, but at the end there is no final image.

  • @벤치마킹-f1z
    @벤치마킹-f1z 1 month ago +1

    Thank you for the video. Could you do a 20-30 second video?

    • @goshniiAI
      @goshniiAI  1 month ago

      I appreciate the suggestion very much. I will take that into account.

  • @satyajitroutray282
    @satyajitroutray282 2 months ago

    Thanks for the workflow.
    Won't adding an LCM LoRA to the workflow speed up the generation a lot?

    • @goshniiAI
      @goshniiAI  2 months ago

      Yes! That is possible. However, you might need to set up the LCM nodes with the right settings, which I have explained here: ruclips.net/video/c8HI3vyVcso/видео.htmlsi=rCt48aluof8GA30V

  • @harithagamagedara5893
    @harithagamagedara5893 9 months ago

    Thank you so much for the content.
    Is there any way to change the theme of the video without changing the appearance of the face?

    • @goshniiAI
      @goshniiAI  9 months ago

      You are most welcome. You can set themes for your video using the Batch Prompt node. Similar to the prompt travel technique, you can include multiple frame numbers plus the prompt to modify your themes, but you might experience subtle changes in background elements.
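As a rough illustration of the frame-number-plus-prompt idea mentioned above: batch prompt scheduling nodes (for example, the Batch Prompt Schedule node from the FizzNodes pack — treat the exact node and syntax as an assumption) typically take keyframed prompts like this, interpolating between the listed frames:

```
"0"  : "a knight walking through an autumn forest, falling leaves",
"48" : "a knight walking through a snowstorm, heavy snow",
"96" : "a knight walking across sand dunes at sunset"
```

Frames between the listed keyframes blend the neighboring prompts, which is what produces a gradual theme change across the video.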

  • @i_free_man
    @i_free_man 5 months ago

    Hi, thank you for your tutorial! Everything is clear, but I have an error with the KSampler. Can you help me with that?

    • @goshniiAI
      @goshniiAI  5 months ago +1

      I'm glad you found the tutorial helpful. The error could be due to memory constraints. Try reducing the batch size or image resolution to see if that helps.
      Also, make sure you have the latest version of ComfyUI and all the necessary nodes updated.

  • @bubblerlek
    @bubblerlek 5 months ago

    Thank you so much! Sorry, I'm new to ComfyUI; for some reason my generation stops at the AnimateDiff node. It is green and then it stops.

    • @goshniiAI
      @goshniiAI  5 months ago

      You are most welcome; we all start somewhere.
      First, try checking your VRAM usage to ensure you have enough memory. If that's not the issue, double-check your node connections and settings to make sure everything is set up correctly. Sometimes, simply reloading the workflow or restarting ComfyUI can help too.

  • @ispira2464
    @ispira2464 1 month ago

    Thank you man!

    • @goshniiAI
      @goshniiAI  1 month ago

      You are most welcome.

  • @logman121
    @logman121 9 months ago

    Great explanation. Only the workflow link is not valid anymore. Could you please re-upload your workflow? Thanks a lot!

    • @goshniiAI
      @goshniiAI  9 months ago +2

      Thank you for your feedback, the link has been modified in the description.

    • @logman121
      @logman121 9 months ago

      Great, thanks for the quick response @goshniiAI

  • @elurudhfm623
    @elurudhfm623 4 months ago

    How do I solve this problem?
    Error occurred when executing VAEEncode:
    'VAE' object has no attribute 'vae_dtype'
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 296, in encode
    t = vae.encode(pixels[:,:,:,:3])
    File "D:\AI\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 331, in encode
    memory_used = self.memory_used_encode(pixel_samples.shape, self.vae_dtype)

    • @goshniiAI
      @goshniiAI  3 months ago

      Hello there, you might be missing the VAE model required for the VAE node. Also, check that the nodes are properly connected to each other.

  • @gamalfarag
    @gamalfarag 4 months ago

    I tried the method and I was amazed by its stability.
    The main reason I stopped making videos was the flickering issue, so I guess I'm back again :)
    But I guess this method works more like a filter than actual drawing. Are there any nodes that can be added so it can be used in more applications,
    like, for example, changing the person's look and clothes, etc.?

    • @goshniiAI
      @goshniiAI  4 months ago +1

      I'm glad it refreshed your passion for making videos.
      You're right, this method can work like a filter, but with some tweaks you can definitely push it further to change appearances, clothes, and more by using segmentation nodes to isolate different parts of the video. Style transfer nodes can also help to change the artistic style of the video.

  • @ArtificialHorizons
    @ArtificialHorizons 1 month ago

    Doesn't work for me. The video shows the height/width node, which isn't there in the workflow. Also, there are too many errors despite all resources/models being installed. ComfyUI is a pain.

  • @ADELTUF
    @ADELTUF 7 months ago

    Do you have a tutorial about how to train Stable Diffusion to generate videos similar to the video you give it as a source? TY

    • @goshniiAI
      @goshniiAI  7 months ago

      Hello there, I do not currently have a tutorial, but I appreciate the question. It's certainly a topic worth exploring in future content.

    • @ADELTUF
      @ADELTUF 7 months ago

      @goshniiAI Yes, that's a topic I still can't find out how to jump into. I see people generating realistic videos, so it's likely possible to feed the AI a lot of videos and make it cook up similar videos just by changing the prompt.

  • @lilillllii246
    @lilillllii246 9 months ago

    Thanks. I ran it, but I keep getting this message: "Error occurred when executing LoadImage". What should I do?

    • @goshniiAI
      @goshniiAI  9 months ago

      You are welcome.
      Various reasons like an incorrect file path, video format, or memory limitations can cause the error.
      First, double-check the file path and confirm that the video format is supported. Also, try resizing or reducing the video's resolution to see if that helps with memory limitations.
      Most importantly, ensure that there are no broken spline connections to any of the nodes. I hope any of these are helpful.
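To make the resizing advice concrete outside ComfyUI, here is a minimal Python sketch using Pillow (assumed installed; the function name and frame file names are hypothetical) that downscales extracted frames before they are loaded, which reduces memory pressure:

```python
from pathlib import Path

from PIL import Image


def downscale_frames(src_dir, dst_dir, max_width=512):
    """Downscale every PNG frame in src_dir so its width is at most
    max_width, preserving aspect ratio, and save the result to dst_dir."""
    Path(dst_dir).mkdir(parents=True, exist_ok=True)
    for frame in sorted(Path(src_dir).glob("*.png")):
        with Image.open(frame) as img:
            if img.width > max_width:
                # Keep the aspect ratio when shrinking the frame.
                new_height = round(img.height * max_width / img.width)
                img = img.resize((max_width, new_height), Image.LANCZOS)
            img.save(Path(dst_dir) / frame.name)
```

Running this over the frame folder once, and pointing the load node at the downscaled copies, is often enough to get past out-of-memory failures.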

  • @androsandros5849
    @androsandros5849 5 months ago

    When loading the graph, the following node types were not found:
    ttN text

    • @goshniiAI
      @goshniiAI  5 months ago

      Hello there, the "ttN text" node might be from a custom node library or an update that you haven't installed yet.

    • @konradschoeller6213
      @konradschoeller6213 5 months ago

      This node actually throws errors after installation: "import failed". Even with "git clone", same result.

    • @goshniiAI
      @goshniiAI  5 months ago

      Thanks for bringing this up! There are situations where nodes or modules may not work well together. Updating ComfyUI and the nodes to their most recent versions can resolve the problem.

    • @konradschoeller6213
      @konradschoeller6213 5 months ago

      @goshniiAI It did, actually. The problem was solved by updating ComfyUI and all extensions. The ttN repo doesn't have a requirements file for running "pip install -r requirements.txt", so updating the entire repository was the solution. The only new issue that might create is when certain workflows require older versions, like numpy 1.23 instead of 1.26. Then it may be best to have several slightly different installations for different workflows.
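Updating every extension by hand is tedious. As a rough sketch (the `custom_nodes` folder is the standard location for extensions in a ComfyUI install, but the helper itself is hypothetical), a small script can run `git pull` in each cloned custom-node repository:

```python
import subprocess
from pathlib import Path


def update_custom_nodes(custom_nodes_dir, dry_run=False):
    """Run `git pull` in every cloned repo under the custom_nodes folder.

    Returns the names of the repos that were (or, with dry_run=True,
    would be) pulled.
    """
    updated = []
    for repo in sorted(Path(custom_nodes_dir).iterdir()):
        # Only directories containing a .git folder are git clones.
        if repo.is_dir() and (repo / ".git").exists():
            if not dry_run:
                subprocess.run(["git", "-C", str(repo), "pull"], check=False)
            updated.append(repo.name)
    return updated
```

This updates each extension to its latest commit, so as the comment above notes, it can break workflows that pin older dependency versions; keeping a separate install per workflow family sidesteps that.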

  • @discountcode341
    @discountcode341 8 months ago

    What should I do if there are red lines around a node? They appeared after I queued the prompt.

    • @goshniiAI
      @goshniiAI  8 months ago

      It indicates that you haven't connected the node correctly or that the proper settings are missing. Please review the settings and connections for that specific node.

  • @itssannabelle
    @itssannabelle 9 months ago

    Hey, is it OK if I don't have the width and height node in ComfyUI? When I loaded the workflow they were missing.

    • @goshniiAI
      @goshniiAI  9 months ago +1

      Before generating the video, I recommend manually adding the missing nodes to ensure that the video size is correct. Also, make sure to update both ComfyUI and all its nodes. I hope any of these are helpful.

  • @Elleviya
    @Elleviya 5 months ago

    Thanks very much!

    • @goshniiAI
      @goshniiAI  5 months ago

      You are most welcome.

  • @Dwoz_Bgmi
    @Dwoz_Bgmi 9 months ago

    Hey, one more problem: I have a GTX 1650 graphics card and it's not enough for this. My process has been stuck at 0% for an hour. Please help me with how to load images in a batch; this tutorial uses the video tab, but that's not enough for my PC. So how do I add an image batch loader node?

    • @goshniiAI
      @goshniiAI  9 months ago +1

      If your GPU isn't quite up to the task, consider splitting the workload into smaller batches. This prevents your GPU from being overwhelmed. If you are unsure how to do this, you should be able to find a helpful tutorial online to guide you.
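To make "splitting the workload into smaller batches" concrete, here is a minimal Python sketch (the helper name and frame file names are hypothetical) of chunking a frame list so each run only processes a manageable slice:

```python
def chunk_frames(frame_paths, batch_size):
    """Split an ordered list of frame paths into fixed-size batches."""
    if batch_size < 1:
        raise ValueError("batch_size must be at least 1")
    # Slice the list every batch_size items; the last batch may be shorter.
    return [frame_paths[i:i + batch_size]
            for i in range(0, len(frame_paths), batch_size)]


# Example: 10 frames processed 4 at a time -> 3 batches (4, 4, 2).
frames = [f"frame_{i:04d}.png" for i in range(10)]
batches = chunk_frames(frames, 4)
```

Processing one batch at a time, then stitching the outputs back together in order, keeps peak VRAM usage proportional to the batch size instead of the whole clip.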

    • @Dwoz_Bgmi
      @Dwoz_Bgmi 9 months ago +1

      @goshniiAI My GPU does image-to-image in 30 seconds maximum, so I can manage differently, but I couldn't find how to do image batch processing in ComfyUI. Many people do it in tutorials but don't show how to add that image batch loader node.

    • @goshniiAI
      @goshniiAI  9 months ago +2

      @Dwoz_Bgmi I fully understand and agree, and I'll keep that in mind for future videos.

    • @Dwoz_Bgmi
      @Dwoz_Bgmi 9 months ago

      @goshniiAI Thanks, I'm waiting. No one has made a video on image-to-image batching, so it's a good topic for your next video.

  • @adrianfels2985
    @adrianfels2985 4 months ago

    How can I save the generated video?

    • @goshniiAI
      @goshniiAI  4 months ago +1

      Once your video has been fully rendered in ComfyUI, it is usually located in the output folder within the ComfyUI directory.
      Option 2: you can right-click on the final video node in ComfyUI and select 'Save'; the video will be saved to the folder you choose.

    • @adrianfels2985
      @adrianfels2985 4 months ago

      @goshniiAI Thank you very much. I've seen it now but was so clueless at first :D

    • @goshniiAI
      @goshniiAI  4 months ago

      @adrianfels2985 You're welcome! I am glad you figured it out. Happy animating!

  • @YOUTUBECOUSPOLICY
    @YOUTUBECOUSPOLICY 7 months ago

    I have a Core i7 3770, 16 GB RAM, and a GT 1030. Will this work on my system? If not, which GPU is the minimum required for using ComfyUI? 🙂

    • @goshniiAI
      @goshniiAI  7 months ago +3

      Your setup should handle ComfyUI, but for smoother performance, particularly with larger projects, I'd consider upgrading your GPU to something like an NVIDIA GTX 1660 or higher; this would improve performance. I hope this helps!

    • @EvolGamor
      @EvolGamor 3 months ago +1

      Get a 3060 12 GB or a 4060 Ti 16 GB.

  • @nirdeshshrestha9056
    @nirdeshshrestha9056 9 months ago

    Hey man, how do I load a LoRA in this workflow?

    • @goshniiAI
      @goshniiAI  9 months ago

      This example shows you how to use a LoRA, as a guide: comfyanonymous.github.io/ComfyUI_examples/lora/

  • @Dwoz_Bgmi
    @Dwoz_Bgmi 9 months ago

    Hey, please help! The lineart model is not showing in the link.

    • @goshniiAI
      @goshniiAI  9 months ago

      Thank you for your observation; the link has been updated. Use the safetensors version.

  • @arkelss4
    @arkelss4 10 months ago

    Can you upload the workflow in a simpler way? The Google Doc is not easy to use, even though it probably is (I'm guessing click and drop).

    • @goshniiAI
      @goshniiAI  10 months ago

      You are correct; simply drag and drop the file into ComfyUI on an empty canvas after downloading. You can also use this alternative link: tinyurl.com/msbe8fb3

    • @arkelss4
      @arkelss4 10 months ago

      @goshniiAI Thank you. I believe everything will be outdated in a couple of months, though; even ComfyUI is destined or fated to be overtaken by a new UI.

    • @goshniiAI
      @goshniiAI  10 months ago

      @arkelss4 Thank you for your feedback! Being open to change and staying adaptable is essential.

  • @zeta.lifestyles
    @zeta.lifestyles 9 months ago

    Hey, my text box is undefined. Is it about ControlNet?

    • @goshniiAI
      @goshniiAI  9 months ago

      Hello there, can you clarify the issue you're having? I'm having trouble understanding it.

    • @zeta.lifestyles
      @zeta.lifestyles 9 months ago

      @goshniiAI When I load your .json file into the ComfyUI page, the TEXT box is undefined. I fixed all the other missing boxes, but not the text box :(

    • @zeta.lifestyles
      @zeta.lifestyles 9 months ago

      @goshniiAI And anyway, thanks. I watched 100 videos and you are the one teaching so simply and well. Best wishes.

    • @goshniiAI
      @goshniiAI  9 months ago +1

      @zeta.lifestyles I appreciate your point of view and opinion; it means a lot.

  • @premnathrajendran1364
    @premnathrajendran1364 9 months ago

    May I know the rendering time and required specs?

    • @goshniiAI
      @goshniiAI  9 months ago

      Using an NVIDIA RTX 3060, the rendering took a few hours.

  • @IshanJaiswal26
    @IshanJaiswal26 4 months ago

    I get an error with these nodes:
    CheckpointLoaderSimpleWithNoiseSelect
    ADE_AnimateDiffLoaderWithContext
    ADE_AnimateDiffCombine
    ADE_AnimateDiffUniformContextOptions

    • @goshniiAI
      @goshniiAI  3 months ago

      Hello there! You can re-select your models in each node that shows the error. It is possible that the workflow's model paths differ from yours.

  • @Trending_editzs
    @Trending_editzs 8 months ago

    Is there any way I can sort out my issue? 😢

    • @goshniiAI
      @goshniiAI  8 months ago

      Could you please provide more details about the issue you're facing?

    • @Trending_editzs
      @Trending_editzs 8 months ago

      @goshniiAI It doesn't move to the last step; it makes it to the second process of turning images into black sketches and stops there.

    • @goshniiAI
      @goshniiAI  8 months ago

      @Trending_editzs Double-check your workflow settings and parameters to confirm they are accurate. A minor error can sometimes cause the entire operation to fail.
      - Heavy processing operations can put a burden on resources, so monitor your GPU and your RAM.
      - Resize your video to match the same batch size output while keeping the frame size small; the default approach by Inner Reflections upscales the original video.
      I hope any of these help. Don't give up!

  • @Bitcoin_Baron
    @Bitcoin_Baron 5 months ago

    What is the VAE and what should it be used for?
    What is the Animate Path node doing? How do we know which one of the models "is necessary"?
    This is unfollowable because you aren't covering the basics.

    • @goshniiAI
      @goshniiAI  5 months ago

      Hello there, I appreciate your feedback about covering the basics. I'll make sure to include more foundational explanations in future videos to help everyone follow along more easily.
      The VAE helps compress and decompress image data, ensuring better quality and detail in the generated images.
      The Animate Path node is used to create smooth transitions and animations between different frames.
      How do we know which models are necessary? Some models are better suited for certain types of animations or image styles; you can refer to the model documentation for specific use cases, or try different models and see which one gives you the desired result.
      I hope this was helpful. Keep experimenting!

  • @david_ce
    @david_ce 9 months ago

    Hold on bro, are you Nigerian?!

    • @goshniiAI
      @goshniiAI  9 months ago

      I am actually of Ghanaian descent, but I believe both countries have similarities.

  • @PeterZ-l2w
    @PeterZ-l2w 9 months ago

    Hey, the link points to thibaud/controlnet-sd21, but I see in the video you're using SD15. Is that why I'm getting "RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x768 and 1024x320)"?

    • @goshniiAI
      @goshniiAI  9 months ago +1

      Please navigate to the Manager in ComfyUI and select "Install Models". Search for the model name to find the SD15 models as well.