ReCreator workflow for ComfyUI

  • Published: 3 Oct 2024
  • ReCreator workflow for ComfyUI!!
    Download workflow:
    drive.google.c...

Comments • 50

  • @AI.Absurdity · 3 months ago · +5

    This will be fun to experiment with. Thank you!

  • @sensitivehedonist · 3 months ago · +3

    This is great, but I'm looking for ways to restore without changing the faces so much. This is a huge issue everywhere.

    • @dadadies · 2 months ago · +1

      To keep closer to the original, you can lower the sampler's denoise to around 0.2 so that details like the face don't change too much, or raise the ControlNet strength values to preserve the original image/face as much as possible. Another option is to use IPAdapter with a face you like to influence the face (controlling the structure, ethnicity, colors, etc.); for example, you can feed in the exact same image to force the same face as the original.
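
A minimal sketch of the tweaks described above, assuming the workflow has been exported with ComfyUI's "Save (API Format)" option and a local server is running at the default address. The file name and the node IDs ("3" for the KSampler, "12" for Apply ControlNet) are placeholders; use the ones from your own export.

```python
# Nudge KSampler denoise and ControlNet strength in an API-format workflow,
# then queue it against a local ComfyUI instance.
import json
import urllib.request

with open("recreator_api.json") as f:      # hypothetical export filename
    wf = json.load(f)

wf["3"]["inputs"]["denoise"] = 0.2         # low denoise keeps the original face
wf["12"]["inputs"]["strength"] = 1.0       # stronger ControlNet holds the structure

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",        # ComfyUI's default listen address
    data=json.dumps({"prompt": wf}).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```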

  • @neuraldee · 3 months ago · +3

    Thank you for sharing this workflow with us!

    • @AIFuzz59 · 2 months ago · +1

      Glad you like it!

  • @magneticanimalism7419 · 3 months ago · +3

    Quick tip for Abigail: If you use the middle mouse button (press the scroll wheel) instead of the left mouse button to move around the canvas, you won't move any nodes around. Took me forever to change to this and I still catch myself panning with the left button sometimes.

  • @marcoantonionunezcosinga7828 · 2 months ago · +1

    Greetings, thanks for the video. A few details: the first girl's eyes point elsewhere, and the second girl's face is not very consistent with the original. I also wanted to ask whether so many nodes were necessary. I would love to see more workflows. You could make one similar to "Kling AI".

    • @AIFuzz59 · 2 months ago · +1

      @marcoantonionunezcosinga7828 Yes, you are right, the eyes are always looking somewhere else. Maybe prompting would help with that, not sure. We will try to compact the workflow a bit to cut down on the number of nodes.

  • @Kryptonic83 · 3 months ago · +1

    Interesting workflow, thanks for sharing. The first time I ran it, it got stuck during VAE Decode after the upscale and was showing 99% VRAM on my 4090, so I had to stop and restart ComfyUI. I wondered if it was because of the two Load Checkpoint nodes, so I bypassed the 2nd one and just used reroutes to connect the upscale section to the 1st Load Checkpoint, and it ran through OK that time. Still a pretty lengthy process, but fairly solid results! (A sketch of that rewiring follows after this thread.)

    • @AIFuzz59 · 2 months ago

      Yes! It is a beast on your system.
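
A minimal sketch of the single-checkpoint fix described in this thread, again assuming an API-format export; the loader node IDs ("4" and "38") are placeholders for the first and second Load Checkpoint nodes in your own graph.

```python
# Reroute every link that reads from the second CheckpointLoaderSimple to the
# first one, so only a single checkpoint has to sit in VRAM. In API-format
# JSON, a link is written as ["source_node_id", output_index], and both
# loaders expose the same outputs (MODEL=0, CLIP=1, VAE=2).
import json

with open("recreator_api.json") as f:        # hypothetical export filename
    wf = json.load(f)

FIRST_CKPT, SECOND_CKPT = "4", "38"          # placeholder node IDs

for node in wf.values():
    for value in node["inputs"].values():
        if isinstance(value, list) and value and value[0] == SECOND_CKPT:
            value[0] = FIRST_CKPT            # point the link at loader #1

del wf[SECOND_CKPT]                          # the second loader is now unused

with open("recreator_single_ckpt.json", "w") as f:
    json.dump(wf, f, indent=2)
```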

  • @ImAlecPonce · 3 months ago · +1

    Looks cool

    • @AIFuzz59 · 2 months ago

      Thanks! You are cool!

  • @nodewizard · 3 months ago · +1

    Excellent vid. Could you do a deep dive into Hallo? I find that to be a beast to set up with its millions of dependencies and tentacle-like extra nodes.

  • @stefanvozd · 3 months ago · +2

    amazing content, subscribed!

  • @aiterbaru · 3 months ago · +1

    Error occurred when executing KSampler:
    'NoneType' object has no attribute 'shape'

    • @AIFuzz59 · 2 months ago

      So the error is happening with one of the nodes going into the KSampler. We would check all the input nodes and see if there are any missing values.

  • @SouthbayCreations · 3 months ago · +1

    Would love to try this workflow but for the life of me the recreator node won't load. Getting these errors, any ideas?
    When loading the graph, the following node types were not found:
    ReActorFaceSwap
    LayerColor: Brightness Contrast
    Nodes that have failed to load will show as red on the graph.

    • @AIFuzz59 · 2 months ago

      So we would make sure ReActor face swap is installed and is the latest version from their GitHub. Also update ComfyUI and its dependencies. For the layer brightness and contrast, you can bypass that node for now. (An install sketch follows at the end of this thread.)

    • @SouthbayCreations · 2 months ago

      @AIFuzz59 Tried that and still the same error with "LayerColor: Brightness Contrast".
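
A minimal install sketch for the two custom-node packs that usually provide the missing node types. The repository URLs are my assumption about where "ReActorFaceSwap" and "LayerColor: Brightness Contrast" normally come from; verify them, adjust the custom_nodes path to your install, and restart ComfyUI afterwards.

```python
# Clone (or update) the custom-node packs into ComfyUI's custom_nodes folder
# and install their Python dependencies.
import subprocess
from pathlib import Path

CUSTOM_NODES = Path("ComfyUI/custom_nodes")               # adjust to your install
REPOS = [
    "https://github.com/Gourieff/comfyui-reactor-node",   # assumed source of ReActorFaceSwap
    "https://github.com/chflame163/ComfyUI_LayerStyle",   # assumed source of LayerColor nodes
]

for url in REPOS:
    dest = CUSTOM_NODES / url.rsplit("/", 1)[-1]
    if dest.exists():
        subprocess.run(["git", "-C", str(dest), "pull"], check=True)      # update
    else:
        subprocess.run(["git", "clone", url, str(dest)], check=True)      # fresh install
    reqs = dest / "requirements.txt"
    if reqs.exists():
        subprocess.run(["pip", "install", "-r", str(reqs)], check=True)   # node dependencies
```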

  • @WhySoBroke · 3 months ago · +1

    Many thanks for the great workflow!! Have you tried comparing it with SUPIR?

    • @AIFuzz59 · 3 months ago · +1

      No, we haven't. We used to run the SUPIR workflow at the end of all of our workflows, but once we created our own, we just use that.

    • @WhySoBroke · 3 months ago

      @AIFuzz59 Thanks for the reply. What are the best adjustments to make it work with SDXL? I tried, and the image output is noisy and distorted. Thanks in advance for the guidance.

  • @aiterbaru · 2 months ago

    Error occurred when executing KSampler:
    'ModuleList' object has no attribute '1'

  • @SheRoMan · 2 months ago

    How do you make the ComfyUI background black?

  • @AnOldMansView · 2 months ago

    The only issue I'm having is that sdxl.safetensor is missing?

  • @johnriperti3127 · 3 months ago · +1

    I love your voice, it is soothing

    • @AIFuzz59 · 2 months ago

      Thank you baby 😊

  • @barepixels · 3 months ago · +1

    Make one for photo to painting

  • @aonozan · 2 months ago

    Lol, I am learning how to upscale and enhance images and this is a great vid, but the narrator's comments crack me up. No, I'm not a Swiftie.

  • @voxyloids8723 · 3 months ago · +1

    I will try to add LLaVA or some LLM to detect age and use it with IPAdapter and embeddings... This WF is a great start to work with, thank you!

    • @AIFuzz59 · 2 months ago

      Thanks! Let us know how it works out!

  • @magneticanimalism7419 · 3 months ago · +1

    Awesome workflow as always, the only thing I think is missing is a style and subject selector to make it even simpler.

    • @AIFuzz59 · 2 months ago

      Great suggestion!

  • @MadrissS · 3 months ago · +1

    Thanks for this amazing workflow. Do you think it can be adapted for old video? Would it still work by adding AnimateDiff?

    • @AIFuzz59 · 3 months ago

      It should work! It may take time, as it will process frame by frame.

  • @slowrobot8369 · 3 months ago · +1

    Try multiple takes on the voiceover.
    Congrats on the second example, you added 20 years... for some reason. On the next example with children, you added cleavage; questionable move, but OK.

    • @AIFuzz59 · 3 months ago

      Thanks for your support 😎

  • @정보리-k9t · 3 months ago

    Hello, I am a Korean learning ComfyUI.
    I downloaded the json file and tried running it.
    However, I'm running into a problem with the DWPose Estimator node.
    I copied the error message and asked ChatGPT.
    I don't understand everything, but roughly the problem seems to be that the DWPose Estimator's bbox_detector model and pose_estimator model are missing.
    Where can I get the models for these two widgets?

    • @정보리-k9t · 3 months ago

      And please understand if my English is awkward.
      I rely on Google Translator.

    • @ismgroov4094 · 3 months ago · +1

      Hang in there, Bori! Your English is good, so don't be discouraged!

    • @AIFuzz59 · 2 months ago

      Your English is very good! Are you missing the models?

    • @정보리-k9t · 2 months ago

      @AIFuzz59 There is a model for the basic OpenPose.
      It seems to me that there is no dedicated model for DWPose.
      Rather than it being missing, it seems like it was never there in the first place. (See the download sketch at the end of this thread.)

    • @ismgroov4094 · 2 months ago

      @정보리-k9t Hang in there, Bori! Cheer up!!!!!!!!
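
A minimal download sketch for the two files the DWPose Estimator asks for. The file names, the yzd-v/DWPose Hugging Face repo, and the comfyui_controlnet_aux cache path are all my assumptions about the usual setup; the node normally fetches these itself on first run, so treat this only as a manual fallback.

```python
# Fetch the DWPose bbox detector and pose estimator ONNX files from Hugging Face
# and copy them to where the comfyui_controlnet_aux DWPose node looks for them.
import shutil
from pathlib import Path

from huggingface_hub import hf_hub_download   # pip install huggingface_hub

DEST = Path("ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts/yzd-v/DWPose")
DEST.mkdir(parents=True, exist_ok=True)

for filename in ("yolox_l.onnx", "dw-ll_ucoco_384.onnx"):
    cached = hf_hub_download(repo_id="yzd-v/DWPose", filename=filename)
    shutil.copy(cached, DEST / filename)      # place next to the node's other checkpoints
    print("fetched", filename)
```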

  • @InfoWiser · 3 months ago · +1

    What if I just want to enhance the overall features (the background, texture, plants, etc.) in the image without affecting the facial structure? Can you create a tutorial on that? Thank you so very much!

  • @profitsmimetiques8682 · 2 months ago

    Hi! I wanted to know if you have any idea how "zia fit" on I.ns.sta is made?
    It seems that the base image is an existing one, but then they maybe use a 3D posed character + OpenPose + a LoRA for the body + a LoRA for the face, but something is off.