LivePortrait In ComfyUI - A Hedra AI Alike Talking Avatar In Local PC

  • Опубликовано: 29 авг 2024
  • We are diving into the use of LivePortrait in ComfyUI, an AI model that brings your photos to life with realistic movements. With LivePortrait, you can create animated portraits that resemble those magical moving pictures from the Harry Potter series.
    About LivePortrait : thefuturethink...
    *** Update : LivePortrait Video2Video Method Here : • LivePortrait In ComfyU...
    In this tutorial, I'll explain how LivePortrait works, utilizing implicit keypoints to understand and animate important parts of your face. We'll explore how it learns from real videos, allowing your photo to mimic the motions of the person in the video.
    I'll guide you through the installation process of the LivePortrait custom node in ComfyUI, including downloading the safetensors model and the non-commercial face recognition library, InsightFace. We'll also discuss the necessary dependencies and components for this custom node project.
    Next, I'll demonstrate how to set up and configure the LivePortrait custom node, showcasing different examples and settings. We'll explore retargeting options for eyes and lips, and the impact they have on the animation. I'll share tips on achieving more natural and detailed facial movements by fine-tuning the settings.
    LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control
    liveportrait.g...
    github.com/Kwa...
    github.com/kij...
    If you like tutorials like this, you can support our work on Patreon:
    / aifuturetech
    Discord : / discord
    #hedra #LivePortrait #talkingavatar
  • Science
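The installation described in the video follows the usual ComfyUI custom-node pattern. A minimal sketch is below; the repository URL, folder names, and the assumption that `pip` points at ComfyUI's own Python environment are all mine, not confirmed by the video (portable ComfyUI builds ship their own interpreter):

```shell
# Clone the LivePortrait custom node into ComfyUI's custom_nodes folder.
# (Repo URL assumed; the video's link to Kijai's GitHub is truncated.)
cd ComfyUI/custom_nodes
git clone https://github.com/kijai/ComfyUI-LivePortraitKJ.git

# Install the node's Python dependencies, including the non-commercial
# InsightFace face-recognition library, into the Python that runs ComfyUI.
pip install -r ComfyUI-LivePortraitKJ/requirements.txt
pip install insightface onnxruntime

# Restart ComfyUI afterwards so the new nodes are registered.
```

If the nodes still show as red ("node types were not found"), the usual cause is that the dependencies were installed into a different Python than the one ComfyUI runs.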

Comments • 124

  • @dannii_L · A month ago +3

    That girl doing the hyperactive facial movements has a very particular talent set indeed. It's actually crazy how nicely it translates to the model, though, and the model's smile gives it an almost anime-like quality alongside the over-emphasised faces that anime characters love to pull.

    • @TheFutureThinker · A month ago

      I will try some anime images and test how smooth it can be. :)

    • @knightride9635 · A month ago

      Do you think it can work on a 6 GB GPU?

    • @TheFutureThinker · A month ago +1

      @@knightride9635 Nah... don't put yourself in a difficult situation. You won't be in the mood to run AI with it.

    • @carterd2870 · A month ago

      @@knightride9635 Can't.

  • @chunlingjohnnyliu2889 · A month ago +1

    An image-to-image implementation of this could really help create more diverse facial expressions for all of the AI generations; definitely testing it out.

  • @davimak4671 · A month ago +2

    Bro, a good continuation would be to explain how to use it vid-to-vid. Can you explain this? I saw examples of guys doing vid-to-vid using LivePortrait and it is awesome.

    • @TheFutureThinker · A month ago

      You're just in time 😉 check out the Community tab on this channel.

  • @MrDebranjandutta · A month ago

    Man you are such a blessing! God bless you mate.

  • @WiLDeveD · A month ago

    You're awesome, man! Awesome... The best Comfy channel on YouTube is this channel. Thanks for creating such useful content 👍💯

  • @user-gk7ty6yg5y · A month ago +1

    Wow. Thank you for the video. Can you do a tutorial on how to use Wunjo open source?

  • @zimnelredoran9985 · A month ago +1

    Hi there, thanks a lot for this, I've been tinkering with it for hours. I just hit an issue: when I bypass the Image Concatenate Multi node, the output that I get from Video Combine is not the source image animated; I just get the source video again.

  • @crazyleafdesignweb · A month ago

    Great one 👍 I have been playing around with it this afternoon 😆

  • @JaysterJayster · A month ago +1

    I always have an error on my first step 😂 that first custom node you mentioned shows 2 node conflicts. I'm new to this, so I don't know what to do next.

  • @thays182 · 8 hours ago

    Is the Gradio interface able to produce the same results as the ComfyUI interface? Is there any difference regarding speed of generation or quality of output? Or are the Gradio version and ComfyUI only a user-interface difference?

  • @WiLDeveD · A month ago +2

    This node could be used in many AI videos. Education, fun, news, ...

    • @TheFutureThinker · A month ago

      I've seen a lot of AI software introductory channels telling people to use software like this to create a faceless channel, doing news or sports reports. Haha, well, it can be creative.

    • @WiLDeveD · A month ago

      @@TheFutureThinker ❤💯

    • @WiLDeveD · A month ago

      @@TheFutureThinker And some YouTubers already use it for faceless channels, including you 😊😊😊

  • @mohamedrizwan.m7755 · A month ago +1

    TypeError: LivePortraitProcess.process() missing 1 required positional argument: 'crop_info'
    This is what I end up with, please help.

  • @justaguy-69 · 29 days ago

    Can you do a video on changing the mouth movement in a video?
    Like, input a video, then change the video to say what you want. This would be good in a standalone open-source format.
    Also something that works on AMD, not just Nvidia.

  • @kopper1956 · A month ago

    You only need the Resize and Concat nodes if you want to see the reference and output videos side by side.

  • @LuckRenewal · 28 days ago

    This is good! Thanks for sharing!

  • @cerlinealviora · 17 days ago

    Hi, thank you so much. How do I make a video longer than 8 seconds? My input video is 13 sec but the output is just 8 sec. Why is that?

  • @DeFirm- · A month ago

    And I'm here, waiting for the A1111 version :)))

  • @UnsolvedMystery51 · A month ago

    This is awesome! Thank you!

  • @DaveTheAIMad · A month ago

    One thing with Hedra, though, is you can create a talking avatar with just a single image and a wav/mp3 file.
    I wish we could do that in ComfyUI, but everything I have tried is either broken or needs a video to guide it.
    The insane thing is that over a decade ago I had a tool called CrazyTalk that did what Hedra does... maybe not as well, and you had to mask the avatar and set the eyes and mouth yourself... but it did it.

    • @TheFutureThinker · A month ago

      Oh yes, that reminds me of some face-swap-like software from before.

    • @DaveTheAIMad · A month ago

      @TheFutureThinker Yeah, there used to be tons of mobile phone ones... which worked... to a point lol. We are spoiled by what we have now; just occasionally you find something we just can't do.
      Sound + single image to talking avatar being one.
      Local music generation with lyrics would be the other major one. Suno and Udio (though I found that for my needs Udio is really bad) have cornered the market there and are not about to release the weights anytime soon.

    • @TheFutureThinker · A month ago

      @@DaveTheAIMad And SFX is also one.

    • @DaveTheAIMad · A month ago

      @@TheFutureThinker I think my other reply got deleted; it may have been the GitHub link and the fact that I am on a new channel. I found a possible one for avatar generation from sound and a single image, called EchoMimic by BadToBest. I'm at work so I haven't tested it, and since the models it has you download are not safetensors, it has me a bit nervous.

  • @BCCBiz-dc5tg · A month ago +1

    Awesome!

  • @TheGoodContent37 · A month ago

    What would be the best hardware specs to use these tools?

  • @DJVARAO · A month ago +1

    Awesome. Is there any similar node for static images? I want to copy the exact expression from a picture.

    • @TheFutureThinker · A month ago

      Well, you can change the Load Video node to Load Image, I think, because in theory it handles image frames as the face pose.

  • @SeanietheSpaceman · A month ago +1

    I wonder how long until we can use this with a live-feed webcam.

    • @TheFutureThinker · A month ago +2

      I think this can be done: there's a live-cam node in Comfy. Connect that one as the reference video, put a small batch of images into the LivePortraitProcess node, and trigger the rendering every second or so in ComfyUI (just like SDXL Turbo does). In theory, this is doable.
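The batching idea in this reply, grouping a live frame stream into small batches so each batch can be rendered once per second, can be sketched in plain Python. This is only an illustration of the grouping logic, not ComfyUI code; the frame source and batch size are placeholders:

```python
from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def batched(frames: Iterable[T], batch_size: int) -> Iterator[List[T]]:
    """Group an incoming frame stream into fixed-size batches, so each
    batch can be handed to the animation step (e.g. once per second)."""
    batch: List[T] = []
    for frame in frames:
        batch.append(frame)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # flush the trailing partial batch
        yield batch

# With a 12 fps capture and one render per second, batch_size would be 12.
stream = range(30)  # stand-in for captured webcam frames
batches = list(batched(stream, 12))
# batches -> [[0..11], [12..23], [24..29]]
```

In a real live setup the delay the author mentions comes from exactly this buffering: a batch can only be rendered after all of its frames have been captured.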

  • @ChandlerLawsonPlays · A month ago

    Can this also be added to Automatic1111?

  • @tracingzimpossible · A month ago

    I tried this and it's good, but as shown on the source GitHub page it works on video too, and every time I tried it on a video, after processing for a few minutes it gives an error. Any idea how to solve this issue?

  • @CLZ1026 · A month ago +1

    I keep getting this result; does anyone have the same error?
    When loading the graph, the following node types were not found:
    DownloadAndLoadLivePortraitModels
    LivePortraitProcess
    Nodes that have failed to load will show as red on the graph.

    • @CLZ1026 · A month ago

      Basically, ComfyUI cannot recognize this custom node.

    • @Vashthareaper · A month ago

      Same, any suggestions?

    • @liangmen · A month ago

      Same, haven't found a solution yet.

    • @Vashthareaper · A month ago

      @@liangmen You need to install insightface and that should fix it.

  • @kalakala4803 · A month ago +2

    😆😆 Let's go make some fun faces.

  • @erdbeerbus · A month ago

    Great technique... but is there a node in Comfy to set a live video as the source... on a Mac? Thanks in advance!

    • @TheFutureThinker · A month ago +1

      Yes, I forgot the GitHub name of the live-cam Comfy node, but it's possible to do so.

  • @batvanio · A month ago

    Isn't it available as a standalone app? I want to use it, but it seems too complicated for me.

    • @carterd2870 · A month ago +1

      It can take a JSON file as input, but it's non-commercial in nature, so it can't be used commercially.

    • @batvanio · A month ago

      @@carterd2870 This is very unfortunate. I'd still like it to be something personal and not run in the cloud.

    • @TheFutureThinker · A month ago +1

      Standalone? Yes, you can use their GitHub project. But if even this one is too complicated for you, then open-source GitHub projects are not for you. You should wait for an app and buy it later.

  • @AnansitheSpider8 · 6 days ago

    These AI programs need to be programmed to make the entire body move, not just the head.

  • @d3nshirenji · A month ago

    Can you, and will you, implement emotion speech controls for this?

    • @TheFutureThinker · A month ago

      It just depends on the driving video and the audio recording of your speech.

    • @d3nshirenji · A month ago

      @@TheFutureThinker My bad, I had multiple YouTube windows open and accidentally wrote this comment here although it was meant for another video (so it really doesn't make sense here at all...). Cool stuff though, and thanks for replying :)

    • @TheFutureThinker · A month ago

      @@d3nshirenji No worries :) have fun.

  • @heymyfriend4445 · A month ago

    My generated output video only runs for 1 second. Is there any option to increase the generated video length?

  • @darkreader01 · A month ago

    Can you please make a video on how to run it in Colab?

  • @Syndi_AI · A month ago

    Forgive me, this tutorial appears to jump straight into ComfyUI. How do I get that?! It's like the first step is missing. I'm very interested in this, but can anyone link to the missing part, the first part of this tutorial, please?

    • @crazyleafdesignweb · A month ago +1

      The link to the GitHub is in the video description.
      And how to get that? It's already shown here at 01:40.

    • @Syndi_AI · A month ago

      @@crazyleafdesignweb First would be what ComfyUI is and how to get it, not showing the custom node already open in what I'm assuming is ComfyUI, but I don't know.

  • @Official-PRIMZ · A month ago

    What?! 😮 Damn, that's very good.

    • @TheFutureThinker · A month ago +1

      More to come

    • @Official-PRIMZ · A month ago

      @@TheFutureThinker Can't wait 💪😎

    • @TheFutureThinker · A month ago

      Combining an AI video generator, this one, MimicMotion, and some other new models, I think it's possible to make a movie. I mean, not the camera-panning style.

    • @Official-PRIMZ · A month ago

      @@TheFutureThinker 😃 Yes, that would be awesome and for sure possible. AI stuff is getting better so fast.

    • @tracingzimpossible · A month ago

      @@TheFutureThinker Well, I work in film VFX, and I believe that with a mix of traditional methods and AI it's doable (I mean a movie with camera motion). For the last month or two I have been researching a workaround because I want to start something... can you help, Ben? It would make the whole process much faster. I saw a lot of potential in MimicMotion and LivePortrait, but I tried v2v with LivePortrait and it's failing.

  • @Avalon19511 · A month ago

    How did you get just the target image?

  • @Vashthareaper · A month ago

    Kijai just updated the node to run on CPU, for anyone having problems with ONNX like myself.

    • @TheFutureThinker · A month ago

      Ha! Nice. How is the performance when running on CPU? Has anyone tried?

    • @Vashthareaper · A month ago +1

      @@TheFutureThinker It's pretty good, tbh. Just as fast as GPU, I'd say.

    • @TheFutureThinker · A month ago

      @@Vashthareaper Nice. I am trying the v2v method in LivePortrait. It looks like the Comfy custom node doesn't have this feature implemented; the sampler supports a single image only. I have to mod the code.

    • @tracingzimpossible · A month ago

      @@TheFutureThinker I did try CPU mode too, although I didn't compare the timing with GPU; I use GPU mode.

    • @tracingzimpossible · A month ago

      @@TheFutureThinker Ah, here I got the answer. Please do let me know if you get a workaround for v2v.

  • @reaperhammer · A month ago

    This is pretty funny 😂

    • @TheFutureThinker · A month ago

      😂😂😂 Beautiful characters can have funny moments.

  • @user-zl9gl9cy3l · A month ago

    @TheFutureThinker Hello bro, is it possible to use it in real time, using a webcam?

    • @TheFutureThinker · A month ago

      Yes, there's a live-cam custom node in ComfyUI. Maybe you can use that one; I forgot its name.
      Then generate a small batch of image frames every second.
      Of course, the generated result will lag slightly behind the live-cam recording.

    • @user-zl9gl9cy3l · A month ago

      @@TheFutureThinker How do I download the custom node? I could not find it.

  • @quotesspace1713 · A month ago

    Can we use it commercially??

  • @littledovecitydust · A month ago

    Does this work on a MacBook?

  • @musicinthemachine · 18 days ago

    This is really not at all like Hedra. The main thing Hedra does is animate a photo and lip-sync it with text-to-audio using an AI voice. This does none of that, except animate a face, and it needs an input video even to do that.

  • @zikwin · A month ago

    Can it do video-to-video too?

  • @boyboy168 · A month ago

    Looks smooth, but it would be better to have an online version...

  • @seanknowles7987 · A month ago

    Is the data of your face being retained by these companies? Also, is this possible outside of ComfyUI?

    • @TheFutureThinker · A month ago

      The AI model can be run outside of ComfyUI. The custom node is like an extension pack for ComfyUI to run this AI.

    • @seanknowles7987 · A month ago

      @@TheFutureThinker Will you be doing a tutorial on running this outside of Comfy? Or do you have something that helps explain it? Also, what about the part about companies retaining your data, such as speech and face? Thank you for your earlier reply, btw.

    • @TheFutureThinker · A month ago +1

      @@seanknowles7987 No, I don't. I sense that you are trying to implement this in the backend of an app? If you want to build an app around this, you should hire someone to do it. There's nothing free, as Jack Ma said. And InsightFace can sue you until you're bankrupt if you use their library in your app backend. Just beware of that.

  • @luisellagirasole7909 · A month ago

    Hello and thanks, will you publish a workflow on Patreon? Thanks!

    • @TheFutureThinker · A month ago +1

      The workflow is included in the custom node pack. This AI doesn't need much customization.
      03:06 here

    • @luisellagirasole7909 · A month ago

      @@TheFutureThinker Yes, but you made some changes and I wasn't able to do them :)

    • @TheFutureThinker · A month ago +1

      Okay, I will create a new one there.

  • @Vanced2Dua · A month ago

    Awesome, bro... I have already subscribed and liked.
    May I ask for the workflow???

  • @Instant_Nerf · A month ago

    What about the rest of the body?

    • @TheFutureThinker · A month ago

      You need this: ruclips.net/video/q816HyZiw18/видео.html

  • @raphaild279 · A month ago

    Is this in real time?

  • @ImAlecPonce · A month ago

    Animations like anime, where they just animate the eyes and mouth XD

  • @TsujioDragan-zz3bj · A month ago

    What are you saying? The example with Jack Ma doesn't match AT ALL! WTF

    • @TheFutureThinker · A month ago +1

      04:20 - no drama here. Or he will find you.
      That was a settings experiment where I compared the retargeting option on/off.

    • @TsujioDragan-zz3bj · A month ago

      @@TheFutureThinker I see your point. In context it didn't really translate that well, because I was looking for the transformation to somewhat match the phonemes, but you're right if we consider timing only. Also, Jack Ma has already gone "missing" once; it can happen again 😂

    • @TheFutureThinker · A month ago

      @@TsujioDragan-zz3bj Well... he is somewhere in some place, from a legendary entrepreneur to a mystery person now. XD

  • @mobinpourabbas · A month ago

    AAAAAAAAA GIVE ME WORKFLOW

  • @ismgroov4094 · A month ago

    Workflow plz, sir.

    • @TheFutureThinker · A month ago +1

      It's mentioned in the video, sir. If you pay attention and take in the information, instead of just clicking and downloading like a zombie, you will be able to find it, sir.

    • @ismgroov4094 · A month ago

      @@TheFutureThinker sorry sir!

  • @manolomaru · A month ago

    ✨👌😎😮😮😮😎👍✨

  • @realhamzabarami · A month ago

    I know this is unrelated, but give the Quran a read.

  • @RhapsHayden · A month ago

    Going to try it. Making a lookbook for YT and using this randomly to scare the crap out of people.

    • @TheFutureThinker · A month ago +1

      LOL haha yup :)

    • @RhapsHayden · A month ago

      @@TheFutureThinker Oh yeah! Testing it now and, uh, it's fun :) Really low VRAM usage so far, but I'm not going with high frames atm.

    • @TheFutureThinker · A month ago

      Yup, it can generate 30 seconds in one queue from my test. I am not sure if it can handle a longer video.

  • @angloland4539 · A month ago