Control AI Actors with this Awesome Workflow (LivePortrait Tutorial)

Comments • 156

  • @fraterseamus
    @fraterseamus 3 months ago +12

    Best tutorial on Live Portrait I've seen yet, thanks for posting.

    • @nikgrid
      @nikgrid 3 months ago

      @@DrZaious Agreed. You can do this for free using ComfyUI and the Live Portrait workflow.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      Wow, thanks!

  • @IAmVo
    @IAmVo 3 months ago +2

    THANK YOU!!! When you opened Comfy I almost clicked away. It's so, so complex that at that point I just wanna hire people to collab with. But you broke it down SO well, I understood it, and it made this so much more accessible. Thanks!

  • @animateclay
    @animateclay 3 months ago +3

    Thanks for the tutorial. It's the simplest solution out there for animating faces that I've seen!

  • @ZedMagnet
    @ZedMagnet 2 months ago +1

    I respect the flexibility of ComfyUI, but only super geeks are going to even attempt to grok it. Higher-level tools (even if Comfy is under the hood) will be needed for widespread usage, IMO.
    Awesome presentation. You are top notch, sir.

  • @heartshinemusic
    @heartshinemusic 2 months ago

    Thanks for the tutorial. These lip-sync features are so important for advancing control over consistent characters. I hope more AI video platforms will implement this under the hood, without all the incredibly complex Comfy parameters.

  • @AEFox
    @AEFox 3 months ago +5

    For better results, I think it's better not to put anything like "talking" or "saying" (mouth/lip movements) in the prompt for the base video, so the base face isn't making any expression. That way the driving video modifies a blank default/idle expression of the face, and the modification happens to the same face rather than a face that is already moving and changing over time.

    • @Aurelius511
      @Aurelius511 3 months ago

      Did you have a stroke while writing this comment?

    • @curiousrefuge
      @curiousrefuge  3 months ago

      True!

  • @Realidyne
    @Realidyne 3 months ago +4

    So if you wanted to sync up with audio, I'd imagine you'd have to slow the final output by 25%? If that's the case, you'd have to increase the fps by 25% (around 30fps) if your final output is 24fps with synced video?

    • @phu320
      @phu320 3 months ago

      I just went through a nightmare like this yesterday.

    • @EmilyNilsen
      @EmilyNilsen 3 months ago +1

      I found that as long as I had the same frame rate on each video, I didn't need to increase the speed of the driving video by 125% initially. Also, I got better results prompting for subtle head movements only and without prompting the character to speak into the camera.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      Would def take some finesse in the editor!

    • @logan_smith
      @logan_smith 2 months ago +1

      @@EmilyNilsen Yup. Grab the fps from the driving video via the Video Info VHS node and plug that into the Video Combine VHS node. You shouldn't have to do any speed changing. (A frame-rate matching sketch follows below.)
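
A minimal sketch of the frame-rate matching idea from the thread above, assuming ffprobe and ffmpeg are on PATH and that the driving take is conformed to the generated clip's frame rate before either is loaded into the workflow; the file names are placeholders, not assets from the tutorial.

    # conform_fps.py - match the driving video's frame rate to the clip it will drive.
    import subprocess

    def get_fps(path: str) -> float:
        """Read the nominal frame rate of the first video stream via ffprobe."""
        out = subprocess.run(
            ["ffprobe", "-v", "error", "-select_streams", "v:0",
             "-show_entries", "stream=r_frame_rate",
             "-of", "default=noprint_wrappers=1:nokey=1", path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        num, den = out.split("/")  # ffprobe prints a fraction, e.g. "30000/1001"
        return float(num) / float(den)

    def conform_fps(src: str, dst: str, fps: float) -> None:
        """Re-time the video stream to the target fps; audio is copied unchanged."""
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-filter:v", f"fps={fps}", "-c:a", "copy", dst],
            check=True,
        )

    if __name__ == "__main__":
        target = get_fps("generated_clip.mp4")  # placeholder: the AI-generated base clip
        print(f"Target frame rate: {target:.3f}")
        conform_fps("driving_take.mp4", "driving_take_conformed.mp4", target)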

  • @WattleburyWoods
    @WattleburyWoods 3 months ago +2

    That sweater on that background is ❤‍🔥

  • @Hadar360
    @Hadar360 3 months ago

    Glad that I found your channel and tutorials, great stuff

  • @DigitalAI_Francky
    @DigitalAI_Francky 3 months ago +1

    With the local version and an added upscale node, you should be able to get a higher resolution 😊

  • @jzwadlo
    @jzwadlo 3 months ago +1

    Would it not make more sense to also record the reference footage in front of a white wall or plain background vs. clutter in the background? Surely that makes a difference?

    • @curiousrefuge
      @curiousrefuge  3 months ago +1

      It would likely help! But not totally necessary

    • @jzwadlo
      @jzwadlo 3 months ago

      @@curiousrefuge fair enough! 🤝

  •  3 months ago +1

    Thanks for the info, some useful tips. Wondering how you get around videos that need lip syncing?
    Also, where can I buy a sweater like yours? :)

  • @Sean43322
    @Sean43322 3 months ago

    Thanks for the tutorial. You used a close-up image and the result was almost good and acceptable. My question is, can the same result be obtained with medium or wide shot images? Have you ever tried it?

    • @curiousrefuge
      @curiousrefuge  3 months ago

      We will have to do some testing! But close-ups always give the best results

  • @malusjoon
    @malusjoon 3 months ago +1

    I would love to see your studio... how you set it up and how you get the ring animation in the background! 🙂

    • @curiousrefuge
      @curiousrefuge  3 months ago

      Maybe we'll do a desk breakdown someday :)

  • @mitchdavis5159
    @mitchdavis5159 2 months ago

    Thank you, Curious Refuge!

  • @High-Tech-Geek
    @High-Tech-Geek 3 months ago +1

    Just to clarify if I've got this correct:
    1. If you're uploading a control video to animate an image, you can move your head all over the place and the resulting video will mimic those movements.
    2. But if you're uploading a control video to animate a video, you need to keep your head still, as it will only apply your face to the video character that is already moving their head.
    Is it correct to say the control video head movements will not affect the existing video character's head movements?

  • @Art-ifishl_Intelligence
    @Art-ifishl_Intelligence 3 months ago +2

    You wear the best sweaters lol

  • @LaurenceBricker
    @LaurenceBricker 3 months ago

    Thank you! Can you add how to deal with the actual audio in the performance?

    • @curiousrefuge
      @curiousrefuge  3 months ago

      You'll have to bring it into your editor :) (see the audio-muxing sketch below)
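
A minimal sketch of one way to lay the recorded dialogue back over the silent render outside the workflow, assuming ffmpeg is on PATH; the file names are placeholders, not assets from the tutorial.

    # mux_audio.py - attach the performance audio to the silent render.
    import subprocess

    def mux_audio(video: str, audio: str, out: str) -> None:
        """Copy the video stream untouched and encode the dialogue track alongside it."""
        subprocess.run(
            ["ffmpeg", "-y",
             "-i", video,                 # silent rendered clip
             "-i", audio,                 # dialogue recorded with the driving take
             "-map", "0:v:0", "-map", "1:a:0",
             "-c:v", "copy", "-c:a", "aac",
             "-shortest",                 # stop at the shorter of the two inputs
             out],
            check=True,
        )

    if __name__ == "__main__":
        mux_audio("render_silent.mp4", "performance_audio.wav", "render_with_audio.mp4")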

  • @TheJereld
    @TheJereld 3 months ago +3

    I like the new background!!!

  • @TheCrackingSpark
    @TheCrackingSpark 2 months ago

    Hey! Great video. I have an unrelated question, kind of an odd one. What's your background? Are you using a greenscreen? If so, why can't I see any green in your glasses? Is it not a green screen and just a straight-up TV screen? Your shot is lit very nicely, and it matches the background very well! I just had that question lol

    • @curiousrefuge
      @curiousrefuge  2 months ago

      It's a projector :) Glad you enjoy the look!

  • @julbombning4204
    @julbombning4204 3 months ago +2

    Hey, I want to buy your sweater asap, where can I buy it?

  • @JoJoAIventures
    @JoJoAIventures 3 months ago

    Thank you, ComfyUI is pretty much a game changer these days. Every serious AI video creator should consider learning it.

  • @chariots8x230
    @chariots8x230 3 months ago

    Can you also do a video about ‘Crew AI’? I think ‘Crew AI’ seems like a useful AI tool because it gives users a lot of control over their AI character’s facial expressions, just by moving some sliders around. It doesn’t really create animations, but it can change the facial expressions of characters in still images. I need to work with both still images and animations, so I like that there’s a tool that can modify characters in still images like this. Also, in terms of animation, I can see a tool like this being useful for creating a ‘start frame’ and ‘end frame’ for an AI animation.

  • @joakimmogren1727
    @joakimmogren1727 3 months ago +4

    I'm a horrible actor and I don't want to record myself. I know this is going to sound incredibly lazy, but I hope the option to prompt different emotions becomes available. I don't need Oscar-worthy performances for the projects that I'm working on. It's great that there are a lot of options for controlling faces. Now if only we could have 3D poseable consistent characters, we'd finally be able to have AI actors.

    • @SykoActiveStudios
      @SykoActiveStudios 3 months ago +2

      Metahuman with Unreal.

    • @JulioMacarena
      @JulioMacarena 3 months ago +1

      Hedra AI - check it out.

    • @JulioMacarena
      @JulioMacarena 3 months ago

      @SykoActiveStudios thanks. Tell me more. :)

    • @curiousrefuge
      @curiousrefuge  3 months ago

      You can certainly try other things like Hedra!

  • @justinlloyd3
    @justinlloyd3 3 months ago +1

    Great tutorial! Thank you!

  • @IlluminousThoughts
    @IlluminousThoughts 3 months ago +1

    This is really cool

  • @FilmSpook
    @FilmSpook 3 months ago

    Many thanks for all your help, Bro.

  • @abbasmedina786
    @abbasmedina786 3 months ago +3

    Awesome

  • @okgo5152
    @okgo5152 3 months ago

    Thanks, great video!

  • @gumvue.studio
    @gumvue.studio 3 months ago

    Thank you, this is good to try. I upload some AI films, 8 min, but will try this way also

  • @festivitycat
    @festivitycat 3 months ago

    Love that sweater!

  • @StéphaneLévy-i7g
    @StéphaneLévy-i7g 2 months ago

    Can't get this to work. I've followed the tutorial step by step and used the specified assets, but the Video Combine node always appears EMPTY and I cannot render. How do I fix this? Thanks in advance.

    • @curiousrefuge
      @curiousrefuge  2 months ago +1

      Feel free to jump in our Discord and we can try and help you troubleshoot!

    • @StéphaneLévy-i7g
      @StéphaneLévy-i7g 2 months ago

      @@curiousrefuge Thanks.

  • @0A01amir
    @0A01amir 3 months ago

    Any site with face portrait videos that we can download?

    • @curiousrefuge
      @curiousrefuge  3 months ago

      You can use Midjourney to make them!

  • @SmartVideoBE
    @SmartVideoBE 3 months ago

    Can we run it locally on Mac or Windows without paying? How is this done?

    • @curiousrefuge
      @curiousrefuge  3 months ago

      Yes, but we prefer running it online as it takes quite a bit of CPU power

  • @elgodric
    @elgodric 2 months ago

    What's the GPU requirement to run LivePortrait locally?

    • @curiousrefuge
      @curiousrefuge  2 months ago +1

      Good question... we'll have to check!

  • @mitchdavis5159
    @mitchdavis5159 2 months ago

    How do I get the output video to be longer than 8s?

    • @curiousrefuge
      @curiousrefuge  2 months ago

      Use editing software and splice clips together :) (see the splicing sketch below)
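
A minimal sketch of the splicing step, assuming the individual renders share the same codec, resolution, and frame rate so ffmpeg's concat demuxer can join them without re-encoding; the file names are placeholders, not assets from the tutorial.

    # splice_clips.py - join several short renders into one longer video.
    import os
    import subprocess
    import tempfile

    def splice(clips: list[str], out: str) -> None:
        """Write a concat list file and join the clips by stream copy (no re-encode)."""
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
            for clip in clips:
                f.write(f"file '{os.path.abspath(clip)}'\n")
            list_path = f.name
        try:
            subprocess.run(
                ["ffmpeg", "-y", "-f", "concat", "-safe", "0",
                 "-i", list_path, "-c", "copy", out],
                check=True,
            )
        finally:
            os.remove(list_path)

    if __name__ == "__main__":
        splice(["shot_01.mp4", "shot_02.mp4", "shot_03.mp4"], "scene_full.mp4")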

  • @stevey19861
    @stevey19861 3 months ago

    What background are you using?

  • @photogeneze
    @photogeneze 1 month ago +1

    Why is there no downloadable workflow?

    • @curiousrefuge
      @curiousrefuge  1 month ago

      It's not something to download!

    • @photogeneze
      @photogeneze 1 month ago

      @@curiousrefuge I mean the ComfyUI workflow, a simple JSON file. Of course the models and everything else are separate downloads, I understand that. But why no workflow file?

  • @Escelce
    @Escelce 3 months ago

    Great video, thank you

  • @Gamingtamilworld-o4x
    @Gamingtamilworld-o4x 26 days ago

    Why do we have to use ComfyUI nodes when we already have the easy LivePortrait way of uploading videos and photos to make this process easier? What's the difference, bro?

  • @ai.ai.captain
    @ai.ai.captain 3 months ago +1

    ❤❤❤

  • @omer-books
    @omer-books 14 days ago

    Can you put up a link to the workflow?

  • @khalil2099
    @khalil2099 3 months ago

    Can we do only lip sync? Meaning, the app will ignore the eyes.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      It's not as refined for other facial features

  • @DanielSierra-l1u
    @DanielSierra-l1u 1 month ago

    Why does my output come out so damn blurry? It's so blurry I can barely see lip movement

    • @curiousrefuge
      @curiousrefuge  1 month ago

      Let us know in our Discord and we'll try to help!

  • @nickjames5602
    @nickjames5602 25 days ago

    It's not letting me pay $1. It says I have to pay at least $16. Is this normal???

    • @curiousrefuge
      @curiousrefuge  20 days ago

      Hmm... it may have changed over the Thanksgiving/Black Friday sale.

  • @NoFaithNoPain
    @NoFaithNoPain 1 month ago

    What we need is faces at an angle and people who are moving

  • @faisaledits1
    @faisaledits1 3 months ago

    Gen 3 Alpha is not free 😔

  • @DaveandhisDeathbeanie
    @DaveandhisDeathbeanie 3 months ago +1

    Why are people so into the sweater?!

  • @planetofshireen
    @planetofshireen 3 months ago +1

    Wowwwwwwwwwwwww

  • @JulioMacarena
    @JulioMacarena 3 months ago

    Tell the actor that you're going to use their single performance many times in different projects before you record them. :)

  • @deepdiver849
    @deepdiver849 3 months ago

    It's not working for me when I upload an image of an AI-created cat. It doesn't recognize it... so AI is still behind

    • @curiousrefuge
      @curiousrefuge  3 months ago +1

      Hmm, jump in our Discord and we can try to help!

  • @julx97
    @julx97 3 months ago +6

    I don't like ComfyUI; that's why I like to use LivePortrait via "LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control".

    • @adarwinterdror7245
      @adarwinterdror7245 3 months ago +3

      What? What are you talking about?
      Is there an alternative that does what was shown in this video without ComfyUI and doesn't involve installing LivePortrait on the PC? (Which I had issues with)

    • @julx97
      @julx97 3 months ago

      @@adarwinterdror7245 aitrepreneur made a tutorial about this topic. It's his second most recent video

    • @julx97
      @julx97 3 months ago

      @@adarwinterdror7245 a it re pr en eu r made a tutorial about this topic. It's his second most recent video.
      I had to write spaces in between, as my comment disappears when writing his channel name

    • @findthatvid-pro
      @findthatvid-pro 3 months ago

      @@adarwinterdror7245 Find the link in my post (as YouTube comments get deleted all the time)
      The post will expire in 24 hours

    • @findthatvid-pro
      @findthatvid-pro 3 months ago

      @@adarwinterdror7245 find the name in my post

  • @GaryMillyz
    @GaryMillyz 3 months ago

    AI gfs are gonna be f'in wild

  • @jonathankrimer
    @jonathankrimer 3 months ago

    I'm not sure if my script needs that. Lol

  • @woltergeist9175
    @woltergeist9175 3 months ago +1

    AI is really good at generating non-ugly women

    • @curiousrefuge
      @curiousrefuge  3 months ago

      True! It seems it's trained on a bunch of models, for all genders!

  • @kamizama9177
    @kamizama9177 3 months ago +13

    Not easy at all lol

    • @curiousrefuge
      @curiousrefuge  3 months ago

      We appreciate you trying!

    • @ryan18462
      @ryan18462 2 months ago

      Every AI tool is the best!

    • @jeremyleonbarlow
      @jeremyleonbarlow 1 month ago

      If you want easy, Runway has Act-One for talking heads. This, however, looks like it might be beneficial if you were doing a walk-and-talk video from an image-to-video generation and you needed a better facial performance on the character. Is it easy? No. Is it less expensive and easier than hiring a location, hiring actors, and doing a full-fledged on-location shoot to get essentially a similar shot? Yeah, and what's more, you can create the perfect version of your character if you iterate to get what you want in the original image generation and take the time to train proper LoRAs.

  • @robertdouble559
    @robertdouble559 3 months ago

    Very rubbery. No regard for underlying bone or muscle structure. But hey, it's early days, it's only gonna get better. Good enough for memes and amateur work, I guess.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      It's certainly still developing!

    • @lefourbe5596
      @lefourbe5596 2 months ago

      LoRA dataset augmentation. Great for that.

  • @fabianoperes2155
    @fabianoperes2155 3 months ago

    Looks too artificial; maybe they will fix it in the next few versions.

  • @ToneVirtue
    @ToneVirtue 3 months ago

    I think it is over. All music and film was about relating to fellow human beings. This AI is making everything oversaturated and, quite frankly, pointless. It doesn't make me feel like watching anything. It is just a pile of plastic straws at this point.

    • @themightyflog
      @themightyflog 3 months ago

      Nah. That was done away with when it costs so much to create a vision. Money made us lose the connection.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      Before AI, how did you feel about online content?

    • @ToneVirtue
      @ToneVirtue 3 months ago

      @@curiousrefuge Great. It opened a marketplace with loads of opportunities and jobs for people. If you want people to use AI to replace photographers, filmmakers, writers, and musicians, then it will basically kill the whole collaborative market. AI content was not made to improve anything but rather to destroy the content creation economy and alienate people from each other. Soon AI won't even need people to write a prompt. I personally think they should ban monetization of AI-generated content to save the content creator economy.

  • @雪鷹魚英語培訓的領航
    @雪鷹魚英語培訓的領航 3 months ago

    Still not good enough for my taste, but one or two more papers down the line, or if the EMO paper gets put to use, I'm game.

    • @curiousrefuge
      @curiousrefuge  3 months ago

      True, still needs a little work!

  • @anthonymark2516
    @anthonymark2516 3 months ago

    Not bad!
    But my technique is better, as I don't need any input videos, just my imagination.
    And I can get multiple performances, so I can actually direct the actors by choosing the performance that works best.
    /watch?v=iMXbSXSJqqs
    We're at the point now where the tech does not matter so much as how we use it.

  • @Pure_Science_and_Technology
    @Pure_Science_and_Technology 3 months ago

    This is bad.

  • @seencapone
    @seencapone 2 months ago

    It might have been better if you'd laid your original vocals over the final clip so we could see how well the lip sync holds up through the processing. No one wants to go through this whole workflow just to make a silent film of people flappin' their lips.
    This reminds me of all the face capture demos -- and I've watched 'em all -- no one demonstrating the tech ever records any dialogue or normal scenes of simple talking; they just pull crazy funny faces like that's supposed to be useful.

    • @curiousrefuge
      @curiousrefuge  2 months ago

      You've watched them all? We've seen quite a few of people using their real voice (and typically adding some flavor with ElevenLabs v2v)