ComfyUI Tutorial Series Ep 26: Live Portrait & Face Expressions

  • Published: 13 Dec 2024

Comments • 52

  • @insane3953
    @insane3953 9 hours ago +2

    Man, please continue with this clear format, you're the king!

  • @Billabongesta
    @Billabongesta 13 hours ago +1

    WOW!! We needed it so much! Especially when we don't have an advanced open source video model yet! The results are amazing: they look natural and are even quick to render. I appreciate all your efforts in making such amazing content for us, Pixaroma! I am amazed by the hard work you put into every video, and I can only imagine how much energy you have to devote just to get the voice over and timing right. Thank you Pixaroma!

    • @pixaroma
      @pixaroma  13 hours ago +1

      Thank you 😊

  • @JoelB71
    @JoelB71 1 day ago +4

    This is yet another excellently constructed and communicated tutorial. Top notch stuff, sir.

  • @59Marcel
    @59Marcel 12 hours ago

    Fantastic tutorial! Thank you, and congratulations on your 20K subscribers. Time to celebrate. Well done.

    • @pixaroma
      @pixaroma  11 hours ago

      Thank you very much 😊

  • @ivo_tm
    @ivo_tm 1 day ago

    Very useful tutorial and congratulations on 20K subscribers 😀

    • @pixaroma
      @pixaroma  1 day ago

      Almost there, 85 more and I am there 😊 thanks

  • @AndreyJulpa
    @AndreyJulpa 1 day ago +1

    Nice tutorial as always. I was using Live Portrait in Forge, so it's nice to have it in ComfyUI.

  • @Uday_अK
    @Uday_अK 1 day ago

    Nice breakdown of the process! 📚🎥 The live portrait and expression tips were super useful, thanks!

    • @pixaroma
      @pixaroma  1 day ago +1

      Thanks Uday 😊

  • @dem_soul
    @dem_soul 16 hours ago

    Well done Pix. Useful stuff 🙏

  • @SumoBundle
    @SumoBundle 1 day ago

    I’m looking forward to learning how to set up custom nodes and fine-tune animations for both realistic and stylized effects.

  • @ChanhDucTuong
    @ChanhDucTuong 8 hours ago

    Nice, thank you for sharing.

  • @gymkhanasports
    @gymkhanasports 1 day ago

    Another amazing tutorial. Thank you sensei 🙏

  • @labmike3d
    @labmike3d 1 day ago

    Hey! A few months ago, I gave Live Portrait a shot on my YT channel, and I was honestly blown away by the results. Took it a step further by re-targeting Hedra animations to Live Portrait, and everything ran super smoothly in ComfyUI - and this was on my basic RTX 3060! 😱 If I could wish for one thing, though, it’d be the ability to re-target full-body animations or slap on some mocap presets. That would level things up! 🙌 Also, quick thought - YouTube’s auto-dubbing feature is cool, but it’d be amazing if they could match TTS voices to the original voiceovers. It’s still not quite there yet (even on your channel), but fingers crossed they improve it soon! Anyway, great video as always - keep rocking and having fun! 🤘

    • @pixaroma
      @pixaroma  16 hours ago +1

      Thank you, I am sure new tech will appear in the future that makes things easier; it's just the beginning

  • @rodolpher8056
    @rodolpher8056 1 day ago

    Good explanations, thanks

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 1 day ago

    Top notch...thanks

  • @josemtb5983
    @josemtb5983 15 hours ago

    Fantastic tutorials that I can enjoy thanks to YouTube's dubbed audio. Thank you pixaroma and thank you YouTube!!

    • @pixaroma
      @pixaroma  14 hours ago

      you are welcome 🙂

  • @tonyambito8028
    @tonyambito8028 2 hours ago

    Thanks, well explained.

  • @StateofFact-LLC
    @StateofFact-LLC 1 day ago

    Comment! Great as always.

  • @daibaogoh5487
    @daibaogoh5487 21 hours ago

    Great video. Is there a way to have a live camera as the source driving video? Basically to live animate an image and be able to stream the preview window? 🤔

    • @pixaroma
      @pixaroma  16 hours ago

      I saw someone do it; do a search for "webcam capture node ComfyUI". I saw something with WebcamCaptureCV2, maybe you can find more info (a rough sketch of the idea follows below)
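
      As a rough illustration of what such a node does under the hood, here is a minimal standalone Python sketch that grabs webcam frames with OpenCV. It is only a sketch of the general idea, not the actual WebcamCaptureCV2 node code, and it assumes OpenCV (cv2) is installed:

        # Minimal sketch: grab frames from the default webcam with OpenCV.
        # A webcam-capture node would feed frames like these into the
        # Live Portrait driving-video input (illustrative only).
        import cv2

        cap = cv2.VideoCapture(0)          # default webcam
        frames = []
        try:
            while len(frames) < 120:       # capture a few seconds of frames
                ok, frame_bgr = cap.read()
                if not ok:
                    break
                # ComfyUI works with RGB images, OpenCV returns BGR
                frames.append(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        finally:
            cap.release()

        print(f"Captured {len(frames)} frames")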

  • @canaldetestes4517
    @canaldetestes4517 9 hours ago

    Hi, thank you for sharing your work with us. I'm Brazilian and this video has a second audio track (Portuguese and some others), which helps me understand. Anyway, I have a simple question: is it possible to create a notebook with this ComfyUI workflow to be used in Google Colab? I ask because I do not have a good computer or a good understanding. Anyway, I would like to learn how to create notebooks. Thanks again

    • @pixaroma
      @pixaroma  9 hours ago +1

      Thanks, unfortunately I don't have knowledge of Colab, so I don't know how to do that, but probably someone has already done it; maybe you can find some workflows on Reddit or on Civitai

  • @jomiller7332
    @jomiller7332 1 day ago

    Thanks!

  • @Genzzry
    @Genzzry 1 day ago

    Could this be used with live video input & output for virtual avatars?
    For example, when streaming, on video calls, etc?
    Essentially allowing you to use any face / cartoon as an avatar instead of only ones already in an avatar application?

    • @pixaroma
      @pixaroma  1 day ago +1

      I think I saw someone using it, so it must be possible, just not sure what nodes it needs

  • @ian2593
    @ian2593 12 hours ago

    Great as usual. Can you do an episode on Sana from NVIDIA?

    • @pixaroma
      @pixaroma  12 hours ago +1

      I am looking into it, will do some research and probably a video too if all works out

  • @insane3953
    @insane3953 9 hours ago

    How can we adapt this workflow to a character that is in movement or not facing front? In your opinion, is there any workflow/node that can do that?

    • @pixaroma
      @pixaroma  9 hours ago

      Not sure if it works; the model looks for faces. The face can be at a slight angle, but if it is too much the model probably cannot recognize the face. So it is hard to move things to certain angles, and things might get distorted

    • @insane3953
      @insane3953 7 hours ago

      @pixaroma It's true, maybe training the model with a frontal face and adding it after...

  • @jarlvakr
    @jarlvakr 1 day ago

    Share the specifications of the machine you're working on. I'm a bit curious about it.

    • @pixaroma
      @pixaroma  1 day ago +2

      This is what I have
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128GB RAM Corsair Vengeance, DIMM, DDR5 (4x32GB), CL40, 5200MHz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M.2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
      - Wacom Intuos Pro M

    • @jarlvakr
      @jarlvakr 11 hours ago +1

      @pixaroma A very nice setup.

  • @donblaa
    @donblaa 1 day ago

    Is there a way to make it higher res, though?

    • @pixaroma
      @pixaroma  1 day ago +1

      Not sure, it probably needs a better model. You could maybe use a video upscaler afterwards to make it bigger, or you can save the frames and maybe those can be upscaled (see the sketch below). I only recently started to use it, so I need to do more research
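
      As an illustration of the "save the frames and upscale them" idea, here is a minimal Python sketch using OpenCV. The file names are assumptions, and plain resizing only enlarges the pixels; for a real quality gain the cv2.resize call would be replaced by an AI upscaler (for example an upscale-model node in ComfyUI):

        # Minimal sketch: read a rendered Live Portrait video, enlarge each
        # frame 2x with simple resizing, and write a new video.
        import cv2

        src = cv2.VideoCapture("liveportrait_output.mp4")   # assumed file name
        fps = src.get(cv2.CAP_PROP_FPS)
        w = int(src.get(cv2.CAP_PROP_FRAME_WIDTH)) * 2
        h = int(src.get(cv2.CAP_PROP_FRAME_HEIGHT)) * 2

        out = cv2.VideoWriter("upscaled.mp4",
                              cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
        while True:
            ok, frame = src.read()
            if not ok:
                break
            # swap this resize for a proper AI upscaler to recover real detail
            out.write(cv2.resize(frame, (w, h), interpolation=cv2.INTER_CUBIC))

        src.release()
        out.release()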

  • @konnstantinc
    @konnstantinc 1 day ago

    ❤❤

  • @Илья-к6е5и
    @Илья-к6е5и 1 day ago

    Awesome. Thanks!

  • @eveekiviblog7361
    @eveekiviblog7361 1 day ago

    What if the head turns left or right?

    • @pixaroma
      @pixaroma  1 day ago

      You can turn it, but not too much; it depends on the image. You cannot get a complete side view, but you can still rotate a little

  • @obito-269
    @obito-269 1 day ago

    👍

  • @petneb
    @petneb 1 day ago

    I don't know why so many videos describing the Advanced Live Portrait node call them motions when they are called expressions. The node creates the in-between motion by linearly interpolating between the expressions in the numbered expression nodes, not motion nodes.

    • @pixaroma
      @pixaroma  1 day ago +1

      The node creator labeled them motion 1, motion 2 and so on in their example; I just tried to keep it consistent with what they have

    • @petneb
      @petneb 23 hours ago

      @pixaroma I don't think it's a good idea to just carry on a very bad terminology. "Interpolating motion between motions" is very hard, I would say almost impossible, to understand, or it needs a very in-depth explanation of what that is supposed to mean.
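
      For readers wondering what "linearly interpolating between expressions" means in practice, here is a minimal Python sketch. The parameter names are hypothetical and this is not the Advanced Live Portrait node's actual internals, only the general idea of blending two expression keyframes:

        # Minimal sketch: linear interpolation between two expression keyframes.
        # Each keyframe is a set of expression parameters (names are made up);
        # the in-between frames are simple linear blends.
        def lerp_expressions(expr_a, expr_b, t):
            """Blend two expression dicts; t=0 gives expr_a, t=1 gives expr_b."""
            return {k: (1 - t) * expr_a[k] + t * expr_b[k] for k in expr_a}

        neutral = {"smile": 0.0, "eyebrow_raise": 0.0, "blink": 0.0}
        smiling = {"smile": 1.0, "eyebrow_raise": 0.3, "blink": 0.0}

        # ten in-between frames from neutral to smiling
        for i in range(10):
            print(lerp_expressions(neutral, smiling, i / 9))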