AI-Powered Facial Animation - LiveLink with NVIDIA Audio2Face & Unreal Engine Metahuman

  • Published: 19 Jan 2025

Comments • 72

  • @VJSCHOOL
    @VJSCHOOL  A year ago +8

    Social Links:
    instagram.com/olegchomp/
    twitter.com/oleg__chomp
    Discord:
    discord.com/invite/wNW8xkEjrf
    Blendshape list:
    "eyeBlinkLeft", "eyeLookDownLeft", "eyeLookInLeft", "eyeLookOutLeft", "eyeLookUpLeft", "eyeSquintLeft", "eyeWideLeft", "eyeBlinkRight", "eyeLookDownRight", "eyeLookInRight", "eyeLookOutRight", "eyeLookUpRight", "eyeSquintRight", "eyeWideRight", "jawForward", "jawLeft", "jawRight", "jawOpen", "mouthClose", "mouthFunnel", "mouthPucker", "mouthLeft", "mouthRight", "mouthSmileLeft", "mouthSmileRight", "mouthFrownLeft", "mouthFrownRight", "mouthDimpleLeft", "mouthDimpleRight", "mouthStretchLeft", "mouthStretchRight", "mouthRollLower", "mouthRollUpper", "mouthShrugLower", "mouthShrugUpper", "mouthPressLeft", "mouthPressRight", "mouthLowerDownLeft", "mouthLowerDownRight", "mouthUpperUpLeft", "mouthUpperUpRight", "browDownLeft", "browDownRight", "browInnerUp", "browOuterUpLeft", "browOuterUpRight", "cheekPuff", "cheekSquintLeft", "cheekSquintRight", "noseSneerLeft", "noseSneerRight", "tongueOut"

  • @mlgjman1837
    @mlgjman1837 9 months ago

    Perfectly explained. Thank you!

  • @alysaliu8890
    @alysaliu8890 A year ago +1

    Sorry, but I don't see the comment with the list of blend shapes?

    • @yabuliao
      @yabuliao A year ago

      Found out it was in another video: ruclips.net/video/tgq9m1HgASE/видео.html

  • @jumpieva
    @jumpieva A year ago +1

    This is a good step in the right direction, but it will in no way pass for cinematics or up-close dialogue. Is there a way to make better facials, or do we still have to use something like Character Creator?

  • @alekai2178
    @alekai2178 A year ago +1

    Great vid! How come the animation at the end is not as accurate as the example animation you showed at the very beginning? Thanks!

  • @HarmansSingh-q9i
    @HarmansSingh-q9i 2 months ago

    Hi, I am going to create the advanced project that you made using Azure TTS, GPT-4, NLP, and more. Are there any resources you can point me to?
    Thank you!

  • @CalvinWaynes
    @CalvinWaynes 11 months ago

    Does the audio work doing it this way? Currently the audio doesn't work for Pixel Streaming.

  • @chanakhs
    @chanakhs A year ago

    Thanks, this is great stuff. I purchased the advanced tutorial a few months ago and made good progress with its help. However, I am trying to achieve something very essential in this workflow: playing a body animation in sync with the facial animation. I have an animation asset which I want to play when the face weights are being received by Live Link, or when the audio is playing in Unreal Engine. As essential and simple as it sounds, I haven't had a breakthrough yet. I've tried many things:
    a) "isPlaying (face)" returns false, as no animation is being played on the face; it's the ARKit face weights.
    b) Binding an event to "On Live Link Updated" to play the animation was totally useless; it essentially just acts like a tick in the editor and when playing, and does not indicate anything useful.
    c) "On Controller Map Updated" does not get fired when weights are being passed.
    d) "Get Animation Frame Data" always returns false, obviously, as no animation is being played here.
    Any pointers would be appreciated.

  • @freenomon2466
    @freenomon2466 A year ago

    Is there a way to access the MetaHuman facial rig controls in Blueprint (the facial rig controller sliders)? I'd rather map the incoming data myself using the MetaHuman facial controllers.

  • @p1ontek
    @p1ontek A year ago

    Hi, thanks for your tutorial. Unfortunately, at 3:25 my animation does not work for the ARKit asset.

  • @sinaasadiyan
    @sinaasadiyan 10 months ago

    Hello, we have developed an AI assistant, and using TTS we get an audio response. How can we connect the audio to A2F?

  • @AndreSantos-nx2yf
    @AndreSantos-nx2yf A year ago

    Hi! Is the tutorial sold on Boosty more complete, or is it the same as this video? I have a project to develop something like your example using GPT, but I don't have much knowledge about this video's workflow, so if the tutorial is more complete that would be great.

  • @msk.filmsahealingworld1723
    @msk.filmsahealingworld1723 A year ago +1

    Amazing, thanks a lot, it's so helpful. Can we link Maya with Audio2Face and adjust the blend shapes, also with UE Live Link?

  • @RuolinJia
    @RuolinJia A year ago

    Hello, after clicking on "Set Up Blendshape Solve", the blue head did not follow. I checked the facsSolver to ensure it was the same as in your video. Please tell me what to do.

    • @shanmukhram9209
      @shanmukhram9209 10 months ago

      Me too, I'm stuck at the same place! Anyone, help?

  • @TopoVizio
    @TopoVizio A year ago +2

    Hey, thanks for this! Is there documentation for maybe just using OSC to manipulate the facial animations and skipping all of the Omniverse stuff? I'm not trying to do anything with text-to-speech. Either way, great tutorial :)

    • @VJSCHOOL
      @VJSCHOOL  A year ago +1

      Hey! If you're familiar with Python, look at the part about the facsSolver script. You will find the list of blend shapes in the first comment; just pass values to those elements in the list, as sketched below.
      If you're familiar with TouchDesigner, create a Constant CHOP with channel names matching the items in the blendshape list (e.g. eyeBlinkLeft) and send them over OSC to UE.
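
      A minimal Python sketch of the first option, assuming the python-osc package and a UE OSC server on 127.0.0.1:5008 (the one-address-per-blendshape layout is a plausible placeholder, not necessarily exactly what the tutorial's script sends):

      from pythonosc.udp_client import SimpleUDPClient

      # Names must match the blendshape list from the first comment.
      names = ["eyeBlinkLeft", "jawOpen", "mouthSmileLeft"]  # shortened here
      weights = [0.0, 0.6, 0.3]  # example values in the 0.0-1.0 range

      client = SimpleUDPClient("127.0.0.1", 5008)  # UE OSC server IP/port
      for name, weight in zip(names, weights):
          client.send_message("/" + name, weight)  # one OSC channel per shape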

  • @DiTo97
    @DiTo97 A year ago

    The UE avatar is lagging a bit, while the rest works fine. Any idea why? @VJSCHOOL

  • @JacksonPopiel
    @JacksonPopiel A year ago

    How were you able to stream the audio from the GPT TTS to Audio2Face?

  • @彭亮-i9j
    @彭亮-i9j 11 months ago

    Hi, thanks for the great video. Do you know how to send the driving audio from A2F to UE?

  • @LeeSurber
    @LeeSurber A year ago

    Excellent tutorial. I get to the part at 1:47, validating in the Script Editor, and get "error 22: Invalid Argument". Any ideas what I'm doing wrong?

  • @hailongwang7549
    @hailongwang7549 A year ago

    Thank you very much!

  • @matthiass976
    @matthiass976 8 months ago

    Hey, I am using Audio2Face 2023.2.0 and my MetaHuman looks like she had a stroke. Does the blendshape list also work for this version of Audio2Face?
    And if I focus on Unreal Engine, my Audio2Face starts to lag. Is that just because of my PC specs?

  • @郭柱江
    @郭柱江 A year ago

    Hello, how can I make it support automatic emotions?

  • @MiyaL-x5r
    @MiyaL-x5r A year ago

    Hello, thank you for your tutorial. I encountered a problem in A2F: after writing the script according to the tutorial, the A2F Data Conversion tab of the Audio2Face interface simply disappears. What could be the reason?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      This happens if you made a mistake somewhere in the code, or if the OSC library is not installed correctly. A sketch of one way to install it is below.
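
      Run from the A2F Script Editor (assuming the omni.kit.pipapi extension is available):

      import omni.kit.pipapi

      # Installs python-osc into Omniverse's embedded Python environment.
      omni.kit.pipapi.install("python-osc")
      import pythonosc  # should now import without errors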

  • @joshhuang4855
    @joshhuang4855 A year ago

    I am at the enabling-the-face part, but the AnimGraph has neither a custom control function nor a Modify Curve node. I am not sure what is wrong.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      You can create them manually.

    • @joshhuang4855
      @joshhuang4855 A year ago

      @@VJSCHOOL I did that, but now it's giving me a warning saying "cannot copy property (TMap -> TMap)", and the animation is not working.

  • @DigitalDesignerAI
    @DigitalDesignerAI A year ago

    Does the updated Audio2Face LiveLink plugin now support this? I can't for the life of me figure out how to configure the Blueprint to work similarly to this.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      With the new update you don't need to follow this tutorial.

    • @DigitalDesignerAI
      @DigitalDesignerAI A year ago

      @@VJSCHOOL Thanks for the clarification. Still new to working with the Audio2Face application and plugin. Do you have any plans to upload an updated tutorial using the new updates to the plugin? I'm personally having issues linking everything up, and a tutorial would be super helpful. Thanks for everything! Awesome tutorial!

  • @nikkho625
    @nikkho625 A year ago

    Is the morph this poor on the lip sync because there are no Russian phonemes, or for some other reason? I usually express myself a bit bluntly; I have no intention of offending anyone, I'm genuinely interested in the answer. Overall, I'm already on my way to buying an iPhone, but I still hope to stumble onto something that doesn't hobble along too badly.

  • @shishi6631
    @shishi6631 A year ago

    Hi, thanks for your tutorial! It is really amazing to have this workflow.
    I ran into a problem in A2F: I cannot find the Data Conversion panel, and when I tried to turn it on in the toolbar, it showed an error: AttributeError: 'AudiotoFaceExportExtention' object has no attribute '_exporter window'. Any clue about this? Thank you!

    • @jiazeli7762
      @jiazeli7762 A year ago +1

      Me neither.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      Hey! I think it's better to ask on the A2F forum:
      forums.developer.nvidia.com/c/omniverse/apps/audio2face/

    • @VJSCHOOL
      @VJSCHOOL  A year ago +1

      Quick update: sometimes this happens if you made a mistake in the facsSolver script.

  • @ShreyansKothari-j1k
    @ShreyansKothari-j1k A year ago

    Hi! Firstly, great video, thank you!
    I implemented all the steps in the video, including the last part about the OSC disconnect fix. Despite that, for some reason the connection keeps breaking and then comes back after a couple of seconds. Is there a way to keep the OSC connection open indefinitely, without any disconnections?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      Store the OSC server in a variable; that should solve the issue (see the sketch below).
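
      In Blueprint this means promoting the return value of Create OSC Server to a variable so the server isn't garbage collected. A rough editor-Python equivalent of the same idea (a sketch only; the exact create_osc_server signature varies by engine version, so check the OSC plugin docs for yours):

      import unreal

      # A kept reference plays the role of the Blueprint variable:
      # it stops the OSC server from being garbage collected.
      osc_server = unreal.OSCManager.create_osc_server(
          "127.0.0.1", 5008, False, True, "A2FServer", None)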

  • @michaelmichaelguo2001
    @michaelmichaelguo2001 A year ago

    I hope you can create a tool that converts ARKit data exported as a JSON file to the CSV format exported by LiveLink Face. This way, we can use the LiveLink Importer plugin in UE to apply it to Daz characters without real-time recording.
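
    For reference, a rough converter sketch; the input JSON layout and the exact LiveLink Face CSV header used here are assumptions, so check them against a real LiveLink Face recording first:

    import csv
    import json

    # Full 52-name list from the pinned comment, in the same order.
    BLENDSHAPES = ["eyeBlinkLeft", "eyeLookDownLeft"]  # shortened here

    with open("arkit_frames.json") as f:    # hypothetical ARKit export:
        frames = json.load(f)["frames"]     # [{"timecode": ..., "weights": {...}}, ...]

    with open("livelink_face.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Timecode", "BlendShapeCount", *BLENDSHAPES])
        for frame in frames:
            w = frame["weights"]
            writer.writerow([frame["timecode"], len(BLENDSHAPES),
                             *(w.get(name, 0.0) for name in BLENDSHAPES)])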

  • @aymenselmi8318
    @aymenselmi8318 A year ago

    Hi, thanks for this video. I just have an issue: I followed every step you did, but when I click on Localhost in the software, I get an error ("failed to stat url omniverse://"). How can I fix it? Thanks.

    • @VJSCHOOL
      @VJSCHOOL  A year ago +2

      Try installing the Nucleus server:
      ruclips.net/video/Ol-bCNBgyFw/видео.html

    • @aymenselmi8318
      @aymenselmi8318 A year ago

      @@VJSCHOOL Yeah, I installed the Nucleus server, created an account, and everything is fine, but for some reason the A2F Data Conversion tab doesn't exist. I've tried everything; any clue?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      @@aymenselmi8318 Did you try opening the pre-made scenes?

  • @rachmadagungpambudi7820
    @rachmadagungpambudi7820 A year ago

    What if the Live Link facial animation were included in a sequence, so the facial animation is recorded into the sequence?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      You can install the Omniverse plugin for UE and export the A2F animation as a USD file. After that you can animate it with Sequencer.

  • @rachmadagungpambudi7820
    @rachmadagungpambudi7820 A year ago +1

    It doesn't work; I've tried it and it still doesn't work.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      What step doesn’t work?

    • @rachmadagungpambudi7820
      @rachmadagungpambudi7820 A year ago

      @@VJSCHOOL I don't know. I've followed the steps with Audio2Face 2022.2.0 and Unreal 5.1.1 but can't get Omniverse and Unreal to connect. Is something wrong? I feel I have followed the steps.

    • @VJSCHOOL
      @VJSCHOOL  A year ago +1

      First, try printing the values (a standalone listener you can test with is sketched below), then:
      If the values are not printed, something is wrong with OSC: check the script in A2F or the OSC server in UE.
      If the values are printed, something is wrong with the AnimBP.
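
      A listener sketch for that first step, assuming python-osc and that the A2F script sends to 127.0.0.1:5008 (substitute your own port, and close UE first so the port is free):

      from pythonosc.dispatcher import Dispatcher
      from pythonosc.osc_server import BlockingOSCUDPServer

      dispatcher = Dispatcher()
      # Print every incoming OSC address/value pair.
      dispatcher.set_default_handler(lambda addr, *args: print(addr, args))

      # Bind the same IP/port the A2F script sends to.
      BlockingOSCUDPServer(("127.0.0.1", 5008), dispatcher).serve_forever()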

    • @rachmadagungpambudi7820
      @rachmadagungpambudi7820 A year ago

      @@VJSCHOOL I suspect it's the OSC server. What should I do?

    • @rachmadagungpambudi7820
      @rachmadagungpambudi7820 A year ago

      @@VJSCHOOL There is a message in the log: "LogOSC: Warning: Outer object not set. OSCServer may be collected garbage if not referenced." Why?

  • @Chatmanstreasure
    @Chatmanstreasure A year ago

    Do you have to have an iPhone for the Boosty project?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      Nope. The Boosty tutorial uses AI to generate speech and facial animation. You will need a little bit of Python knowledge to follow it.

  • @berniemovlab8323
    @berniemovlab8323 A year ago

    Is your blueprint on the Blueprint site?

  • @REALVIBESTV
    @REALVIBESTV A year ago

    This was not real time; it was using pre-recorded audio.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      You can change the audio player in A2F to the streaming one and use, for example, a microphone as input (see the sketch below).
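
      A rough sketch of the microphone route, assuming the sounddevice package and NVIDIA's audio2face_streaming_utils helper from the A2F sample scripts, with a streaming player at /World/audio2face/PlayerStreaming (treat the helper's name and signature as an assumption to verify against your A2F install):

      import numpy as np
      import sounddevice as sd  # third-party mic capture
      from audio2face_streaming_utils import push_audio_track  # NVIDIA A2F sample

      RATE = 16000
      # Record five seconds of mono float32 audio from the default microphone.
      audio = sd.rec(5 * RATE, samplerate=RATE, channels=1, dtype="float32")
      sd.wait()

      # Push the clip to A2F's streaming player over gRPC (default port 50051).
      push_audio_track("localhost:50051", np.squeeze(audio),
                       RATE, "/World/audio2face/PlayerStreaming")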

    • @REALVIBESTV
      @REALVIBESTV A year ago

      @@VJSCHOOL Could you please provide a video tutorial on this topic? Also, it would be helpful if you could pace the tutorial more slowly. I find that many YouTubers go through the steps quickly, which can be challenging for beginners like me.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      @@REALVIBESTV Search for "audio2face livelink"; there are a lot of tutorials.

  • @갱스터깍지
    @갱스터깍지 A year ago

    Good

  • @shorewiseapps2269
    @shorewiseapps2269 A year ago

    Hi Oleg, would you be interested in helping us with an AI avatar project as a consultant?

  • @reznik63
    @reznik63 A year ago

    Friend, the animations are crooked and the plugin is half-baked. Even though it takes more time, it's better and simpler to do all of this with facelink.
    And you don't have to bother with English at all.

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      The idea is that you can generate the responses and the voice with AI and use it for different tasks. The animation needs to be tuned for each head, rather than taking the default values.
      Recording a face through facelink is a completely different use case.

  • @hwk_un1te915
    @hwk_un1te915 A year ago

    Great video 🎉 Everything works, but the OSC disconnects itself every 10 s and takes a while to reconnect, even though I did the checker. I'm using UE 5.1; could you help me?

    • @VJSCHOOL
      @VJSCHOOL  A year ago

      Try creating a variable with the OSC server; it should solve this.

    • @heresmynovel331
      @heresmynovel331 A year ago

      Can it be used in UE5? 🤔