Social Links:
instagram.com/olegchomp/
twitter.com/oleg__chomp
Discord:
discord.com/invite/wNW8xkEjrf
Blendshape list:
"eyeBlinkLeft", "eyeLookDownLeft", "eyeLookInLeft", "eyeLookOutLeft", "eyeLookUpLeft", "eyeSquintLeft", "eyeWideLeft", "eyeBlinkRight", "eyeLookDownRight", "eyeLookInRight", "eyeLookOutRight", "eyeLookUpRight", "eyeSquintRight", "eyeWideRight", "jawForward", "jawLeft", "jawRight", "jawOpen", "mouthClose", "mouthFunnel", "mouthPucker", "mouthLeft", "mouthRight", "mouthSmileLeft", "mouthSmileRight", "mouthFrownLeft", "mouthFrownRight", "mouthDimpleLeft", "mouthDimpleRight", "mouthStretchLeft", "mouthStretchRight", "mouthRollLower", "mouthRollUpper", "mouthShrugLower", "mouthShrugUpper", "mouthPressLeft", "mouthPressRight", "mouthLowerDownLeft", "mouthLowerDownRight", "mouthUpperUpLeft", "mouthUpperUpRight", "browDownLeft", "browDownRight", "browInnerUp", "browOuterUpLeft", "browOuterUpRight", "cheekPuff", "cheekSquintLeft", "cheekSquintRight", "noseSneerLeft", "noseSneerRight", "tongueOut"
Perfectly explained. Thank you!
Sorry but I don't see the comment with the list of blend shapes?
Found out it was in another video: ruclips.net/video/tgq9m1HgASE/видео.html
This is a good step in the right direction, but it will in no way pass for cinematics or up-close dialogue. Is there a way to get better facial animation, or do we still have to use something like Character Creator?
Great vid! How come the animation at the end is not as accurate as the example animation you showed at the very beginning? Thanks!
Hi, I am going to build the advanced project that you made using Azure TTS, GPT-4, NLP, and more. Are there any resources you can point me to?
Thank you
Does the audio work doing it this way? Currently the audio doesn't work for Pixel Streaming.
Thanks, this is great stuff. I purchased the advanced tutorial a few months ago and have made good progress with its help. However, I am trying to achieve something very essential in this workflow: playing a body animation in sync with the facial animation. I have an animation asset that I want to play while the face weights are being received over Live Link, or while the audio is playing in Unreal Engine. Essential and simple as it sounds, I haven't had a breakthrough yet. I've tried many things:
a) "IsPlaying (face)" returns false, as no animation is being played on the face; it's just the ARKit face weights.
b) Binding an event to "On Live Link Updated" to play the animation was totally useless; it essentially acts like a tick in the editor and while playing, and does not indicate anything useful.
c) "On Controller Map Updated" does not get fired when weights are being passed.
d) "Get Animation Frame Data" always returns false — obviously, as no animation is being played here.
Any pointers would be appreciated.
Is there a way to access the MetaHuman facial rig controls in Blueprint (the facial rig controller sliders)? I'd rather map the incoming data myself using the MetaHuman facial controllers.
Hi, thanks for your tutorial. Unfortunately, at 3:25 my animation does not work for the ARKit asset.
Hello, we have developed an AI assistant and get an audio response using TTS. How can we connect the audio to A2F?
Hi! Is the tutorial sold on Boosty more complete, or is it the same as this video? I have a project to develop something like your example using GPT, but I don't have much knowledge about the flow in this video, so if the tutorial is more complete, that would be great.
Amazing, thanks a lot, it's so helpful. Can we link Maya with Audio2Face and adjust the blendshapes, and also use UE Live Link?
Hello, after clicking on "Set Up Blendshape Solve", the blue head did not follow. I checked the facsSolver script to ensure it was the same as in your video. Please tell me what to do.
Me too, I'm stuck at the same place! Anyone, help?
Hey, thanks for this! Is there documentation for just using OSC to manipulate the facial animations and skipping all of the Omniverse stuff? I'm not trying to do anything with text-to-speech. Either way, great tutorial :)
Hey! If you're familiar with Python, look at the part about the facsSolver script. You will find the list of blendshapes in the first comment; just pass values to those elements in the list.
If you're familiar with TouchDesigner, create a Constant CHOP with channel names matching the items in the blendshape list (e.g. eyeBlinkLeft) and send them over OSC to UE — see the sketch below this reply.
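For the TouchDesigner route, a tiny sketch of setting one channel on the Constant CHOP (the operator name 'blendshapes' is hypothetical; an OSC Out CHOP downstream does the actual sending):

```python
# TouchDesigner Python (e.g. run from the textport or an Execute DAT).
# 'blendshapes' is a hypothetical Constant CHOP wired into an OSC Out CHOP
# that targets the UE OSC server.
c = op('blendshapes')
c.par.const0name = 'eyeBlinkLeft'  # channel name must match the blendshape list
c.par.const0value = 0.8            # weight in the 0..1 range
```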
The UE avatar is lagging a bit, while the rest works fine. Any idea why? @VJSCHOOL
How were you able to get audio to stream from the GPT TTS to Audio2Face?
Hi, thanks for the great video. Do you know how to send the driving audio from A2F to UE?
Excellent tutorial. I get to the part at 1:47, validating the script in the Script Editor, and get "error 22: Invalid Argument". Any ideas what I'm doing wrong?
Thank you very much!
Hey, I am using Audio2Face 2023.2.0 and my MetaHuman looks like she had a stroke. Does the blendshape list also work for this version of Audio2Face?
And when I focus the Unreal Engine window, my Audio2Face starts to lag. Is that just because of my PC specs?
Hello, how can I make it support automatic emotions?
Hello, thank you for your tutorial. I encountered a problem in A2F: after writing the script according to the tutorial, the A2F Data Conversion tab of the Audio2Face interface disappears immediately. What could be the reason?
This happens if you made a mistake somewhere in the code or the OSC library is not installed correctly.
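If you suspect the library, here is a quick check you can run in the A2F Script Editor (a sketch assuming Omniverse's built-in omni.kit.pipapi helper is available in your build):

```python
# Run inside the Audio2Face Script Editor to verify python-osc is importable,
# and install it into Kit's Python if it is not.
try:
    import pythonosc  # noqa: F401
    print("python-osc is available")
except ImportError:
    import omni.kit.pipapi
    omni.kit.pipapi.install("python-osc")
```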
I am at the enabling-facial part, but the AnimGraph doesn't have a custom control function or a Modify Curve node. I am not sure what is wrong.
You can create it manually.
@VJSCHOOL I did that, but now it's giving me a warning saying "cannot copy property (TMap -> TMap)", and the animation is not working.
Does the updated Audio2Face Live Link plugin now support this? I can't for the life of me figure out how to configure the Blueprint to work like this.
With the new update you don't need to follow this tutorial.
@VJSCHOOL Thanks for the clarification. I'm still new to working with the Audio2Face application and plugin. Do you have any plans to upload an updated tutorial covering the new plugin updates? I'm personally having issues linking everything up, and a tutorial would be super helpful. Thanks for everything! Awesome tutorial!
Is the lip-sync morph so poor because there are no Russian phonemes, or for some other reason? I usually come across as blunt; I don't mean to offend anyone, I'm genuinely interested in the answer. Overall, I'm already on my way to buying an iPhone, but I still hope to stumble upon something that doesn't limp along too badly.
Hi, thanks for your tutorial! It is really amazing to have this workflow.
I ran into a problem in A2F: I cannot find the Data Conversion panel, and when I tried to turn it on in the toolbar, it shows an error saying AttributeError:'AudiotoFaceExportExtention' object has no attribute '_exporter window'. Any clue about this? Thank you!
Me neither.
Hey! I think it's better to ask on the A2F forum:
forums.developer.nvidia.com/c/omniverse/apps/audio2face/
Quick update: sometimes this happens if you made a mistake in the facsSolver script.
Hi! Firstly, great video, thank you!
I implemented all the steps in the video, including the last part about the OSC disconnecting fix. Despite that, for some reason, the connection keeps breaking and then comes back after a couple of seconds. Is there a way to keep the OSC connection open indefinitely without any disconnections?
Promote the OSC server to a variable so it stays referenced (otherwise it can be garbage collected). That should solve the issue.
I hope you can create a tool that converts an ARKit model exported as a JSON file to the CSV format exported by Live Link Face. That way, we could use the Live Link Importer plugin in UE to apply it to Daz characters without real-time recording.
Hi, thanks for this video. I just have an issue: I followed every step you did, but when I click on Localhost in the software, I get an error ("failed to stat url omniverse://"). How can I fix it? Thanks.
Try installing the Nucleus server:
ruclips.net/video/Ol-bCNBgyFw/видео.html
@VJSCHOOL Yeah, I installed the Nucleus server, created an account, and everything is fine, but for some reason the A2F Data Conversion tab doesn't exist. I tried everything. Any clue?
@aymenselmi8318 Did you try opening the pre-made scenes?
What if the facial animation Live Link is included in the sequence, so the facial animation is recorded into the sequence?
You can install the Omniverse plugin for UE and export the A2F animation as a USD file. After that you can animate it with Sequencer.
It doesn't work. I've tried it; it still doesn't work.
What step doesn’t work?
@VJSCHOOL I don't know. I've followed the steps on Audio2Face 2022.2.0 and Unreal 5.1.1, but I can't connect Omniverse and Unreal. Is there something wrong? I feel I have followed the steps.
First, try to print the values, then:
If the values are not printed, something is wrong with OSC; check the script in A2F or the OSC server in UE.
If the values are printed, something is wrong with the AnimBP. (A quick listener sketch for checking the OSC side is below.)
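To check outside UE whether A2F is emitting anything at all, a minimal python-osc listener can stand in for the UE server and print whatever arrives (port 8000 is an assumption; match whatever port your facsSolver script sends to):

```python
# Minimal OSC debug listener: prints every message coming out of A2F.
# Run it in place of UE to confirm values are actually being sent.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

dispatcher = Dispatcher()
dispatcher.set_default_handler(lambda address, *args: print(address, args))

server = BlockingOSCUDPServer(("127.0.0.1", 8000), dispatcher)
print("Listening on 127.0.0.1:8000 ...")
server.serve_forever()  # Ctrl+C to stop
```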
@VJSCHOOL I suspect it's the OSC server. What should I do?
@VJSCHOOL There is a message in the log: "LogOSC: Warning: Outer object not set. OSCServer may be collected garbage if not referenced." Why?
Do you have to have an iPhone for the Boosty project?
Nope. The Boosty tutorial uses AI to generate speech and facial animation. You will need a little bit of Python knowledge to follow it.
Is your blueprint on the Blueprint site?
Nope
This was not real time; it was using pre-recorded audio.
You can change the audio player in A2F to streaming and use, for example, a microphone as input.
@VJSCHOOL Could you please provide a video tutorial on this topic? Also, it would be helpful if you could pace the tutorial more slowly. I find that many YouTubers go through the steps quickly, which can be challenging for beginners like me.
@REALVIBESTV Search for "audio2face livelink"; there are a lot of tutorials.
Good
Hi Oleg, would you be interested in helping us with an AI avatar project as a consultant?
Friend, the animations are crooked and the plugin is raw. Even though it takes more time, it's better and easier to do all of this with facelink.
And you don't have to worry about English at all.
The idea is that you can generate responses and voice with AI and use it for different tasks. The animation needs to be tuned for each head rather than using the default values.
Recording a face with facelink is a completely different use case.
Great video 🎉 Everything works, but the OSC disconnects itself every 10 s and takes a while to reconnect, even though I applied the disconnect fix. I'm using UE 5.1. Could you help me?
Try creating a variable with the OSC server; it should solve this.
Can it be used in UE5? 🤔