Tutorials: how to use the plugin

  • Published: 22 Oct 2024

Comments • 239

  • @AltVR_YouTube
    @AltVR_YouTube 1 year ago +12

    Thanks for this perfect tutorial! You should really consider making these videos publicly findable. Other paid alternatives show up in search results, but this SDK doesn't. Also, it would be awesome if these could be uploaded in 1440p or 4K in the future for better blueprint text readability.

    • @ryudious
      @ryudious 2 months ago

      I found it fine. This is public...

    • @AltVR_YouTube
      @AltVR_YouTube 2 months ago

      @@ryudious Well, my comment was from over a year ago

  • @arielshpitzer
    @arielshpitzer 1 year ago +2

    It's updated. I think I saw a different video looking almost the same. Amazing work!

  • @TheAIAndy
    @TheAIAndy 1 year ago +1

    LOVE this tutorial, thank you so much! I am wondering if you would consider making a tutorial on how you got them to sit as a presenter, including face & body animation + studio + camera angles? Also... I don't know if this is out of reach, but can you get the hands to gesture based on the loudness or audio waves? Love your plugin, trying to do a bunch of cool things with it. Thank you so much for these & newest tutorials!

    • @metahumansdk
      @metahumansdk 1 year ago +2

      Hi!
      In this tutorial we used the regular control rig to add poses in the sequencer timeline and made the body animation manually.

    • @TheAIAndy
      @TheAIAndy 1 year ago

      @@metahumansdk Haha, as a beginner I have no idea what that means 😂 I'll try to find a tutorial by searching some of the words you said.

    • @metahumansdk
      @metahumansdk 1 year ago +1

      When you add a MetaHuman to the level sequence you can see that it has a control rig, and you can set any position for all parts of the MetaHuman's body.
      You can get more information about the control rig here: docs.unrealengine.com/5.2/en-US/control-rig-in-unreal-engine/

  • @flytothetoon
    @flytothetoon 1 year ago +3

    Lipsync looks perfect! The description of your plugin says that it "Support[s] different face emotions". Is it possible with MetaHuman SDK to generate emotions from audio speech, like with NVIDIA Omniverse? Is it even possible to create facial animation with blinking eyes with MetaHuman SDK?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi Fly to the Toon!
      You can enable eye blinking in the ATL settings; it also works for the ATL nodes.

  • @AICineVerseStudios
    @AICineVerseStudios 1 year ago +1

    Hi there, the plugin is great and it really works well. However, after 10 to 15 generations of facial animations, I am getting an error message that I ran out of tokens. Also, from your website it's not clear whether this is a paid service or not. Even for testing, how many tokens does one have? And if the tokens run out, what can be done about it? Can this plugin be used in a production-grade application? I am just doing a POC as of now, but I want to be sure about your offering.

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi!
      At the moment there are no limits. Your token was probably generated before we introduced personal accounts. We made a few announcements in our Discord that tokens not linked to a personal account at space.metahumansdk.io/ no longer work.
      Here is the video about attaching a token or generating a new one in the personal account: ruclips.net/video/3wmmaE-8aoE/видео.html&lc=UgxrVCl4HvIS5P9loWR4AaABAg&ab
      If that doesn't help, please tell us and we will try to help with your issue.

  • @TimothyMack-s6x
    @TimothyMack-s6x 1 year ago +4

    This is mind blowing!!!!!!

  • @LouisHirtz
    @LouisHirtz 1 year ago +1

    Hi, thank you for this detailed tutorial! I'm trying to create lipsync from text input only, without using the bot. I want to avoid the delay from the TTS function as much as possible. Is it possible to create a buffer that sends chunks of sound to the ATL while TTS is still working (like you did with the ATL stream)? (I'm kind of a beginner in this field.)

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Currently our plugin just sends the full message to the TTS services, but you can split the text and send smaller parts manually.
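      For illustration, here is a rough sketch in plain C++ (not the plugin API; the function name is made up) of splitting a long text into sentence-sized chunks so each piece can be sent to TTS separately:

          #include <iostream>
          #include <string>
          #include <vector>

          // Cut at sentence boundaries once a chunk reaches maxLen characters.
          std::vector<std::string> SplitIntoChunks(const std::string& text, size_t maxLen = 200) {
              std::vector<std::string> chunks;
              std::string current;
              for (char c : text) {
                  current += c;
                  if ((c == '.' || c == '!' || c == '?') && current.size() >= maxLen) {
                      chunks.push_back(current);
                      current.clear();
                  }
              }
              if (!current.empty()) chunks.push_back(current); // trailing remainder
              return chunks;
          }

          int main() {
              for (const auto& chunk : SplitIntoChunks("First sentence. Second one! A third?", 10))
                  std::cout << chunk << '\n';
          }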

  • @kreamonz
    @kreamonz 5 months ago +1

    Hello! I generated a face animation and audio file (the time in the video is 5:08). When I open it, the file is only 125 frames, although the audio lasts much longer. In the sequencer I add the audio and the generated animation, but the animation is much shorter, and when I stretch the track the animation repeats from the beginning. Please tell me how to adjust the number of frames per second.

    • @kreamonz
      @kreamonz 5 months ago

      I mean, how to edit the number of sampled keys/frames

  • @danD315D
    @danD315D 1 year ago +2

    Is it possible for audio-to-lipsync to work on other 3D character models, rather than MetaHuman ones?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi!
      Sure it is! In the plugin files you can find a face example which is a custom mesh. Use an ARKit- or FACS-rigged model to use animations from the MetahumanSDK.

  • @k动画的肥虫
    @k动画的肥虫 1 year ago

    Excuse me, is the facial expression in your video generated by Metahuman SDK automatically while speaking? Or was it processed by other software? When using ChatGPT for real-time voice-driven input, can the model achieve the same level of facial expressions as yours? Thank you.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can choose different emotions at the moment of lipsync generation from audio (the speech-to-animation stage).

  • @borrowedtruths6955
    @borrowedtruths6955 1 year ago +1

    When I add the voice animation to the face, the head detaches, and the audio begins immediately. I have a walk cycle from Mixamo in the sequencer and would like to have it start at a certain time in the timeline.
    Can you help with these two issues? Thank you.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      We recommend this tutorial: ruclips.net/video/oY__OZAa0I4/видео.html
      Please pay attention at the 3:28 timestamp, because many people skip that moment and the fix doesn't work for them 😉
      If you need more advice, please contact us on Discord: discord.gg/MJmAaqtdN8

    • @borrowedtruths6955
      @borrowedtruths6955 1 year ago

      @@metahumansdk Thanks for the reply, I do have another question though. How do I add facial animations without a Live Link interface, i.e., a cell phone or head camera? Unless I'm mistaken, I have to delete the face widget to add the speaking animation to the sequencer. In either case, I appreciate the help.

    • @metahumansdk
      @metahumansdk 1 year ago

      @borrowedtruths6955, our plugin generates facial animation from sound (16-bit PCM wav or ogg), so you don't need any mocap device; just generate the animation and add it to your character, or use blueprints to do it automatically.
      We also show this in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/v1.6.0#in-editor-usage-1
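      As a side note for readers: the 16-bit PCM requirement mentioned above can be checked before a file is sent. A minimal sketch in standard C++ (independent of the plugin), assuming the canonical WAV header layout with the "fmt " chunk directly at byte 12 and no extra chunks before it:

          #include <cstdint>
          #include <cstring>
          #include <fstream>
          #include <iostream>

          bool IsPcm16Wav(const char* path) {
              std::ifstream file(path, std::ios::binary);
              uint8_t h[36] = {};
              if (!file.read(reinterpret_cast<char*>(h), sizeof(h))) return false;
              // "RIFF" at offset 0 and "WAVE" at offset 8 identify a WAV container.
              if (std::memcmp(h, "RIFF", 4) || std::memcmp(h + 8, "WAVE", 4)) return false;
              uint16_t audioFormat   = h[20] | (h[21] << 8); // 1 == uncompressed PCM
              uint16_t bitsPerSample = h[34] | (h[35] << 8);
              return audioFormat == 1 && bitsPerSample == 16;
          }

          int main(int argc, char** argv) {
              if (argc > 1)
                  std::cout << (IsPcm16Wav(argv[1]) ? "16-bit PCM" : "not 16-bit PCM") << '\n';
          }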

    • @borrowedtruths6955
      @borrowedtruths6955 1 year ago

      @@metahumansdk Thanks, I appreciate your time.

    • @ayrtonnasee3284
      @ayrtonnasee3284 8 months ago

      I have the same problem.

  • @realskylgh
    @realskylgh 1 year ago

    Great, does the combo do the ATL streaming things as well?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      We are working on it. If all goes well, we will add it in one of the upcoming releases for 5.2.

  • @SKDyiyi
    @SKDyiyi 1 year ago

    Hello, your plugin is very useful. I am using a self-designed model with ARKit. However, I have encountered a problem. I can generate facial movements smoothly, but I lack neck movements. Is there a solution to this? My model does not split the head from the body.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! If your avatar is not a split model, you can blend an animation for the body and neck with our facial animation.

    • @SKDyiyi
      @SKDyiyi 1 year ago

      @@metahumansdk Yes, I do that now. Meaning if I don't separate my head from my body, I won't be able to generate neck motion automatically through the plugin?

    • @metahumansdk
      @metahumansdk 1 year ago

      You can check Neck Movement in the ATL node to add it to the animation in the MetahumanSDK plugin.

  • @lukassarralde5439
    @lukassarralde5439 1 year ago

    Hi. This is a great video tutorial. Could you please share how to do this setup PLUS add a trigger volume to the scene? Ideally, I would like a first-person or third-person character game where, when the character enters the trigger volume, the trigger starts the MetahumanSDK talking. Can you show us how to do that in the BP? Thank you!!

    • @metahumansdk
      @metahumansdk 1 year ago

      Well, I think you can start from the audio triggers covered in the UE documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
      I'll ask the team about use cases for games; maybe we can create a tutorial about it.
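      For readers, a hedged sketch of the trigger idea in UE C++ (the class name AMyNpcTrigger is illustrative, not from the plugin): start the speech when the player overlaps a trigger box.

          #include "CoreMinimal.h"
          #include "Engine/TriggerBox.h"
          #include "MyNpcTrigger.generated.h"

          UCLASS()
          class AMyNpcTrigger : public ATriggerBox
          {
              GENERATED_BODY()
          public:
              // Called by the engine when any actor overlaps this trigger volume.
              virtual void NotifyActorBeginOverlap(AActor* OtherActor) override
              {
                  Super::NotifyActorBeginOverlap(OtherActor);
                  // Kick off the talk logic here, e.g. fire an event that the level
                  // blueprint binds to the MetaHuman's speech sequence.
                  UE_LOG(LogTemp, Log, TEXT("Player entered: start MetaHuman speech"));
              }
          };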

  • @Relentless_Games
    @Relentless_Games 6 months ago +1

    Error: fill api token via project settings
    First time using this SDK; how can I fix this?

    • @metahumansdk
      @metahumansdk 6 months ago

      Please contact us via e-mail at support@metahumansdk.io and we will help you with the token.

  • @Bruh-we9mv
    @Bruh-we9mv 8 months ago

    Nice tutorial! However, if I input a somewhat large text, it stops midway. What could be the issue? I've tested things, and it seems the "TTSText to Speech" node has a time limit on the sound. Can I somehow remove that?

    • @Bruh-we9mv
      @Bruh-we9mv 8 months ago

      @@domagojmajetic9820 Sadly no, if I find anything I will write here

    • @metahumansdk
      @metahumansdk 8 months ago

      At the moment the limit on the free plan is 5 seconds per generated animation. You can use it for free for two days, but the limit is 5 seconds of generated animation.

    • @gavrielcohen7606
      @gavrielcohen7606 7 months ago

      @@metahumansdk Hi, great tutorial. I was wondering if there is a paid version where we can exceed the 5-second limit?

    • @metahumansdk
      @metahumansdk 7 months ago

      @gavrielcohen7606 Hi!
      Sure! At the moment registration on our website is temporarily unavailable, so please let us know at support@metahumansdk.io if you need one 😉

  • @jumpieva
    @jumpieva 1 year ago +1

    The thing I have a problem with is that the facial animations are getting more realistic, but the stilted, non-human-sounding audio is not reconciling well. Is this an option that will be fine-tuned enough for cinematics/close-up dialogue?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can choose different TTS options such as Google, Azure and others.

  • @honglabcokr
    @honglabcokr 1 year ago +1

    Thank you so much!

  • @Jungleroo
    @Jungleroo 1 month ago

    How are you making the head and shoulders move along with the speech too?

    • @metahumansdk
      @metahumansdk 1 month ago +1

      Metahumans have 2 skeletons, one for the head and one for the body. You can direct animations to both skeletons at the same time and set them up in a suitable way so that the movements match your wishes.

  • @NeoxEntertainment
    @NeoxEntertainment 1 year ago +1

    Hey, great tutorial, but I can't find the mh_dhs_mapping in the PoseAsset of the Make ATL Mappings Info node at 8:41, and I guess that's why the lip sync doesn't work on my end.
    Does anyone know where I can find it?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi!
      Please open the Content Browser settings and enable Engine and Plugin content as in the screenshot:
      cdn.discordapp.com/attachments/1148305785080778854/1148984020798021772/image.png?ex=65425cc1&is=652fe7c1&hm=e75cc52cd3ece4f43e143a87745fd25fd2b78032fa09c3b2d931bf50e68a0b45&

  • @abhishekakodiya2206
    @abhishekakodiya2206 1 year ago +3

    Not working for me; the plugin doesn't generate any lipsync animation.

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Please send us more details on our Discord server or mail support@metahumansdk.io.
      We will try to help with your issue.

    • @mistert2962
      @mistert2962 1 year ago +1

      Don't use audio files that are too long. 5 minutes of audio will make the SDK not work, but 3 minutes will work. So the solution is: split your audio into 3-minute parts.

  • @mn04147
    @mn04147 1 year ago +1

    Thanks for your great plugin!

  • @borrowedtruths6955
    @borrowedtruths6955 1 year ago

    I must be missing something, I have to delete the Face_ControlBoard_CtrlRig in the sequencer after adding the Lipsync Animation, or the Metahuman character will not animate. I have no control over the face rig. Is there a way to have both?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! In the Sequencer, the control rig overrides the animation, so you need to turn the control rig off or delete it if you want to use a prepared animation on the avatar's face or body.

  • @ai_and_chill
    @ai_and_chill 1 year ago

    How do we get our animations to look as good as the one in this video for the woman in front of the blue background? The generated animations are good, but not as expressive as hers. It looks like you're still using the lipsync animation code, but you're having her eyes stay focused on the viewer. How are you doing that?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      We use a postprocess blueprint for the eye focus locations. You can find an example here: discord.com/channels/1010548957258186792/1089932778981818428/1089940889192898681
      And for the animation we use the EPositive emotion, so it looks more expressive in our opinion.

  • @ffabiang
    @ffabiang 1 year ago

    Hi, thank you so much for this video, it is really useful. Can you share some facial idle animations for our project to play while the TTS->Lipsync process is running? Or do you know where we can find some?

    • @metahumansdk
      @metahumansdk 1 year ago +4

      Hi ffabiang, you can use a wav file without sound to generate a facial animation from our SDK, then use it in your project as an idle 😉

    • @ffabiang
      @ffabiang 1 year ago

      @@metahumansdk Hi, when I import an empty audio file (1 min long) and use the "Create Lipsync Animation" option, I get a facial animation that is almost perfect, but the MetaHuman's mouth opens continuously and moves as if he is about to say something. Is there a parameter that can fix that?

  • @corvetteee1
    @corvetteee1 1 year ago

    Quick question. How can I add an idle animation to the body? When I've tried it so far, the head comes off of the model. Thanks for any help!

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You need to add a Slot - Default Slot node between the ARKit input and the Blend Per Bone node, and blend through the root bone. Here is one discussion about it on our Discord server: discord.com/channels/1010548957258186792/1155594088020705410/1155844761056460800
      We also showed another, more involved way with state machines: ruclips.net/video/oY__OZAa0I4/видео.html&lc=UgzNwmwaQIB3hOhKE7F4AaABAg

  • @uzaker6577
    @uzaker6577 1 year ago

    Nice tutorial, very interesting and useful. I'm wondering, is there any solution for the ATL speed? Mine works slowly; it takes nearly 10 seconds to generate an animation.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      The delay highly depends on the network connection and the length of the sound.
      Can you share more details in our Discord community about the ATL/Combo nodes and the sound files that you are using in your project?
      We will try to help.

  • @GenesisDominguez-n1c
    @GenesisDominguez-n1c 13 days ago

    Hi! I am having problems with the blueprint, since nothing about MetaHuman SDK appears in the functions. Could you help me with that?

    • @metahumansdk
      @metahumansdk 12 days ago

      Hi!
      The Api Manager has been renamed to Lipsync Api Manager in the latest version of the plugin.
      Please try to call the plugin functions through that name.

  • @anveegsinha4120
    @anveegsinha4120 8 months ago +2

    2:12 Hi, I don't see the Create Speech from Text. I have added the API key as well.

    • @metahumansdk
      @metahumansdk 7 months ago

      Hi!
      Did you try it on a wav file?

    • @chBd01
      @chBd01 1 month ago

      @@metahumansdk Hello, is this only for version 5.1 and below, not 5.4? Thank you.

    • @metahumansdk
      @metahumansdk 1 month ago

      @chBd01 You can find the 5.4 version in the marketplace
      www.unrealengine.com/marketplace/en-US/product/digital-avatar-service-link

  • @rajeshvaghela2772
    @rajeshvaghela2772 1 year ago

    Great tutorial. I got a perfect lip sync, but the one issue is that the animation doesn't stop after the sound completes. Can you help me out?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Please share your blueprints on our Discord server discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io.
      You can also check out the included demo scenes in the UE content browser: All > Engine > Plugins > MetahumanSDK Content > Demo

  • @blommer26
    @blommer26 10 months ago

    Hi, great tutorial. At minute 05:07, when I tried to create a lipsync animation from my audio, UE 5.1.1 created the file (with the .uasset extension) but it did not show up in my assets. Any idea?

    • @metahumansdk
      @metahumansdk 10 months ago

      Hi!
      Can you please share more details? It would be great if you could attach the log file of your project (the path looks like ProjectName\Saved\Logs\ProjectName.log) and send it to us for analysis on our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io

    • @Ali_k11
      @Ali_k11 9 months ago

      I have the same problem.

    • @metahumansdk
      @metahumansdk 9 months ago

      Hi!
      @Ali_k11, can you give some details about your issue?

  • @TheOsirisband
    @TheOsirisband 3 months ago

    Thanks for posting the video, really inspiring. I just want to clarify: is it possible to make the MetaHuman speak in Bahasa Indonesia? I'm having some difficulties developing this kind of product and really need your help. Thanks in advance.

    • @metahumansdk
      @metahumansdk 3 months ago

      Hi! Azure and Google TTS standard voices are currently supported. As far as I know, Azure should have the language id-ID Indonesian (Indonesia).
      Also, you can use your own TTS and send the audio to the ATL (Audio To Lipsync) node.

  • @MilanJain-y4s
    @MilanJain-y4s 1 year ago

    Is it possible to display the finished digital human package, including its lipsync animation and perhaps GPT integration, on a mobile device? Would the rendering be client- or server-side?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! It depends on your solution. You can stream and render on a server, or you can make an app that uses the client device's resources.

  • @hardikadoshi3568
    @hardikadoshi3568 8 months ago

    I wonder if there is anything similar for the Unity platform as well? It would be great if support were available, as the avatars look great.

    • @metahumansdk
      @metahumansdk 7 months ago

      Hi! At the moment we are only working with Unreal Engine. We may consider other platforms in the future, but there are no specifics about other platforms yet.

  • @syedhannaan2974
    @syedhannaan2974 3 months ago

    I am trying to create a virtual voice assistant that is integrated with ChatGPT and talks to me with GPT-based responses. I have created the voice assistant and it works perfectly, generating voice and text output. Could you please tell me how to take this response output and convert it to lip-synced voice and animation on MetaHumans? I want to send the text/voice outputs generated by my Python code and convert them to lipsync. What are the communication methods, or is there a tutorial for this?

    • @metahumansdk
      @metahumansdk 3 months ago

      You can use Talk Component > Talk Text for your task; you only need to provide the text to generate the voice and animation.
      ruclips.net/video/jrpAJDIhCFE/видео.html

  • @arianakis3784
    @arianakis3784 8 months ago

    I say go to the moon for a walk, and as soon as I spoke, I called to return, hahhahaaaa

  • @realskylgh
    @realskylgh 1 year ago

    I have a question: when using ATL Stream, the moment the sound wave comes in, the digital human pauses for 3 or 4 seconds. It seems to be preparing the animation. How can I avoid this strange pause?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! We are working on the delays, but in the current version 3-4 seconds for the first chunk is a normal situation.

  • @asdfasdfsd
    @asdfasdfsd 1 year ago

    Why doesn't it show the 'Plugins' and 'Engine' folders like yours after I created a new blank project? If I need to add them manually, how and where do I get them?

    • @metahumansdk
      @metahumansdk 1 year ago

      You need to enable them in the settings of the Content Browser window.

  • @damncpp5518
    @damncpp5518 4 months ago

    I'm on UE 5.3.2 and the Play Animation node is not found. I only get Play Animation with Finished Event and Play Animation Time Range with Finished Event... They don't work with the Get Face node and the MetaHuman SDK combo output animation.

    • @metahumansdk
      @metahumansdk 4 months ago

      Hi!
      If I understand it right, you have a delay between the start of the animation and the sound.
      You can try the Talk Component, which is much easier to use and includes prepared blueprints for all runtime requests: ruclips.net/video/jrpAJDIhCFE/видео.html
      If you need more advice, please visit our Discord discord.com/invite/kubCAZh37D or send an e-mail to support@metahumansdk.io

  • @dyter07
    @dyter07 1 year ago +1

    Well, that "2000 years later" joke was good. I've been waiting just 3 hours now for the MetaHuman to load, LOL.

  • @dome7415
    @dome7415 1 year ago +1

    Awesome, thx!

  • @NiksCro96
    @NiksCro96 8 months ago

    Hi, is there a way to do audio input as well as text input? Also, is there a way for the answer to be written as text in a widget blueprint?

    • @metahumansdk
      @metahumansdk 8 months ago

      Hi!
      You can send a 16-bit PCM wave to the ATL/Combo nodes on the Lite, Standard and Pro plans; if you are on the Chatbot plan, you can use the ATL Stream or Combo Stream nodes.
      I also recommend the Talk Component, because it makes working with the plugin much easier. We have a tutorial about the Talk Component here: ruclips.net/video/jrpAJDIhCFE/видео.html

  • @SaadSohail-ug9fl
    @SaadSohail-ug9fl 5 months ago

    Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while the MetaHuman is talking, just like the talking MetaHumans in your video?

    • @metahumansdk
      @metahumansdk 5 months ago

      Hi!
      You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.

  • @ahmedismail772
    @ahmedismail772 1 year ago

    It's so useful and informative, thank you very much. I have a small question: can we add other languages to the list? I didn't find the EChat language enum.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can use most languages from Azure or Google TTS by their voice ID. You can find an example using the demo scenes included in the MetahumanSDK plugin here (updated): ruclips.net/video/cC2MrSULg6s/видео.html

    • @ahmedismail772
      @ahmedismail772 1 year ago

      @@metahumansdk The link leads me to a private video.

    • @metahumansdk
      @metahumansdk 1 year ago

      @Ahmed Ismail My bad, replaced it with the correct link: ruclips.net/video/cC2MrSULg6s/видео.html

  • @devpatel8276
    @devpatel8276 1 year ago

    Thanks a lot for the tutorial! I have a problem: the combo request has a longer delay. How can we do the audio-to-lipsync streaming (the dividing-into-chunks mechanism) using a combo request?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! To use the generated audio in parts, first you need to call the Text To Speech function and then call the ATL stream function.

    • @devpatel8276
      @devpatel8276 1 year ago

      @@metahumansdk And that can't be done with combo, right?

    • @metahumansdk
      @metahumansdk 1 year ago

      You can add the same pipeline but connect it to another head, so you can use several MetaHumans at the same time.

  • @juanmacode
    @juanmacode 1 year ago

    Hi, I have a project and I'm trying to do the lip sync in real time, but I get this error; does anyone know why? Can't prepare ATL streaming request with provided sound wave!

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Could you please specify how you are generating the soundwave and provide logs if possible?

  • @mwa8385
    @mwa8385 4 months ago

    Can we have step-by-step screenshots of it, please? It's very hard to follow the steps.

    • @metahumansdk
      @metahumansdk 3 months ago

      Please visit our Discord server discord.com/invite/kubCAZh37D or ask for advice by e-mail at support@metahumansdk.io

  • @skyknightb
    @skyknightb 1 year ago

    Looks like the server is off or out of reach for some reason. The API URL shows different errors when trying to access it, be it generating the audio file or using an already generated one to create the lipsync animation. Or is the API URL wrong?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi Skyknight!
      Can you tell our support a little more about the errors at support@metahumansdk.io?

    • @skyknightb
      @skyknightb 1 year ago

      @@metahumansdk I'm already getting support on your discord, thanks :D

  • @unrealvizzee
    @unrealvizzee 1 year ago

    Hi, I have a non-MetaHuman character with ARKit expressions (from Daz Studio). How can I use this plugin with my character?

    • @metahumansdk
      @metahumansdk 1 year ago

      You need to use your avatar's skeleton in the ATL node and the ARKit mapping mode.
      You can find example level blueprints in the plugin files included in every plugin version. In most of them we use a custom head.

  • @sanjay1994.
    @sanjay1994. 15 days ago

    The lip sync is only working for 5 seconds. It is not working for longer audio files.

    • @metahumansdk
      @metahumansdk 15 days ago

      Hi!
      The 5-second limit per generated animation is present only on the Trial plan.
      If you have a different plan, please email us at support@metahumansdk.io and we will check your account.

  • @rafaeltavares6162
    @rafaeltavares6162 1 year ago

    Hello, I followed all the steps, but my MetaHuman has a problem with voice playback. In short, when I enter the game my character starts talking and after a few seconds the audio starts again; it's as if there were two audio tracks on top of each other.
    I don't know if this has happened to anyone else.
    Can you give me some advice to solve this problem?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Is it possible to share the blueprint on our Discord server?
      You can also try using a state machine and synchronizing the face animation with the audio file, as shown in this video: ruclips.net/video/oY__OZAa0I4/видео.html

  • @phantomebo6537
    @phantomebo6537 10 months ago

    I generated the lipsync animation just like at @19:00 and the animation preview seems fine, but when I drag and drop it onto the MetaHuman face, the animation doesn't work. Can someone tell me what I am missing here?

    • @metahumansdk
      @metahumansdk 10 months ago

      Hi!
      Please make sure that you set the animation mode to Animation Asset and that your animation was generated for the Face Archetype skeleton with the MetaHuman mapping mode.
      You can find more details in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/audio-to-lipsync
      You can also ask for help on our Discord: discord.gg/MJmAaqtdN8

  • @TheOsirisband
    @TheOsirisband 3 months ago

    I'm stuck at min 1:10, when importing the MetaHuman into Unreal Engine via Bridge.
    I already downloaded the MetaHuman preset, but when I add the MetaHuman to UE 5, nothing happens. Can someone help me with this one?

    • @metahumansdk
      @metahumansdk 3 months ago

      Hi!
      Once you have downloaded the MetaHuman in Quixel Bridge, you need to export it to the project. After that, open the content browser in the project and find the MetaHumans folder, which contains the exported MetaHumans.

  • @k动画的肥虫
    @k动画的肥虫 1 year ago

    How to synchronize facial expressions with mouth movements? Could you provide a tutorial on this? Thank you

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can select facial expressions when generating from audio to lipsync (the speech-to-animation conversion stage), and they will be synchronized automatically.

    • @k动画的肥虫
      @k动画的肥虫 1 year ago

      Hi! Is the 'Explicit Emotion' option selected in the 'Create MetaHumanSDKATLInput' tab?

    • @k动画的肥虫
      @k动画的肥虫 1 year ago

      I selected 'Ehappy' and it works, but selecting 'Eangry' doesn't have any effect. Do you have any solutions or tutorials for this issue? Thank you!

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Can you please clarify: is the avatar not displaying the desired emotion, or is the avatar's expression not matching the chosen emotion?

  • @guilloisvincent2286
    @guilloisvincent2286 1 year ago

    Would it be possible to embed a TTS (like MaryTTS) or an LLM (like Llama) in the C++ code, to avoid network calls and keep it free?

    • @metahumansdk
      @metahumansdk 1 year ago

      You can find detailed usage instructions on the official websites of MaryTTS and Llama. It would be great if you could share your final project with us.
      As for avoiding the internet: currently our SDK works only with an internet connection, but you can generate a pool of facial animations for your project and then use those animations offline.

  • @charleneteets8227
    @charleneteets8227 1 year ago

    When I try to add an idle animation, the head breaks off to respond and won't idle with the body! Not sure how to proceed. It would be great if you had a video on adding an idle animation next.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You can try this video to fix the head: ruclips.net/video/oY__OZAa0I4/видео.html&lc=Ugz9BC

  • @skeras1171
    @skeras1171 1 year ago

    Hi,
    When I try to choose mh_dhs_mapping_anim_poseasset in Struct ATLMappingsInfo, I can't see this pose asset. How can I create or find this asset? Can you help me with that? Thanks in advance, keep up the good work.
    Best regards.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi @skeras!
      You need to enable showing Engine Content and Plugin Content in the Content Browser.

    • @skeras1171
      @skeras1171 1 year ago

      @@metahumansdk Done, thanks.

  • @luchobo7455
    @luchobo7455 1 year ago

    Hi, I really need your help: at 6:29 I drag and drop my BP_metahuman but it is not showing up in the blueprint. I don't know why.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You need to use the MetaHuman from the Outliner of your scene, not directly from the Content Browser.

  • @enriquemontero74
    @enriquemontero74 1 year ago

    Hello, one question: is this compatible with the ElevenLabs API? Or voice notes? Thanks.

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi!
      If they produce 16-bit wav files, you can easily use them with our MetahumanSDK plugin.

  • @honwe
    @honwe 1 year ago

    Hello, I followed your steps, but at 12:03 the sound ends while the mouth keeps moving and doesn't stop. Why?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Could you please clarify if you are experiencing any performance issues?

  • @Ysys-king
    @Ysys-king 1 year ago

    Hi, I want the MetaHuman to voice the text I entered in the field below, but only the sound works, no face animation. Can you help me solve it?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You can try our demo scenes included in the plugin content and compare the level blueprints. You can also join our Discord community and share more details about your issue: discord.gg/MJmAaqtdN8

  • @Jungleroo
    @Jungleroo 1 month ago

    Has anyone got this working with a Reallusion Character Creator rigged model? Did you have to separate the head? Which preset did you use?

    • @metahumansdk
      @metahumansdk 1 month ago

      Hi!
      They support the ARKit blendshape set after version 3.4, so you can just select the ECustom option in the ATL Mapping Mode settings; this should help.

    • @Jungleroo
      @Jungleroo 1 month ago

      @@metahumansdk OK, and under the ECustom option, what mapping asset and bone asset do I select? If I don't select any, the anim it creates is blank.

    • @metahumansdk
      @metahumansdk 1 month ago

      If possible, please share your Unreal Engine version and send us the project log file via Discord discord.com/invite/MJmAaqtdN8 or email support@metahumansdk.io.
      At the moment we can't reproduce the error, and the animation is created correctly for custom meshes without additional mapping options.

  • @sumitranjan7005
    @sumitranjan7005 1 year ago

    This is a great plugin with detailed functionality. Also, is it possible to integrate our own custom chatbot API? If yes, please share a video.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can use any solution: just connect your node with the text output to the TTS node and then use the regular pipeline with ATL.
      As an example you can use this tutorial, where we use the OpenAI plugin for the chatbot: ruclips.net/video/kZ2fTTwu6BE/видео.html

  • @funkyjeans8667
    @funkyjeans8667 8 months ago

    It only seems able to generate a 5-second lipsync animation. Am I doing something wrong, or is longer animation a paid option?

    • @metahumansdk
      @metahumansdk 8 months ago

      On the trial plan you can generate only 5 seconds of ATL per animation.

  • @qinjason1199
    @qinjason1199 1 year ago

    The wave plays fine in the editor, but I get this error after using the ATL input: -- LogMetahumanSDKAPIManager: Error: ATL request error: {"error":{"status":408,"source":"","title":"Audio processing failed","detail":"Audio processing failed"}} Where should I check?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi, Qin Jason!
      It looks like you are trying to use TTS and ATL in the same blueprint. This is a known issue and we are working on it.
      For now you can try to use the combo node or generate the animation manually in the project. Feel free to share more details on our Discord server discord.com/invite/MJmAaqtdN8

    • @qinjason1199
      @qinjason1199 1 year ago

      The TTS is accessed from another cloud service, but it is indeed in the same blueprint. Would splitting it into multiple blueprints avoid this problem?

  • @Ali_k11
    @Ali_k11 9 months ago

    When I try the SDK on UE 5.3 I get a "no tts permission" error. What's the matter?

    • @metahumansdk
      @metahumansdk 9 months ago

      Hi!
      TTS is available on the Chatbot plan only.
      You can find more details about the plans in your personal account at space.metahumansdk.io/#/workspace or in this message on our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1176956610422243458

  • @jaykunwar3312
    @jaykunwar3312 1 year ago

    Can we make a build (exe) using MetahumanSDK in which we can upload audio, the MetaHuman starts speaking, and the body plays an idle animation? Please help.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Sure, we released a demo project with all those functions yesterday and shared it on our Discord: discord.com/channels/1010548957258186792/1068067265506967553/1143934803197034637

  • @krishnakukade
    @krishnakukade 1 year ago

    I'm a beginner in Unreal Engine and don't know how to render the animation video. I tried multiple ways but nothing seems to work. Can anyone tell me how to do this, or point me to some resources, please?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You can use this official documentation from the UE developers docs.unrealengine.com/5.2/en-US/rendering-out-cinematic-movies-in-unreal-engine/

  • @leion44
    @leion44 1 year ago

    When will it be available for UE 5.2?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      We plan to release the MetahumanSDK plugin for Unreal Engine 5.2 this month.
      Our release candidate for UE 5.2 is available from this link: drive.google.com/uc?export=download&id=1dR30LXOwS1eEuUQ9LdQk9441zBTODzCL
      You can try it right now 😉

  • @ragegohard9603
    @ragegohard9603 1 year ago

    👀 wow !

  • @方阳-q6q
    @方阳-q6q 1 year ago

    Hi, I want to add some other facial movements when talking, like blinking etc. How can I do that?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can blend different facial animations in an animation blueprint. Also, at the Speech To Animation stage you can choose to generate eye and neck animations.

    • @方阳-q6q
      @方阳-q6q 1 year ago

      @@metahumansdk Hello, I want to read a WAV audio file from a certain path on the local computer while the game is running, and then use the plugin to drive the MetaHuman to play the audio with synchronized mouth shapes. I found a blueprint API, Load Sound from File. Can it read a file from a local path? Does the File Name in this API refer to the name of the file being read? And where is the file read from? Can you set the path of the file you want to read?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Yes, this function can read a local file path. In that parameter you must specify the path to your audio file.
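      For illustration, here is a minimal sketch in standard C++ (not the Load Sound from File node itself) of the same idea: reading a .wav from an absolute local path into memory.

          #include <fstream>
          #include <iostream>
          #include <string>
          #include <vector>

          std::vector<char> LoadFileBytes(const std::string& path) {
              std::ifstream file(path, std::ios::binary | std::ios::ate);
              if (!file) return {};                  // empty result signals failure
              std::streamsize size = file.tellg();   // opened at end: size in bytes
              file.seekg(0, std::ios::beg);
              std::vector<char> bytes(static_cast<size_t>(size));
              file.read(bytes.data(), size);
              return bytes;
          }

          int main() {
              // The path here is only an example of an absolute local path.
              auto bytes = LoadFileBytes("C:/audio/line01.wav");
              std::cout << "Read " << bytes.size() << " bytes\n";
          }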

    • @方阳-q6q
      @方阳-q6q 1 year ago

      Hello, I would like to ask a question. The animation generated from text only has the mouth animation. How can I combine this generated mouth animation with my other facial animations to make the expression more vivid? I want to fuse them at runtime, and what I don't understand is how to do this while the program is running.

    • @metahumansdk
      @metahumansdk 1 year ago

      You can try blending the animations that you want to combine.
      You can get more details about the blend nodes in the official Unreal documentation: docs.unrealengine.com/5.2/en-US/animation-blueprint-blend-nodes-in-unreal-engine/

  • @anveegsinha4120
    @anveegsinha4120 7 months ago +2

    I am getting error 401 no ATL permission

    • @metahumansdk
      @metahumansdk 7 months ago

      Hi!
      It depends on the tariff plan. If you are using the trial version, you are limited to generating a maximum of 5 seconds per animation.
      If you are on the Chatbot plan, you need to use ATL Stream, not regular ATL.
      Regular ATL is available on the Lite, Standard and Pro plans.

    • @BluethunderMUSIC
      @BluethunderMUSIC 7 months ago

      @@metahumansdk That's not really true, because I am getting the SAME error and I tried sounds ranging from 0.5 to 8 seconds. How do we fix this? It's impossible to do anything now.

    • @metahumansdk
      @metahumansdk 7 months ago

      Can you please send us the logs on our Discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io?
      We will try to help you with this issue, but we need more details about your case.

  • @honwe
    @honwe 1 year ago

    At 10:11 in the video, when I hover over it, it shows that the type of 'CurrentChunk' is not compatible with Index. I don't know what's wrong.

    • @honwe
      @honwe 1 year ago

      10:10

    • @honwe
      @honwe 1 year ago

      Hello, can you help me with this problem?

    • @ffabiang
      @ffabiang 1 year ago

      Hi, make sure CurrentChunk is of type integer, as well as Index.

    • @honwe
      @honwe 1 year ago

      @@ffabiang Thank you!

  • @kirkr
    @kirkr 1 year ago

    Is this still working? Says "unavailable" on the Unreal Marketplace

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! That was marketplace server maintenance; the plugin is now available to download.

  • @Silentiumfilms007
    @Silentiumfilms007 2 months ago

    I need to know how to capture face reactions and lip syncing via an Android phone, and also how to capture motion movements. Thank you.

    • @metahumansdk
      @metahumansdk 2 months ago

      Hi!
      Currently, our plugin only supports Windows and Linux operating systems.

    • @Silentiumfilms007
      @Silentiumfilms007 2 months ago

      @@metahumansdk will it work on every metahuman? And is it free?

    • @metahumansdk
      @metahumansdk 1 month ago

      Hi!
      You can use the plugin for free for two days after registering at space.metahumansdk.io/

  • @I-MM-O-R-T-A-L
    @I-MM-O-R-T-A-L 1 year ago

    I want the MetaHuman to start talking only when I'm close to him. How can I achieve that?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      You can try trigger events that start doing something when the trigger is activated. You can find more information in the Unreal documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/

  • @rachmadagungpambudi7820
    @rachmadagungpambudi7820 1 year ago +1

    how to give flashing mocap?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      We didn't use mocap; our plugin generates the animation from the sound.

    • @rachmadagungpambudi7820
      @rachmadagungpambudi7820 1 year ago

      I like your plugin 🫡🫡🫡👍 Thank you!

  • @aihumans.official
    @aihumans.official 1 year ago

    Where can I connect my Dialogflow chatbot? API key?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! At the moment our plugin uses GPT chat; you can try to connect any chatbot yourself using our integration as an example. It would be great if you shared the result with us.

  • @benshen9600
    @benshen9600 1 year ago

    When will the combo request support Chinese?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Currently we use Google Assistant only for the answers in combo requests, so it depends on Google's supported languages: developers.google.com/assistant/sdk/reference/rpc/languages
      I can't promise that we will add a new language soon, but we plan to make our solution friendlier for all countries.

  • @umernaveed6936
    @umernaveed6936 1 year ago

    Hi guys, I have been trying to figure this out for a week now. The problem is: how can we attach dynamic facial expressions and body gestures to ChatGPT responses? E.g. if the text returned is happy then the character should make a happy face, and if it is angry then an angry face. Can someone help me with this?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! Emotions are selected in a special drop-down menu when you create audio tracks from the text. Please try it.

    • @umernaveed6936
      @umernaveed6936 1 year ago

      @@metahumansdk Can you elaborate a little on this, as I am still stuck?

    • @umernaveed6936
      @umernaveed6936 1 year ago

      @@metahumansdk Hi man, can you guide me on how to create the emotions? I am still stuck on the facial expression parts and the explicit emotions when setting up the MetaHuman character.

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Sorry for the late answer.
      We shared a blueprint that can help focus the eyes on something here: discord.com/channels/1010548957258186792/1131528670247407626/1131993457133625354

  • @AlejandroRamirez-ep3wo
    @AlejandroRamirez-ep3wo 1 year ago

    Hi, does this support Spanish or Italian?

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi Alejandro Ramírez!
      You can use any language you want, because the animation is created from the sound.

  • @dreamyprod591
    @dreamyprod591 6 months ago

    Is there any way to integrate this on a website?

    • @metahumansdk
      @metahumansdk 6 months ago

      Sure, you can try to make a Pixel Streaming project, for example.

  • @ПопулярновБългария

    My head is detached now.

    • @metahumansdk
      @metahumansdk 1 year ago +1

      Hi Популярно в България!
      You need to use the Blend Per Bone node in the Face AnimBP to glue the head to the body when both parts are animated.

    • @Enver7able
      @Enver7able 1 year ago

      @@metahumansdk How to do this?

    • @Fedexmaster91
      @Fedexmaster91 1 year ago

      @@metahumansdk Great plugin, everything works fine for me, but I'm also having this issue: when playing the generated face animation, the head detaches from the body.

    • @Fedexmaster91
      @Fedexmaster91 1 year ago

      @@Enver7able I found this video on their discord channel:
      ruclips.net/video/oY__OZAa0I4/видео.html&ab_channel=MetaHumanSDK

    • @ПопулярновБългария
      @ПопулярновБългария 1 year ago

      @@metahumansdk thanks!

  • @BAYqg
    @BAYqg 1 year ago

    Unavailable to buy in Kyrgyzstan =(

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Please check that:
      1. Other plugins are available.
      2. If you try to use our site, the EGS launcher is started.
      3. The EGS launcher is updated.

  • @LoongKinGame
    @LoongKinGame 1 year ago

    I can't find the ceil.

  • @sumitranjan7005
    @sumitranjan7005 1 year ago

    Can we get a sample code git repo?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi! You can find plugin files in the engine folder \Engine\Plugins\Marketplace\DigitalHumanAnimation

    • @sumitranjan7005
      @sumitranjan7005 1 year ago

      @@metahumansdk Sample code of a project, not the plugin, to get started.

    • @metahumansdk
      @metahumansdk 1 year ago

      We also have demo level blueprints with several use cases included in every plugin version, so you can use them as a project.
      You can find them in the plugin's demo folder.

  • @commanderskullySHepherdson
    @commanderskullySHepherdson 1 year ago

    Was pulling my hair out wondering why I couldn't get the plugin to work, then realised I hadn't generated a token! 🙃

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Thank you for the feedback! A new version of the MetahumanSDK plugin is in moderation now, and it has more useful messages about the token. We hope these changes will make the plugin's behavior more predictable.

  • @mahdibazei7020
    @mahdibazei7020 5 months ago

    Can I use this on Android?

    • @metahumansdk
      @metahumansdk 5 months ago

      Hi!
      We don't support mobile platforms, but you can try to rebuild our plugin with kubazip for Android. It might work, but I can't guarantee it.

  • @리저드
    @리저드 2 months ago

    The video level in the way you show is like un3. sorry

  • @mohdafiqtajulnizam9421
    @mohdafiqtajulnizam9421 1 year ago

    Please update this to 5.3 ....please!?

  • @EnricoGolfettoMasella
    @EnricoGolfettoMasella 1 year ago

    The girls need some love dude. They look so sad and depressed :P:P...

  • @Silentiumfilms007
    @Silentiumfilms007 2 months ago

    5.4 please

    • @metahumansdk
      @metahumansdk 2 months ago

      Hi!
      You can find a test build for 5.4 in our discord discord.com/channels/1010548957258186792/1010557901036851240/1253377959700463647

  • @inteligenciafutura
    @inteligenciafutura 6 months ago

    You have to pay to use it; it doesn't work.

    • @metahumansdk
      @metahumansdk 5 months ago

      Hi!
      Can you please share more details about your issue?
      Perhaps this tutorial can help you: ruclips.net/video/cC2MrSULg6s/видео.html

  • @inteligenciafutura
    @inteligenciafutura 6 months ago

    Spanish?

    • @metahumansdk
      @metahumansdk 5 months ago

      MetahumanSDK is language-independent. We generate the animation from the sound, not from visemes.

  • @bruninhohenrri
    @bruninhohenrri 6 months ago

    Hello, how can I use the ATL Stream animation with an Animation Blueprint? MetaHumans have a postprocessing AnimBP, so if I run the raw animation it basically messes up the body animations.

    • @metahumansdk
      @metahumansdk 6 months ago

      Hi!
      Please start with the Talk Component; it is the easiest way to use the streaming options.
      Here is a tutorial about it: ruclips.net/video/jrpAJDIhCFE/видео.html
      If you still have some issues, please visit our Discord discord.gg/MJmAaqtdN8

  • @theforcexyz
    @theforcexyz 1 year ago

    Hi, I'm having a problem at 2:32: when I generate my text-to-speech, it does not appear in my folders :/

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi!
      Can you please check that your API token is correct in the project settings?
      If your API token is correct, please send us your log file on Discord discord.gg/MJmAaqtdN8 or mail support@metahumansdk.io

  • @v-risetech1451
    @v-risetech1451 1 year ago

    Hi,
    when I try to do the same things from the last tutorial, I can't see mh_ds_mapping in my project. Do you know how to solve this?

    • @metahumansdk
      @metahumansdk 1 year ago

      Hi V-Risetech!
      Please select Show Engine Content in the Content Browser settings; it should help.
      We also sent a screenshot for the same request on our Discord: discord.com/channels/1010548957258186792/1067744026469601280/1068066997675495504