Amazing! If you have two people speaking (in pre-recorded audio, so no option to record on separate tracks), is there a way to separate the SRT/tracks so that you can easily use the audio for one character and a second track for a second character?
Not currently. You would need to either a) make an audio file and SRT for each speaker, or b) have one with both voices and just delete the generated lip sync parts from the character who isn't saying it. B is probably easier. Hopefully we can make this better in the future!
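Option (a) above - one audio file and SRT per speaker - can be partly scripted. Here is a minimal sketch in Python, assuming each caption line is prefixed with an uppercase speaker name like "JIM: Hi there." (that naming convention is a hypothetical workflow choice, not something Character Animator or Premiere produces):

```python
import re

def split_srt_by_speaker(srt_text):
    """Split one SRT into per-speaker SRT strings, assuming captions
    are prefixed with an UPPERCASE speaker name like 'JIM: Hi there.'"""
    per_speaker = {}
    for block in srt_text.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue  # skip malformed cues
        timecode, text = lines[1], " ".join(lines[2:])
        m = re.match(r"([A-Z]+):\s*(.*)", text)
        if not m:
            continue  # no speaker prefix, leave this cue out
        speaker, caption = m.groups()
        per_speaker.setdefault(speaker, []).append((timecode, caption))
    # Renumber cues so each output is a valid standalone SRT
    return {
        spk: "\n\n".join(
            f"{i}\n{tc}\n{cap}" for i, (tc, cap) in enumerate(cues, 1)
        ) + "\n"
        for spk, cues in per_speaker.items()
    }
```

This only handles the captions; you would still need to mute or split the audio per speaker before running Compute Lip Sync.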
Hello Dave, sounds like an awesome feature. For my own channel I don't use any voice audio, is there any possibility I could just use a transcript only for the lip sync...?
Transcript based lip sync needs audio and the transcript to work. That being said, you could always just mute an audio track before exporting and no one would know!
Yes. However, there is a new lip sync feature coming soon that will let you select a language to base the lip sync model off of, which should help. No non English transcript support yet, but thanks for the feedback!
Currently it works best with English, but it should work with other latin character (a-z) languages as well. If there are things we can improve please let us know in the forums: adobe.com/go/chfeedback
Right now it is English only. Other latin alphabet (a-z) languages may work, but get varied results. Non-latin languages will not work. We hope to expand this feature to more languages in the future.
If you transliterate your script into roman/latin characters (a-z), it might work for any language, depending how close the phonemes are to English. Please let us know!
Currently you can try other languages as long as they have latin (a-z) symbols, but English is the most optimized language and YMMV. We hope to support more in the future!
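For latin-script languages with diacritics (Turkish ş and ç, German ü, and so on), the transliteration suggestion above can be sketched with the Python standard library. This is a rough pre-processing idea, not a Character Animator feature; note that letters with no ASCII decomposition (like the Turkish dotless ı) get dropped entirely, so the result still needs a manual pass:

```python
import unicodedata

def to_ascii_latin(text):
    # Decompose accented letters into base letter + combining mark,
    # then drop everything outside ASCII: "Müller café" -> "Muller cafe".
    # Caveat: characters with no decomposition (e.g. Turkish dotless i)
    # are removed entirely rather than approximated.
    decomposed = unicodedata.normalize("NFKD", text)
    return decomposed.encode("ascii", "ignore").decode("ascii")
```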
The automatic generation of a transcript from audio in PPro does currently require an internet connection. Once you have a transcript, the Compute Lip Sync command in Ch does not require a connection.
If you're having trouble, please share your audio file and transcript to adobe.com/go/chfeedback and we can try to help, and the data would be helpful for future iterations.
Those visemes can be so tedious to get right when working with a lot of dialogue, so this is a great addition!
Wow, I didn't know Premiere Pro had the ability to make timed captions from dialogue. This will make lip syncing so much better!
Thanks again... between this, full motion tracking and the motion library, there are so many new choices and opportunities. This is awesome!
Love it. This was the best part of the recent beta. Works like a charm.
I've been waiting for this release! This will save me so much time and give my animations more professional quality! Thanks!
Think I figured it out. I was doing import from the file menu, not from the import from the actual audio file. User error! Thanks for the support. I did notice a difference in the lip sync as well. Thanks again.
Question.. If you record your audio in Audition, but then whittle it down in Premiere Pro while video editing, how does one go about re-importing the edited audio back into Audition to export a new .AIF file, for use in Character Animator? I am trying to make a video with an animated character whose mouth is synced to the voice recording, all over top of a video in the background. I tried to do this but ended up with Character Animator not being able to sync up the transcript to the new audio. Make sense?
I would think you would be able to temporarily mute any audio tracks you don't want in Premiere so only the single character voice track is audible, then do the transcript and export based off of only that (AIF is a Premiere export option, so no need to go back to Audition). Character Animator lip sync doesn't do well with music, sounds, etc. Or if I'm misunderstanding the issue let me know!
@@okaysamurai - What happened was, say, my original audio file was 20 minutes long. Once I got into Premiere Pro and synced my gameplay video with my audio commentary, I trimmed them both down to remove verbal goofs, dead spots, etc. So now my audio file totals about 8 minutes but is in separate events in my Premiere Pro timeline. I exported that to Audition, but the timestamps didn't come through correctly, so when I re-exported it as an AIF to Character Animator, the transcript didn't match the audio file.
But you mentioned exporting as an AIF is possible in Premiere Pro, and that may scratch the itch. I'll look for that and see how it goes.
Thanks for being so responsive, as always! You rock!
Dave, Thanks for all the information on Character Animator. I just started a RUclips shorts channel last week using puppet maker and have over 9000 views in my first week, with zero experience as an animator. This is going to be fun! Thanks again! The channel name is Lemon Top.
Great work!
Now I can save a lot of time with this! cool feature and tutorial
Thank you, simply brilliant. Is it possible to add the subtitles to the timeline similar to what happens in Premiere? I see the viseme layer but having a layer for the subtitle would be very handy for animating and timing.
Sadly this isn't in CH...yet. Agreed, this would be great - for now, PR is your best bet.
How do I make that text input show up? I could not find it anywhere...
Select the audio file in the project panel. If you don't see it, you need to update via your Creative Cloud desktop app.
This was so useful - I'll be using it later! (Also had no idea about the Premiere Pro tip.)
I just tried this in the beta version on PC and it works there! I will try it out as it looks promising!
Very accurate and works great. However, it seems like a LOT of extra steps to go into Premiere, create the transcript, import to Character Animator, and then process it. Are there any plans to make this a feature that can be done using only Character Animator?
Agreed, that would streamline the process significantly. This is the v1 of this feature so I hope we can integrate something like that in the future.
I wish the "what is wrong with this one" feedback was clearer. I use this a lot, and often there will be a handful of sections it can't do. It's always the same "Compute Lip Sync failed: check that the audio matches the transcript" - which is of course what I already did meticulously while still in Premiere. I can't tell where it's best to make corrections so that it retains the right timing: in Premiere in the transcript phase? The subtitle phase? Or the SRT phase inside CH? I look in CH and it's definitely perfect, so there's nothing I can do to fix it except manually add phonemes or record myself over it. I wish there were more specific messages like "audio doesn't match transcript" or "transcript doesn't match audio", because sometimes it doesn't sound exactly like the word that's intended and it may be hung up on that. Just re-doing it produces the same result. Or even an option so that when it fails in certain places, it falls back to regular lip syncing just for those parts. That would produce a finished lip sync immediately and leave markers where it had to default to regular syncing. Seems like it could do that with a quick second pass using its own failure markers from the first pass.
Great detailed feedback, thanks - I will pass this on to the developers of this feature.
"Birdhouse in Your Soul, great song."
...As if I needed another reason to like you. 😂😂
We love this but run into “Compute Lip Sync failed: check that the transcript matches the audio”. Any tips? As far as we can tell, the transcript does match the audio!
Thanks for the feedback, I'll pass it on to the developers. I would try splitting up the audio and transcript into smaller chunks and see if that helps. If the problem persists, please upload a screenshot, video, audio file and transcript, or your File > Export > Puppet file to the official forums at adobe.com/go/chfeedback so we can take a closer look.
Hello, when I double-click the audio file which I imported into CH, I don't see the transcript white board under Audio > Properties... Does the audio have to be in AIFF format?
If you're not seeing the white box you may have an older version - try updating to 22.1.1 (it just came out a week ago).
@@okaysamurai saw it , thank you !!!
This tutorial is so helpful!!! Everything is so well explained and easy to follow!!!
Why can't I use triggers when using this method?
After the lip sync is computed, you should be able to use whatever triggers you want as usual. If that's not the case, please upload a screenshot, video, or your File > Export > Puppet file to the official forums at adobe.com/go/chfeedback so we can take a closer look.
How can I have the character's upper body sway while I use audio to sync? The audio works but the character looks stiff. Any suggestions?
In a normal rig setup the body is connected to the head, so it should move along with the head - you can make this more dramatic by adjusting the Face > Head Position Strength parameter, which will make more left/right sway movement.
@@okaysamurai Thanks! Quick follow-up: I can't animate it like how I move hands through click and drag? like, move the head side to side just by dragging it?
Yes, if you turned head tracking off and added a draggable handle instead you should be able to animate that way as well: ruclips.net/video/7pgm_u4VkQk/видео.html
@@okaysamurai Yessssss! exactly what i was looking for thank you!
Thanks for the Tutorial. Wonderful addition to Character Animator.
I'm running into an issue with the auto transcription. First off, when I put in a transcript it gives me the error that the transcript doesn't match the audio, but it does. And then if I import an SRT and choose "compute from audio and transcript," literally nothing happens. The SRT was made in Premiere and I haven't changed it at all. Any thoughts on what is happening?
Huh that's a weird one, I was just using an imported SRT yesterday and it worked for me. I would maybe try in shorter chunks (5-10 seconds of audio) and see if that works. But if the command isn't even doing anything that seems really strange. If the problem persists, please upload a screenshot, video, or your audio and transcript files to the official forums at adobe.com/go/chfeedback so we can take a closer look.
@@okaysamurai Thanks for the info and sorry for the late reply. I haven't had time to jump back into this. Hopefully I can get it work!
Is there a time limit to the lip sync generation? Mine stopped at 5 mins, no errors, but my audio is actually 6 and a half mins...
Hmmm I can't remember as my stuff is usually under 5 minutes! If this happens I would just split the audio track up and do the second part. If you need help let us know at adobe.com/go/chfeedback.
this is very helpful. this will save us so much time. thank you so much.💖
Every time I try this feature I get a warning that it has not worked. If I try to just sync from the recorded voice, it tells me to select a puppet and sound file. I do this and it keeps saying the same thing. It used to work until the last update a few days ago.
If it is telling you to select the puppet and sound file and you've selected both in the timeline and have all your behaviors on, this seems like a bug. If the problem persists, please upload a screenshot, video, or your File > Export > Puppet file to the official forums at adobe.com/go/chfeedback so we can take a closer look.
@@okaysamurai thank you, will do.
Sir, I tried my own audio, but the parameter window on the right side (record mode) just does not appear whatsoever.
You need to update in the creative cloud app to CH v22.1.1.
If I want to use 2 different (or even multiple) characters that are speaking to each other would I need to create a different scene for each character? Thanks
Nope, you can put two characters into the same scene if you want! Just drag them from the project panel into the timeline. Whichever one is selected will be the one you are recording with.
@@okaysamurai Thanks Dave, appreciate the reply 👍
Bravo! Thank you Dave. This made a big difference, especially with scenes containing singing.
When exporting to an SRT file, if you have a lot of "dollars" in your script, the transcript will show "$". For best viseme match, do you need to go through the transcript and spell out "dollars" for each "$"? And similarly, if you say "one thousand" the transcript will say "1,000". Is it safest to go through the transcript and spell out all your numbers? Thanks -
Great question and I'm not 100% sure, but yes, to be safe I would probably spell everything out.
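Part of that spelling-out can be automated with a small script. A rough sketch in Python - the `expand_dollars` helper and its tiny digit table are hypothetical, and anything it can't spell out is deliberately left for manual editing:

```python
import re

# Tiny digit-to-word table; a real pass would need a full number-to-words
# converter (or manual edits) for multi-digit amounts.
ONES = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
        "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}

def expand_dollars(text):
    """Rewrite '$5' as 'five dollars'; leave '$1,000' etc. untouched
    so they can be spelled out by hand."""
    def repl(m):
        amount = m.group(1)
        if amount in ONES:
            return ONES[amount] + " dollars"
        return m.group(0)  # not in the table, leave for manual editing
    return re.sub(r"\$([\d,]+)", repl, text)
```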
@@okaysamurai thanks. That was my assumption; I've already gone ahead and done that, just to be safe.
Hi over there! Thanks for your videos, I really appreciate them. One question: my transcript window doesn't appear. Do you know why?
Maybe an older version? Try updating to the most recent one and it should show up...
Now this is definitely something I will check out.
Dave, I am having problems with my audio and syncing my voice for different characters. Do I need different tracks for each character?
Yes, lip sync is going to use whatever audio is active in the timeline. So for two characters in the same scene, I would split up the audio (ideally in an audio app like Audition or other audio editors) into two tracks, one for each character. Then you can mute one track, select a puppet, and run the lip sync for them before moving on to the next puppet. Though a lot of times I will just do one character per scene and then composite them in Premiere Pro or After Effects.
I have a few sections in my video where I receive the 'Compute Lip Sync failed: check that the transcript matches the audio' error. When I check the transcript, it looks good. But I don't see a way to confirm or verify that it's correct. And there are no visemes created for that portion. How would you suggest fixing it?
Thank you Dave!
Yeah, currently the only way to test is to try the process all over again. I've found that longer transcripts can tend to run into more issues, so if I run into issues, I'll try computing shorter portions to see what works. Worst case scenario, you can re-compute the broken sections without a transcript - it just involves a lot of timeline / transcript management and is a little messy. This is the 1.0 of this feature, I hope we can help make it better in future releases!
I am brand new to Character Animator. In your RUclips about transcript-based lip sync, you mentioned it is better to type out numbers. Does it also help to type unique pronunciations? For example, instead of having the name Yvette in the script, should I type the name as it sounds: Eevet?
You know, that's a good question and I'm not sure. I would probably try the "correct" way and see what happens before making any adjustments. If I understand correctly, it's more about the pronunciation matching the text than if it's a real word or not.
Just looked at this with a .wav file as the audio source on a Windows PC. NO option for text import shows up.
Maybe this is for the beta version? Or Mac only?
The update just went out this morning - in the creative cloud app, try clicking "check for updates" in the upper right corner of the updates screen to see CH v 22.1.1.
Created a .srt file in Premiere, but Character Animator won't import it? I tried it in both the beta and most recent release. What am I missing? Thanks for your help.
Hmmm, that hasn't happened to me yet. Do you see an error message or anything? Please post the .srt if you can to adobe.com/go/chfeedback and we can try to take a closer look.
@@okaysamurai Thanks for the response. I am on Windows 11. It says: Error in Ch Version 22.2x18 - No files were imported because none were recognized as supported formats. I will send the .srt file to chfeedback as well. Thanks for your help.
For whatever reason when I go to import the srt file it does not show up in the choices. Character Animator seems to ignore the file. Weird
@@okaysamurai This is what also happens when I try to post the .srt file to chfeedback: The attachment's "scene - clay 2.srt" content type (application/octet-stream) does not match its file extension and has been removed.
Frustrated for sure. Windows 11, latest Character Animator build, and it just won't recognize a .srt file. I even created a .srt file in both Premiere and RUclips. Not sure where to get help with this. Thanks for any guidance you can provide.
Were you able to post on the chfeedback forums? I didn't see any messages there. If it won't allow for an SRT upload, please upload it to google drive or dropbox or something and post a link to the file so we can test it as well.
This apparently does not work in Windows 10. As noted below. No SRT file shows up to do and import. When I do shorter than 160 seconds and paste the text in all I get is "Compute Lip Sync failed: Check that the transcript matches the audio."
What do you mean by "no SRT file shows up to do and import?" You would need to generate an SRT via Premiere Pro or elsewhere (as shown at 6:00), and then click the import button above the transcript window to import the SRT. If the problem persists, please upload a screenshot or video to the official forums at adobe.com/go/chfeedback so we can take a closer look.
@@okaysamurai One of those days I don't feel that bright. I had the wrong folder is why it did not show up. Feel reeeeeellll smart. Sorry about that. Testing it out now. Thank you for being nice enough to reply.
@@okaysamurai By the way, any way to compute from just the transcript without the audio? This is a song with a lot of "noise" from the track, which I'm sure makes it much harder.
Currently no, it has to use the audio to try and match things up. Sometimes you can do some basic audio cleanup using the essential sound panel in Premiere Pro that can help.
@@okaysamurai I understand. Thank you for the info.
Is there any way to disable this? I'm using CH for Malay language videos. When I imported my wav file, the only option is to compute lip sync using audio and transcript. I think we should be able to proceed without a transcript? When I import my wav file without the transcript, it says 'no transcript for audio'.
Oops, found the solution already. Thanks Dave for the tutorial. Ch really changed my life!
Amazing. Thank you to all the team
How did you make that audio file, please? Thank you for the video.
You can record it in any audio program, like Adobe Audition.
really really love this new feature!
The SRT exported from Premiere is not working in Character Animator... I am getting unnecessary metadata with the export. Suggestions?
Example...
00:00:00,000 --> 00:00:01,568
Hi, this is Jim.
2
00:00:01,568 --> 00:00:03,837
In this video, I will talk about three
Looks like the SRT has some extra formatting. I would double check your export options from PR and try to turn those off (I've never seen that before).
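If the export options can't be changed, a quick cleanup script is another way out. This is a minimal sketch (the tag pattern is an assumption about what the "extra formatting" looks like, since exporters most often add inline HTML-style tags like <i> or <font>, or a byte-order mark); Character Animator expects plain SRT cues:

```python
import re

def clean_srt(text: str) -> str:
    """Strip HTML-style formatting tags (e.g. <i>, <font>) and a
    leading byte-order mark that some exporters add to SRT files.
    Cue indices, timestamps, and dialogue lines are kept as-is."""
    text = text.lstrip("\ufeff")                     # drop a UTF-8 BOM if present
    return re.sub(r"</?[a-zA-Z][^>]*>", "", text)    # drop inline tags

raw = "1\n00:00:00,000 --> 00:00:01,568\n<i>Hi, this is Jim.</i>\n"
print(clean_srt(raw))
```

Running the cleaned text back through a plain-text editor save (UTF-8, .srt extension) before importing into Ch usually avoids the parser choking on leftovers.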
I really love this feature in Adobe Character Animator, but there is one big issue I've been having: every time I render my video, the lip sync no longer matches the video. Please, someone help!
Sounds like a framerate bug. If it persists, please post a screenshot, video, or File > Export > Puppet (.puppet file) to adobe.com/go/chfeedback for more direct help.
Hi Dave! When I try to export my text transcription there is no option in my list for SRT format. Am I missing something in my Premiere Pro install?
Try doing it from captions instead of transcript.
I'm pretty sure this just changed my life!
I STILL listen to that song and love it!
Thanks for all the valuable information you share with us! I'm learning a lot in a very short time from your amazing videos. I'm trying to create a simple animation with nice lip sync, but I'm Turkish and my audio and transcript are in Turkish. I realized that neither Character Animator nor Premiere has any option for Turkish text and audio. Can anyone help me with an accurate Turkish lip sync method in Character Animator?
Unfortunately the transcript is currently English only, but the general mic or audio lip sync is trained on universal sounds and should work in any language. You may want to go into the app's audio preferences and adjust the sensitivity to make more/less mouths show up until you find the right balance. Hope that helps!
@@okaysamurai I'll check it again and find a nice balance on preferences, thanks for reply and help!
Amazing! If you have two people speaking (in pre-recorded audio, so no option to record on separate tracks), is there a way to separate the SRT/tracks so that you can easily use the audio for one character and the second track for a second character?
Not currently. You would need to either a) make an audio file and SRT for each speaker, or b) have one with both voices and just delete the generated lip sync parts from the character who isn't saying it. B is probably easier. Hopefully we can make this better in the future!
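For option (a), splitting one SRT into per-speaker files can be scripted if the transcript labels each line with a speaker name (a "Name:" prefix, which Premiere's speaker labels can produce). A minimal sketch, assuming that cue shape:

```python
def split_srt_by_speaker(srt_text: str) -> dict:
    """Split an SRT whose dialogue lines start with 'Name:' into one
    SRT string per speaker. Cue indices are renumbered per speaker."""
    blocks = [b for b in srt_text.strip().split("\n\n") if b.strip()]
    grouped = {}
    for block in blocks:
        lines = block.splitlines()
        timing = lines[1]                          # '00:00:00,000 --> ...'
        speaker, _, dialog = lines[2].partition(":")
        grouped.setdefault(speaker.strip(), []).append((timing, dialog.strip()))
    return {
        name: "\n\n".join(
            f"{i}\n{t}\n{d}" for i, (t, d) in enumerate(cues, start=1)
        )
        for name, cues in grouped.items()
    }
```

Each resulting SRT can then be paired with that speaker's (muted-down) audio export for Compute Lip Sync.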
This is incredible. Can’t wait for more updates as I’m an avid user!
This is great. Without the transcript does importing the audio give more accurate mouth motions than live recording in the scene?
Yep. Having used it now it feels wayyy more accurate. Way less false positives (mouths that don't match up) and much better timing. Totally worth it.
Hello Dave, sounds like an awesome feature. For my own channel I don't use any voice audio, is there any possibility I could just use a transcript only for the lip sync...?
Transcript based lip sync needs audio and the transcript to work. That being said, you could always just mute an audio track before exporting and no one would know!
Thanks for this, I have been waiting for it! So great!
Does it work with slang contractions like "ain't"?
It should!
Does it still support only English Dave?
Yes. However, there is a new lip sync feature coming soon that will let you select a language to base the lip sync model off of, which should help. No non English transcript support yet, but thanks for the feedback!
@@okaysamurai thank you so much Dave. Great news.
This is super helpful! thanks!
Looking something like this for a long time!
Thank you, but what about other languages? Can i do lips sync in Arabic for example?
Unfortunately only English is currently supported.
awesome feature! can't wait to try it out! thank you so much for you tutorial!
Is there any more languages can I add or it's limited only in English one??
Currently it works best with English, but it should work with other latin character (a-z) languages as well. If there are things we can improve please let us know in the forums: adobe.com/go/chfeedback
Can you do a skateboarding animation?
We have a skateboarder character here: adobe.com/go/ch_puppetmaker
This is going to save me days!
Thank You!!!
It's amazing. Thank you SO MUCH😀✋👍❤️
So exciting! Thank you dave!!
The transcript tool is great, I have an accent so it's awesome for accessibility.
Thank you so much Dave, a question: does it work with other languages, such as arabic?
Right now it is English only. Other latin alphabet (a-z) languages may work, but get varied results. Non-latin languages will not work. We hope to expand this feature to more languages in the future.
If you transliterate your script into roman/latin characters (a-z), it might work for any language, depending how close the phonemes are to English. Please let us know!
Does it work for all languages?
Currently it is English only unfortunately.
Great video! Thank you!
Thanks Dave
Can I use another language than English?
Currently you can try other languages as long as they have latin (a-z) symbols, but English is the most optimized language and YMMV. We hope to support more in the future!
Yay! Anything to save time on the visemes!
Great work!
Does transcript-based lip sync need internet to work?
I don't think so, but I haven't tried it without it yet!
The automatic generation of a transcript from audio in PPro does currently require an internet connection. Once you have a transcript, the Compute Lip Sync command in Ch does not require a connection.
@@CoSA_DaveS I have only the script(text) so no internet is needed in Ch, right? I know PPro need internet to work
@@trudiecutour5916 You need the script (text) and also the corresponding speech audio. Then no need for the internet.
@@CoSA_DaveS thank you
Nice feature, but in Dutch this all does not work: not in Premiere and also not in Ch. Hmm.
Yes, we hope to add broader language support in the future - thanks for the feedback!
Is this a reupload?
I talked about this feature in the "New Beta Updates" video a few months ago, but this is new and for the official release!
This would be amazing if it ever actually worked. Never once have I had an SRT not cause issue after issue with lip synching. They need to fix this.
If you're having trouble, please share your audio file and transcript to adobe.com/go/chfeedback and we can try to help, and the data would be helpful for future iterations.
Very well!
Sweet
This has to be the best thing to happen to me in my digital life, and it makes Adobe CA more irresistible.
Doesn't work in Spanish.
Yes, currently this is optimized for English - we hope to make it work for more languages in the future.
Auto-transcribe using OpenAI Whisper; it will support non-English languages very easily.
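To act on that suggestion you'd still need to get Whisper's output into SRT form for Character Animator. Whisper's transcribe() returns segments with 'start', 'end', and 'text' fields; a minimal sketch converting that shape to SRT (the segment format is Whisper's documented result shape, the conversion itself is an assumption about your workflow; the model call itself is omitted since it needs a model download):

```python
def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Turn Whisper-style segments (dicts with 'start', 'end', 'text')
    into an SRT string Character Animator can import."""
    cues = []
    for i, seg in enumerate(segments, start=1):
        cues.append(
            f"{i}\n{srt_timestamp(seg['start'])} --> "
            f"{srt_timestamp(seg['end'])}\n{seg['text'].strip()}"
        )
    return "\n\n".join(cues) + "\n"
```

Note the caveat from earlier in the thread still applies: Ch's Compute Lip Sync is optimized for English, so a non-English transcript may give mixed results even with a clean SRT.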
They Might Be Giants reference: NERD ALERT
🧐🧐🧐🤘🙌‼️