TUTORIAL - Runway Multi-Speaker Lip Sync

  • Published: Sep 10, 2024

Comments • 48

  • @massimilianosessantini671
    @massimilianosessantini671 27 days ago +10

    I use Runway and Hedra for lip sync. The Runway team MUST take a look at Hedra because it is awesome. Runway lip sync works better with an animated video of a person; it is too static with a single pic (you must have a paid account). Hedra uses a single pic of a person and gives you back full body and facial animation with lip sync and AI voices (512x512 resolution, free account).

    • @aivideoschool
      @aivideoschool  27 days ago +2

      Totally agree with this!

    • @RyanVPatterson
      @RyanVPatterson 23 days ago

      Can Hedra export in different resolutions? I could only export square hmm...

  • @Smirk_Station
    @Smirk_Station 28 days ago +5

    Just got the unlimited! “Stay up til 3 generating videos”. You got me!!

    • @aivideoschool
      @aivideoschool  28 days ago +3

      Unlimited! I'm jealous. You are definitely not going to be getting any sleep any time soon.

    • @DirectorDanielMason
      @DirectorDanielMason 28 days ago

      @@aivideoschool Actually, I have an idea for you… Let me know if I should quickly post a zoom link for sometime today or tomorrow. I would delete it from the comments after that, but let me know if there would be a good time for a quick zoom with screen sharing:)…

    • @aivideoschool
      @aivideoschool  27 days ago +1

      There's an email button on my YT channel page; email me there with the info.

  • @DirectorDanielMason
    @DirectorDanielMason 28 days ago +3

    Since people are facing forward, you could probably split screen and intercut the different jury members, so that more than four talk, by recording it twice (for now, cropping out the first 4 so the next 4 become the “new” first 4).
    Then splice the shots together in the order you want, blending the portions of the shots together in a video editor or any program with compositing (perhaps something free like the basic DaVinci Resolve:).
    Maybe when blending you can take reaction shots from different takes at different moments, so everyone stays lively (feathering the edges between characters to blend the whole scene together and sell it as real).
    Very good tutorial!:)

    • @aivideoschool
      @aivideoschool  28 days ago +1

      I think this split screen approach is a great tip!
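
    A minimal sketch of the crop-and-intercut idea above, in Python with moviepy 1.x (an assumption; the filenames are placeholders for two pre-rendered Runway takes, each framed so the jurors you want sit in one half of the frame):

      # Combine the left half of take 1 with the right half of take 2 so more
      # jurors can appear than a single four-person lip sync render allows.
      from moviepy.editor import VideoFileClip, clips_array
      from moviepy.video.fx.all import crop

      take1 = VideoFileClip("jury_take1.mp4")  # placeholder: first four jurors lip-synced
      take2 = VideoFileClip("jury_take2.mp4")  # placeholder: re-render with the "new" first four

      w = take1.w

      # Keep only the left half of take 1 and the right half of take 2.
      left = crop(take1, x1=0, x2=w // 2)
      right = crop(take2, x1=w // 2, x2=w)

      # Butt the halves together; feather/blend the seam in your editor afterwards.
      # Note: audio from both takes gets mixed, so swap in your dialogue track if needed.
      combined = clips_array([[left, right]])
      combined.write_videofile("jury_split.mp4", codec="libx264", audio_codec="aac")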

  • @wholeness
    @wholeness 27 days ago +3

    That animation was a winner, been waiting for someone to nail that. Thanks

    • @aivideoschool
      @aivideoschool  27 days ago

      Thanks! I really like that style and it actually works with lip sync

  • @CineFlavor
    @CineFlavor 16 days ago +1

    I just uploaded a short test of me morphing into Tony Soprano and then using LivePortrait to animate the face exactly how I wanted it to. And it works 10x better and looks more realistic. But the only issue is that you have to track it to your face in After Effects or other software.

    • @aivideoschool
      @aivideoschool  15 days ago +1

      Nice work, dropping the link to your test here: ruclips.net/user/shortsNYs4pOcg6AY?si=Upv-Mhsg7VGZGldA

  • @RyanVPatterson
    @RyanVPatterson 23 days ago

    I love that Luma + Runway walking/talking scene! I totally get how that's an aha moment for you. Appreciate all your videos and enthusiasm 🙏

    • @aivideoschool
      @aivideoschool  21 days ago +2

      Thank you! It's genuinely exciting to see how much has changed in less than two years. I feel like Flux to Luma/Kling/Gen-3 to Lip Sync is such a good workflow right now.

  • @taoprompts
    @taoprompts 26 days ago

    oh wow, I didn't realize it could do multiple people, that's pretty cool

    • @aivideoschool
      @aivideoschool  26 days ago

      By the way, I love your channel, keep it up!

  • @AdamReese-wl1fz
    @AdamReese-wl1fz 25 days ago

    Looks like AI FILM tools are finally catching up... But still in early stages. I assume in 3-6 months we will have more control over camera, facial animations, etc.

  • @NEURAMos
    @NEURAMos 27 days ago +4

    Lip Sync is a cool tool, but it's a shame it only works with human faces.

    • @aivideoschool
      @aivideoschool  27 days ago

      I know, you'd think it wouldn't be too hard to adapt it to alien/humanoid type creatures at least.

  • @Bartetmedia
    @Bartetmedia 27 days ago +1

    It's a great tool to start with but they need to make it better cause it's a little glitchy.

  • @FilmSpook
    @FilmSpook 17 days ago

    Very Awesome, thanks Brother!! Just subscribed!!

    • @aivideoschool
      @aivideoschool  16 days ago +1

      awesome, thank you for being here!

  • @TashaCaufield
    @TashaCaufield 27 days ago

    Great episode. Always nice to see a tutorial on some of the lesser-talked-about but massively important features on these apps 🤗

    • @aivideoschool
      @aivideoschool  27 days ago +1

      Thanks Tasha! Once they figure out how to get those profile shots talking, it's going to be a different game. Can't wait to see what you do next!

    • @TashaCaufield
      @TashaCaufield 27 days ago

      @@aivideoschool Thanks! I took a meeting today with an app working on over the shoulder lip syncs, etc, so hopefully things continue to improve at a rapid pace 🤞🏼😊

  • @ScullyPop
    @ScullyPop 20 days ago

    Great channel you have!

  • @shireensingh2834
    @shireensingh2834 27 days ago +1

    I didn't know about this feature. Thanks for making this video. A question: can we upload our own voice in it, like the voice cloning feature? Can we do that?

    • @aivideoschool
      @aivideoschool  27 days ago

      You can upload your own voice like I did at 00:08, but I believe you need to do the voice cloning in ElevenLabs. You are able to do voice-to-voice in Runway, so you could speak with your inflections etc. and transfer it to sound like one of their voices.

  • @justshredit7952
    @justshredit7952 1 day ago +1

    Is there a way to lip sync to voice tracks you’ve recorded already? Thinking of using it to make an avatar for a speaker in a film. Any suggestions?

    • @aivideoschool
      @aivideoschool  1 day ago +1

      You can upload audio files in Runway and other lip sync tools to do exactly that. The lip sync doesn't have to use AI-generated voices.

    • @justshredit7952
      @justshredit7952 15 hours ago

      @@aivideoschool thanks! Any avatar speaking apps out there? Soup to nuts replace a talking head with an avatar?

  • @Caryson122
    @Caryson122 26 days ago

    Nicely done!

  • @snip-snap-a-snipy-to-the-snap
    @snip-snap-a-snipy-to-the-snap 27 days ago +1

    Could you make a song video with just providing an mp3?

    • @aivideoschool
      @aivideoschool  27 days ago +1

      I believe you need a separated vocal track for best results. I tried this a month or so ago with another lip sync tool called Hedra and I had to separate the vocals from the music. CapCut has a great Vocal Isolation filter but it's a Pro feature.

    • @snip-snap-a-snipy-to-the-snap
      @snip-snap-a-snipy-to-the-snap 27 days ago

      @@aivideoschool I have the clear vocals in a separate file. Thanks, I'll try it!
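
    A rough sketch of the vocal-separation step mentioned in this thread, using the open-source Spleeter library as an assumed free alternative to CapCut's Pro filter ("song.mp3" is a placeholder):

      # Split a full mix into vocals + accompaniment, then feed the vocals
      # stem to the lip sync tool for cleaner mouth tracking.
      from spleeter.separator import Separator

      separator = Separator("spleeter:2stems")          # vocals + accompaniment model
      separator.separate_to_file("song.mp3", "stems")   # writes stems/song/vocals.wav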

  • @BabylonBaller
    @BabylonBaller 27 days ago

    If it can't handle a slightly angled face like at 14:15, then it's not ready for filmmakers just yet, as the over-the-shoulder shot is the core of storytelling and this may not get that right. However, we're headed in a positive direction for sure. Another year or two and wham

    • @aivideoschool
      @aivideoschool  27 days ago

      Totally agree. A few months ago I tried an establishing shot with two people then cut to each of them in an over the shoulder style shot. Looking at it now, it's amazing how much better the lip sync is. But I think the over-the-shoulder shot might be one "cheat" for filmmaking with the current limitations. ruclips.net/video/nL9UowfT0PE/видео.htmlsi=p7T4JqO36_vFuYRh&t=557

    • @BabylonBaller
      @BabylonBaller 27 days ago +1

      @@aivideoschool I started following you some time ago with the same interest in making stories by leveraging AI, but it didn't take long for me to realize that the technology just wasn't there yet. So I turned in a completely different direction and decided to dive deep into learning 3D animation in the likes of iClone and Unreal Engine. My gamble was that by the time I master these tools, AI will have made enough advancements that I'll be able to combine the two to get perfectly controllable characters, environments and camera angles while extracting the realism from generative AI. I have faith those two will meet soon and by then I will have everything ready on my side 🚀

  • @jasiraslam
    @jasiraslam 27 days ago

    Great video!

  • @MettameowChannel
    @MettameowChannel 25 days ago +1

    Hi, I love your video, but listening to your narration is a bit distracting since your voice seems to be unbalanced, coming out more on the right side of my headphones... So other than that, pls keep up the great work 👍

    • @aivideoschool
      @aivideoschool  25 days ago +1

      Thanks for pointing this out, I didn't notice when I was editing on my laptop speakers but I just looked in CapCut and the left channel is a little lower than right. This was the first audio I recorded with a Vocaster I recently got so it's probably a setting or dial I need to adjust there. Thanks for the heads up, I'll keep an eye out on the next one!

    • @MettameowChannel
      @MettameowChannel 25 days ago

      @@aivideoschool Glad it was helpful to you. I know it's not easy to make a video but I'm thankful for your hard work 🙏

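    A quick sketch of how the left/right imbalance discussed above could be checked and corrected in post, assuming pydub is installed ("narration.wav" is a placeholder); adjusting the Vocaster's input gain remains the proper fix at the source:

      # Measure per-channel loudness and raise the quieter side to match.
      from pydub import AudioSegment

      mix = AudioSegment.from_file("narration.wav")
      left, right = mix.split_to_mono()
      print(f"left: {left.dBFS:.1f} dBFS, right: {right.dBFS:.1f} dBFS")

      # Boost the left channel by the measured difference, then re-interleave.
      left = left.apply_gain(right.dBFS - left.dBFS)
      AudioSegment.from_mono_audiosegments(left, right).export("narration_balanced.wav", format="wav")
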
  • @sachindatt4045
    @sachindatt4045 27 days ago

    As good as it is, it still can't do animal faces talking. If it does that, then it will be something! Otherwise, Pika and Hedra are also really good at human face lip sync. I can't think of any scenario where more than two people will be talking in a single camera shot without a change in camera angle. It's definitely a good display of capability, but it does not have much practical application in filmmaking.
    Even if you are able to show multiple people talking in a single frame without switching camera, it will look very unreal and mechanical. Only one scenario looks legit, that is two people walking and talking facing the camera in the same frame. Other than that there's not much practical application.
    What is really needed is talking while turning the head, and animals talking.

    • @aivideoschool
      @aivideoschool  27 days ago

      100% to all of this. If you could control camera movement and have characters talking in profile, I could see how that might work for a scene with multiple speakers but we're probably a year or two away for that. (Watch it come out in three weeks now that I said that)

  • @WanderlustWithT
    @WanderlustWithT 12 days ago

    Runway's lip sync is absolute garbage. They really need to invest more in improving it, because Gen-3 is so damn good; it's a shame the lip syncing aspect is very lacking and def not ready for anything that looks passable. Still images look extremely robotic, let's not pretend that it looks good. Hedra is miles ahead of them. If Hedra can release lip sync on video, it would be awesome.