Microsoft's AI Lip Sync. MIND. BLOWN!

  • Published: 20 Sep 2024

Comments • 29

  • @TheThriVeFactory • 4 months ago • +4

    They are almost there. Since I found your channel last month, it looks like you're looking for the exact same thing I am, along with a lot of other storytellers. I've been keeping an eye on you because I know that once this THING we're all looking for in the AI world finally comes together, you'll be the one who talks about it. I just need the consistent character formula + proper lip syncing + environment control. I feel ElevenLabs has voice creation down, with the ability not only to create voices but now to use that voice with your own voice controlling it. Exciting times.

    • @HaydnRushworth-Filmmaker • 4 months ago • +1

      Absolutely! I’m right there with you!! No matter how I try, though, I can’t handle the whole spectrum of voices for the characters in my movie (even with audio tools like ElevenLabs), so I’ll still need actors’ help for now.

    • @PhilAndersonOutside • 4 months ago • +2

      @HaydnRushworth-Filmmaker I've noticed this as well. It takes a lot of effort for me to read a few lines into Eleven Labs, have them converted to a female voice (I'm male), and still have the result _not_ sound like me, because my natural inflections and speech patterns come through.

    • @HaydnRushworth-Filmmaker • 4 months ago

      @@PhilAndersonOutside agree entirely. We still need a female voice actor for authentic female inflections and delivery.

  • @MrBrady95 • 4 months ago • +3

    "Don't look at the camera" toggle button. 😁

  • @jcolabzzz • 4 months ago • +3

    I was rooting for EMO, but VASA-1 is just incredibly sophisticated.
    Microsoft and the major AI authorities are waiting for a proper digital signature system, and I think we need that.

    • @HaydnRushworth-Filmmaker • 4 months ago

      That’s a really good point. If there’s a solid system in place for digital signatures and some kind of traceability, then it allows Microsoft (in theory) to offload liability onto the creator. Job done. Hands washed (so to speak).

  • @heartshinemusic • 4 months ago • +1

    All these lip-sync platforms give us square examples, like this Microsoft one (and also the EMO platform, which looked promising a while back). God forbid they give filmmakers some aspect ratio options, like 16:9. I'm wondering when we can finally get our hands on these lip-sync features.

    • @HaydnRushworth-Filmmaker • 4 months ago • +1

      Absolutely. I think the hopeful thing is that there seems to be a race between multiple teams to introduce lip sync products, so this is where, ultimately, a free market economy and the influence of market demand should lead to worthwhile tools.

  • @user-Cyrine • 4 months ago • +1

    Love your videos so much! Can you make a tutorial video on FlexClip’s AI tools? Really looking forward to that!🥰

  • @ManifestTheMusic • 4 months ago • +2

    Great video. Thanks for the helpful info❤❤

  • @twilightfilms9436 • 4 months ago • +1

    I understand your frustration with img2img. I lost all my battles trying to make developers understand that models should be trained looking in all directions, but I gave up about a year ago… Now there are some open-source AI models where you can control the gaze direction, but you’ll need a coder to create an API to use them. Annoyingly enough, the model is being used by several free and paid apps, but only to redirect the gaze to the camera. The same model can do whatever you want with the eye direction. Other than that, don’t waste your breath, because there’s no way around it… even Sora has the same issue. (Typos courtesy of iOS.)

    • @HaydnRushworth-Filmmaker • 4 months ago

      That just adds insult to injury, but it’s good to know that the ability is clearly already available. We now need enough serious filmmakers to ask for it, then one of these AI video teams will spot an opportunity to get ahead and implement it in the next version of their tool.

  • @daveindezmenez • 4 months ago • +1

    To be really useful, considering your "beautiful girl" example, the AI should ask the user questions to narrow down what is sought.

    • @HaydnRushworth-Filmmaker • 4 months ago

      Agree entirely. The ideal project-wide settings would be selected from drop-down lists and checkboxes as well as free-text descriptions. As it happens, I finally got access to the beta version of LTX Studio, and it has all of those; it’s genuinely really impressive. I’m literally editing a long “first impressions” video about it right now.

    • @PhilAndersonOutside • 4 months ago • +1

    The future is going to be absolutely incredible. Future as in, early 2025.

    • @HaydnRushworth-Filmmaker • 4 months ago • +1

      Absolutely, and this is the right time to be catching the wave and developing skills so that we reach the limits of the tech sooner rather than later. Then, as new developments emerge, we’ll be better prepared to make the most of them, with a shorter learning curve and an established workflow.

    • @PhilAndersonOutside • 4 months ago • +1

      @HaydnRushworth-Filmmaker I agree 100%. I think it's a mistake for someone to wait for Sora, Stability, etc. to come out with their video models, or to wait for anything.
      The smart thing is to use what's out there now, even if you produce junk with it that you wouldn't show your best friend. It builds knowledge; it builds a foundation you can build on when future AI portals and apps release.
      This is why I do not consider myself an AI artist, or "expert", no matter what I learn. I am a "user" or _power user._

    • @HaydnRushworth-Filmmaker • 4 months ago

      @@PhilAndersonOutside I like that distinction, although it might interest you to know that the difference between an amateur and a professional is just a single penny. As soon as somebody pays you anything, you become, in that instant, by definition, a professional, or in other words, somebody who does something as a profession. An amateur, by contrast, does something because they love it. Literally, that's it. When you look at the background of the word amateur, it comes, via French, from the Latin amare, to love. It literally signals a motivation of love, not money.

    • @PhilAndersonOutside • 4 months ago • +1

      @@HaydnRushworth-Filmmaker I guess I'm a professional then!
      I often consider someone a "professional" when they are earning their primary income doing something. Then again, if someone is unemployed and makes $0.01 selling an AI creation, that technically makes them a professional, I suppose!
      To be honest, I love doing this too.

    • @HaydnRushworth-Filmmaker • 4 months ago • +1

      @@PhilAndersonOutside Of course. Bear in mind that you could be earning $1 million per year, but if your lifestyle is costing you $1.1 million per year, you'd still go another $100k into debt every year.

  • @darkman237 • 4 months ago • +1

    GAAHH!!!!

  • @robertruffo2134 • 3 months ago • +1

    Hey man, maybe get back to using real cameras. There will never be money in something extremely easy to do that millions of other people have access to, including in countries where the average salary is $100 a week. There will never be a career that pays in "A.I. filmmaking". Maybe you're trying to scam a few bucks here off people with delusional dreams, maybe you're going to sell a course or something, but that's about all that can happen with your "skills" (which anyone can master in less than a day; writing prompts is EASY to master). Now, maybe A.I. will reduce the value of traditional filmmaking to $0 (I doubt it), but the value of A.I. "filmmaking" is already $0 (you can't even copyright A.I. "work"), so...

    • @HaydnRushworth-Filmmaker • 3 months ago

      Hey, thanks for the feedback. I actually agree with you.
      I've joined a local actors' workshop so that I can find real human actors to work with, because from what I can see of AI tools, the best way to get authentic, human-like results will be to focus heavily on video-to-video instead of text-to-video or image-to-video. I actually made a video about the camera and filming gear you'll still need in order to get the best results with AI as the technology continues to evolve.
      In terms of not making money from something that anybody can do, I agree 1000%. From where I stand, I see a bucketload of AI video content on social media that is really only getting traction because of the novelty factor of the AI tools, and you're right, anybody can use the avalanche of "free" beta tools to create that kind of content. I believe the most enduring content will always be built around a solid core idea or story, where the AI tool becomes invisible and just sits in the background. As with a great story, nobody cares what computer you typed it on or which word processor you used, because it's just a creative tool. AI video needs to become the same thing: a creative tool for human creatives to tell stories and share big visions and ideas that they would otherwise struggle to bring to life.
      In terms of the low-income challenges that most of the world faces, I couldn't agree more. In fact, that's the reason I love the possibilities of AI: it allows a guy like me, struggling to get by with a family of 5 in a 2-bedroom apartment in Yorkshire (not Hollywood, and not even London), to have half a chance of bringing a big-vision story to life and sharing it with the world. I love the democratisation of these kinds of tools precisely because of how empowering they are for little guys in obscure places like me.
      As it happens, I also share your distaste for using YouTube as a platform for selling online courses. I've talked about this with my wife at some length, and even though the various YouTube "gurus" would tell me that's what I need to do to make some decent money (so that I can move my family into a family home with a garden for my children and at least three bedrooms), it just doesn't feel like the right thing for me. Instead, I'm going to figure out ways to sell my story ("Telepathy SUCKS!"... you can buy the ebook on Amazon, though I haven't even promoted that yet). I'm going to try a visual e-book, an audiobook, a visual audiobook (stills and then video clips played whilst the story is being read), and I'll eventually start uploading scenes from the movie to YouTube and try to monetise that (a few years from now, when the AI really catches up).
      The one thing I definitely agree with you on is that if something is easy to master, it will never make me money. But as I'm discovering with AI tools, it's easy to create one single shot, yet it's incredibly difficult to create a coherent, connected narrative with fluid continuity and authentic human movement, facial expressions and voice intonation. That level of professionalism will, I believe, always be a highly skilled filmmaking endeavour, and that's why I'm learning as much as I can as quickly as I can. I'm not at all a highly skilled user, but then, at this stage, nobody is. So the real skill is to be willing to learn as fast as possible.
      I love your passion and enthusiasm. Keep it up :-)

  • @hukes • 4 months ago • +1

    AI-generated content is the worst best idea.