Runway Act One - testing performance capture with the crazy new magic

  • Published: 24 Nov 2024

Comments • 29

  • @upscalednostalgiaofficial
    @upscalednostalgiaofficial 29 days ago +1

    With Viggle, what I usually do is enhance the video using Krea AI. You get additional detail in the character, and it blends the character with the background well. It sometimes fixes the jittery clips from Viggle. After that, I apply frame interpolation using Flowframes to convert it to a 60fps clip. If you lose the identity of the character, I usually do a faceswap using either Roop Unleashed or FaceFusion.
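
A rough sketch of the interpolation step in that workflow, for reference. It assumes ffmpeg is installed and uses its minterpolate filter as a stand-in for Flowframes (which wraps RIFE and generally gives cleaner results); the file names are placeholders.

```python
import subprocess

def interpolate_to_60fps(src: str, dst: str) -> None:
    """Motion-interpolate a jittery clip (e.g. a Krea-enhanced Viggle export) up to 60 fps."""
    subprocess.run(
        [
            "ffmpeg", "-y", "-i", src,
            # mci = motion-compensated interpolation, which smooths frame-to-frame jitter
            "-vf", "minterpolate=fps=60:mi_mode=mci",
            "-c:v", "libx264", "-crf", "18",
            dst,
        ],
        check=True,
    )

interpolate_to_60fps("viggle_enhanced.mp4", "viggle_60fps.mp4")
```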

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago

      I have used Krea for this in the past too, and it does work well. The 10 second limit is a pain, but the results are great. I hadn't heard of Flowframes. Thanks, I'll check it out!

    • @armondtanz
      @armondtanz 28 days ago

      I used to make stuff in Unreal. I joined some intensive courses taught by people who had worked on movies. They ALWAYS said right at the beginning to work in 24fps when making movies. Maybe too many frames gives the look a strange, artificial vibe.

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago

      @@armondtanz You're right, but the reason it's a good idea to make Viggle footage into 60 fps is that it smooths out some of the jittering that Viggle causes when it outputs a video. Then you can use that 60 fps video in a 24 fps timeline to get more consistent motion.
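
As a minimal illustration of that last step, and assuming the same ffmpeg-based setup as the sketch above, conforming the smoothed 60 fps clip to a 24 fps timeline is just a frame-rate filter (an NLE does the equivalent when you drop the clip onto a 24 fps sequence).

```python
import subprocess

# Resample the interpolated 60 fps clip to 24 fps. The interpolation has already
# evened out the motion, so the decimated result looks steadier than Viggle's raw output.
subprocess.run(
    ["ffmpeg", "-y", "-i", "viggle_60fps.mp4", "-vf", "fps=24",
     "-c:v", "libx264", "-crf", "18", "viggle_24fps.mp4"],
    check=True,
)
```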

    • @armondtanz
      @armondtanz 28 days ago

      @@hyperbolicfilms Oh ok, I'll have to check out Viggle.
      Did you ever see the workflow of the guy who made the viral clip of the Joker walking on stage?
      He put it through Comfy.
      I'm a complete noob when it comes to ComfyUI. It looks insane.
      His end result was so polished. You could see textures and shadows in the clothes that were not there in the original Viggle version.

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago

      @@armondtanz I haven't seen the Joker one, but Eric Solorio did a Deadpool video with Viggle and Comfy that came out really well.

  • @stevensteverly
    @stevensteverly 29 days ago +1

    What's the max resolution like? You could make the shots much more dynamic with a simple pan or camera shake...
    Also, is there an option to have it render without the background (i.e. as an alpha)? If so, I can see this being a decent tool for some indie people. If not, then it's kinda meh.

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago

      The resolution is 1280x768. There is no option to do any camera movement, so you would have to add that in post-production (see the pan sketch below). The background is limited to whatever is in your input image, so no alphas.
      It's a step in the right direction, but not a magic bullet.
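
Since the camera is locked, one way to fake the pan mentioned above in post is to crop a moving window out of the 1280x768 frame and scale it back up. A minimal sketch assuming ffmpeg and placeholder file names (a roughly 10-second clip panning left to right across a 90%-width window):

```python
import subprocess

CLIP_LEN = 10  # seconds; assumed duration of the Act One render

# Slide a 90%-size crop window from left to right over the clip, then scale
# back to the native 1280x768 so the shot stays full-frame.
pan_filter = (
    f"crop=w=iw*0.9:h=ih*0.9:x='(iw-ow)*t/{CLIP_LEN}':y=(ih-oh)/2,"
    "scale=1280:768"
)
subprocess.run(
    ["ffmpeg", "-y", "-i", "act_one_output.mp4", "-vf", pan_filter,
     "-c:v", "libx264", "-crf", "18", "act_one_pan.mp4"],
    check=True,
)
```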

  • @LeonGustin
    @LeonGustin 1 month ago +1

    See, now they need to combine Act One with vid2vid. Not to mention vid2vid needs to support at least a 1-minute generation.

    • @LeonGustin
      @LeonGustin 1 month ago +1

      Amazing work, love the perseverance in getting your idea to reality.

  • @MabelYolanda-c9i
    @MabelYolanda-c9i 29 days ago +2

    Run Viggle through Krea and you’ll be amazed…..

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago

      I did that a few weeks ago, and it did give amazing results. The 10 second limit in Krea is a bit of a bottleneck, but it definitely gives great and consistent results.

  • @JayJay3D
    @JayJay3D 28 days ago +1

    I may be wrong, but don't Hedra and Live Portrait do the same??

    • @hyperbolicfilms
      @hyperbolicfilms 28 days ago +1

      Hedra uses audio to automatically animate a photograph, but you don't get control over how it moves the face.
      Live Portrait is similar, but the results of Act One are much better. With Live Portrait, some face movements add jitter to the face.
      Act One also seems to work well on stylized faces, which I don't think is the case for Live Portrait. At least I can't remember seeing any results of stop-motion style characters.

    • @JayJay3D
      @JayJay3D 28 days ago +1

      @@hyperbolicfilms Cheers for the reply. It'll be interesting to see the coming updates from Viggle, Hedra and possibly Live Portrait - lots of competition with AI tools now :D

  • @HumanOpinions-bz9ky
    @HumanOpinions-bz9ky 1 month ago +1

    That's what it did to me. It can't recognize a human face. I was a bit disappointed. Not to be greedy, but I'm looking forward to the day we can move our head, move our arms around, and even hold props. Only THEN... a Vid Jedi will you be.

    • @armondtanz
      @armondtanz 28 days ago +1

      The ultimate would be a Gaussian splat type scenario where it's almost like 3D software. You can pan out of your scene and see your 3D world.
      I think Midjourney are launching it, or looking to.
      They said their video gen is ready, but they want to improve it.
      They also talk of a 3D environment, so maybe that's the future.

  • @rodrigobarrosempreendedor
    @rodrigobarrosempreendedor 25 days ago

    Congratulations on the video.
    Questions:
    1. 10 credits per second is very expensive. On the UNLIMITED plan it should be possible (as the name says) to create in an unlimited way, right?
    2. Can I upload a pre-recorded audio clip for the character to speak, or does it have to be my own voice directly?
    3. If I record my voice in one language (for example English), can I change it to Portuguese in Runway itself, or will I have to take it to ElevenLabs later and change it?
    4. Because if I take it to ElevenLabs and change the language, then I'll need another AI to do the lip sync, right?
    Congratulations again on the video!

    • @hyperbolicfilms
      @hyperbolicfilms 25 days ago

      1. In theory. I think they slow you down after a certain number of credits.
      2. You have to upload a video of someone acting. It's essentially motion capture for the face/head.
      3. I don't think Runway has any translation functions.
      4. If you want to take a photo and an audio clip and make a talking head, there are other tools that do that. Kling does it indirectly. Hedra is probably the easiest way to do this.

  • @С.Н-ш2у
    @С.Н-ш2у 1 month ago +1

    I have seen a workflow where the video was first animated using Runway (image-to-video), and then a character's head, animated in LivePortrait, was composited onto this animated video. Do you think it is possible to combine Viggle and Runway (Act One) in such a process?

    • @hyperbolicfilms
      @hyperbolicfilms 29 days ago

      I don't think this would work because Act One only takes images as an input. LivePortrait is the only tool that I know of that works on video like that.
      You could use the Runway lipsync on a Viggle clip, but I don't know if it would fix the resolution of the Viggle clip. That to me is where Viggle falls apart: it doesn't use the full resolution of your input image and doesn't upscale the results.
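
As a stopgap for that resolution issue, a plain Lanczos upscale at least brings a Viggle clip up to the rest of a timeline's resolution, though unlike a Krea-style enhancer it only resamples pixels and adds no new detail. A sketch assuming ffmpeg and placeholder file names:

```python
import subprocess

# Upscale a low-resolution Viggle clip to 1080p height, keeping the aspect ratio.
# This is pure resampling; an AI enhancer (e.g. Krea) would be needed to restore detail.
subprocess.run(
    ["ffmpeg", "-y", "-i", "viggle_clip.mp4",
     "-vf", "scale=-2:1080:flags=lanczos",
     "-c:v", "libx264", "-crf", "18", "viggle_1080p.mp4"],
    check=True,
)
```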

    • @С.Н-ш2у
      @С.Н-ш2у 29 days ago

      @@hyperbolicfilms I meant: 1) animate the head in Act One (picture + video with facial expressions); 2) animate in Viggle (the same picture, only at a smaller scale, with arms and legs + a video with body movements); 3) superimpose the head from step 1 onto the body from step 2 (a naive compositing sketch follows below).
      P.S. A lot of hassle, and you can't twist the body much, but the head is at adequate quality.
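
For reference, the superimposition in step 3 could be prototyped as a naive frame-by-frame paste with OpenCV. Everything here is an assumption: the file names, the head crop box, and the paste position are placeholders, and a usable composite would also need per-frame tracking, masking and feathering, which is exactly the difficulty described in the next reply.

```python
import cv2

# Placeholder inputs: the Act One render (head) and the Viggle render (body).
head_cap = cv2.VideoCapture("act_one_head.mp4")
body_cap = cv2.VideoCapture("viggle_body.mp4")

fps = body_cap.get(cv2.CAP_PROP_FPS)
width = int(body_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(body_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
out = cv2.VideoWriter("composite.mp4", cv2.VideoWriter_fourcc(*"mp4v"), fps, (width, height))

HEAD_BOX = (400, 50, 320, 320)   # assumed x, y, w, h of the head in the Act One frame
PASTE_AT = (560, 40)             # assumed top-left paste position on the Viggle frame

while True:
    ok_head, head_frame = head_cap.read()
    ok_body, body_frame = body_cap.read()
    if not (ok_head and ok_body):
        break
    x, y, w, h = HEAD_BOX
    px, py = PASTE_AT
    # Hard paste with no mask or blending; real footage would need both.
    body_frame[py:py + h, px:px + w] = head_frame[y:y + h, x:x + w]
    out.write(body_frame)

head_cap.release()
body_cap.release()
out.release()
```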

    • @armondtanz
      @armondtanz 28 days ago

      @@С.Н-ш2у I tried this a while back with Hedra and my body. I tried to composite it.
      It's never going to work. The human body is so complex that every tiny movement has a hundred offshoots, which all need perfect sync, otherwise it looks like crap.
      The funny thing is my test was the simplest: I was just talking into the camera, and it still looked like a bad, bad animation.
      I've wasted over $1000 and hundreds of hours trying to crack this. It's 100% not worth it.

    • @С.Н-ш2у
      @С.Н-ш2у 28 days ago

      @@armondtanz Thanks for sharing your experience

    • @armondtanz
      @armondtanz 28 days ago

      @@С.Н-ш2у I learned the hard way, a stubborn fool who didn't check out the competition... lol. That's probably the easiest way to work something out: if no one else is doing it, it's probably not going to work...
      The only way it can come off in the slightest is if you are a super advanced animator and can match the movement with advanced motion tracking and stabilizers, then sit the head on using tracking markers...
      But even then it's still going to look unnatural, and everyone will be focusing on this head that's not quite sitting right on the body.
      Other factors will come into play, like lighting, 3D rotation, and your neck muscles not reacting to your shoulder muscles. If you look at all the great animators (Tex Avery), that part of the body is so crucial. There's so much expression in the head-neck-shoulders.
      That's why these new AIs look a bit flat; that area is really behind.

  • @ShoKnightz
    @ShoKnightz 27 days ago

    What do you use for virtual sets/ backgrounds?

    • @hyperbolicfilms
      @hyperbolicfilms 13 days ago

      These backgrounds were generated in Midjourney itself, along with the character.

  • @Mr.Superman2024
    @Mr.Superman2024 29 days ago +1

    Good, but I'm so confused about what you're trying to deliver in your video. So confusing.