Wow. Thanks for the update and quick response!!!
How about a custom model? Can we generate videos with it?
How do you get the camera to move over the landscape like that?
At the time of making this video, there weren't any camera controls, so it was a case of putting the clip in, pressing go, and letting Runway do what it wanted. Landscape shots typically got forward motion automatically. They then added a motion slider so you could increase motion in a scene (which was a bit hit-and-miss), and after that camera controls, so you can now select zoom in and increase the intensity. Or skip the camera movement settings entirely and try out varying motion settings to see what you get.
I am unable to download the generated image-to-video clips twice. How can we download them a second time?
Is it always an interval of 4 seconds? Can we have a clip of, let's say, 5.3 seconds?
I need help
Every time I try to export the green screens or any video, it just cuts and skips frames, and I don't know why.
Amazing! Hopefully PikaLabs follows suit.
Absolutely. It's good to have two popular video generation options to keep pushing each other forward.
Thank you! Please let me know how to generate video from a text prompt; the text-to-video option also isn't available for me.
I have used Runway ML and it generates some awkward faces... more enhancement is required in Gen-2 for image-to-video animation.
Yeah, completely agree. It's not perfect yet, but if you run a generation a couple of times you can pick the best of the bunch, e.g. ruclips.net/video/IhI1l3sF-Gc/видео.html. Then you can use Topaz AI and, under 'Enhancement', try the 'Face Enhance' option. It won't fix a really warped face, but it does improve consistency. Plus, with more time and effort, you can try a few more complex Stable Diffusion-based options to vastly improve facial detail and consistency.
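For the Stable Diffusion route, here's a minimal sketch of one possible approach, assuming the Hugging Face diffusers library and a local GPU (the model ID, prompt, strength value, and file names are purely illustrative): reprocess an extracted frame with low-strength img2img so the composition is kept while fine detail like faces gets redrawn.

# A hedged sketch: low-strength img2img over an extracted frame to add facial
# detail. Assumes the Hugging Face `diffusers` library; the model ID, prompt,
# and strength are illustrative, not a recommendation.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

frame = Image.open("frame_0001.png").convert("RGB")

# Low strength (~0.2-0.35) keeps the original composition and motion intact
# while letting the model redraw fine detail such as faces.
result = pipe(
    prompt="sharp, detailed photograph of a face",
    image=frame,
    strength=0.3,
    guidance_scale=7.5,
).images[0]

result.save("frame_0001_enhanced.png")

Bear in mind that reprocessing every frame independently like this can introduce flicker, which is why the more complex consistency-focused pipelines exist; this just shows the core idea.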
Name of the song please!
I feel like the model has got worse since the update; the animations are warping the images extremely.
Yeah, interesting... I wasn't sure myself either. I'm running some more outputs later, so I might test how it treats some older shots too.
Love it 🤩
I have a question for you; I'm an animator too. Do you think AI will replace us?
That's the big question. I'll get back with a proper reply in the future, maybe sharing my opinion in a video.
In a nutshell, I think that whilst AI will unlock plenty of efficiencies and increase the potential creative options and quality (vs cost), the role of the animator will be here for a long time. Especially for precise work for a client (rather than abstract music videos or fun social media content). The processes used to create it will evolve a lot, and smaller teams will be able to do a lot more, but a good design eye, the ability to fine-tune, and more will be key.
This needs more than a couple of paragraphs 😅.
(At our studio we've just completed a hand-drawn animation for a client... It took bloomin' ages but looks great, and I haven't seen any AI provide the control we'd have needed for that... yet.)
Nope... in the future we'll be able to edit AI animation. Same with graphic design and coding... we'll be able to edit and create our own things mixed with AI...
In my opinion Topaz is pretty expensive; you could just batch upscale the exported frames and import them again... and do this for free rather than for $300.
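Something like this, roughly (assuming ffmpeg is on your PATH; the file names, frame rate, and the upscaler command are placeholders for whatever free tool you pick):

# A rough sketch of the free workflow: extract frames with ffmpeg, batch
# upscale them with a free tool, then re-encode. Assumes ffmpeg is installed;
# paths, frame rate, and the upscaler step are placeholders.
import os
import subprocess

os.makedirs("frames", exist_ok=True)
os.makedirs("frames_up", exist_ok=True)

# 1. Split the exported video into individual frames.
subprocess.run(["ffmpeg", "-i", "export.mp4", "frames/%05d.png"], check=True)

# 2. Batch upscale the frames folder with a free upscaler of your choice
#    (e.g. the Real-ESRGAN command-line build; exact flags depend on the
#    tool you install, so treat this line as illustrative).
subprocess.run(
    ["realesrgan-ncnn-vulkan", "-i", "frames", "-o", "frames_up"], check=True
)

# 3. Re-encode the upscaled frames back into a video (match the original fps).
subprocess.run(
    ["ffmpeg", "-framerate", "24", "-i", "frames_up/%05d.png",
     "-c:v", "libx264", "-pix_fmt", "yuv420p", "upscaled.mp4"],
    check=True,
)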
It's certainly not cheap.
But in its defence, it's not simply upscaling an image, which would just give a more pixelated result. It uses its AI model to figure out what the extra pixels should be, creating a more detailed image. Plus the slow-mo accurately interpolates the footage to create new frames based on the movement. Then there are enhancements like removing flicker or enhancing faces and details... so lots of clever image processing is taking place. Altogether, I think that allows them to justify the big price.
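To illustrate the difference with a tiny Pillow sketch (file names are placeholders): plain resampling can only stretch or blur the pixels that are already there, which is exactly the pixelation/softness problem, whereas an AI upscaler predicts new detail.

# Plain upscaling for comparison. Pillow's resampling only stretches or blends
# existing pixels; an AI upscaler instead predicts what the missing detail
# should look like. File names are placeholders.
from PIL import Image

frame = Image.open("frame.png")
w, h = frame.size

# Nearest-neighbour: every source pixel becomes a 4x4 block -> visibly blocky.
blocky = frame.resize((w * 4, h * 4), Image.NEAREST)

# Bicubic: smoother, but it can only interpolate between known pixels -> soft,
# with no new detail.
smooth = frame.resize((w * 4, h * 4), Image.BICUBIC)

blocky.save("frame_nearest_4x.png")
smooth.save("frame_bicubic_4x.png")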
Topaz is great but (minimum) 300 dollars makes it too steep for casual users.
Completely agree... it's not cheap. If we weren't using it for other professional work at our own animation studio, it would be a tricky cost to justify simply for AI tinkering.
I agree. Paying for so many things. It's just too much.
It's a bit funny to call a tutorial for an emerging technology that's basically useless in its current state "in-depth". Yeah, it's a novelty that shows potential, but still. Not sure why anyone would need tutorials for something that's clearly going to be either abandoned or evolved into something different.
Extend feature is still trash lol