Runway ML has come a long way and this is so exciting. I can't wait to use Gen-3 Alpha.
Just gotta fix the crazy dumb pricing now
Can you give me an example of the pricing to make a 5-minute video?
@@anigah 3,000 credits = 300 seconds of Gen-3 Alpha video = $30 USD
Don't forget Runway cherry-picks all the clips they showcase, so you'll end up wasting a lot of credits in the process.
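For anyone who wants to sanity-check that figure, here's a minimal back-of-the-envelope sketch in Python. It assumes the 10-credits-per-second Gen-3 Alpha rate quoted further down the thread and the roughly $0.01-per-credit price implied by the "$30 for 3,000 credits" figure above; neither is an official Runway number.

```python
# Rough cost of a 5-minute (300-second) Gen-3 Alpha video.
# Assumed rates, taken from figures quoted in this thread (not official pricing):
credits_per_second = 10   # Gen-3 Alpha, per the pricing reply further down
usd_per_credit = 0.01     # ~$10 per 1,000 credits, implied by "$30 for 3,000 credits"

seconds = 5 * 60                          # 5 minutes = 300 seconds
credits = seconds * credits_per_second    # 3,000 credits
cost_usd = credits * usd_per_credit       # about $30
print(f"{seconds}s -> {credits} credits -> about ${cost_usd:.0f}")
```

In practice the real cost will be higher, since clips can only be generated in 5- or 10-second chunks and re-rolls for unusable takes burn credits too.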
If the quality is good like Kling AI, it might be worth it. Their Gen-2 product is pretty bad though
@@tstone9151 I think once you get the perfect prompt and do a few iterations with it, you do get some stunning stuff. Right now, I'm sticking with Kling.
@@anigah Image-to-video in Gen-3 barely works. If you don't buy the unlimited plan, don't even bother with another plan.
So many of my renditions are just slow zoom-ins without anything else interesting happening. I feel like I've wasted a lot of credits on my image-to-video needs. I wish those didn't count, or I could find an easy way to get credits back on ones that clearly didn't work. Does anyone else find the same thing happening? Do all of you just enter prompts all the time?
Yeah, it's pretty bad. I've been trying to use it for a work proof-of-concept project and it's just not controllable enough. You need to re-roll a LOT to get anything usable, to the point that you run out of credits before you can get a single usable shot. If you want a specific shot, it's probably not going to happen without luck, hundreds of dollars, and a lot of time.
@@lukematheny4380 Good points. It would make sense for anyone who really wants to use it to get the ~$100/month subscription so they can get as many attempts out of it as possible.
The pricing is unacceptable
I agree
It has limited uses anyway because of the censorship. I had trouble creating any kind of narrative tension in movies I was trying to animate. Creating antagonists in a realistic, riveting way in your story? Good luck with that. Wait for Grok to move into video, as they are less heavy-handed with censorship over there.
Not working, it only zooms in on the images with very little motion. You have to try many, many times to see results.
Not at all, please try entering the prompt correctly.
@@systemhacking I tried it the first day and it didn't do much, however it's working better since yesterday with better results than Luma.
@@systemhacking pay for my credits to test it
@@systemhacking He's right. I tried 10 times and it creates something completely different, unrelated to the image.
Any chance of making loop animations? It would be a game changer!
+1
Why am I not able to find it on the website?
I've just tried Gen-2. It's very, very bad. How can I know if Gen-3 is worth my money if I can't try it out at least once or twice before purchasing a $15 month?
If you can't even spend 15 dollars to test it, you should stop using AI altogether.
Let me save you the money: don't try it. It sort of works; the Turbo does a great job but it changes the color of what you give it, and Alpha, which is supposed to be the big thing, creates nightmare-fuel results. If you are planning a horror movie, go for it; otherwise it's not worth it. Try Pika instead, that one works.
How do I know how many credits a Gen-3 video is consuming? I took out a "Pro" subscription just a few days ago and it's indicating that I've already used up my entire 2,250 credits in less than 4 days after generating just a small handful of image-to-video clips. How is this possible?! Where can I see how many credits a generation will use "before" clicking "generate"? It's of no use in deciding whether a generation is worth the credits if I can only look at the menu bar "afterwards" to see how many have been consumed.
I read that 1 second of generated video = 1 credit.
Generations with Gen-3 Alpha cost 10 credits per second of video. You can currently export in 5 and 10 second durations, so the costs are as follows:
5 seconds: 50 credits
10 seconds: 100 credits
Generations with Gen-3 Alpha Turbo cost 5 credits per second of video:
5 seconds: 25 credits
10 seconds: 50 credits
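To put those rates in the context of a plan's credit allowance, here's a small sketch that estimates how many clips a given balance buys. The per-second rates are the ones listed in this reply; the 2,250-credit balance is just the Pro allowance reported in the question above, so substitute your own plan's figure.

```python
# Estimate how many whole clips a credit balance can pay for.
# Per-second rates from the reply above; the 2,250-credit balance is the
# Pro allowance mentioned by the commenter above (use your own plan's figure).
CREDITS_PER_SECOND = {"Gen-3 Alpha": 10, "Gen-3 Alpha Turbo": 5}

def clips_for_balance(balance: int, model: str, clip_seconds: int) -> int:
    """Number of whole clips of clip_seconds length that balance covers."""
    return balance // (CREDITS_PER_SECOND[model] * clip_seconds)

balance = 2250
for model in CREDITS_PER_SECOND:
    for clip_seconds in (5, 10):
        n = clips_for_balance(balance, model, clip_seconds)
        print(f"{balance} credits = {n} x {clip_seconds}s clips on {model}")
```

That works out to roughly 22 ten-second Gen-3 Alpha clips on a 2,250-credit balance, which is why a handful of re-rolls can exhaust a plan quickly.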
How can I change the aspect ratio to 9:16 for TikTok?
I figured it out: you're supposed to set it to Gen-3 Alpha Turbo at the top; Gen-3 Alpha doesn't support 9:16. For anyone else wondering.
Pricing could be a little (just a little) better and would love a vertical mode as well (I bet it’s coming but I want it nao!)
Is it possible to have a 9:16 size?
I subscribe to Gen3-alpha
I'd like to know also
Nice!
Love it❤
Ok, just so you know, Gen-3 Alpha is shit. If you're thinking of using it for the IMAGE-to-VIDEO feature, don't even bother.
I tried without a prompt, with their suggested prompt, with my own, and even with prompts generated by other AI tools. Shit over shit every time!
It's frustrating that you have to use your credits for each test, as many have already pointed out. It feels unfair to be charged for every attempt, especially when the tool doesn’t perform as expected.
Is it possible to get a refund? The same thing happens to me. I can't wait for a year to try with Kling.
Hello sir! Can anyone suggest a prompt for turning an image into a walking shot?
I can't find text/image to video; I only have generative video. Can you help me?
Runway is really useful, I will always support you.
Is it any good? Has anyone tested it yet? Please share your thoughts.
It doesn't work. If you don't buy the unlimited plan it is not worth it.
I paid 110 dollars for a year; the only thing it does is just zoom the picture and sometimes deform it.
For the unlimited subscribers, does it cost any credits? Or would I have to pay for every generation?
@@csansolo It doesn't cost extra credits. You can generate unlimited Gen-3 prompts.
Hello, I'm a paid member. I have uploaded a photo of me, but when Runway generates the video it doesn't keep my appearance (my face, skin color, etc.). I mean, the video result is not me. Is there any prompt or way to tell Runway to keep my face and skin color in the video result? Thanks!
Hello, were you able to find it?
So happy about this!!👍
Very cool function
This is an awesome product. Just expensive though.
Awesome
I bet this is not what it really does... I don't want to pay to find out. Can anybody confirm that this is how it works?
The pricing is very expensive
Please decrease the pricing
Mostly it didn't work well. I gave it an image and it created something completely different, unrelated to the image.
I ALREADY TRIED IT... it's SHIT, and I see everybody says the same.
Very difficult program. I have to read too much only to run into more problems. I asked for my money back: no.
This service is NOT worth the money at all. There are soooooo many other services that do a much better job at animating images than Runway. Gen-3 Alpha Turbo is a complete joke! It barely animates anything. It keeps most of the image static for the most part. IF you do get part of an image to animate, like legs moving, the rest of the image will be static.
The demos they display are 100% absolutely misleading.
Amazing AI for making video
Goes to show that the good things will go the centralized-solution route rather than to open-source alternatives. Such a shame. AI should be open-sourced, and everyone willing to invest in hardware should be able to run it locally rather than being dependent and buying endless amounts of credits.
I'm getting good results with photo-to-video; text-to-video, not so much. Good results for my film storyboard though. Thanks.
I have a question. Can you upload, say, an image of a specific t-shirt, then ask Runway to put that exact t-shirt on a cat doing a catwalk in a fashion show?
@@deepdiver849 Try the motion brush tool. I haven't used it yet. But that does sound quite complex.
my videos don't even move.. lol
Damn, that's expensive...
Complete crap. In Gen-2, instead of movement, every 4th attempt comes out as a blurred image after the 3rd second. It's mega frustrating how many credits go to waste. I'm close to unsubscribing; it's starting to feel pointless.
I will be taking my AI video production to the next level
Your product is crap, honestly. It doesn't work the way it's shown and the price is horrible; you pay and don't even receive what you expect. As far as I'm concerned, you can delete it, it won't make a difference.
If it's not free, it's useless.
Why is the Chinese version free?
Make it free
I went for the unlimited plan for a hundred bucks, since burning through tons of credits is nonsense. As a result, I came across an interesting detail: with image + prompt => video, visualizations of African, Asian, and Latin characters come out perfectly detailed without any problems. To get even an average result with a European, you need to be a genius.
P.S.
With the prompt "A young, beautiful pop singer, dressed in a yellow top and a blue denim skirt, stands in the recording room in front of a stand with a microphone and sings. The picture was taken from the studio side, through a glass window separating the studio's hardware room and the recording room" (without specifying race), an Indian singer was generated THREE TIMES. All the details were done exactly, but the strange randomness of race is slightly surprising.