Runway Gen-3 is an upgrade with many outstanding features, thank you
Thank you for the real impressions!
In the video of the Asian girl wearing the wig, how did they maintain character consistency? Any ideas?
Were I to guess, they used --cref as a parameter in Midjourney and then used those images for image-to-video on Runway.
@@FRandAI Hi, how do they do that exactly?
Thanks
@@Lucas-of9ek 1. Upload a picture of your character to the Internet
2. Go to Midjourney
3. Type out your prompt for the desired image (don’t hit enter yet)
4. Type --cref followed by the link to your uploaded image
5. Optional: type --cw followed by a value from 1 to 100, with 1 adhering only vaguely to your character’s design and 100 adhering strongly to it
6. Add whatever other parameters you need, then hit enter
Note: it’s far from perfect (see the example below)
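If it helps, here’s roughly what the finished prompt looks like. This is just a sketch: the subject text is made up and the URL is a placeholder for wherever you actually uploaded your character image.

a knight in silver armor walking through a misty forest, cinematic lighting --cref https://example.com/my-character.png --cw 80

Midjourney then weights the generation toward the character in the referenced image, and --cw 80 tells it to adhere fairly strongly to that design.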
Thanks for doing that to show what it’s like 👍
Tell them to have the option (for more points or whatever) to generate a depth map video along with it. That way you can use it for compositing in some additional stuff.
Is it possible to run this locally without being required to use the website?
Thanks for the tips!
Yo, thanks for the video mate. I don’t know if it’s only me, but the yellow subtitles are super distracting for me.
Wow... I make my own videos; can I add visual effects over my work with Runway? For example, adding a new character or visual FX around my character? Is that possible with this tool, or does it only generate without any interaction with real videos? Thanks
Why don’t you try a long prompt?
Thank you for your efforts!!!
Nice, but I need to be able to generate consistent characters, like I can with Midjourney faces, and then use those as input for video. Gen-3 Alpha has no way to make consistent characters, even when using the same seed.
It’s okay, still a LOT to improve
Can you use your own image?
Yes
Thanks 😊
I used it for a month and canceled. Definitely a bit janky as is. Looking forward to future versions, though.
What's up BM, yo I know this post is a month old, but peep the Gen 3 Alpha Turbo x Flux1 flow: ruclips.net/video/1FUHjx8n0Is/видео.htmlsi=Nkfs1twvJHuM1uF3
Just not practical yet unless you want horror content lol
the prompts were awful
OMG! Is the OpenAI ROBOT now dancing SALSA too? 🎶🎵 "OpenAI my Sugar Papi" by PEACHY da WHUUPi on RUclips, Insta, Spotify. HILARIOUS. CAN YOU GUESS WHICH A.I. I used?