I Analyzed 500+ LumaLabs AI Generations: Here's How to Prompt
- Published: 27 Jul 2024
- After meticulously analyzing over 500 prompts on LumaLabs AI, I'm sharing my discoveries on how to create mind-blowing AI-generated videos. Dive into this comprehensive guide and learn the secrets to mastering fidelity, enhancing dynamic motion, and crafting perfect prompts for your next viral video!
Timestamps:
0:00 - Introduction to LumaLabs AI
0:20 - LumaLabs vs. Other AI Video Generators
0:45 - My 500+ Prompt Analysis Method
1:10 - Rating Video Fidelity
1:35 - Assessing Motion in AI Videos
2:15 - Calculating Overall Generation Scores
2:30 - The "Enhance" Feature: Fidelity vs. Motion
4:30 - Crafting the Perfect Prompt
5:00 - Subject Matter and Prompt Complexity
5:35 - Creating Your Own Prompt Engine
6:30 - The Power of Image Prompts
7:10 - Using End Frames for Amazing Transformations
7:35 - Best Practices for End Frame Usage
Unlock the full potential of LumaLabs AI with insights from my extensive research. Whether you're a video pro or just getting started, this guide will elevate your AI-generated content. Don't forget to like, subscribe, and share your LumaLabs creations in the comments!
#LumaLabsAI #AIVideoAnalysis #PromptEngineering #ContentCreation
Great analysis man. Finally, someone who's providing actual value rather than just giving AI news.
Can’t get away from all the “news.” The worst part might be the hype. I’d say more than half the things coming out aren’t worth the time. I’ve scrapped so many video ideas after trying out a new “game changer”
Yes, I also get tired of all those AI news videos.
This is why I love YouTube… this is exactly the channel I was looking for! Great video 🙏🏽
Thank you ❤️
Best analysis out there to date. Thanks!!
Thank you! It took a lot of time (and $$$ lol)
By watching this video I realized how similar this is to handling a professional video or photo shoot - deciding on a simpler prompt, disabling enhance, etc. are the equivalents of pulling back on the ISO, adjusting white balance or increasing shutter speed.
That’s a really interesting way of looking at it. I totally agree
Fantastic work here, making a very useful guide and exploring the black box of Luma using the scientific method! Absolutely love this! Sharing.
Thank you!!! Couldn’t have said it better myself. I’m trying to use this as a tool but was getting frustrated with the inconsistency. There’s more to unlock here but it’s a start.
Thank you, this was genuinely really useful. Going to experiment a little more with this now.
I’m so glad you found value here! My findings are just the tip of the iceberg. Good luck and please share if you figure more out about the platform.
Excellent overview on Luma, and thumbs up on the analysis. I have followed your advice on prompting by creating a GPT for Luma. Just started with Luma Labs on the free tier; yet to test out my GPT, as they only allow 5 generations a day. Remember to turn on your Thank You button for those of us who might like to buy you a coffee.
Love it! I tried a few of the available GPTs out there while making this video and I was a bit underwhelmed.
Also, appreciate you for sharing about the "Thank You" button. I had no clue that was a thing. I just looked into it but looks like my channel is too small for now. One day!
Great work! Thank you for this analysis. :)
Of course!!
I love your method and useful results! (just subscribed)
Thank you! It took a few do-overs and some proper planning, but we got there
Amazing tip about creating a prompt machine with Claude, very useful!
Thank you! lol my brain hurt thinking up new prompts. It’s a great assistant
Anyone who goes in this deep gets my sub anyway. Well done!
Thank you! Here I was wondering if I was getting too nerdy and anyone would care 😂
Cool vid and nice info. The part about Enhance on or off was really interesting, because I've just been leaving it on by default without even thinking that turning it off might give better results in some cases
Same here! That was the most surprising thing to me.
Excellent video!
Appreciate the kind words!
Thank you for this helpful video! 😊
Super helpful video thank u
I really appreciate you saying that!
One of the best analyses, thanks
I really appreciate that!
Thank you for your hardwork ❤ nice tutorial 👍🙏
Thank you!! I really appreciate those kind words 🥹
Super helpful thank you!
You're very welcome!
thank you mate! Great video
Absolutely! Hope it helped
Great video!!
Thank you!!
Great info thanks for sharing subbed ❤
Thank you so much!! 🥹
Helpful advice on that Claude prompt generator! Did that, thanks
It’s the perfect assistant for when you need a little extra creativity or structure
THANKS! that was great
Of course! Thank you
Really helpful! 🙏
Thank you
Thank you for watching!
Quality stuff 👏🏽👏🏽👏🏽
Thank you!!
I always saw that Enhance off worked a lot better for doing image to video. Well edited and well written video! I want to also point out that Luma is incredibly good at animating 3D renders for some reason. I guess it's from hours of Pixar and Dreamworks films fed into its machinery?
That is great to know! I ran out of credits on multiple paid accounts so I’m waiting for those credits to recharge 😅
The 3D renders are the ones I want to test out next!
😂😂😂
How do you do image to video? I don’t have that option in Luma labs…
@@DerekMurphycreativindie On the left side of the prompt box there is an image icon. If you click it, not only do you get the image-to-video function, but you can also use two images as a base for a video that combines both of them
@@CinematicSoundMaestro that's awesome thanks
Very useful tips!
Thanks
Which AI program lets you upload your own music for the AI to create a synchronized video?
Thank you
amazing video
Thank you!!
Great video ❤
Can you share your findings on what the best camera movement prompts are?
And do you also use most of the prompt from a MJ image in Luma?
I found that sometimes no prompt and only a camera movement prompt can work as well.
Anyway, thanks for your help and have a nice weekend, mate
Thank you! I think I'm going to do another video on it. I've been using Runway and Luma both a great deal and learned a lot more.
Because my prompts were so varied in subject and movement, 500 generations aren’t enough to give a solid answer. I’m getting closer and I think it has more to do with finding the sweet spot of subject, reference image and prompt.
Totally agree on the no prompt / just camera strategy!
(2:50) I think it's important to understand what "enhance prompt" does. Say your prompt is "kitten eating breakfast". Because there are three tokens in total, each token is weighted about 33% (technically, "kitten" gets a boost of about 7% because it's up front, and "breakfast" a decrease of about 7% because it's "deeper" in the prompt). However, "breakfast" is ambiguous: did you mean "cat food" (as in a "kitten's breakfast") or "people food" (as in the "breakfast" you would expect with a prompt like "selfie with breakfast")?
"Enhance" changes both of those results. First, it turns your three-token prompt into a paragraph (such as "a kitten with a beautiful tail engages in the consumption of a meal of food for a cat"). Notice it has already solved the problem of the nebulous nature of the word "breakfast". It has also altered the "weights" of those original tokens--shifting the output to some lesser-used prompts like "beautiful kitten" and "consumption".
The elevator explanation is this: if you want a soft image with lots of creativity for the AI to play around with the output, check "enhance prompt", but if you want strict adherence then write your own pithy and targeted prompts manually.
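The positional-weighting heuristic described above can be sketched in a few lines. To be clear, this is purely illustrative: Luma's actual internals are not public, and the function name, base weights, and 7% boost are assumptions taken from the comment, not real code from the platform.

```python
def positional_weights(tokens, boost=0.07):
    """Illustrative only: give each token an equal base weight,
    then boost the first token and reduce the last by `boost`,
    mimicking the front-of-prompt emphasis described above."""
    n = len(tokens)
    base = 1.0 / n
    weights = [base] * n
    if n > 1:
        weights[0] += boost   # up-front token gets extra weight
        weights[-1] -= boost  # "deeper" token loses weight
    return dict(zip(tokens, weights))

# kitten ≈ 0.40, eating ≈ 0.33, breakfast ≈ 0.26
print(positional_weights(["kitten", "eating", "breakfast"]))
```

In this toy model, expanding a terse prompt into a paragraph (what "Enhance" does) spreads the same total weight across many more tokens, which is one intuition for why enhanced prompts feel looser and manual prompts feel stricter.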
Great explanation!
@dirtydevotee Thank you for explaining this. I canceled my subscription because I found it just produced garbage no matter what I prompted, but to be fair I now realise I was prompting like it was SD1.5. I'm going to give it another go with this suggestion to see if I can improve results. 😊
That werewolf scene at the end is great.
Have u tried this one?
Using the most outlandish words. Sometimes that works if things aren't working.
Like frenzied, frantically, desperate, ...
Just type the most extreme words; the AI can't do exactly what you ask, but it's like a nudge in the right direction.
Clearly you’ve been doing some experimenting too! I did notice that, at times, I’d throw in some intense word and it’d help.
The biggest issue seems to be with my reference images. They need to be very similar between morphs. A little extra color here, or lighting there and there’s either some dramatic change or just a fade out / fade in between images.
NICE
😘
🔥
❤️
This is fascinating. Great work!
I wonder how Runway Gen 3 compares. I haven't subscribed to Luma so my experience there is limited.
It’s funny, the snow beast I couldn’t get to work was inspired by a Runway 3 demo. I’ve only used Runway briefly but I’ll say this, I struggled to get a werewolf transformation out of Luma but Runway did a bang-up job first go.
@@EasyStartAI Interesting. So with my Gen 3 unlimited I should be good to go with making a snow monster short haha
There's something about Luma physics or human movement that I find odd many times but again, haven't used it all that much. But so far I'm also struggling with getting results from Runway...
Would be interesting if you do use Gen 3 at some point to compare. Subbed to you so I'll look out for more.
@@leonpeeon I'll definitely be diving into Runway next. I agree with your sentiment on Luma. What appealed to me was the image-to-video. I have no doubt Runway will incorporate this soon, but I definitely struggle quite a bit more with text-to-video generations.
Got you a sub..
I really appreciate it! Just trying to have some fun and hopefully provide some decent info along the way.
WTF bro you're a genius
LOL! Far from it. Just have nothing better to do…
Could this knowledge be fed back into Claude, so it can refine its prompting?
I actually fed it my whole data sheet and asked for insights. They were, meh. At first I started to refine it (too long, too many instructions) and it definitely helped.
Eventually, I realized that even with 500+ prompts of data, it was still too few. For it to work as we want, I need thousands more rows of data and I need to add more columns like subject, image description, movement requested, etc. Then I think we’d be getting somewhere.
I might revisit this in the future or build a GPT around it for everyone to use (assuming it’s useful).
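For anyone wanting to try the same loop, here is a minimal, hypothetical sketch of a Claude-based Luma prompt generator using the Anthropic Python SDK. The system prompt, model name, and `build_request` helper are my own illustrative assumptions, not the setup used in the video:

```python
# Hypothetical sketch of a "prompt engine": a helper builds the request
# text, and Claude turns it into a finished Luma prompt.
SYSTEM = (
    "You write prompts for Luma Dream Machine. Keep prompts short, "
    "lead with the subject, and include exactly one camera movement."
)

def build_request(subject: str, movement: str) -> str:
    """Assemble the user message sent to the model (the pure, testable part)."""
    return f"Subject: {subject}. Camera movement: {movement}. Write one prompt."

print(build_request("jellyfish", "slow push-in"))

# The actual call (requires `pip install anthropic` and an API key):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-3-5-sonnet-20240620",
#     max_tokens=200,
#     system=SYSTEM,
#     messages=[{"role": "user", "content": build_request("jellyfish", "slow push-in")}],
# )
# print(reply.content[0].text)
```

Feeding the analysis spreadsheet back in would just mean appending its rows (subject, movement, fidelity score, motion score) to the system prompt or the user message, so the model can pattern-match against what actually scored well.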
Yes, Luma AI is very good, but the resulting video is very blurry; the details are lost.
How can you keep the camera from moving when using a picture? I have Enhance off, but it doesn't work
Ha! It's tougher than you'd think. I tried a few out like static shot, freeze frame, still shot but surprisingly "still image" is giving me the best results (enhance off).
Ex: still image, jellyfish swimming
@@EasyStartAI Thank you so much. I appreciate your help 😃
That was a great video. I think your channel will grow. They say that Kling AI video generation is great. Maybe you'll find time to check it out? I haven't done so myself yet.
Thank you for those kind words! Yes, I have my eye on Kling and Runway. Hope to get something out about those.
Do you have the best prompts you made posted anywhere? I'm trying to train ChatGPT to generate good prompts
Yes. Give me a day and I’ll send
@@EasyStartAI thx
How do I use it privately in Kaggle or something else?
500 generations? So you're the reason the queue was taking ages, gee thanks bro 😑
😂 Guilty.
Now if AI could fix your low audio on the next one, you'd be square
Appreciate the feedback! I had some audio issues this time around. We’ll get it right on the next one.
@@EasyStartAI There's an AI that can fix audio issues! Adobe has an audio cleaner called Adobe Podcast - Enhance Speech that enhances speech recordings and removes ambient noise, producing studio-quality audio, and it's free to use. Loved the in-depth look at your approach to generation with Luma!
@@EasyStartAI hey no problem if you ever need any help I'm a mix and master engineer. Great video.
Thank you! TBH I just need to invest in a better setup. It's like dongle city getting everything connected. I actually did run it through Adobe Podcast Enhance; the output seemed off to me on this one (typically it's great). I messed with EQ, bought some Waves plugins, but eventually just posted it and vowed to do better next time. Appreciate the recos though
@@EasyStartAI I don't think the audio was that bad. ElevenLabs does audio cleanup now. 😊
The biggest flaw is that characters will automatically change their faces. I am Asian. I'm just trying to make the character smile, but in the final result I'm transformed into a European I don't recognize.
Totally agree! World building is great but keeping humans consistent is SO hard / impossible.
There needs to be some movement and duration sliders (along with model improvements). Because all generations default to 5 seconds, it overcomplicates a simple smile.
Yeah, but the solution is I have to deepfake every face to make it consistent.
I doubt the creators of Luma AI were this intense about it; it's just a silly video generator, after all! But here you are, analyzing it like it's rocket science!
Haha, I was curious!
The makers do care about it obviously and they’re making money off it and offer price plans for businesses. What a silly comment you made.
@@LethalMartialArtist Oh sure, because what the world really needs is to drop cash on an AI video generator that turns five seconds of cats into... whatever. It’s absolutely critical to some industry, though for the life of me, I can't figure out which one.
@@Metarig Again you're showing a lot of naivety; the platform and many competitors' platforms have only recently been released. Clearly you think it's going to stay at 5 seconds of video forever? It will change and improve over time, along with many other features, the same way a lot of the tech you use online does. So because you can't think of anything, the technology is redundant? 🤷‍♀️
@@LethalMartialArtist I get it, technology is my bread and butter. I earn from it, but I also know what's just hype. Sure, one day you might just type in a prompt and get a decent video back, but let's be real, that day is not coming anytime soon. Honestly, I'm not even sure I'll see it happen in my lifetime.
Bro, be honest.
Did you really analyze 500+ generations?
Haha! Got to like 300-something, then started to try keyframes. For a lot of them I didn't use a prompt, or didn't feel like saving all the images, so I stopped logging them in my sheet.
I did generate well over 500 generations (not cheap). I just “analyzed” the keyframes portion in my head.😏
@@EasyStartAI Anyway, it's still huge, and thanks for these efforts. Meanwhile, me: 1st effort 😃 5 😎 10 😦 then 😡