I've seen hundreds of tutorials, you're one of the best 5
Your comment made my day, thank you!
Amazing content like always keep up the good work bro!
Amazing tutorial , with good suggested links !
I had to subscribe for this one. I'm going to be checking out your videos.
10/10
I'm still learning, but I can tell that I'm getting better.
A lot of people think it's easy, but they don't realize how much work and time goes into videos like this one.
I work a full-time job and work on videos when I can.
So far it's been primarily 9:16 on TikTok, but when I get better I want to do 16:9 on YT.
If you have any tips, please let me know.
Thanks
Thank you so much for subscribing and for your kind words. You’re absolutely right, creating videos takes a lot of time and effort, but it’s so rewarding to see progress like you’re describing. One tip: try experimenting with different image styles and visual approaches in your videos. It’s a great way to develop your unique style and keep things fresh for your audience.
@@AISimplified-lv8gs thank you for the advice. I'm only maybe 2 months in, and really only working nights and weekends. So far I've only made narration videos, but I'd like to learn all I can. Can you elaborate on your tip as far as what you mean by different image styles and visual approaches? I'd love to try anything I can to keep developing. Thanks for the advice.
People are watching this 100 years in the future and laughing at us
This is awesome
Now this is truly a work of digital art!!! 👍👍👍
Very cool video!
Also, using MidJourney Style Ref helps with color and tone consistency between images. I haven’t tried Luma, so I'll check it out, great video 🎉
Nice, thanks! Do you have any suggestions on making a custom theme song with lyrics and singing?
You can use Suno AI to create a custom theme song! It allows you to generate both the lyrics and the singing voice. You can start by writing the lyrics that match the mood and style of your theme. Then, use Suno AI to produce the singing. It’s a great way to create a personalized and unique soundtrack
You got crazy midjourney prompts man, amazing!
How do you come up with the inspiration?
Thanks! I like to think of unusual or strange scenes, and then I ask AI tools like GPT for cool ideas
Is there a user-friendly website which can generate an exact, consistent cinematic character from one reference image?
Consistent character generation with 100% accuracy is still a major pain point for most AI tools because it requires understanding intricate details of a character’s features from multiple angles and in various expressions. While tools like MidJourney or Leonardo AI can create stunning individual renders, maintaining consistency across different poses, lighting conditions, and scenes is challenging. Most of these tools rely on generalized models that aren’t trained on specific datasets tailored to a single character. For true consistency, custom models need to be trained using a large dataset of the character from various angles, often requiring advanced techniques like LoRA. This limitation makes it difficult to ensure accuracy in dynamic or cinematic scenes, especially when high-quality, detailed, and repeatable results are required
@@AISimplified-lv8gs thanks for the info, Boss. So it's almost impossible to generate and keep a consistent alien creature character as of now, especially from one image and especially with user-friendly websites?
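Short of training a custom model, the usual prompt-side workaround for the consistency problem discussed above is to pin every character detail in one reusable description block and repeat it verbatim in every scene prompt, together with a fixed style reference and seed. A minimal sketch, assuming Midjourney-style `--sref`/`--seed` parameters; the character sheet text, the parameter values, and the `build_prompt` helper are all illustrative, not any tool's real API:

```python
# Illustrative prompt-consistency sketch: the same pinned character
# description is appended to every scene prompt, so the model receives
# identical character details each time.

CHARACTER_SHEET = (
    "tall slender alien, iridescent teal skin, four-fingered hands, "
    "large amber eyes, bioluminescent markings along the jaw"
)

# Fixed style reference and seed (hypothetical example values).
STYLE_PARAMS = "--sref 123456 --seed 42 --ar 16:9"

def build_prompt(scene: str) -> str:
    """Combine a scene description with the pinned character sheet and params."""
    return f"{scene}, {CHARACTER_SHEET}, cinematic lighting {STYLE_PARAMS}"

prompts = [
    build_prompt("walking through a neon market at night"),
    build_prompt("close-up portrait in the rain"),
]
for p in prompts:
    print(p)
```

This doesn't reach 100% accuracy, but combined with a style reference (as mentioned elsewhere in the thread) it noticeably reduces drift between scenes.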
I have made a techno track, how can I make a music video which also changes with the beats? Thanks, and all the best
To craft a music video that visually syncs with the beats of your techno track, start by generating thematic images with Midjourney based on the mood and rhythm of your music. Once you have these images, animate them using Kling AI or Runway to add dynamic movements that match the intensity of your track. Finally, assemble all these elements in a video editing software, carefully aligning each segment with the beats to create a visually engaging experience that enhances the auditory impact of your music
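For the "aligning each segment with the beats" step above, a techno track usually has a constant tempo, so you can compute the cut timestamps for your editor's timeline directly from the BPM. A minimal sketch, assuming a constant 128 BPM and a cut every 8 beats (both illustrative values; for tracks with tempo changes you'd want actual beat detection, e.g. with a library like librosa):

```python
# Compute video cut timestamps (in seconds) from a constant tempo.
# Assumed example values: 128 BPM track, one cut every 8 beats.

def beat_grid(bpm: float, duration_s: float, beats_per_cut: int = 8) -> list[float]:
    """Return timestamps (seconds) where the video should cut."""
    seconds_per_beat = 60.0 / bpm          # 128 BPM -> 0.46875 s per beat
    step = seconds_per_beat * beats_per_cut  # 8 beats -> 3.75 s per segment
    cuts = []
    t = 0.0
    while t < duration_s:
        cuts.append(round(t, 3))
        t += step
    return cuts

# Cut points for a 60-second track at 128 BPM:
print(beat_grid(128, 60))  # 0.0, 3.75, 7.5, ... 56.25
```

Paste these timestamps as markers in your editor, then snap each animated clip's start to a marker so the visual changes land on the beat.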
Nice complete video, I am just getting weary of the epic voice.
Why do you use VMeg? Do you have multiple accounts? Btw, nice work :)
No, I just promoted that product during the filming process, emphasizing that it is useful for multilingual translation
I have mixed feelings about this, because in a few years we are going to have full movies that have absolutely no actors in them, probably not even rigging actors at all.
Ultimately, in short, it will become automated using the artist's previous works and data.
Yeah. I'm excited to see where this journey takes us, but it also raises interesting questions about the role of human creativity in this AI-driven future
I have to ask.
Can someone use nudity? Or is it unavailable? Violence and horror are allowed... gore, too... but nudity...
Some open-source models, like Stable Diffusion, allow that for research or creative purposes. These models can be run locally if your system meets the hardware requirements.
Imagine if somebody saw this ten years ago 😂😂😂😂😂
I'm Spanish, and the video translation was horrible; most of the sounds it made weren't even words, just random noises 😂 although this tutorial was extremely helpful mate, thanks a lot
Sorry about the translation issues! I'll definitely work on fixing that in future videos. Thanks for your understanding and support!
@@AISimplified-lv8gs it's not your fault, mate, it's just that the platform doesn't do the job. I'd use GPT or ElevenLabs instead for translating audio
You didn't do much with the sound effects... :P
You're right, I need to improve on the sound effects! I was running out of time since the image generation took way longer than expected. I'll definitely put more focus on that in future videos. Thanks for the feedback! 😊
@@AISimplified-lv8gs I think sound design is gonna be one of the most important things in the future of AI videos
How to make a movie where every 5 seconds max you have no consistency 😂
I worked hard to get images only from Midjourney to keep the same look for the first 30 seconds. Then I switched to ChatGPT's DALL-E to speed things up because I was running out of time. But if we spend more time, we can make the images more consistent from MJ
You can tell you don't know how to speak Spanish; the VMeg option gave an almost incomprehensible translation. Not functional.
Sorry for that, I will check it next time
Unless these AI apps are for local installation, don't waste your time advertising them.
I personally don’t have time to set up all these software tools (like dealing with DLLs) on another machine, maintain a GPU, or go through workarounds for each step just to get things running. Plus, the local setups often lack the easy-to-use, richer interfaces and high-quality results that I want. That’s why I prefer using cloud-based apps, even if they’re paid; they’re much faster, easier to access, and offer better features without all the hassle!
99% of people watching this don't have the time, skills, or powerful enough hardware to set up local environments, what are you smoking? Cloud-based tools are the way to go.
First, so pin me!