Hi Dave, I'm not sure if it's the best way, but for my video I used the Rain filter in FCP and found it useful. It allows you to adjust things in a way that I found pretty easy. The settings will be unique to whatever scene you create, but here are mine.
@@ProfKlaus This is by far the most inspiring video on generative compositing, and the rain effect at the end knocked the ball outta the park; thank you so much for sharing your creative knowledge.😊
Thanks for the question! I’m on vacation, but will be able to answer you clearly in a couple of days. The short answer is I used a few frames of a white solid from Generators in Final Cut Pro (by Titles). Just a few frames. I don’t recall if I used a blend mode or just started at 70% opacity and took it down to 0%. I just need to look when I am back at my computer. I hope that helps for now.
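The white-flash described above can be sketched as a simple opacity ramp. This is only an illustration of the "70% down to 0%" idea in Python; the function name and frame count are made up, and FCP interpolates its own keyframes:

```python
def flash_opacities(frames, start=0.70, end=0.0):
    """Linear opacity ramp for a short white-flash transition.

    `start` and `end` mirror the 70%-to-0% guess above; real NLEs
    interpolate keyframes for you, so this is purely illustrative.
    """
    if frames == 1:
        return [start]
    step = (start - end) / (frames - 1)
    return [round(start - i * step, 4) for i in range(frames)]
```

For example, `flash_opacities(4)` yields four evenly spaced opacities from 0.7 down to 0.0, one per frame of the flash.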
Just how specific can the instructions you give the AI be? With a few words it did an acceptable job, but I'm sure with detailed instructions it could do better.
Yes, my demonstrations in the video were simple solutions. I suggest with any effect that subtlety is better. If you were watching one of those scenes in a video that wasn’t about Generative Fill, you probably would have just accepted it without suspecting that something had been altered. However, you can get pretty darn specific. For smaller fills, the shape of your selection helps. You can add complements to your scenes, like flowers on a desk or an entirely different office space in the background. You can be specific about type and color, even the type of vase. You can add broken-down cars on the side of a road. It takes lighting into consideration. It’s pretty amazing. When using the effect with video, you don’t want to cross the effect with movement (unless you don’t mind masking). With a little foresight to consider where you can add things, your imagination is the limit. You should download a trial of Photoshop and give it a whirl. It’s now out of Beta and part of regular Photoshop.
Yes static is key. My DJI drone has a mode called tripod which helped. You can use the most steady 4-5 seconds for your shot. Plus stabilization if needed. Thanks for the comment!
Oh, another thing... If you're using a LUT with your video, along with color grading, should you export the rendered frame or the original untouched video? With the former, you can't go back and change the grading, contrast, etc., because it won't apply to the matte. With the latter, I guess you should include the LUT, as that's color space info that should have been metadata, but that's the difference between DSLRs and real video cameras. But if it's not graded, will the "off" color cast or whatever hurt the generative fill AI?
I understand what you are asking. In this case I did not do any color correction before exporting a picture for photoshop. It was the video as I shot it. I did not use a LUT in this tutorial. The rain and camera tilt effect was created once the mask was applied. I have used the effect with color corrected footage and it still looks great. The Generative Fill feature is surprising.
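Since LUTs came up just above: a minimal numpy sketch of what a LUT does to a frame may clarify why a graded export bakes the look in. Real grading LUTs are usually 3D .cube files; the 1D table and values here are invented for illustration:

```python
import numpy as np

def apply_1d_lut(frame, lut):
    """Apply a 256-entry per-channel 1D LUT to a uint8 RGB frame.

    numpy integer indexing maps every pixel value through the table,
    which is all a 1D LUT is. Once applied and exported, the original
    values are gone, which is why the grade can't be changed afterwards.
    """
    return lut[frame]

# A toy contrast-boost table (illustrative, not a real camera LUT).
toy_lut = np.clip(
    (np.arange(256, dtype=np.float32) - 128) * 1.2 + 128, 0, 255
).astype(np.uint8)
```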
I’ve had no luck with generative fill. Artefacts and colour discrepancy. All AI imaging I have used, like Midjourney, has been useless in getting images that match my vision. And yes, I have researched prompting and done some homework. But it never kicks out what I am after. Fine for random, non-specific work. Better to learn to make it yourself or hire an artist.
I have had issues with prompting as well. However I have had no issues with colour discrepancies. Are you including a small sample of what you are connecting the image to? Thanks for the comment.
Weird. In PS, it's been amazing: inpainting and outpainting specific items and backgrounds I ask for, AND it follows the depth, color, light and shadow direction and perspective of what I'm basing the prompts on in my original images. As of yesterday, Midjourney (which I don't care for) actually introduced some amazing inpainting features too, by the way.
TIP: When using the Lasso tool... if you hold down the OPT key (Macintosh), the lasso temporarily turns into the Polygonal Lasso tool... meaning you can just click around on your image and the lasso will create a straight selection line between where you left off and where you clicked. You can click multiple points to cover large distances quickly, or click and drag to temporarily draw curved lines again (or you can simply release the OPT key to return to regular Lasso mode). I find this most helpful as it's like having two tools in one, and you don't have to keep your finger down on the mouse button the whole time.
@@bogdan3978 Yes. You can use the exact same method with Premiere. PNGs are transparent, so you can just lay one on top of your footage. (I hope I answered your question correctly.)
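The transparent-PNG trick above is just per-pixel alpha blending, and a small numpy sketch shows what the NLE does with that layer. The function name is made up; this is an illustration, not Premiere's or FCP's actual compositor:

```python
import numpy as np

def overlay_png(frame_rgb, matte_rgba):
    """Lay a transparent PNG matte over a video frame.

    frame_rgb:  (H, W, 3) uint8 footage frame
    matte_rgba: (H, W, 4) uint8 exported matte with an alpha channel
    Where the matte is opaque it replaces the frame; where it is
    transparent the original footage shows through, just like a PNG
    sitting on the track above a clip.
    """
    alpha = matte_rgba[..., 3:4].astype(np.float32) / 255.0
    matte = matte_rgba[..., :3].astype(np.float32)
    out = matte * alpha + frame_rgb.astype(np.float32) * (1.0 - alpha)
    return out.astype(np.uint8)
```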
That is correct. Still a good time to learn how you can use it when it becomes part of Photoshop officially. Update: it has now been added to Photoshop this week. You can use it commercially. 😉
@@peacefusion Adobe just released Generative Fill officially into Photoshop this week. It is now okay to use commercially. The AI gets its source imagery from Adobe's own stock service. I believe the artists receive a payment of some sort.
Just did this on my 2011 xD. In my case, getting at that 14 mm bolt was a real chore. The rest of the operation probably amounted to 15 minutes or less, but that one bolt was easily a half hour between removing and reinstalling it. Thanks for the video.
You gave multiple examples of when it just works easily. How about showing us what to do when the camera shifts _slightly_ during the clip? What about a case where there _is_ movement crossing the matte? In _Indiana Jones_ there was a scene where the actor's legs crossed over the matte, and a technician painted a matte for each frame just on the legs. I think there should be an automated way to say "make a mask for anything moving in this region". Easy is easy; I didn't get anything out of this video that I hadn't already learned from Piximperfect's brief video showing one example. Show us the *hard*, the cases that really happen.
Movement isn’t possible with this effect. I was clear that it had to be a static shot. You would have to make a cut and go to a different angle when crossing the mask at this level. Sorry I didn’t reach your expectations.
Using DaVinci Resolve, in Fusion you can make that clip 3D and track the data of the video clip, then just scale and transform your fill clip accordingly, and I'm 90% sure it will work.
@@ihadanamebutitwasbadsonowi8186 This here has nothing to do with green screen. No one shooting documentaries is using green screen to begin with. What is your point?
I actually don't think it would be a very honest documentary to use this feature in outside scenes (depending on the subject). However, it could still be used to enhance the interviews, making them more pleasing to the viewer.
I completely agree with you. We are fooled all the time in modern documentaries. To me, a documentary should be unbiased, showing what has happened or what is happening without any altering. Presenting facts and letting the audience form an opinion. These days, the filmmakers are often trying to prove a point, and showing only what supports that point. None of this was the intention of this video lol. Thanks so much for commenting. I enjoy hearing from everyone.
If someone experienced in the craft uses Generative Fill, they would probably be light years ahead still. I imagine that if you were to provide this particular video's example scene as a concept for a scene in a film or show... you would be able to make it so much better using the same software! It's not going away, so use it to your advantage! Thank you for the comment!
The problem is the nature of capitalism. Instead of this giving me more options and freedom in my work, it's probably just going to mean our clients are going to expect us to be able to work faster and for less. I've been in this industry for almost a decade and every year all I hear is "we need everyone to work faster." I hope you're right and this does lead to better quality work. That would be nice.@@ProfKlaus
Informative but can you rethink your background music and volume? Overshadows your hard work I think?
Thanks.
I agree. Better ducking would help. Awesome content nonetheless!
@ProfKlaus No man, I really like your background music. I think you used around 4-5 different tracks. Sorry. But I like it 😂
@@reign5426 Very much appreciated.
@@quirkworks4076 I appreciate your comment and noted!
I can barely believe what I see online at times, and this is extraordinary technology. Incredible.
I completely agree! Thanks for your comment!
BEST TUTORIAL ON GENERATIVE FILL!!!
I am humbled by your statement! Thank you!
That's the most interesting editing video I've watched this year👌🏻🥂
Thanks!
@@ProfKlaus 🎯🎯🎯
Keep rocking bro. Nice video. I love it!
I love your comment!
What an impressively easy way to completely change your cinematography quality! Dear sir, your video was so well made that it really made me think that this is not that hard and all it requires it a bit of foresight. My head is buzzing with possibilities.
Thank you for letting me know! It's great hearing that the video is doing what it was intended to do!
It really isn't hard to create a matte this way. I began trying the generative fill tool with some photos I had. When I learned how easy it is, I started pondering how useful it could be for video.
I want to try the doorway trick shown @ 1:19.
Thanks again for the comment!
I'd love to see what you come up with!
There is no video on YouTube like this one. Thanks for sharing your ideas!
So nice of you. Thank you. It means a lot!
This video helped me to understand the vast possibilities of generative fill@@ProfKlaus
Great tutorial! Thank you so much! Clear and precise!
Thank you for the comment!
Thank you very much for this tutorial, you have opened a range of possibilities in my mind. I'll show you what I'm going to do with time. You really are a teacher. 🙌🏽🙌🏽
Thanks Alex! I can't wait to see what you create!
If there's anything you like to learn in particular, I'm always open for suggestions!
That's the most clever use of generative AI I learned about, it's coherent, production ready, the quality is awesome and the results are almost invisible. Great work!!!!
Thanks for the comment! I appreciate it!
Your videos are simply amazing 👏
You made my day!
Great, step by step tutorial! Thank you so much. Great way to embrace new technology. Creative minds will always find a way to be creative.
Thanks for the comment! I'm happy you enjoyed the video!
Thank you for creating this tut! Very helpful!
Thanks for taking the time to let me know!
Time to make our blockbuster movie ourselves. Awesome work
Yes indeed! It opens so many options!
Thanks for the comment!
Thank you. This is brillant!
Thank you! I’m so glad you enjoyed the video!
Thanks for showing actual potential of generative fill and its use..
My pleasure!
Great video professor 🙂. Editing videos is now much more creative.
Thank you!
Thank you for sharing your tips. Very clear 😊
Thank you! I plan on creating more very soon. I also just received an Air 3 drone, so I will have a more stable shot from above moving forward. :)
Absolutely genius! I would mention that you want the drone shots to be as stable as possible, and to stabilise in post to remove any shake.
This is true. I did stabilize 2 of the shots.
Great help & insight :)
Thanks for the great comment!
This is massive :)
Great work!
Thanks so much!
Oh! So beautiful🎉
This is massive. But how simple it is!
Creating the perfect ambiance for your peaceful moments. Remember to like and subscribe for more serene tunes! 🍃💙
Awesome. Thanks and Good Luck!
Thanks so much!
Awesome video. Great channel. Liked & subscribed
Thank you for saying that!
Thanks, this is inspiring.
Thank you for letting me know! It means a lot!
🔥🔥 video
Thank you very much!
That was so great! Thanks 🙏
I appreciate your comment!
amazing mate. Thanks for the great idea.
Thanks for the comment!
This was informative and great, and your storytelling with the ambient music is relaxing. Perfect length, thank you🎉
Thanks for the nice comment. I appreciate it!
Well done. Thanks for this!
My pleasure. 😉
Amazing. Can we see another video for how to use PS beta to zoom out a video in final cut?
I mean widening a scene
Do you mean expand all surrounding sides? If so, you could probably use the same method as this video. ruclips.net/user/shortsStLzReK03aQ?si=8xxxEBWa81oklAHR
You won't want to have any movement on any of your edges. This one may also help you make sense of the concept.
ruclips.net/user/shortsn_aEut13WJc?si=PmFpWC7rUD3GwNPj
I'm working on a few new longer videos that should be coming soon!
@@ProfKlaus almost the same what I meant. Cool. Thank you so much
Great tip. Thanks for sharing!
Thanks for commenting!
Really well done
Thanks for the comment. I love positive feedback!
Great idea! Have you thought about frame-tracking the generated backgrounds for clips with movement?
I have but haven’t had any luck with it yet. I’ll certainly post something when I do! thanks for the comment!
Awesome content!
Thanks! I'm glad you enjoyed it!
Well made tutorial
Thank you
Thank you 😊
Wow !! Thx dude !
No problem!
Great content! The only downside is that the resolution of the image produced is not very high. I am optimistic that it will improve with time.
Thanks for the comment! The Beta version is what it is (amazing still). It will be a higher resolution soon though. In the meantime, I'm going to keep learning how to use it.
Right; you left out the step to upscale the resolution, or generate in pieces if it's a long stretch across the frame.
@@JohnDlugosz I didn’t upscale anything. Everything that was done is shown in the video.
Great Video. Can I create the effect of a person walking in front of the Generative fill back ground (upper body showing over the horizon for example) or will the person have to stay below the created background for the entire shot?
That’s a great question. If you want to be faster, try not to cross the effect (matte) with any motion.
What you’re asking for could be created by:
1. Using a green screen.
2. Rotoscoping around your upper half (as per your example), but this is my least favorite option. (I’m a bit lazy sometimes)
3. You could also use Final Cut Pro’s Scene Removal Mask (if you use FCP)
You’ve given me some ideas for content! Thank you!
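Option 1 above, the green screen, boils down to building an alpha matte from color. A minimal numpy sketch of the idea follows; the threshold and function names are invented, and real keyers (FCP's keyer, Keylight) are far more sophisticated:

```python
import numpy as np

def green_screen_matte(frame, green_thresh=1.3):
    """Return an alpha matte (1.0 = keep pixel) for an RGB frame.

    A pixel is keyed out when its green channel dominates both red
    and blue by `green_thresh`. Purely illustrative.
    """
    f = frame.astype(np.float32) + 1e-6  # avoid exact-zero edge cases
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    is_green = (g > r * green_thresh) & (g > b * green_thresh)
    return np.where(is_green, 0.0, 1.0)

def composite(foreground, background, alpha):
    """Alpha-blend foreground over background per pixel."""
    a = alpha[..., None]
    return (foreground * a + background * (1.0 - a)).astype(np.uint8)
```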
@@ProfKlaus Thanks! I bought the program an hour or so ago and am playing with it. One more question if you have time. I've been using magix 2021 for video editing. What do you recommend for video editor?
@TUCOtheratt For me personally, Final Cut Pro. I know it inside and out and I’ve always loved it. For creating RUclips videos, it’s so fast for me to edit on.
For you, I would recommend Premiere Pro if you have not started on another program yet. Premiere Pro is a very good editor and I do use it from time to time. You may even have access to it with your Adobe subscription.
I’ve noticed Adobe has played the game right with colleges and universities by giving full access to Adobe Cloud while you're a student. Smart. Many businesses ask that you edit with Premiere if you do work for them. (So they can open the files down the line again.)
I think people should be able to edit on whatever works for them, but Adobe wins by having schools and big businesses using their product.
I’ll continue to use FCP as my main editor though. 😉
@@ProfKlaus Thanks!😃
@@TUCOtheratt You could do something like this. www.dropbox.com/scl/fi/gjspqrekzz2r4k4w7xexi/Mountains-and-Cactus.jpg?rlkey=061x4zj956aj66gg8cf56vvz6&dl=0
Cool channel by the way!
great work
Thank you!
For me this is an amazing demonstration. I imagine it won't be long before this can be done in the video apps with live backdrops instead of only still shots.
It's really something to know where you placed the mask, and yet not be able to detect any signs of transition between the layers.
It is impressive. I’m excited to see what “generative fill” will do when it’s released into Photoshop officially. Hopefully they change the name. Haha It just doesn’t roll off the tongue very well.
Thanks for the comment!
The ONLY problem with this stuff is when you want to do different angles of the same location. Mountains will never be the same ones. Can definitely be smart and work around it.
I think the same, it's an amazing tool but we can have a lot of continuity problems
I could see that being the sort of thing that's easily resolved in the future. Just being able to do this as it is now is a huge step, and as generative AI continues to improve I'm sure being able to generate consistent images will be one of the first things that's figured out. Along with fingers... 😀
Be cool to see what these tools offer in a year or two.
Professor ♥
That's awesome
Thank you for leaving a comment!
Even though it's impressive, there are many improvements Adobe should bring to the table for the resolution situation; being able to output only 1024x1024 AI generative fills is quite a limit.
In your example of the paintings, there are clear gaps between the original footage and the generated layers because of the AI output. That could be solved by generating separate 1024x1024 square selections to expand the image, but it takes much longer. Adobe should soon add support for larger resolutions.
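The piecewise workaround described above, generating separate 1024x1024 selections, is easy to plan with a small helper. This sketch only computes the tile rectangles; driving Photoshop itself (e.g. via its scripting API) is not shown, and the function name is invented:

```python
def tile_selections(width, height, tile=1024):
    """Split a region into tile x tile squares for piecewise generation.

    Edge tiles are clamped so they stay inside the region, meaning
    they overlap their neighbours instead of running past the border.
    Returns (x0, y0, x1, y1) rectangles.
    """
    rects = []
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            x0 = min(x, max(0, width - tile))
            y0 = min(y, max(0, height - tile))
            rects.append((x0, y0, min(x0 + tile, width), min(y0 + tile, height)))
    return rects
```

For a 2500x1200 expansion this yields six overlapping squares, each within the 1024-pixel limit.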
Yeah. I think that will be different once they move past the beta stage. Fingers crossed.
dude your music selection is amazing
Thank you! I love hearing things like that!
I guess I left some of it a bit too loud for some on this. I have to consider all types of setups I suppose.
Thanks, amazing stuff!
Thank you!
so dope
I wonder how you can achieve consistency between images where you might have the room at different angles
Easy! Just manually adjust. Not everything will be perfect.
I think what you’re asking is, “are you able to carry the look of one image into another, but from another angle?”
If so, I don’t think there is a way to do that automatically (yet). It’s possible to get what you need but it would likely take several attempts to get an appropriate match.
@@ProfKlaus yeah I guess the word would be continuity. Placement of props, actors, lighting etc. It seems like AI does not yet take this kind of thing into consideration. Especially placement of physical objects in the space.
@@scotthannan8669 I used to work for a major corporation in the Media Arts department. The company changed their branding and as a result they limited the fonts, colours, and even the type of video/photos we could use.
It seemed very constricted at first but once we began following the new brand rules, our creativity came through in ways we wouldn't have discovered otherwise.
Sometimes limitations bring out some pretty unique solutions from us humans. Lol
It is possible though but it would take a little more time. You got me thinking and inspired some ideas!
I hope none of this seemed too preachy. Thanks!
@@ProfKlaus I once heard a drama professor tell his students that limitations are actually beneficial for creativity. I was surprised because I expected to hear a liberal take from this group (unrestricted freedom).
He explained just what you did. Working around limitations often leads to unexpected insights and interesting alternatives that were never considered. Sometimes a liberal and unrestricted situation simply leads to lame outcomes because it doesn’t challenge the artistic side of our human nature.
Great idea
Thanks!
Thanks
OSM 🤩😍
Thank you!
Thanks for this video, it helped me a lot 💪🏻
I love to hear that! Thanks!
Nice Professor!
Thank you very much! I just figured out some other methods! More to come soon!
@@ProfKlaus Love the channel, keep it going!
Great channel! Very informative and interesting.
@@gabrieleausten2600 Glad you liked it!
Great, but problems might arise in these kinds of shots if the clouds are not moving at all, the trees are static, and so on. But it's definitely an amazing thing with lots of uses.
Yes. You always need to consider these types of things.
(Next time you watch Spielberg's Jaws, watch the sky from shot to shot)
This is really impressive
Thanks so much for your comment. I think that the key to success with this tool is subtlety. If people naturally believe what they are seeing, they will be less critical.
@@ProfKlaus Yes, the tool can be very useful when the shot is filmed steadily and when the AI generates good images that look real!
Adobe should integrate this feature into premiere and AE :)
Heard they will soon. Going to make my videos so much cooler.
I wonder if Generative Fill will require a Creative Cloud subscription in the future. I read that the Photoshop Beta is currently free for Adobe Photoshop users, but I worry that in the future Adobe might charge more just for this feature, similar to other graphical AI developers who are already charging their users monthly for their online AI application interfaces.
I'm thinking it will probably be in Photoshop since it is in the "Photoshop Beta". You never know though.
Such an insane feature but I have a question. Let's say you wanted to use a real mountain in the background, like Everest, from a stock video, how would I tweak this process to generative fill a blend between the clip of the mountain and my own footage?
I would take a still off your Everest footage and arrange it in PS with your other video, compositing it, then gen fill what you need to make it all blend.
Good question. I'll try to make a short this week to show it being done. If you apply the same methodology as this video, it should make sense. I'll also make a long form video going through in detail as soon as I can. 😊
Here's a quick short I threw together for you. ruclips.net/user/shortsn_aEut13WJc?si=ujcK9z8Hlj59Xgms
Thanks my man! Blessings!
Awesome thanks!
I would be curious to watch the final edit because the generative fills were all different and so the final edit would look odd because the backgrounds are all different. Was that just for the demo? What would be cool would be if you could get the AI to generate the same scene from the different perspectives of the video footage. I guess 3D environment rendering will be the next step in AI?! I wonder if that kind of thing exists for Blender.
It was for the demo. This particular video was about how to use Generative Fill, but a scene from multiple angles would be a great video. The one thing is… if I were to make a video and not tell anyone that I used the effect, it's likely nobody would notice. But when you do a video about it, the highly critical eye will be out on the prowl. haha.
I’m awaiting a new Air 3 drone which will give me a stable tripod shot from high up. My old drone isn’t always that stable in a slight wind. I think I’ll do another demo that ties together several shots to make a full scene. Thanks for the question and the idea!
*good job bro*
Thanks!
Super🎉
Woah! This is very cool 😎
Thank you! I just learned some more methods that may speed this process up a bit too! Watch for more videos!
That last shot where you are driving off along the road: you could generate a cliff on this side of the car. Could look cool, like you're driving along a cliff edge. Anyway, great video, thanks.
I love that idea!
Problem is the lack of consistency. If you have two shots that are supposed to be in front of the same mountain range how do you get that to match?
Thanks for your comment. If you are making something that requires multiple similar effects, you'll need to think ahead creatively. Use the AI to complement the shots, but not to create the video.
If you look at some of the examples from "old Hollywood" in the video, some of those are just one-off shots. Establishing shots.
You should use the feature to improve your individual shots with subtle adjustments, like covering or adding things. Using Generative Fill subtly will likely go unnoticed if people aren't aware it's been used.
My video demonstration on how it works required something more obvious. 😉
You could also use Gen Fill to blend 2 or more shots together. That way you could use multiple angles from a location too.
ruclips.net/user/shortsn_aEut13WJc?si=_sCEukrLJk206qcD
@@ProfKlaus thanks for the insightful answer! Great perspective
Whats the best way to add rain in FCP?
Hi Dave, I'm not sure if it's the best way, but for my video I used the Rain filter in FCP and found it useful. It allows you to adjust in a way that I found to be pretty easy. The settings will be unique to whatever scene you create, but here are mine.
The filter is under Stylize. Go here to see my settings. :) www.dropbox.com/scl/fo/oqtb5gmzpnzde3bohrqmq/h?rlkey=snsi9l5d0bb2ecf43mwuvo18h&dl=0
@@ProfKlaus This is by far the most inspiring video on generative compositing, and the rain effect at the end knocked the ball outta the park; thank you so much for sharing your creative knowledge.😊
@@daveschmidt9367 Thank you for the kind words! It means a lot!
nicely done ;o)
brilliant
Thank you!
I really want to do this as a way to make vertical videos out of longform.
I’m actually working on a video that will show you how to do that. :)
Is it allowed to use the elements for paid work?
It is completely allowed now that it is part of Photoshop officially.
I think this technique is also interesting if you mix it with your own static background shots taken to blend it perfectly in
I completely agree!
May I ask where you got the lightning overlay?
Thanks for the question!
I'm on vacation, but will be able to answer you clearly in a couple of days. The short answer is I used a few frames of a white solid from Generators in Final Cut Pro (by Titles). Just a few frames. I don't recall if I used a blend mode or just started at 70% opacity and took it down to 0%. I just need to look when I am back at my computer. I hope that helps for now.
@@ProfKlaus Thank you. I use Premiere Pro and am not familiar with Final Cut Pro.
@@THEVROMANEMPIRE The same method will apply. Just put a white solid for a couple of frames at a time on a layer above the video.
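If it helps to see that fade as numbers, here is a rough Python sketch of the opacity ramp being described: a flash that starts at 70% opacity and fades linearly to 0% over a few frames. The function name and the 4-frame default are purely illustrative, not anything FCP or Premiere does internally.

```python
def flash_opacities(start=0.70, frames=4):
    """Linear opacity ramp for a white-solid lightning flash.

    Returns one opacity value per frame, fading from `start`
    down to 0 so the flash pops and then dies off quickly.
    """
    if frames < 2:
        return [start]
    step = start / (frames - 1)
    return [round(start - i * step, 3) for i in range(frames)]

# A 4-frame flash starting at 70% opacity:
print(flash_opacities())  # [0.7, 0.467, 0.233, 0.0]
```

In practice you would just set two opacity keyframes on the white solid and let the editor interpolate the in-between frames for you.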
Maybe add a before/after clip. It's sometimes not easy to understand instantly which is the original.
🤣
@@MountMatze Your answer makes no sense.
Good advice. Appreciate you letting me know it wasn’t clear 😉
Just how specific can the instructions you give the AI to generate fills be? With a few words it did an acceptable job, but I'm sure with detailed instructions it can do better.
Yes my demonstrations in the video were simple solutions. I suggest with any effect that subtlety is better. If you were watching one of those scenes in a video that wasn’t about Generative Fill, you probably would have just accepted it without suspecting that something had been altered.
However, you can get pretty darn specific. For smaller fills, the shape of your selection helps. You can add complements to your scenes, like flowers on a desk or an entirely different office space in the background. You can be specific on the type and color of, say, a vase. You can add broken-down cars on the side of a road. It takes lighting into consideration. It's pretty amazing. When using the effect with video, you don't want to cross the effect with movement (unless you don't mind masking). With a little foresight to consider where you can add things, your imagination is the limit.
You should download a trial of Photoshop and give it a whirl. It’s now out of Beta and part of regular Photoshop.
Nice job!
What about the tracking? I did the same shot with a drone, but there is a slight movement in it... You can then see it's fake.
Yes static is key. My DJI drone has a mode called tripod which helped. You can use the most steady 4-5 seconds for your shot.
Plus stabilization if needed.
Thanks for the comment!
Oh, another thing...
If you're using a LUT with your video, along with color grading, should you export the rendered frame or the original untouched video? With the former, you can't go back and change the grading, contrast, etc. because it won't apply to the matte. With the latter, I guess you should include the LUT, as that's color space info that should have been metadata but that's the difference between dSLR's and real video cameras. But, if it's not graded, will the "off" color cast or whatever hurt the generative fill AI?
I understand what you are asking. In this case I did not do any color correction before exporting a picture for photoshop. It was the video as I shot it. I did not use a LUT in this tutorial. The rain and camera tilt effect was created once the mask was applied.
I have used the effect with color corrected footage and it still looks great. The Generative Fill feature is surprising.
❤ 582nd subscriber
Thank you so much for subscribing! Really!
good tutorial thank you
You're welcome!
I've had no luck with generative fill. Artefacts and colour discrepancy. All AI imaging I have used, like Midjourney, has been useless in getting images that match my vision. And yes, I have researched prompting and done some homework. But it never kicks out what I am after. Fine for random, non-specific work. Better to learn to make it yourself or hire an artist.
I have had issues with prompting as well. However I have had no issues with colour discrepancies. Are you including a small sample of what you are connecting the image to?
Thanks for the comment.
Weird. In PS, it's been amazing: in- and out-painting specific items and backgrounds I ask for, AND it follows the depth, color, light and shadow direction, and perspective of what I'm basing the prompts on in my original images. As of yesterday, Midjourney (which I don't care for) actually introduced some amazing inpainting features too, by the way.
TIP: When using the Lasso tool... if you hold down the OPT key (Macintosh), the lasso temporarily turns into the Polygonal Lasso tool... meaning you can just click around on your image and the lasso will create a straight selection line between where you left off and where you clicked. You can click multiple points to cover large distances quickly, or click and drag to temporarily draw curved lines again (or you can simply release the OPT key to return to regular Lasso mode). I find this most helpful as it's like having two tools in one, and you don't have to keep your finger down on the mouse button the whole time.
Thanks for the great tip!
I'm assuming strictly tripod videography for this?
At this point yes. Although almost all of my shots were from a drone.
@@ProfKlaus Did Premiere somehow stick the new PNG to follow the video?
@@bogdan3978 Yes. You can use the exact same method with Premiere. PNGs are transparent, so you can just lay one on top of your footage.
(I hope I answered your question correctly).
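For anyone curious what "transparent, so just lay it on top" means pixel by pixel, the editor is doing standard alpha ("over") compositing between the PNG matte and the video frame beneath it. A minimal stdlib-Python sketch of the math follows; it is illustrative only (any NLE does this per pixel for you, on the GPU):

```python
def composite_over(fg, bg):
    """Standard "over" compositing of one pixel.

    fg is an (r, g, b, a) pixel from the transparent PNG matte,
    bg is the (r, g, b) video pixel beneath it. Channels are 0-255;
    alpha 255 means the matte fully covers the video at that pixel.
    """
    r, g, b, a = fg
    alpha = a / 255.0
    return tuple(
        round(f * alpha + v * (1.0 - alpha))
        for f, v in zip((r, g, b), bg)
    )

# A fully opaque matte pixel hides the video entirely...
print(composite_over((200, 180, 160, 255), (10, 20, 30)))  # (200, 180, 160)
# ...while fully transparent areas let the footage show through untouched.
print(composite_over((0, 0, 0, 0), (10, 20, 30)))          # (10, 20, 30)
```

This is also why motion must not cross the matte: the blend is purely per-pixel, so anything in the video that moves into an opaque region of the PNG simply disappears behind it.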
These are all steadily filmed shots… but if the camera moves or, for example, the car drives into the "filled area"… how does that work? 🤷🏻♂️
Thanks for the question. You can only do this with static shots. You can add motion in post as I did in the example.
I like your background music
Thanks for letting me know! I liked it too.
@ProfKlaus You are welcome. And great content, it's straight to the point and easy to follow. I subscribed and turned on all notifications 🤍
Isn't photoshop's generative fill technically in beta and not able to be used commercially?
That is correct. Still a good time to learn how you can use it when it becomes part of Photoshop officially.
Update *** it has now been added into Photoshop this week. You can use it commercially 😉
I thought Adobe already covered all their AI copyright.
@@peacefusion Adobe just released Generative Fill officially into Photoshop this week. It is now okay to use commercially. The AI gets its source imagery from Adobe's own stock service. I believe the artists receive a payment of some sort.
General Kenobi??
Huh?🤔 lol
Please don't use the background NOISE... it's so distracting :(
Thanks for the comment.
I did not use any. It could be because Generative Fill is in beta and only does 1024 x 1024.
He meant the annoying music in your video 😅
DaVinci Pro? You mean DaVinci Resolve?
My bad. Sorry about that.
😁 I am Phil.
Generative Phil.
Okay, name order, but it's funny, imho. :D
Haha. Nice.
Generative Fill describes the action but I personally find it hard to say. It really doesn't roll off the tongue.
You gave multiple examples of when it just works easily.
How about showing us what to do when the camera shifts _slightly_ during the clip?
What about a case where there _is_ movement crossing the matte? In _Indiana Jones_ there was a scene where the actor's legs crossed over the matte, and a technician painted a matte for each frame just on the legs. I think there should be an automated way to say "make a mask for anything moving in this region"?
Easy is easy; I didn't get anything out of this video that I hadn't already learned from Piximperfect's brief video showing one example. Show us the *hard* , the cases that really happen.
Movement isn’t possible with this effect. I was clear that it had to be a static shot. You would have to make a cut and go to a different angle when crossing the mask at this level. Sorry I didn’t reach your expectations.
You would need to use camera projections, but this technique only works for a small amount of motion
Using DaVinci Resolve, in Fusion you can make that clip 3D and track the data of the video clip, then just scale and transform your fill clip accordingly, and I'm 90% sure it will work.
@@theshowreelproductions2097 that's amazing! I’d love to see how that is done :)
Wow
make more
Thanks for the encouragement!
I have more on the way! Sept has proven to be a busy time!
May I ask what video topics you would prefer seeing?
@@ProfKlaus I would like any!
So documentaries and YT videos will be fake too in the future? Just great.
bro what? green screens have existed for a long time
@@ihadanamebutitwasbadsonowi8186 This here has nothing to do with green screen. No one shooting documentaries is using green screen to begin with. What is your point?
I actually don't think it would be a very honest documentary that used this feature in outside scenes (depending on the subject). However, it could still be used to enhance the interviews, making them more pleasing to the viewer.
Idk, I just feel like if things like documentaries really wanted to trick the audience, it would not be very hard even without AI.
I completely agree with you.
We are fooled all the time in modern documentaries. To me, a documentary should be unbiased, showing what has happened or what is happening without any altering. Presenting facts and letting the audience form an opinion.
These days, the filmmakers are often trying to prove a point, and showing only what goes towards that point.
None of this was the intention of this video lol.
Thanks so much for commenting. I enjoy hearing from everyone.
Are you able to do this with a green screen background behind you while speaking? Noob here
Yes. I used one here.
Only 1024x1024px for the moment.
This is true currently, but learning how to use the tools is crucial.
Ah yes, my entire job as a matte painter decimated by adobe. Thank you.
If someone experienced in the craft uses Generative Fill, they would probably be light years ahead still.
I imagine that if I were to provide this particular video's example scene as a concept for a scene in a film or show... you would be able to make it so much better using the same software! It's not going away, so use it to your advantage!
Thank you for the comment!
The problem is the nature of capitalism. Instead of this giving me more options and freedom in my work, it's probably just going to mean our clients are going to expect us to be able to work faster and for less. I've been in this industry for almost a decade and every year all I hear is "we need everyone to work faster." I hope you're right and this does lead to better quality work. That would be nice.@@ProfKlaus
@@fretstain I totally understand your point.
As a matte painter, you actually have an advantage if you make use of this tool.