I came here after finding one of Storybook Studios shorts, wanted to learn about ControlNets and how they worked, excellent demo, I was able to recreate what you taught quickly.
Thank you! Glad it was helpful :) more coming soon.
One of the best teachers for AI visual art stuff, thanks for helping me keep abreast of these tools as the industry changes
That means a lot, thank you! A big project is coming up with more AI lessons, stay tuned :)
Exactly what I needed. Thank you.
You're welcome!
Nice to see you back, mate!
Thank you for sticking around!
Skimmed it, but looking forward to watching and following along more in depth with this technique. Great stuff.
Thanks for keeping it simple. I'm saving this tutorial. I saw Space Vets and thought it was really well done. I'm currently working on my own AI-animated movie right now, so it's good to see others doing this.
Thank you!! Feel free to share your project if you want to, I’m very curious 😄
@@albertbozesan Thank you. It's called Escape From Planet Omega-12. It's more adult-oriented sci-fi (think old-school stuff like Heavy Metal or Fire and Ice), but it's on my YouTube page as the starter video. I would love for you to check it out. I've been doing art and film for a long time, and although I'm by no means a technician, I'm very excited about the new era of AI filmmaking. I see people like you as pioneers, making movie history, so if I can carve out a small part for myself in all of this, I'll be very happy. Please, stay in touch. Cheers.
@@albertbozesan Thanks. Yeah, it's on my channel, titled Escape From Planet Omega-12. Although, I'll say that what I'm doing right now has already gone way beyond what I've posted. I'll be updating soon. Cheers.
YouTube recommended and subscribed. Looking forward to watching future and past videos.
That’s great to hear! Thanks.
Excellent video... I'm a blender newbie and I could follow along. I appreciate all the tips and being concise!
Thank you for letting me know!! Clarity was super important to me, I know how tricky Blender can get.
What a nice trick!
Glad you like it!
Actually so much more than a trick. This tech is key for any filmmaker who wants to do more than just trailers! Thanks for the awesome explanation!
Great job.
One thing I would do differently is use
y = 1 / (depth+1)
(nodes for addition and division; no need to invert or colorize)
Thanks! I don't quite follow - is this not just a different node for the same result? How does it save effort?
@@albertbozesan It changes the shape of the fall-off.
If I'm not mistaken, it's more effective at keeping background elements that are far away from the camera than the original equation.
The curve has a bend rather than a straight line on the graph.
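To make the difference concrete, here's a minimal numeric sketch. The linear ramp is an assumed stand-in for the tutorial's original map-and-invert setup, and the 20-unit far distance is hypothetical:

```python
import numpy as np

depth = np.array([0.0, 1.0, 5.0, 10.0, 20.0])  # distances from camera, in Blender units

# Assumed stand-in for the original setup: linear map to [0, 1], then invert.
far = 20.0  # hypothetical clip distance
linear = 1.0 - np.clip(depth / far, 0.0, 1.0)

# The suggested reciprocal fall-off: y = 1 / (depth + 1).
reciprocal = 1.0 / (depth + 1.0)

print(np.round(linear, 3))      # [1.    0.95  0.75  0.5   0.  ] -- straight ramp, farthest object hits pure black
print(np.round(reciprocal, 3))  # [1.    0.5   0.167 0.091 0.048] -- bent curve, distant objects keep a faint value
```

The printed rows show the bend the comment describes: on the linear ramp the farthest element goes fully black, while the reciprocal curve keeps it faintly visible.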
@@AZTECMAN I see! Thanks for the tip!
I have found stacking depth, line art, normal, and other controlnets in Krita’s stable diffusion, referencing the appropriate blender render pass, a good way to go. I have made several videos about this on my channel.
Great tutorial! Thanks!!
Where can I download the Xinsir depth ControlNet model that you use?
Glad you like it! You can find the model on Xinsir's Hugging Face page. Download diffusion_pytorch_model.safetensors and rename it however you like. huggingface.co/xinsir/controlnet-depth-sdxl-1.0
Here you go! huggingface.co/xinsir/controlnet-depth-sdxl-1.0/tree/main
Rename diffusion_pytorch_model.safetensors
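If you'd rather script the download, here's a hedged sketch using the huggingface_hub library. The repo ID and filename come from the link above; the local_dir is an assumption, so point it wherever your UI keeps ControlNet models:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch the ControlNet weights from the repo linked above.
path = hf_hub_download(
    repo_id="xinsir/controlnet-depth-sdxl-1.0",
    filename="diffusion_pytorch_model.safetensors",
    local_dir="models/controlnet",  # assumed path; adjust for your setup
)
print(path)  # rename the downloaded file afterwards, as noted above
```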
Do you have tutorials on how you did the Space Vets art??? Looks great!
Thank you! Check out the making-of vid on the website, and the CivitAI talk linked in the description for more info 😄
Good job.
Great tutorial. By the way, do you have any idea how to create a visualization of a particular house on a photo of an empty parcel? What workflow would you adopt?
If you don’t need a different angle of the house, use a Canny ControlNet, img2img of the photo with a medium denoise. Make sure your prompt is very descriptive. Then inpaint the parcel.
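For anyone who wants to script that suggestion, here's a rough diffusers sketch of the first step. The model IDs and file names are assumptions, and the final inpainting pass is left out:

```python
# pip install diffusers transformers accelerate opencv-python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionXLControlNetImg2ImgPipeline
from PIL import Image

photo = Image.open("empty_parcel.jpg").convert("RGB")  # hypothetical input photo

# Canny edges of the photo keep the existing structure in place.
gray = cv2.cvtColor(np.array(photo), cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

result = pipe(
    prompt="a modern two-story brick house on the parcel, sunny day, photorealistic",  # be very descriptive
    image=photo,
    control_image=control,
    strength=0.5,  # "medium denoise"
).images[0]
result.save("house_on_parcel.png")  # then inpaint the parcel area in a second pass
```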
Works like a charm!
My idea of using the depth map to drive an animation in Stable Diffusion didn't work out that well, though, so maybe I need to make animations in Blender and only use generated textures from SD? 🤔 We'll see...
It’s a perfectly valid idea! You can steer animations using depth, it just needs to be a rather complex AnimateDiff workflow in ComfyUI. I’ll have a course up semi-soon that includes something like that.
Don't forget to render in EEVEE, folks. Save yourself time.
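If you prefer scripting, the engine can also be switched from Blender's Python console; a minimal sketch, noting that the exact enum name depends on your Blender version:

```python
import bpy

# Switch the active scene's render engine to EEVEE (much faster than Cycles for depth maps).
bpy.context.scene.render.engine = "BLENDER_EEVEE"  # "BLENDER_EEVEE_NEXT" in Blender 4.2+
```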
Been looking into Blender and possibly turning some of our short films into animated versions.
For copyright and trademark purposes, how safe is it being Open Source? 🤔
Blender is used by massive commercial studios. It’s safe. Just make sure you download the official version off blender.org, there are some fakes out there.
Great tutorial. I've followed every step, but at the render stage the image is a depth map that's just a black-to-white gradient; it doesn't recognize the depth of the meshes. I can't find the solution. It renders a flat image.
Maybe your camera is outside the room? In that case it would be rendering the outside wall - the “backface culling” I set up at the beginning is only for the viewport preview, not the render, unfortunately.
@@albertbozesan Yes, that fixed it! Thank you =)
Hi, I finally followed this tutorial, but when I dragged the window and couch into the room, they positioned themselves on top of the room or outside of the cube.
I could not get them to sit at the 3D cursor, as in, on the floor. I have to use G and choose an axis to move them into position.
Any ideas?
Is “snapping” on at the top of your viewport?
@@albertbozesan wrote: "Is 'snapping' on at the top of your viewport?"
No, it is not on (the magnet icon).
It's not such a biggie.
I follow other tutorials and then I forget what I did.
Question about the two images at 12:00 and 12:06: how do you ensure the wall texture behind the guy is precisely the same texture as the one on the wall seen in the first image?
We don’t. We prompt and curate the results carefully.
@@albertbozesan Thanks for the answer!
Very very nicely done. I love this method.
As you said, this gives one much more control over what one wants!
Thank you! Glad you like it 😄
Any reason why you don't use ComfyUI instead?
Ease of use for viewers! Comfy scares beginners away from AI and can be frustrating even for experienced users if you just want to do something quick and simple.
That said, I have a ton of ComfyUI content coming soon 👀
Flux didn't care much for my reference image no matter how much I prioritized the ControlNet.
I prefer ControlNet with SDXL; it's more reliable at the moment. You could use Flux as img2img in a final pass?
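As a hedged sketch of that last suggestion, diffusers' Flux img2img pipeline can refine an SDXL + ControlNet result at low strength. The model ID and file names here are assumptions (FLUX.1-dev requires accepting its license on Hugging Face):

```python
# pip install diffusers transformers accelerate sentencepiece
import torch
from diffusers import FluxImg2ImgPipeline
from PIL import Image

pipe = FluxImg2ImgPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

init = Image.open("sdxl_controlnet_result.png").convert("RGB")  # hypothetical input
refined = pipe(
    prompt="same scene, crisp details, photorealistic lighting",
    image=init,
    strength=0.35,  # low denoise preserves the SDXL composition
).images[0]
refined.save("flux_final_pass.png")
```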
@@albertbozesan Thanks! I won't bang my head against the wall looking for other workarounds.
Hi Mr. Albert. I've been following you for a long time; I just couldn't bring myself to cold-message you on LinkedIn.
I found a company (selling guitar courses) based in Helsinki that is looking for an AI video creator intern. I know you are not an intern, but I thought you might be interested in reaching out to them; maybe you can collab in the future. I am not affiliated with them in any way, shape, or form, just saw the ad.
Cheers!
Hi Cosmin! Thanks for reaching out and sharing this - don’t worry, feel free to connect on LinkedIn anytime!
I’m very happy as Creative Director at Storybook Studios, but I’ll push this comment. Maybe somebody in this community will find it interesting!
@@albertbozesan Ok, thank you for the reply. I'll forward the job listing in a PM then; maybe your studio might be interested.
I am likely to wait until there is a prompt app to generate .blend files. I am also likely to wait until the fucking nerds stop trying to make me learn more complicated shit to do shit nowadays !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! NO CODE SOLUTIONS !!!!!!!!!!!!!!!!!!!! PROMPT TO COMPLETE OUTPUTS ONLY !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
And this is why the nerds get to make cool stuff first 🤓
@@albertbozesan NERDS !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!