Not sure if it would give better results, but you can get a depth map by enabling "z" pass, then using the compositor.
In the compositor, you can feed the Depth socket into a Normalize node, then into a Math node set to Subtract (1 minus depth, to invert), then to the output.
Might yield better results than mist, but I can't really say for sure since SD is just using the "overall" feel of it.
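For anyone who wants to script that node chain, here is a minimal sketch using Blender's Python API (bpy); node and socket names follow recent Blender 3.x/4.x releases and may differ slightly in other versions.

```python
# Hedged sketch: enable the Z pass and build Render Layers -> Normalize -> (1 - depth) -> Composite.
import bpy

bpy.context.view_layer.use_pass_z = True        # enable the "Z" (depth) pass
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")
invert = tree.nodes.new("CompositorNodeMath")
invert.operation = 'SUBTRACT'
invert.inputs[0].default_value = 1.0             # 1 - depth, so near objects come out white
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(render.outputs["Depth"], normalize.inputs[0])
tree.links.new(normalize.outputs[0], invert.inputs[1])
tree.links.new(invert.outputs[0], composite.inputs["Image"])
```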
I think he tried to simplify and avoid Blender specifics.
Nice to see we are already at that point. This can be nice for future AI-based lookdev, once we achieve a decent amount of consistency in the style of objects. Would love to see some examples of 3D meshes turned into depth maps and how consistent they look.
It's funny that you stopped at the difficult part: when you change the perspective and use img2img (depth) again, it still tends not to keep the same building, unless it is already pre-colored to an extent.
of course it's not keeping the same building. just use blender for your project instead of ai.
@@liialuuna This is a combination of both, and soon it will probably be possible.
@@liialuuna just use blender...
Just do this... Still missing the point.
Having spent years mastering Blender, the sense of excitement I feel now, as I look towards the horizon filled with new opportunities, is truly unparalleled. The journey with Blender has been a transformative one, marked by continuous learning and growth. Today, as I stand on this precipice, eager to dive into what lies ahead, the prospect of exploring these new avenues fills me with an exhilarating sense of anticipation.
Pro tip: If you hold down the Windows key and tap the plus key to open the magnifier tool (or click the start button and type magnifier) and then hold ctrl + alt + i you can invert the colors of your screen when making your adjustment in Blender. Just make sure to set the magnifier to 100% so that it's not zoomed in and shows your full screen.
Thank you! This might be a bit too much, but for anyone curious whether we can invert Mist in Blender: go into the Compositor nodes area, add a ColorRamp node, and reverse the black and white stops. You'll need to tweak things until it looks right for your scene.
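As a rough sketch (again assuming Blender's Python API, with the Mist pass enabled on the view layer), the inverted ColorRamp setup could look like this:

```python
# Hedged sketch: route the Mist pass through a ColorRamp whose stops are swapped to invert it.
import bpy

bpy.context.view_layer.use_pass_mist = True
scene = bpy.context.scene
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

render = tree.nodes.new("CompositorNodeRLayers")
ramp = tree.nodes.new("CompositorNodeValToRGB")      # the ColorRamp node
ramp.color_ramp.elements[0].color = (1, 1, 1, 1)     # near end -> white
ramp.color_ramp.elements[1].color = (0, 0, 0, 1)     # far end  -> black
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(render.outputs["Mist"], ramp.inputs["Fac"])
tree.links.new(ramp.outputs["Image"], composite.inputs["Image"])
```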
After so many years working in Blender, my heart flutters, seeing new possibilities.
My thoughts exactly!
Just projection map it to the mesh, then bake to your UV set and clean up the texture after. If a few different projections were made in the same style, you could probably get a large chunk of texturing done very quickly. Sometimes I wish I still worked in 3D, with all the new stuff coming out.
The "possibility" of hacks mimicking great in-depth texturing and rendering in 5 seconds via generators to compete with your job? Yay, so exciting.
@@f4ust85 luddites can scream at traffic
Pretty soon you won't even need to know how to use Blender.
very cool, thanks Olivio, always love your videos for when I need to find out what awesome new stuff is going on in AI!
This is a really nice technique, i wanna use it in my next set design project! Thank you❤
Always something new to learn in Blender but this feels like a quantum leap using AI. Thank you.
I really get a lot out of these always new and magical lectures. thank you!
There's also a plugin for rendering directly with AI in Blender. Not sure how it compares though as I've not tried it.
Would enable a swifter workflow though.
yes, but Krita has all the power of a full image editing program, giving so many more options to quickly adapt and edit. This video only shows the 3D input part, but if you check out my recent art jam live streams, you can see me using Krita for painting as well
Wonderful. Thank you. Very instructive and it seems a much better approach. More flexible.
That last move you made, rotating the scene in Krita, sold the workflow to me. Now I need to go back and refresh my old Blender brain muscles. Thank you Olivio!
last move was rotating in blender
He switched back to Blender to rotate the scene.
@@Av-uv6xu Thank you for pointing that out. You are right we are not importing a 3d model into Krita. That means a few additional steps to get the scene you like and I am OK with that.
Then you should check out an addon for Krita called "Blender Layer", which syncs your Blender scene in real time into the AI Diffusion plugin in Krita.
I have been learning Blender for a year now. Now I have jumped onto the ComfyUI and Stable Diffusion bandwagon. This is exactly what I was looking for.
I'm happy to see Blender + Krita + ComfyUI! Open Source software rules! ^_^
As a blender-head _ I'm very happy you are directing attention to Blender. this is truly one of the the best workflows, depth map is so powerful,
Using something like ReShade I could see an interesting situation where a person could get depth maps from games and re-skin them for really cool screenshots, or re-imaginings of old games.
this is freaking amazing, thanks for sharing!
In a world where the boundaries of reality stretch and bend, we journey from the tangible, three-dimensional spaces we know so well into the vast, uncharted territories of Artificial Intelligence. This narrative unfolds, revealing the genuine, unparalleled potency that AI holds, not just as a tool, but as a transformative force reshaping our world, our perceptions, and perhaps, our very essence.
Why don't you just slap the depth map into ComfyUI and use it as a ControlNet depth map? Must be much easier?
Can you make a new A1111 installation tutorial for newbies? I really can't follow those earlier tutorials
Search on YouTube for:
"Royal Skies 1 min install stable diffusion."
But instead of automatic1111, look at ForgeUI. Same interface but faster.
This is very cool; let's hope for a simpler workflow soon though.
Very handy to just model and allow AI to take care of imagining the texturing, lighting, etc.
The future is now! Amazing stuff. 👍
This is something; you could even get some texture ideas to project back onto the model later on.
It is at least a beyond-polished starting point.
I wonder how it would look with simpler boxes and shapes, and whether it could give you a 'detailed' home out of boxes that we can then use back in Blender to actually model it.
Yeah, rotate the image until you're perpendicular to the wall. Then spam a few AI gens and you have a bunch of material you can splice together in photoshop. Really huge time saver.
This is huge for game developers and artists. It's going to save so much time.
Thank you, Olivio! That is amazing!
Wow. Thanks for sharing this.
I miss the affinity photo tutorials, i hope you get well soon
Beautiful idea, thanks a lot. How could we do the same process with an animated scene?
this is indeed a great workflow! thank you! Is there a way to animate things like this? Maybe there is an option to save a sequence of depth images and import it as a sequence into Krita... would be great to see a way... have an egg-citing Easter!
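One possible way to get such a sequence out of Blender: a sketch assuming bpy and that the compositor is already outputting the depth/mist image; the output path is a placeholder.

```python
# Hedged sketch: render the animation as a numbered PNG sequence that can be imported elsewhere.
import bpy

scene = bpy.context.scene
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//depth_frames/frame_"   # placeholder output folder; frame numbers are appended
scene.frame_start = 1
scene.frame_end = 120                             # adjust to your animation length
bpy.ops.render.render(animation=True)
```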
Great videos. Is there any cloud-based image generator that allows you to use these depth maps to create images?
Great workflow & thanks for sharing 👍🏻
I find that in order to preserve extra details, it's better to use depth plus canny maps. You also need to adjust a bit what's more important depending on your image; sometimes the prompt is more important, sometimes it's ControlNet.
I don't think you mentioned resolution when exporting from Blender. What should be taken into consideration? Should I always export at a standard resolution for SD 1.5 and SDXL, respectively?
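For what it's worth, SD 1.5 checkpoints are trained around 512px and SDXL around 1024px, so matching the render resolution to the model family is a reasonable starting point. A small sketch (assuming bpy and a square output) might be:

```python
# Hedged sketch: set the Blender render resolution to match the target model family.
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1024   # ~1024 for SDXL; use 512 (or e.g. 512x768) for SD 1.5
scene.render.resolution_y = 1024
scene.render.resolution_percentage = 100
```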
The video effectively demonstrates how to blend 3D modeling with AI for creative compositions, offering clear, step-by-step instructions suitable for a range of skill levels. It leverages free tools like Blender, making it accessible, and showcases innovative applications of AI in art, encouraging experimentation and creativity.
I've been using this tech; it is really great. Thanks for the tutorial
Finally installed the Krita plugin. It's *so* powerful. The AI selection mechanism is great too--it's easy to touch up only the part I want to change. I can also mask out various layers if one AI generation is better in one part than another. Guess I should experiment with depth maps and poses.
My favorite part is the upscale--it takes forever, but it seems to reuse the prompt to fill in detail while keeping the image mostly the same. A few minutes later on an old P5000 graphics card, and I've got a great 4k image. It's much more powerful than RealESRGAN. It's also more powerful than Dall-E 3 simply because you can modify images, though it's tempting to start with a Dall-E 3 image and tweak it in Krita AI Diffusion.
is the Krita plugin going to work fine on a GTX 1070?
Sorry, I installed Stable Diffusion and all the related stuff, but there's no way to find a folder or file named A1111 on my PC!! I also tried to write the path to the Stable Diffusion folder, but no use...
This technique may answer a lot about our own reality; the black and white misty place is referred to as the Ethereal plane. It's somewhat of a 2.5D dimension, and you have a 4D dimension which is a collective consciousness that projects a 3D reality we all agreed on, like a shared dream.
Quick tip for Cinema 4D users: you can create a material with a b/w gradient in the luminance channel and use it as the material in the render settings with material override to convert your whole scene to a depth map ♥ Extra tip: you can hook the start and end coordinates of the gradient to the positions of two separate null objects via XPresso to control the gradient.
Nice! Can you render a sequence of images?
Pretty cool. But, as it's based on comfyUI in the background, is it possible with this AI plugin inside of Krita to save the workflow? Like comfyUI embeds the workflow that created the image inside the image. Can that be done here?
Great work, but you don't need controlnet?
Krita and AI are really super, especially since it just requires some clicks to install :) I just wonder, is there any AI, LoRA, or checkpoint that can cut out characters to separate them from the background? Like the depth map, just also throwing out the background.
Thanks!
Love your tutorials
I don’t get too excited about all the cool single images people can generate. What’s fun to me is the video creation potential of things like this.
This is the future of video game rendering, which will be incredible.
I'm looking forward to seeing the other way around, from 2D images to 3D
Greetings, my most excellent companions! I'm eager to know, how are each of you navigating the intricate pathways of our vast and unpredictable temporal journey today?
do I need an NVIDIA GPU for this to work?
what is the music at the 14-second mark?
It's simply called "The Heavy Metal" by fatbunny on Envato Elements
Great vid, subscribed
Olivio, how did you know I was looking for videos like this! Superb! Please more videos about marrying AI-generated photos and 3D scenes! Thank you!
Can’t you just flip the depth map in blender?
yes, probably. I'm new to Blender
Can AI be used to convert an animation of a character into something that looks like a video of a real person? That's what I would like to know.
This is why I learned to use blender ^^
I do something similar, but use compositing and rendering with Z depth (Blender 3D). Then when I get the image and depth, I send the result to automatic1111 img2img with ControlNet depth.
Btw, I tried to render dragons, but even when I had a 3D model with textures and depth and canny, I couldn't get a fine result from the AI.
does Z Depth also have ranges?
@@OlivioSarikas Sorry, but I'm not sure what exactly you mean by "ranges". If you mean "darker = deeper" - yes, it has that. But not such a contrasty image as in your video.
@@OlivioSarikas What exactly do you mean by ranges? Are you talking about the mist start / stop that you were changing?
The z-depth doesn't, because it is a rendering of the actual z-buffer from the graphics pipeline. But I suspect that you should be able to accomplish the same sort of result with some additional nodes in the compositing editor.
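For anyone curious how the "send the render plus depth to automatic1111" step mentioned above can be automated, here is a hedged sketch against the AUTOMATIC1111 web API with the ControlNet extension; the ControlNet model name and file paths are placeholders, and payload fields may differ between extension versions.

```python
# Hedged sketch: img2img with a pre-rendered depth map supplied as a ControlNet depth unit.
import base64
import requests

def b64(path):
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("render.png")],          # the Blender beauty/clay render (placeholder path)
    "prompt": "stone dragon statue, overcast light",
    "denoising_strength": 0.6,
    "steps": 25,
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("depth.png"),        # the rendered Z-depth/mist map (placeholder path)
                "module": "none",                       # no preprocessor, the map is already a depth image
                "model": "control_v11f1p_sd15_depth",   # placeholder ControlNet model name
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
r.raise_for_status()
print(len(r.json()["images"]), "image(s) returned as base64")
```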
I've been using this method ever since controlNET supported depth maps
Amazing!
Thanks Olivio! Nice to see you
hey Markus, you're still around. Write to me sometime :)
@@OlivioSarikas Yes, I'm still around ^^ But I can't get the hang of this new version of Blender at all.... o.O
Sooner than we think, this is how games will be rendered ..
and not so long after, not even 3d models will be needed anymore. just some persistence for the AI.
now all we need is a holo deck
Can it generate an image sequence?
should be possible, you could try to use IPAdapter or Img2Img to keep the style more consistent. Or a LoRA trained on the style you want
Gone are the days when they made expensive movie sets.
that's EXACTLY what i was looking for...
time to relearn Blender I guess. Let's make some lemon juice. cheers Olivio
If Krita is going to fully embrace AI, it would be nice if they incorporated a tool that works like (and as well as) Magnific for upscaling & detail.
With the plugin he uses in Krita you can switch from Live to Generate and to Upscale.
@@DeltaZ10000 What is your opinion of the upscale results?
@@KDawg5000 I'm using 4x NMKD Superscale, which comes with the AI Diffusion for Krita install. I like it, and that I can weight it to add details or not.
Definitely useful for creating consistent concept arts. But that's it. Not useful for other 3D tasks.
it's "3D to Ai" not "Ai to 3D". So this isn't intended for 3D tasks, it's intended to use 3D as a versatile input to create better 2D images
@@OlivioSarikas Yup. As I said, useful for creating things like concept art.
AI to 3D mesh?
is it impossible?
thanks
I have Blender and never used it, watching this I can see why.
also, I couldn't resist.... "yaml be there..."
I've actually been using 3D with Stable Diffusion + ControlNet for a few months now. While 3D works alright with rendered line art and uses the image as a guide in ControlNet for anime line art, it still has limitations. AI-generated art can struggle with clean lines (eew enlarged images) and complex perspectives, especially in building compositions. But it works fine for... that boring perspective, mainly with the whole building on the screen.
(AI still is dumb af, and I expected it'd be way better than what we have today. Apparently, they will never fix the same issues.)
yes, it's dumb af, but you can make some complex buildings too, a lot depends on your controlnet settings and the details in your depth map.
@@liialuuna Yup, I tried depth with MLSD using line art rendered by 3ds Max as a guide, along with all the other possible combinations. But here's the kicker: I was trying to use it for comics, which demands the most complex perspectives possible. AI just can't handle that yet. So, I gave up and now only use AI to add textures to my 3D renders and drawings. It's definitely unusable for serious gigs at this point in time.
Man, I wish I hadn't sold my 3D printer.
Interesting 👍👍
awesome. so you can download any picture, AI recognises it and makes a 3D model, in a different style. and you can tell it what you like or don't like and moderate this 3D world. 😮 and even download scenes with actors, like from any movie, choose avatars and create even movies, moderating scenes like the director of a movie 😮
no, this goes the other direction: turning 3D models into 2D images
@@OlivioSarikas a "prnt scrn" button? 😳👍
That's great, now AI can actually work.
woww.
Someone should do Magic Eye with this depth technique 😅
You can reproject this into a texture and become an AI texture artist :)
Don't just turn the view... that's not accurate.
You should rotate the model instead: just press R, X, 180, then R, Z, 180, or press R and hold Ctrl while dragging the mouse to rotate aligned to the grid.
And don't forget to apply rotation and scale (Ctrl+A).
If you get to more advanced rendering, e.g. with shading, normal maps and so on, it will cause problems, so better do it the right way from the beginning.
Up and down are very important in the 3D universe.
And you don't have to hide the overlays; they will not be shown in the render.
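As a small sketch of the same steps via Blender's Python API (assuming the object to flip is the active object):

```python
# Hedged sketch: rotate the model itself 180 degrees around X and Z, then apply rotation and scale.
import math
import bpy

obj = bpy.context.active_object
obj.rotation_euler.x += math.radians(180)   # R, X, 180
obj.rotation_euler.z += math.radians(180)   # R, Z, 180
bpy.ops.object.transform_apply(location=False, rotation=True, scale=True)  # Ctrl+A
```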
So it's 3D to 2D Ai right?
wow
Hmmm, I wonder where you got this idea?! hehehe - Spoiler, it was me in the live stream chat mentioning untextured low poly for depth maps ;-) You can check in your live replay of the chat on the channel.
Seeing all these positive reactions; You're all welcome! :P
This is the most basic use of control net for well over a year now.
@@audiogus2651 Not of Blender
The intro do be banging
oh, I thought it was an actual 3D model
as a 3D artist I was already doing this just after ControlNet was invented
The problem is that when rotating, it doesn't matter if you use the same seed; it will give you a completely different image.
you could try to use IPAdapter to get a similar style to the image that you created and liked. However, the details are still going to be different; that's just how AI works at this point in time
just ask midjourney for this feature.
Great. I was doing this last 12 months. I thought it was a common trick. ;D
Just use the C4D plugin for SD (and use Forge); it's vastly superior and will render 4K
Great video.
Blender is 4.1??? Anyone think they went from 2.x to 4.x too quickly?
As cool as this might be and look, be warned: if you want to use this in any agency setting, make sure you know how to edit your files exactly to client spec and that you know how to make this an open file that the client's creative team can amend and alter. I have fired more than enough junior artists who have incredible portfolios and then can't do the work because they relied on AI.
Why do you rock soo hard!!??? 😂
this can be done way easier in other 3D packages (I mean the whole export/render to depthmap)
but not all of them are free, of course ;)
To be really fair, I think 3D models to AI image is a weird, not very useful process, since AI images are already amazing... What I'm waiting for is a really powerful image to 3D model with textures, lol...
The point of this is control, and in that regard AI is anything but amazing. Try to create an AI image from a specific viewing angle, using a specific focal length, without guidance from any image input. You can't.
To be honest, learning 3D software just to create a single image or small animation is not a logical thing to do. I see lots of people put their time into learning Blender and trying to create environments etc. However, AI can easily create short videos or a single image. My advice to people is to learn the technical parts: learn a game engine, shader creation, etc. There are so many free videos about Blender, but most people cannot make money from it at all. Don't waste your time. As for this video, it can be helpful for architects or level designers who create a blockout scene for a game.
Interesting, but why not just render it in Blender? Then you have actual control, not some AI random generation. Better still, make the model yourself. Is it that modern generations all want no work and no input, just easy lazy results? End results are not really the point! Making is the pleasure. It is like the pretence that flicking paint at a canvas is art.
With no control, the project creates random stuff that isn't art-directable and then needs many changes, since 99.9% of briefs are specific.
The only way to stop it is not to support it
@@EvokAi Not really, it has been invented and will be used. Individuals will soon not even know they are consuming AI content.
you are right! personally i have wasted a lot of time experimenting with ai while i could have learned to get better in blender. you can make some quick images with this technique described here but it's all just random stuff and you have no control. it's great to explore some ideas though.
There's an infinite number of things a person can create, but a finite amount of time and energy they can devote to creating. There is joy in the process of creating things, but I personally struggle with finishing projects when I start getting a lot of ideas for other projects piling up. Using AI to speed up the creation process - even if it means giving up some control - is worth it for those cases. Rendering in Blender requires you to nail down every single detail of the scene (detailed enough model and textures, lighting, grass blowing in the wind, sunsets, etc), which takes a lot of time. If you don't care about the minutiae of the scene but focus more on the big picture, then the time to create the minutiae doesn't bring you joy either. If you can control the big picture using 3D, then use AI to fill in the minutiae, you can complete the work to your liking and feel satisfied with it.
On some basic level the art community understands this already. It's why there are marketplaces for 3D models and textures for other artists to use. It's not "lazy" to use those provided resources, it's just picking what your project needs.
No, no, no! 10 minutes of precise clicking in different apps, and they call it the power of AI.
Just wait a few months and it should really help you.
A bit selfish, but last time I checked the goal was to be served, not to serve apps.