OUTDATED | How to Texture 3D Assets with Dream Textures AI - Stable Diffusion Blender Tutorial 2023
- Published: 20 Sep 2024
- Get Carson Katri's Dream Textures Addon here: github.com/car...
It's new software and may be buggy - for support, check the GitHub discussions!
Want to support me? Get the A.I. writing tool that I use! Sudowrite gifts you 10,000 bonus words only with this link: www.sudowrite....
----------------------------------------------
Did you like this vid? Like & Subscribe to this Channel!
Follow me on Twitter: / albertbozesan
For another creative way to use Blender and depth maps to make your AI Art, check out this new video by Patrick Galbraith! ruclips.net/video/L6J4IGjjr9w/видео.html
Great video as always! Thanks for the shoutout.
@albertbozesan how well does it do PS1 graphics?
@@thomaskrogh1244 Great question. I found this PS1 lora model on civitai, could be worth trying out: civitai.com/models/55613
What an amazing add-on. I use Blender for still images, which means I'd never have to worry about textures again.
In that case check out this other new video - which by pure coincidence also has a pirate theme. Patrick uses Blender as a foundation to make a gorgeous still image: ruclips.net/video/L6J4IGjjr9w/видео.html
Deep Bump addon will take your dream texture and add an AI normal map to your object
I'm new to Blender, so I have no idea what this means
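For anyone wondering what a normal map even is: Deep Bump infers one with a neural net, but the classic non-AI route is to treat the texture as a height map and take its gradients. A minimal sketch in Python (this is the textbook gradient method, not Deep Bump's actual code; the `strength` parameter is illustrative):

```python
import numpy as np

def height_to_normal(height, strength=1.0):
    """Convert a 2D height map (H x W floats in 0..1) to an RGB normal map.

    Classic gradient-based conversion; `strength` exaggerates surface detail.
    """
    # finite-difference gradients of the height field
    dy, dx = np.gradient(height.astype(np.float64))
    # surface normal is proportional to (-dx, -dy, 1)
    nx, ny, nz = -dx * strength, -dy * strength, np.ones_like(dx)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx / length, ny / length, nz / length], axis=-1)
    # remap from [-1, 1] to the usual 8-bit tangent-space encoding
    return ((n * 0.5 + 0.5) * 255).astype(np.uint8)
```

Flat regions come out as the familiar flat-blue color, which encodes a normal pointing straight up out of the surface.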
Awesome tutorial! this will be one of my standard videos for mapping with Dream Texture for my animation short. Thank you again!
This vid is pretty old - check out stableprojectorz.com for a better free option to texture assets!
thank you Mr Albert Bozesan
You’re very welcome!
Thanks for trying out Dream Textures!
Thank you so much for making it!! Incredibly valuable.
It's very helpful! I'm the only one 3d asset artist in office, so I really need it! Thank you!!
You're very welcome! Glad it was helpful.
Interesting info
really practical use of dreamtexture. Thank you.
Dude 😂😂😂 it took me a whole day to texture paint a wooden crate. Thank god your video showed up in my feed.
from start to finish, this is awesome!
Thank you!!
Gah, you beat me to making a Dream Textures video 😤 Great vid, I'm gonna send my audience here for a full explanation
Hehe. Thanks! Look forward to your vid 😄
thx bro for sharing this cutting edge knowledge 🤝🙌
This was cool. Thanks for sharing
Glad you enjoyed it! Thanks for watching.
Ian Hubert is going to like this for sure! LOL
top workflow. thank you
Love it!❤
Very cool tech, but the output looks mostly useless for any real game object. The quality is so subpar it could only be used for a faraway or super-low-poly object, unfortunately. Hopefully the image generation improves over time, as this capability is huge.
It’s excellent for prototyping and shows a lot of promise, but yes. Not for production.
Bro you are amazing tysm you have my sub.
Thanks!!
Hey man is it alright if I use a 15 second clip of your video in one of my videos about AI? I'll credit this video in my description
Hey there! Sorry, I missed this comment. If it’s still relevant to you, go ahead! Thanks.
Can you suggest a way to generate a texture on top of an existing object? The simplest example: a face is created, and freckles need to be added to it. Freckles need to be a separate texture/separate image. The result will not be suitable if the face is blurred with freckles. The final result should be transparent in .png format and the contours/positions should match the contours/silhouette of the face.
I thought about this and I think AI just isn’t there yet. Maybe somebody could program something but it sounds like you’d be better off doing that traditionally, especially with freckles which you could just paint over.
I think the issue here is the freckles being a separate image. You can use variance of your current texture to regenerate a new image with freckles. When you think about why we put things like freckles in separate images, reusability and consistency are the primary reasons I think of. I don't think you need those as much when you can create new content so quickly. And for consistency, once the AI model is trained on that style, the need starts shrinking as well. So if you need this, it's not possible yet, but you might not need it anymore.
Of course, there are various control methods that partially solve the problem.
However, treating each element as a separate layer would greatly simplify the process.
For example, sky, grass, trees, and building could each be treated as a separate layer, but AI could take into account each of them, depending on whether they are visible or not.
Imagine generating these specific objects, which you can manipulate later - change their size or position.
There would be no need to delve deeper into areas that are already generated correctly.
Inpainting works similarly, but the issue is that it's always connected to an existing image.
@@pastuh I’m not sure how to do that. My remarks are based on tiny changes like freckles, dirt, scratch marks, that kind of thing.
this is crazy!
I have an RTX 2060 Super; is that enough to generate AI textures as fast as you do?
I’m on an RTX 2070 Super, so yours would be slower but worth a shot!
Imagine an update where it can apply bump maps! Ooh, that's a point: do you know of any AI that generates bump maps the way this Stable Diffusion model generates textures?
I don’t believe an AI is necessary to create normal and bump maps in most cases :) check out my other video where I use a free online service:
ruclips.net/video/P4EXq2qpCE0/видео.html
@@albertbozesan thank you!
That is crazy. I wonder if it would work better in orthographic mode.
Yes, very well. Check out my vid on isometric assets, the same method can be applied to texture the models :)
I can't get it to work. I did everything as in the video, but it gives me strange results and completely ignores my prompts. Any idea what's going wrong?
I don't know why, but when I select the zip file with Blender's "Install" option nothing happens, and afterwards when I click on Dream Textures I can't activate it :/
I just imagined doing this with a VR headset on.
I bet that would be awesome. I have some VR content coming up!
holy moly 😍💫🌟👏
I have Stable Diffusion installed on another server here at my house, and we use the web UI remotely on our laptops. Does this addon have the ability to just point to the API inside AUTOMATIC1111? If you're not familiar, that's basically the standard web UI people use for local Stable Diffusion installations.
It might. Maybe ask that on the project’s GitHub, I’m sure the creators will know!
You make great videos, sir.
I lost track of the bump map though.
Can this version also bake normals?
It can’t make normals directly but you can use the technique I show in this video :) just convert the baked diffuse online: ruclips.net/video/P4EXq2qpCE0/видео.html
I can't find the "stabilityai/stable-diffusion-2-depth" model in the list.
This is a very old video, from before ControlNet was a thing. I think it’s different now.
Thanks for the cool video, but why doesn't it work for me? Instead of a picture or texture, I just get a white background after the download finishes.
What PC specs do you have? Are you on NVIDIA?
Can someone help me? I'm getting:
FileNotFoundError(2, 'The system cannot find the path specified')
OMG, I loved it! I'm just starting with 3D texturing. I saw a platform called "With Poly" mentioned in a group; does anyone know it? Do you know if it's good?
I haven’t heard of it
It did not work in my case. Btw, which models are best for making game props and environment concept art? I just want to create concepts so I can use the proper colors and theme for my 3D models and textures in Blender :D Any recommendation for a good model for game concept art?
I recommend OpenJourney v1, it’s an excellent model trained on Midjourney results. My go-to favorite for great colors and style.
What didn’t work for you in Dreamtextures?
This add-on is so buggy. I could barely get it to work and even when I did - the generated texture did not follow the uv at all. I don't see how it's using the depth here.
It’s a really old add-on at this point, using a pretty bad generation of SD model (2.1) unfortunately.
Super grateful!
😮
Thanks for the video, but... how did you get the -2-depth model? Where did you download it from?
The addon downloads it as the default :) there should be a button in the preferences when you install it
@@albertbozesan no, there is not. I had to download another version of Dream textures, which is not so heavy. It has a searchable download.
@@morozicBROTHERS I don't know what went wrong without having seen your setup. But I'm glad you got it working.
I'm not sure how to export the finished model to Unity. I can get the basic outline of my object with a white material, but I want it to save all my colors.
It would be cool if you added "how to export to Unity" to the video.
That’s a little too specific for my video and a little off topic 😄 I use the format fbx, that’s a good one for game engines imo
He said in the vid that you need to be good at Blender to use it properly; not knowing the basic texture functions of the program indicates otherwise.
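For future viewers who hit the same Unity question: the export itself is a single operator call inside Blender. A minimal sketch, assuming the baked texture is already applied to the mesh's material (the settings dict and the "//crate.fbx" path are illustrative, not from the video):

```python
# A hypothetical helper collecting FBX options commonly used for
# Blender -> Unity round-trips; tweak to taste.
def fbx_export_settings(filepath):
    return dict(
        filepath=filepath,
        object_types={'MESH'},   # skip lights/cameras
        path_mode='COPY',        # copy texture files alongside the FBX
        embed_textures=True,     # pack the copied textures into the FBX
    )

try:
    import bpy  # only importable inside Blender
    bpy.ops.export_scene.fbx(**fbx_export_settings("//crate.fbx"))
except ImportError:
    pass  # running outside Blender: nothing to export
```

With `embed_textures` the colors travel inside the FBX; Unity can then extract the materials from the imported asset.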
UV mapping is not working for me when I'm trying to edit the faces.
What specifically isn’t working?
@@albertbozesan After getting the texture from the Dream button, when I go to UV mapping I always get the UV in unwrapped mode, unlike yours. Even for a cube, my unwrapped UV looks like a T shape on the floor.
I have the same issue, I don't have the perspective UV like shown in the video
What's the config of the machine needed to run this addon in Blender?
Same as Stable Diffusion itself. I’m running it well on an RTX 2070S but I’m sure it works on much slower cards.
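For reference, the usual way Stable Diffusion tooling picks a backend (this is the standard PyTorch check, not the addon's actual internals) looks like:

```python
def pick_device():
    """Return the best available compute device for Stable Diffusion."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no PyTorch at all: CPU is the only option
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"  # Apple Silicon
    return "cpu"  # fallback: works everywhere, but much slower
```

Anything that lands on the CPU path will still generate, just far more slowly than on an RTX card.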
I keep getting an error after I install it and try to activate it.
Update: I got everything installed, but now when I try to make the textures it says "unsupported model, select a depth model", even though I'm using the cube like you did. When I click Project Dream Texture it just makes a black texture.
Huh. Is it giving that error when you have a depth model installed? That’s super important, it won’t work with normal SD 1.5 models.
@@albertbozesan I don't think I have it installed, because it's not giving me the option to use one, but I can't find where to install it. I did everything you did, but I don't have it for some reason.
@@swiftiii. it should give you the option to download and install a depth model in the Preferences > Add-ons > Dream Textures menu. I had some trouble early on as well, due to the wrong version of Blender - check the issues on GitHub and see if you're using a version that's compatible with the addon.
Great "delete a cube, create a cube" meme. I don't dare say anything about your pirate flag.
Can you make it use LoRAs?
You could nowadays. This video is super old.
How do you get the options menu to your right? Can't find the tabs.
Edit: nvm found it. Blender is so unintuitive. I hate it.
For future viewers: that tab opens with the shortcut key N
Great video!
I've got a problem and can't find an answer in the discussion thread. If anyone knows what's happening, I'd appreciate your help.
I got the addon and the Stable Diffusion depth version installed, but when I click "project dream texture" I only get a black object (sometimes pink, as if the texture were missing). It puts the UVs in perspective, but it seems like it doesn't render anything. I don't get the noise pattern to start with like in your video, just black. If I go to the shading panel I only have the Principled BSDF and an Image Texture plugged in. But I don't get the material with the numbers that you get. I just get "diffused-material", and it shows me the Image Texture node but without a texture. I've got the latest Blender version, so I'm not sure what could be wrong.
Thank you, and I hope others who run into the same problem can find the answer here too.
Thanks! The fact that it stays on “diffused-material” tells me it never really finishes generating. Do you get any error messages in the command terminal that opens with blender? That’s where you can see more info about what’s happening.
@@albertbozesan Thanks! the error was:
OSError: Error no file named diffusion_pytorch_model.bin found in directory C:\Users\PoW_S/.cache\huggingface\diffusers\models--stabilityai--stable-diffusion-2-depth\snapshots\d41a0687231847e8bd55f43fb1f576afaeefef19.
So I downloaded again the whole stable-diffusion-2-depth and it works now! If someone gets this error, download the model again and also make sure to press again on "activate" model and "install or open" inside the addon settings window.
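If anyone wants to verify the download before reinstalling, the error above just means the weights file is missing from the Hugging Face cache on disk. A hypothetical helper (not part of the addon) to check for it:

```python
from pathlib import Path

def find_depth_weights(cache_root,
                       repo="models--stabilityai--stable-diffusion-2-depth"):
    """Return every diffusion_pytorch_model.bin under the repo's snapshots."""
    snapshots = Path(cache_root) / repo / "snapshots"
    if not snapshots.is_dir():
        return []  # the model was never downloaded
    return sorted(snapshots.rglob("diffusion_pytorch_model.bin"))
```

Point it at the cache directory from the error message, e.g. `find_depth_weights(Path.home() / ".cache" / "huggingface" / "diffusers")`; an empty list means the download was incomplete and needs to be repeated.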
The depth model is no longer there.
Sorry about that, these things change every couple of weeks sometimes. It’s easier now with ControlNet, check out the newer versions of this Blender plug-in.
I don't know why, maybe because I use AMD, but my images are white...
Probably, yeah. The compatibility section of the addon says it’s been tested with CUDA and Apple Silicon.
@@albertbozesan Figured it out... I have to use CPU only.
I added the same prompts and never got those nice results, they were all ugly. :(
Haven’t seen this video in a while - but did you have the same presets selected as I did? They add stuff to the prompt in the background.
Could this be paired with a uv map?
There is one, albeit a really ugly one. If you want a clean one, you can bake the texture to a UV map of your choice.
@@albertbozesan Ah. I guess it's kind of essential that SD uses the original 3D image in the creation process...
When I get more experienced with training models, I might try training one specifically on UV maps.
Thanks for the video, I'm definitely going to be using this going forwards!
@@RealShinpin the problem with that is it’s very difficult to understand what’s going on in a UV texture without the context of the 3D model. You could maybe train one on a very specific uv map of a character or prop.
@@albertbozesan Yeah, that's what I figured. I was theorizing that it might be possible to train the model on uv maps, in a manner similar to how controlnet works. I think I underestimate the difficulty and extra tools I'd need to develop.
@@RealShinpin it would be an incredible tool and I'm not saying it's impossible, so keep me updated if you do figure it out!!
I've followed the tutorial exactly and it's not working at all. It just gives it one shitty color, and it looks nothing like wood.
Edit: Ok, there is a new option now called "Bake" that you have to hit before it'll actually do anything, but the end result was still that it only put the wood on about a third of the box, in the middle...
Edit edit: I can get a somewhat shittier result by increasing steps and turning off half precision. I don't know why this works so much worse than in the vid.
That sounds like a UV Map problem. Did you switch to your baked UV after connecting your new texture?
@@albertbozesan
I started all over and redid everything and this is what it looks like: i.imgur.com/7XrfejH.png
@@albertbozesan Sorry, I'm very new to Blender so it's all very confusing to me and I don't understand what you mean.
Edit: Well, I retried it a couple of times with mostly that same broken result, and now randomly it worked and the UV map isn't all fucked up...
@@PaonSol yeah, so on the right of the image you can see that the UV map of the cube does not match up to your texture. After you have baked, you need to activate your new UV Map and plug the baked texture into your shader.
I understand if that’s confusing, but unfortunately that’s the best way I can explain it in written form. I recommend rewatching the video or checking out some other tuts on baking and UV mapping. Otherwise it will just be frustrating.
@@albertbozesan Thanks for the reply! Blender is unfortunately the most frustrating program I have ever used in my life. It took 2 hours and tutorials just to figure out some of the camera stuff.
But then people tell me the other 3D programs are also obtuse and frustrating to learn, so I guess I'm stuck with it.
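For anyone stuck on the "plug the baked texture into your shader" step discussed above, here's a minimal sketch of what it looks like in Blender's Python console. The names "BakedUV" and "baked_diffuse" are hypothetical; substitute whatever your bake actually produced:

```python
def find_principled(nodes):
    """Return the first Principled BSDF node in a shader node collection."""
    return next(n for n in nodes if n.type == "BSDF_PRINCIPLED")

try:
    import bpy  # only importable inside Blender

    obj = bpy.context.object
    # 1. make the baked UV layout the one used for rendering
    obj.data.uv_layers["BakedUV"].active_render = True

    # 2. plug the baked image into the material's Base Color
    tree = obj.active_material.node_tree
    tex = tree.nodes.new("ShaderNodeTexImage")
    tex.image = bpy.data.images["baked_diffuse"]
    bsdf = find_principled(tree.nodes)
    tree.links.new(tex.outputs["Color"], bsdf.inputs["Base Color"])
except ImportError:
    pass  # running outside Blender
```

This is just the scripted equivalent of the manual node-editor steps, useful if you bake a lot of assets in a row.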
One thing they're missing is an offline, no-token method.
No, that’s what I’m using :) it runs locally
@@albertbozesan Ah, I just noticed they had the CUDA version too on Blender Market; I saw the Dream Textures one had that. Thx for the reply xD
It's the biggest piece of dog shit: full of bugs, errors, failures, crashes. A total mess of an addon.
It’s also over a year old and has since been overtaken by ControlNet.