I've been impatiently waiting for this one. Thank you Raymundo!
Going to watch it now.
Yessss welcome back ray, thanks for covering this amazing topic.
I hope you’re doing fine 😊❤
Thank you, I am doing very well. I wish I had more time to produce more content, but luckily my business keeps me very busy.
Thank you for this Tutorial Ray. Really insightful
Bravo! Raymundo
Fantastic! Thank you so much for doing this!
Excellent - thank you for sharing!
Thank you for making and posting this video Raymundo! Please keep up the amazing work!
Absolutely LOVE the rich, juicy environments!!! Fantastic 😍😍😍😍😍😍😍😍
This is the future of design!
Thanks for your valuable information. Manually sketching is difficult and time-consuming when trying to produce good output, but this is awesome software for building good results. I have worked with this kind of software for a long time, developing with Autodesk Alias Automotive. Thanks to God and to you.
wow. it's really useful and helpful. thank you. :)
Thank you for your time and info with this, this is huge.
Hello. First of all, thank you very much for this great tutorial.
Second, I have a question: how can I re-render the rendered images with Stable Diffusion so that I can render the subject from other views or angles?
This is awesome!! Thanks a lot for sharing!
Which model/version of Stable Diffusion are you using? I followed the installation video (updated version) of the one you linked, and got 1.4, 1.5 (pruned-emaonly safetensors), and the 1.5 inpainting one. The results are not as good as, for example, what I would get from Lexica. Maybe I need to train the model?
I think I was using the 1.5 when I recorded this. Did you use the same prompts I did?
@HandleBar3D, I stuck with it, and I'm glad I did :) Things are pretty crazy with ControlNet, LoRAs, and IPAdapter... I've even started switching over to ComfyUI for experimentation. Still, it can be tricky to get fine control over a model, especially for products that aren't cars. I'm definitely getting useful results for concept generation, CMF variations, etc.
Thanks for kicking off this fascinating journey for me!
Awesome tutorial! For products that it may be unfamiliar with, would you suggest training a model/LoRA?
I’ve never done it but I’m sure the results would help with your goal
Your generations seemed so fast! What graphics card are you using?
I'm using a 3090, but I definitely cut the wait time in video editing to save time. 😂 I wish it were that fast. It usually takes my computer about 15 seconds in real time for each image generation.
You get a subscribe and a like just for taking the time to make written instructions =D I love YouTube for inspiration, but for trying things out myself, something written is MUCH better :)
Great workflow, and such a kind heart to share the knowledge! Thank you, Raymundo. One question: is it possible to use more than one input image to mix up the art direction? I will be installing and trying it very soon...
Hello, thanks for the kind words, and sorry for the delay. If you want to use multiple images, that is more along the lines of training; the channel I linked to has some videos on how to do this.
How long does it take you to run one generation of an image at 119 steps? What hardware are you using?
First: thanks for this informative video! Appreciated!
This all seems great, but what would really make it a no-brainer for me would be the option to highlight only certain areas of the image with a paintbrush, so the software only continues making iterations of that highlighted area.
For example: I'm super happy with one single car design, but the front lights still look wrong to me. I'd then choose a paintbrush or selection tool to highlight ONLY the front lights and let the software generate further iterations of the chosen image with different front lights only.
Is that possible in Stable Diffusion or any other software like Midjourney atm?
This is called inpainting, and it is very easy to use. There are some examples in the guide I linked in the description.
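Most Stable Diffusion UIs expose inpainting with a brush tool: white areas of the mask get regenerated, black areas are preserved. If you ever want to build such a mask programmatically instead of painting it, here is a minimal sketch using Pillow; the image size, the rectangle covering the front lights, and the file name are hypothetical placeholders.

```python
from PIL import Image, ImageDraw

def make_inpaint_mask(size, box):
    """Build a grayscale inpainting mask: white pixels (255) are
    regenerated, black pixels (0) are kept from the source image."""
    mask = Image.new("L", size, 0)                  # start fully preserved
    ImageDraw.Draw(mask).rectangle(box, fill=255)   # region to redo
    return mask

# Hypothetical coordinates covering the front lights of a 512x512 render.
mask = make_inpaint_mask((512, 512), (100, 300, 250, 380))
mask.save("headlight_mask.png")
```

A mask like this can then be fed to an inpainting model (for example, via diffusers' StableDiffusionInpaintPipeline) together with the original render and a prompt describing the new front lights.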
Yeah, if it helps you out, feel free to say you showed it to others sooner, but this technique has been available to the general public for months, ever since image2image was released. Theoretically it was possible even earlier: from the various latent diffusion papers, it was obvious that average people outside Google, OpenAI, and the other insiders would eventually be able to do this. Still, I am glad you are showing the power of this tech to people who either are scared it will take their job, or say AI is rubbish because it can't create hands properly or because the hands and poses aren't art-directable. This tech has been available for months, not years, and randoms expect an Avengers movie with a few clicks. Humans are so tiresome.
Man, this is fantastic, thanks; exactly what I was looking for. I just wanted to ask you one question: I cannot see all the pictures I have generated in the panel on the right. I can only see them one at a time, and as soon as I generate a new image, the previous one disappears. Do you know why? Also, do you know how to create HD images?
Thank you again for sharing; this is so useful, and as you said, it will be the future.
The guide on my page has instructions on how to make them HD.
Every image you generate is actually auto-saved in the output folder of your Stable Diffusion installation. You can find them there.
The latest release also has an image browser integrated into the UI. Hope this helps
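To illustrate, here is a small Python sketch that finds the most recently auto-saved generations. The outputs path below is a guess at the default WebUI folder layout, so adjust it to your own install location.

```python
from pathlib import Path

def latest_images(outputs_dir, n=5):
    """Return the n most recently modified PNGs under outputs_dir
    (empty list if the folder does not exist)."""
    root = Path(outputs_dir)
    if not root.is_dir():
        return []
    return sorted(root.rglob("*.png"), key=lambda p: p.stat().st_mtime)[-n:]

# Hypothetical default layout; point this at your own install.
for p in latest_images("stable-diffusion-webui/outputs/txt2img-images"):
    print(p)
```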
Awesome, Ray... thanks so much for sharing. Quick question, off topic: how do you get the "dark theme" background for SD's user interface? Is that somewhere in the settings?
I don't know 🤷♂️ I was wondering that myself, because I've seen the white theme, but I'm assuming it's in the settings.
@HandleBar3D Okay then... when you installed it on the first go, did it default to the dark theme?
@scottadam4268 Yeah, I think the reason is that I have Chrome set to dark mode, so maybe that controls the look, not Stable Diffusion. But yeah, it's always been dark for me.
You can add "/?__theme=dark" at the end of the URL in the browser's address bar (e.g. http://127.0.0.1:7860/?__theme=dark).
Hi Raymundo, this is unrelated, but I have a question about VRED Professional. Each time I add my environment, it doesn't appear in the scene where the car is until I turn on raytracing. How can I make it appear without turning on raytracing?
I haven't used it in years, so I wouldn't know.
Your videos are amazing. Would you please create a tutorial about animation, IK solvers, and bones in Alias?
Hey! I'm an automotive design student / programmer, and I've been training my own AI for designs as well. I'd love to chat about this topic with you!
For sure, send me an email at ray@handlebar3d.com. I'd love to see the results you're getting.
Great video, but AI-generated digital artwork may not be copyright-protected. Not yet, at least; I expect that to change within months.
The courts are slow. Maybe in 2023 there will be some early cases, but who knows how long it will take to create laws around this. IMO, it is impossible to have concrete laws around this. I, as an artist, can already emulate any other artist's style by hand; so if someone else can push a few buttons and emulate someone's style, what is the difference? They are just able to do the same thing faster, with fewer button clicks and hand gestures. Times have changed, and there is NO going back.
@monkey4102 The law needs to be expanded to include AI artwork. I agree things are moving way too fast to stop what will be inevitable one way or another.