Get More Blender Addons: bit.ly/3jbu8s7
Get AI Render Tool: bit.ly/3gsanNR
Just when I think my mind can't be blown any further by all the AI madness...
lol Great summary. Same here.
Yeah machine learning 🥴
I guess you could combine this Dreambooth integration with EbSynth to create animations based on the keyframes you created in Blender.
I don't think this is something we creators should be afraid of. I see this tool as a way of generating large volumes of different ideas more quickly than you could on your own. You then pick the best parts of each to use as the basis of your actual design, which you would complete using traditional methods.
AI is great for making concepts, but today that's all it works for. We'll see in a few years.
I felt the same. This could help architectural concepts explore different spatial sensations and styles, which we could then refine into an actual, detailed and documented final design.
I agree, it's a great way to establish a foundation to build upon, and a wonderful accessibility tool for those who struggle with aphantasia or other conditions.
the orchestra will always need a composer
Sounds overly optimistic. That assumes AI won't get good enough to replace artists, which it probably will.
The only reason I still use Blender to create unique things is because I have total control over the artwork.
Insane... I kinda love and hate this at the same time.
This is great, really loving all the SD integrations in Blender 😃
If you have a local install of Stable Diffusion, could you run this locally instead of through DreamStudio?
Is there a way to connect it to Stable Diffusion if you're running it locally?
Thanks for the resources!! All really useful 👍 🙂
Brilliant job on the video, and inspiring content
Pretty soon Blender will just be an AI text prompt!
The universe was made from a text prompt
Ok but imagine a whole comprehensive movie script converted into an AI generated animated film
@@Echidna23Gaming It's coming!
No thank you.
I hope so
Things are getting crazy
Does this require a powerful computer to render like it usually does, or does it work in a different way? And can it render realistic images?
I bet before long you could write a script and an AI would turn it into a fully animated movie.
No need… the AI would write the script for you as well…😅😢😂
@@mmorenopampin AI doesn't have creative power; you still need to evaluate it and correct it.
@@MODEST500 It doesn't have creative power *yet*.
@@fritt_wastaken And it never will. A rise in computation speed doesn't necessitate the emergence of consciousness. Anyone who could substantiate a claim of AI becoming conscious would deserve the highest Nobel Prize in science for solving the hard problem of consciousness.
@@MODEST500 ... let's see about that 😀
Do you know how I can render these AI pictures at higher resolutions, maybe 4K and above? When I try to select a different scale or resolution, I get an error telling me it's too high... So when I finish rendering, the quality isn't really what I want it to be...
What AI tools are you using in the intro from 0:00 to 0:20?
Is it only for Blender?
Is there a maximum number of images that can be generated? Or something like credits?
I have the same question in mind right now.
It's very cool, but I'll wait until something can run SD locally.
Dream Textures runs locally, uses SD, and lets you use renders as input images. It's limited to low resolutions, but still worth investigating whether you could build something on top of it.
It's a bit scary how much AI can do lately; scary but cool at the same time.
Very scary for me, since I'm a 3D and 2D artist.
Think about 50 years from now 😵💫😵
honestly.... this AI thing is madness.
6:13 now that is super cool 🤗
The year 3000 came early this year
Mind-blowing stuff.
Hey, this is amazing! Can I use it to render animations? When I click Render Animation, it skips the AI render on each frame.
I was wondering this same thing.
I am waiting for sequence AI render
Nice, I treat it as inspiration.
Does it work with all graphics cards, or just better with specific ones?
I believe DreamStudio is doing all the AI work, so it should work with any card.
It renders in the cloud. No special GPU necessary.
This is powerful!
How did you paste the API key?
I am terrified yet excited
Me too, the horror of AI taking over so all can create art... not the arts, no, not the arts! :-D
Besides my first comment, another question: is it feasible to get all frames of an animation to stick to one style and one seed and deliver very stable images without massive variation between them? Your pyramids-in-the-desert example is perfect for this. You had an iteration where the sand was quite natural, showing waves in the sand. Say you made an animation there, with a camera zooming in, mounds moving, and trees slowly appearing, and you wanted the same sand texture to remain from one frame to the next. Could that be done with this version, or would it need improvements first?
Kind of. After doing a few animations with this addon (you currently have to export each frame individually), you need to set your likeness (image similarity) anywhere from 0.5 to 1.0, and turn off the random-seed setting so the same seed is used for every new frame.
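For the curious, a rough illustration of why the fixed seed matters: diffusion starts each image from seeded noise, so reusing the seed means every frame starts from identical noise and only your changing render drives the differences between frames. The names below are illustrative stand-ins, not the addon's actual code:

```python
import random

def start_noise(seed, size=4):
    """Stand-in for the seeded noise a diffusion model denoises from."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(size)]

# Fixed seed: every frame of the animation begins from identical noise,
# so only the changing init image (your render) alters the output.
fixed = [start_noise(42) for _frame in range(3)]
print(all(noise == fixed[0] for noise in fixed))   # True

# A new random seed per frame gives each frame different starting noise,
# which shows up as flicker between consecutive frames.
varied = [start_noise(seed) for seed in (1, 2, 3)]
print(varied[0] == varied[1])                      # False
```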
You had me excited thinking it can generate models...
Spectacular.
How can we render an animation with the same style?
By using "Greg Rutkowski" and "Artgerm" in the prompt.
Yo that is fucking insane
Interesting use of inpainting... wondering if this works by taking a screenshot and then importing it into Stable Diffusion A1111.
You could do this as well with Midjourney by uploading your pic.
Great video bro
Wow, so much possibility!
I can see us out of jobs in the near future.
Yep, low- to mid-tier artists are f'd. What I want to know now is how future artists will be trained, since the usual pattern was a pyramid with a bunch of novice artists at the bottom being raised by the top. With the bottom nearly cut off, who will be trained, and would paying such a high cost for training even be worth it when so few will make it?
It still needs people who have a sense for style. But I am scared nonetheless.
Who is "us"? Maybe if you only create cubes, yes, you will be out of a job.
But from what I can see, AI is going to open up lots of creative jobs and content; artists will be needed as always.
Better to focus on learning skills, not on fear.
Hahah, it's crawling shit for now; it's even hard to use this in a real job.
I don’t
**Update:** AI Render now supports animations!
Can I use this AI to make pictures and post them on Instagram?
Are there any restrictions there?
Thanks.
Don't see this as something I'd ever use.
cool!
Only works for NVIDIA GPU owners unfortunately.
Fortunately*
Would be awesome to see this integrated with Blender's panorama rendering for making VR backgrounds.
Need a way to connect this to run with local Stable Diffusion instead of DreamStudio. DS is worthless, really: a posed wooden-textured mannequin, not even a female shape, and it takes it as a nude object and censors it. WTF.
Can I use this for your animations
AI scares the shit outta me, so I guess concept artists may lose their jobs.
You still need artists to train the AI, and you can't get the AI to make exactly what the user wants.
I think concept artists will use AI more than AI will replace them.
@@jaijiu Well, for now you're right; let's see what we'll get in 10 years.
More that the focus of the role might change.
Teams might shrink too
There goes the artist's creativity, down the drain.
Yeah, as if it's not okay for artists to use references.
Does this store your renders online?
This is amazing, thank you.
🤩
Is there a version that uses a local install of stable diffusion?
A version is coming up, will follow up with that and make an updated video.
There's a node for that ?
😍
I thought I could render it myself owooo 😳
Fuck me... it's over 9000!!!
That's so cool!!!!
For a business pipeline, I see potential in productivity. From an artist's angle, straight-up cheating.
Commenting before watching, just because the title alone is so exciting. I've been going deep with SD (downloading the just-released 1.5 checkpoint as we speak! 🎉), and just a few days ago I was wondering: is this the death of ray tracing? We already have that multicolored mask thing for compositing, can't remember what it's called... Truly, with the right setup, Stable Diffusion won't need any more than that segmented view with object IDs and some basic descriptors of the objects/materials?
I mean, you can effectively already do this! And again, I haven't even watched the video yet! 🤷🏽♂️🤣🤣👍
Can you do this with an animation as well?
The next step is to have AI that uses AI to create prompts to generate artwork.
I wonder if this could be used for free.
How many images can you generate for free?
Every day I wake up to more and more lit AI news. What a time to be alive!
Yeah !!
With the new version you can run the AI locally if you know how to set it up, so you no longer have to pay DreamStudio.
Is this AI stealing from other artists in order to train it though?
Not any more than artists steal from other artists.
Nope, I would look up a video on how diffusion models work, it's really interesting
@@litjellyfish It's similar to comparing a fisherman to a trawling company: they both fish.
Perfect for making storyboards
Why am I even studying when AI is taking over the business anyway?
concept artist tool
Looks awesome, and then you realise how many jobs this evolution in AI is going to cost...
o.o~ wow ~~~~
No Hugging Face API for this addon? We're forced to use DreamStudio? Unfortunate. Went from excited to instantly bummed out.
I'm working on local installation as we speak. I'm hoping to have it out today 🤞
Holy shit
Now I'm starting to pity those wonderful, talented artists whose talent and skills will now be a thing of the past.
Damn...
Ambiguous, but remarkable. It's interesting.
I love this. I had no idea AI was expanding so far into the creative space; I know there's AI art, but I never would have expected it in 3D modelling yet.
This is fantastic for adding extra levels of accessibility for people with disabilities who may have trouble seeing fine detail, navigating, or using Blender as a whole, or what have you. I know a blind friend who wants to get into making models and objects, and having an AI assisting them along the way and helping bring their "mess" (their word) to life as an intricate and detailed project makes me so happy to imagine!!
Well I'm glad I'm out of the CGI game, this is really something.
Also, there are YouTube videos showing new AI-based animation tools producing fully animated characters just by typing a text prompt.
Nice tutorial. Example... drum kit (kick, snare, hats etc.) all red, app all purple, keyboards all yellow, softs all green, and so on.
Just one API plugin... not local.
I'm working on local installation as we speak. Hoping to have it out today 🤞
Awesome to combine with the "one shader to rule them all" addon.
With all the work it takes to use the AI renderer, it's easier to make your own render.
Says I have to buy credits...
I was secretly already hoping this would become a reality one day.
I've always wondered what would happen if you let an AI render a 3D scene instead of using traditional raytracing or rasterization.
What happens if I don't use 3.3?
I got better performance with 3.3
You'll explode, cause a fire and burn down the neighborhood.
It supports 3.0–3.3, and others said it works on 2.93 LTS, although I haven't personally tested that yet.
✅✅✅
GAAAAAAAAAAAAAAAAAA
bruhhh
Ebsynth is better
First one to comment
Fuck AI
The future is now, old man.
ALL ARTISTS ARE GOING TO LOSE THEIR JOBS. Thanks AI devs
Yay!
Dey tuk err jerrbs!!!
cope, charge less, you are way too expensive
@@MaximilianonMars LOLLL
@@defaultcube5363 People are already charging less due to all the competition and saturation in the market. And by charging less, others feel the need to charge less to compete, so everyone is going to struggle terribly financially.
If we all charged appropriately (higher), no one would have to lowball themselves, go broke, and get burnt out.
But now we have to worry that AI tuk err jerbsss!!
AI and Blender ...
Or just load your render into local SD for free.