Links
gamefromscratch.com/dream-textures-stable-diffusion-for-blender/
-----------------------------------------------------------------------------------------------------------
*Support GFS on Patreon* : www.patreon.com/gamefromscratch
*GameDev News* : gamefromscratch.com
*GameDev Tutorials* : devga.me
*Join us On Discord* : discord.com/invite/R7tUVbD
*Twitter* : twitter.com/gamefromscratch
-----------------------------------------------------------------------------------------------------------
How about using the AI to create the 3d model itself? Are we at that point yet?
@@DelanaMcKay With the level of technology today that should be possible.
It's a cool tool, for sure, but still, you are using Blender wrong. You should have deleted the default cube and re-added the very same cube.
@@UnderfundedScientist r/wooosh
Correct. For Blender to work properly the default cube must be sacrificed and then resurrected whilst murmuring "ecruos nepo" under your breath. It is written.
@@UnderfundedScientist cringe overused sentence
This cube survived the abyss of cubes
@@UnderfundedScientist bro missed the joke
The reason the results don't look that good is that they depend on your viewport position: if you're too zoomed out, the projection will be too low-res for the model, and the angle you're looking from matters too. Another feature to consider is generating from a depth map, which improves the result significantly.
The results also depend heavily on the model you're using. The prompts for the cobblestone texture were too simple; SD 2.1 requires a more detailed description of what you're looking for.
Make your own tutorial, then; you could ride the hype on this topic.
It would be nice if you made a full tutorial.
Thanks!
Ditto. I'd like to see your tutorial on it.
closing up on the AI bot's crotch for the transition was crazy
If you are using the 2.0 or 2.1 models then you should be using a base size of 768 x 768, as this is what the model was trained on and is the optimum resolution. 512 x 512 is for the 1.x series of models.
"The Blender operating system" what a time to be alive
Thanks for checking it out! For the default “stable-diffusion-2-1” model you’ll get better results with 768x768 images
05:18 - this is the most beautiful flawless texture ever created in this universe. thx for sharing, have fun
I always make the default cube a hero!!
Even though AI is quite powerful, I still feel it takes away the learning curve, the skill check, the experience-building that is required for these kinds of things to improve an actual artist's skills... Of course many people can jump off this to get a boost or knock out quick projects, but at this point I feel it takes a lot away from the personal aspect of unique, individual art. Then again, I have used AI a lot myself too, so it goes both ways!!
I think open-source A.I. is the future. You teach your own A.I. to assist you in your work, create your own tools and applications to work with, and share the best ones with other people.
The time when we waited for developers to come up with solutions is over. Now we make our own solutions; we are the solution.
The generated texture isn't seamless on that cube because the default cube comes with skybox-style mapping, not tiled mapping. You need to edit the UV map to fix that, possibly remapping it with a lightmap pack or a standard unwrap (a quick scripted sketch is below).
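If you'd rather script it than click through the UV editor, here is a minimal sketch of re-unwrapping the active object from Blender's Python console. The operators are standard bpy; whether a lightmap pack or a plain unwrap is the layout the addon actually expects is something you'd have to verify yourself.

```python
import bpy

obj = bpy.context.active_object          # e.g. the default cube
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')

# Option A: lightmap-style packing (every face gets its own island)
bpy.ops.uv.lightmap_pack()

# Option B: comment out the line above and do a plain angle-based unwrap instead
# bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.02)

bpy.ops.object.mode_set(mode='OBJECT')
```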
You can generate a 1024x1024 image with the addon; it's just set to 512x512 by default. You can go up to 1024, and with the upscaling you can get your image to 4096x4096.
"Generate by frame" is for use in animations. It re-projects the mapping as the preview angle changes from frame to frame.
Very nice! For those of us that are not so artistically inclined, this will be a big help at least in prototyping and maybe more. I can definitely see this improving to some amazing levels in the future.
Definitely a cool tool that I am going to use immediately. I have already been using Midjourney for image generation. I've used it mostly for floor and landscape textures in Unreal Engine; it can make some very unique designs, one of which I actually touched up a bit in Photoshop, and it might make it into the final version of the game.
With AI as a concept, though, I am pragmatic about it. I understand that it is concerning, but the genie is out of the bottle; at this point all you can do is learn to use it. Complaining about it and not learning to utilize it won't make it go away. I feel like AI is to millennials what computers were to boomers: some boomers decided to embrace them and are better for it, while others pretended they could live without computers and have effectively made themselves unable to operate in modern society.
At 4:29, obviously it's not going to be seamless because you're looking at it on the cube, which depends on how you UV unwrapped it. But if you took the 2D texture and repeated it on the XY plane, it would be perfectly seamless.
Wow, this is impressive. This early demo may not be perfect, but imagine what we'll be able to do in another 5 years. Things are changing very quickly.
Exactly. That seems to be lost on a lot of the detractors of these new AI solutions. Do they threaten traditional jobs? Absolutely. Digital artists replaced hand-drawn artists in many industries because digital is faster and more flexible, while many traditional artists cried that digital isn't real art. Artists who can work _with_ AI tools will likewise thrive, as the AI tools will help improve their workflow and output.
I hope they get something similar to Midjourney eventually. It's a lot better for people like you and me.
Just amazing!!! In the future this will help the production of environments and characters so much.
I guess in 10-20 years' time we will have a UI-free Blender where we talk to an AI avatar to create anything. It is already fascinating to see the capabilities of the AI and its implications. By the way, from trying prompts on several platforms, it comes down to video RAM size, which is where the resolution gets capped. Of course the training resolution is important, but it isn't the hard limit. And unfortunately none of the current outputs are production-worthy; they're only good for testing and prototyping.
Hopefully you're wrong.
In 10-20 years time you won't even be talking to the AI, it'll be telling you what to say.
@@dave3269 I am not, but I am. I am not. I am. I am not. I am. Scary stuff.
The good thing is that no matter how much technology helps non-creative people make things, it will never make them more creative, and that's the only good part of all this.
Facts
Facts x2
you don't need to be creative if a computer can do it for you.
@@ayoubbelatrous9914 The computer is just mimicking what an actual creative person would do with ease. And I'll repeat it again: it will never make them more creative. Not sorry that this is hard for you to understand; maybe an AI can explain it to you, though.
@@TriangleHarder AI doesn't imitate; it learns the same way a human learns, but much faster and cheaper. It won't make a person creative, that's true. But this is just the beginning: wait a few years until it gets advanced enough that even the best artists won't be able to compete, and even if they can compete, it will be cheaper, faster, and better than most artists. What I'm trying to say is you won't need to be good at art to accomplish what you're trying to do.
My suggestion would be to add the tool's name to the video title for future searches.
Wow, this is game-changing for developers and artists.
Use it for individual textures, works flawlessly
Hello, do you know how to make the upscale AI work, please?
3D Artist + Blender + A.I.= 🤯
Nice to see where this leads. But in its current technical state I'm (still) not convinced to use it. In the time it takes to get satisfying render results, I'd have textured it myself.
Wow! Nice, but what about styling a set of objects with the same style? Will every object be textured alone, with no connection to the others? Keeping the same style across multiple objects is a pain, isn't it?
It's an algorithm basically googling images, stealing them, and tweaking parameters. I was hoping it would generate raw textures from scratch; perhaps Substance Painter will master it in the future.
One does not simply celebrate that an entire profession, the one that has created the art and entertainment we've always loved, is at risk of going jobless. I won't be that ungrateful.
"Seamless" only considers tiling. It has no access to the UVmap or geometry; the cube edges and corners would only look right by luck.
I hope there will be something that can recreate my messy OBJ mesh as a proper Blender model, so I'd have more control over the bevels and subdivision of my model.
Can you use custom AI models on this thing? Like say I'm making something anime-style, can I put AnythingV3 or NovelAI instead of Stable Diffusion there to generate the textures?
You have the ability to upload your own model, so I assume the answer is yes. That said, the details of making models are beyond me.
Couldn't you change the texture clamping/repeating mode for more seamlessness?
I don't quite see any benefit to using it as a Blender plug-in yet. Projecting a depth "texture"? Useless.
Generating 2D textures? Automatic1111 works just as well, including the seamless option (the same trick can be scripted directly; see the sketch below), plus a wide variety of memory options and upscaling.
I can generate 1024 textures natively with --medvram and do a single upscale to 2048 using 768x768 pieces on a 3080. If I had a 24 GB GPU I'd be able to do even better.
One thing that was nice in this implementation is the categorization of prompt modifiers, e.g. "close up shot", "full color", "long exposure", instead of a very long list of carefully crafted prompts in Automatic1111.
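For what it's worth, the seamless/tiling toggle in these tools comes down to a trick you can also apply yourself when scripting Stable Diffusion directly: switch the convolutions to circular padding so the generated image wraps at its edges. A rough sketch using the diffusers library; the model name and prompt are placeholders, and this is not necessarily how the addon implements it internally.

```python
import torch
from diffusers import StableDiffusionPipeline

def make_tileable(module):
    # Circular padding makes every convolution wrap around,
    # so the output tiles seamlessly in both directions.
    for layer in module.modules():
        if isinstance(layer, torch.nn.Conv2d):
            layer.padding_mode = "circular"

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
make_tileable(pipe.unet)
make_tileable(pipe.vae)

image = pipe("weathered cobblestone texture, top-down, photorealistic").images[0]
image.save("cobblestone_tile.png")
```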
this could be very useful when you're just sampling parts of the generated texture to stencil.
Is there a way to automate the iteration more? Like tell Blender to create 10 or 100 images in a row?
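Not through the addon's UI as far as I know, but if you drive Stable Diffusion from Python yourself (outside the addon), a plain loop over seeds does it. A minimal sketch assuming the diffusers library; the model and prompt are just placeholders.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "weathered cobblestone texture, top-down, photorealistic"
for seed in range(10):  # 10 variations; bump to 100 if you have the patience
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator, num_inference_steps=30).images[0]
    image.save(f"texture_{seed:02d}.png")   # pick the winners afterwards
```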
Awesome. Now if they were to add a ChatGPT that writes out all the custom tools you need...
The thumbnail could have said "Mind Blending"...
Can you make a bunch of blocks, select them, then texture them all as a city?
Wow. I remember a time when it used to take longer to load a single image on dial-up internet. And it wasn't even generated by an AI…
Super cool, I hope we can do 3D models soon!
There are actually some 3D-model-generating AIs; they aren't too good with topology, but at least it's something.
Don't know why it doesn't work for me. I'm using Blender 3.4 and downloaded the free version for AMD GPUs, but it just doesn't work. I can't search for a model because the field doesn't appear, and it says the add-on is incomplete. Why?
how well does this work with character models?
Probably just as "good" as with this simple low poly house...
I tried to reply with an Imgur link to my tests but it seemed not to go through, so I'll keep it short: it's wonky, but it does work, a bit better for faces than for bodies, and some types of models work better than others. It does take your vertices into consideration, after all, so a well-organized mesh is recommended. I got some very cool results, and you can always edit/clean them up later.
Is it only for textures, or can it create objects as well?
I don't know why, but Dream doesn't show up in my 3D view... maybe because I got mine from the Blender Market. Anyone have the same issue??
You need to have the image editor open to view the Dream tab in the n-panel
This is great. I love playing around with Stable Diffusion; it's addicting.
Btw, your generations were really bad because you're basically using bad sampling methods, and even then not enough steps for them. With DPM++ Karras you can use 15 steps and get really good results. Set CFG to something around 9 or 10.
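In case anyone wants to reproduce those settings outside the addon, this is roughly what they map to in the diffusers library. The model name and prompt are just placeholders, not what the addon uses internally.

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# DPM++ multistep with Karras sigmas, i.e. the "DPM++ Karras" sampler mentioned above
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    "weathered cobblestone street, top-down texture, photorealistic, high detail",
    num_inference_steps=15,   # this sampler converges quickly; ~15 steps is often enough
    guidance_scale=9.0,       # CFG in the 9-10 range as suggested
).images[0]
image.save("cobblestone.png")
```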
Ah ok, thanks, will look into that. One thing I have found with Stable Diffusion is it's REALLY easy to lock it up or make it hang when you play with settings, so it's made me really hesitant to experiment.
You could probably use an AI image upscaler to upscale the texture.
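For example, with the Stable Diffusion 4x upscaler via the diffusers library; this is just one option among many, the file names are placeholders, and it's fairly VRAM-hungry for a 512 input.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionUpscalePipeline

pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
).to("cuda")

low_res = Image.open("cobblestone_512.png").convert("RGB")   # texture generated earlier
upscaled = pipe(
    prompt="cobblestone texture, sharp detail",              # this upscaler is text-guided
    image=low_res,
).images[0]
upscaled.save("cobblestone_2048.png")                        # 4x upscale: 512 -> 2048
```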
love ChatGPT - this will be fun ;)
Just Scary.
Can you use the new scutoid shape?
The Norman Rockwell house looks like something from Hello Neighbor
You can't just skip UV unwrapping; you need to do the UVs manually if you can look around the object...
Cool!
I never seem to be able to get the same quality as other people do with Stable Diffusion. Maybe I'm just not using the optimal parameters, though I guess it's more the trial-and-error nature of it: you need to take a fair few samples, and on my GPU that can take a little while...
LOL !!!!
Playing the lottery in Blender 🤣
Push the button and see if you won first prize !!!!! The awesome-texture-just-fitting-your-dream-and-it-is-seamless..... 😂
All those who won, played!
And YOU will win! (provided you have eternity in front of you)
Okay, this is really amazing :D
Now please, go to Blender Artists and post a WIP done with AI textures ;)
I'd be pleased to discuss it!
Happy blending!
EDIT: oh, and don't forget to ask the AI for AO, height, roughness and albedo :P hehehehe
Gonna wait for someone to create geometry node AI that does modeling, texturing, and animating for me 😂
I'm making an AI called "Father" that can run your family without you. 🤣
ChatGPT can apparently write Blender Python code, albeit not perfectly for now. At some point it may be able to create some generative geometry for you. It is still good to understand how Blender Python scripting works in the first place (a tiny example of that kind of script is below).
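For reference, this is roughly the sort of generative-geometry script ChatGPT tends to produce; it's not actual ChatGPT output, just an illustrative sketch using standard bpy operators, and being able to read it yourself is the point.

```python
import bpy
import math

# The classic first step: clear the scene (sacrifice the default cube)
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.delete()

# Place a ring of small cubes around the origin
count, radius = 12, 5.0
for i in range(count):
    angle = 2 * math.pi * i / count
    bpy.ops.mesh.primitive_cube_add(
        size=0.5,
        location=(radius * math.cos(angle), radius * math.sin(angle), 0.0),
    )
```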
It is really unnecessary at the moment; there are far better ways, the ones we've used to model and texture for years. Those methods are at a peak of quality and ease. This copyright-theft AI is nothing.
Ok ai needs to chill out on my fields. This is getting ridiculous
wow!
awesome
Gamefromscratch: Hey, I liked what you are showing, except you move way too fast for me. Also, you should use the screencast-keys add-on that is available so viewers can follow more easily. Also, Norman Rockwell is a drawing style, not an architectural style; Gothic, Greek Revival, New England, Santa Barbara bungalow, and Mission style would be examples. The AI looks like it had seen an abandoned house drawn in Norman Rockwell's style. That's all. Glad for the heads-up about the modern texture. Cheers.
Abstract color and texture mixing is the only use of AI I'm interested in right now. But I've got too many ethical reservations about Stable Diffusion and all the other image-stealing databases. I'll wait until someone makes an ethical public-domain dataset.
Or just buy Substance Painter (Steam version).
Does it work the computer too hard? I don't have a super-fast or modern GPU, and my cooler isn't that great.
AI is usually cloud-based.
@@qpaoziwu This one is not; it's running locally.
I can generate a 512x512 image on my old GTX 960 in ~30 seconds. But you also have to consider VRAM being a limiting factor, Blender+SD running together can clog it up quickly.
Please improve your cooling; it's important for your PC's longevity ;-)
I've stopped using Stable Diffusion locally because of that. I use it a lot for rendering prompts on the web, though.
I think in this video there was also a cloud option presented, using a DreamStudio key.
What we need is a tool like that to generate animations for characters. That's the most time-consuming part, IMO.
Cascadeur does this.
@@somedude5951 But not by prompt right? You still pose and animate as usual, and it completes it via AI?
@@johnhershberg5915 Yes. Mixamo is closer to using a prompt, but that is not AI. Rokoko can also make rigging and animation faster; you can use their videos of yourself posing for the movements.
It’s coming. Animating by prompt will be HUGE. Run to point A, stop look behind, pick nose, turn back to front and run over the hill…to the future.
This is amazing.
I was so excited, and then came those horrible cobblestone results 😂😂😂
Yooo, I don't want to say I told you so but...
Theytook'ourJoobs!
That starting zoom was just wrong 🤣
I wonder if they'll have a button to hide the original artists' signatures on 'your' final texture.
Lol I've never seen real signatures generated by AI before, only squiggles
Awesome...
Is there a way to use AI to create meshes and objects (like the house object)? I think that would be the coolest feature.
In Blender, no, I don't think so yet. That said, there are a number of AI-based 3D model generators in development, from NVIDIA, Google and others.
Not that I know of, but there have already been experiments with text-to-3D-model generation, and the results were fairly decent for a first attempt. I also saw some text-to-3D-model magic in a UE4 real-time demo, but the results were very basic and without textures. However, stay tuned; I don't doubt we will be seeing some generators soon.
You can get ChatGPT to write a python script that creates a model in Blender.
Magic3D (Nvidia), Point-E (OpenAI) and DreamFusion (Google) exist, but aren't publicly available.
I expect we'll have something we can play with soon.
@@gamefromscratch Yes, AI can already make meshes in Blender. I saw a guy on YouTube using Point-E in Blender. It works, but the results are terrible. The results for creating meshes and animations in Blender using ChatGPT look a lot better, but I haven't tried that myself yet. The YouTuber that uses Point-E also explains how to do it. His name is Blender Smoothie 3d; his video on how to install the newest 3D generator in Blender is named "How You can run Point-E and create any 3d object for free !?".
Considering all the error messages I get when I try to install it, I consider it a complete waste of time.
I’m just waiting for text to video
And
From normal video to avatar video 🤓
I AM DEMO-VERSION MYSELF
nice
Is this program completely free to use, so we can generate images as much as we want?
10:50 - no! Even as a background it's trash. Maybe there are some parameters to make it generate longer but with much better results?
It definitely didn't take in all the important 3D details, like the information about what kind of house it's supposed to be.
It's a very cool idea. I'd like to see where it goes.
How do we know if the training data for the AI models were obtained with permission? You don't want to get into legal problems later on.
This is ultimately going to be a market of the future: ethical data models. There's going to need to be an easy way to audit the datasets. I'm somewhat shocked this wasn't tackled upfront.
You seem like an awesome guy and I hate to go negative but it sure feels sometimes like "games from scratch" becomes a less and less appropriate name for this channel.
This quite literally allows you to make a game from scratch
I would love for Mike to make more tutorials. It's why I originally followed his channel. He's a very good teacher.
Oh, that ship sailed a very, very, very, very long time ago ;)
GameFromScratch was actually the name of the blog from well over a decade ago. The intention, of course, was to document the creation of a game from scratch (well, duh). Then I started making more and more tutorials as I was exploring tech, first libraries like LibGDX and later SFML, then engines like Godot. By the time I started the YouTube channel, it was to create tutorials to supplement the website, so I kept the same naming.
... and years later, here we are.
Great info, but why do you have to babble so fast? Are you being timed? It seems so many tutorials are done at breakneck speed… 😶🙀
😎
It would be perfect if there were an AI 3D-modeling plugin. Yes, especially for human modeling, which is quite organic and comes in various styles.
Soon it'll be generating 3D models. That's going to be cool!
There's already Magic3D from Nvidia. So the tech is there; we just don't have a convenient UI for it like we have for pictures generated by Stable Diffusion!
@@pescadorCardoso I'll have to check that out.
Luckily this isn't at a professional level yet.
This AI stuff is taking over the world. I feel sorry for all the professors in universities asking students for 1000 research papers, or all the concept artists around the world, all the pro programmers, etc. Now AI can do their work, sometimes even better and in many, many varieties, and it takes only seconds. And the scary thing is that this is just the start. Imagine 5 years from now when the AI becomes even smarter… people could have the full knowledge of how to make nuclear...
It would be cool if Stable Diffusion hadn't built almost its entire database on stolen work. Skim through the social media channels of big artists, especially illustrators, and a LOT are against it and have even seen parts of their signatures and past works in Stable Diffusion generations, all without compensation.
So yeah, using SD in Blender, if you want to actually make money with it, is copyright-dubious at best and also exploitative of the artists whose work built its database against their will.
It would be a cool thing if it were ethical, but sadly it isn't.
This is not how it is done :o
You have to delete the cube and then add it again.
Blasphemy!
So you generated 3 cobblestone textures that look like trash? Really powerful addon :)
To be fair I'm really really really really really bad at AI prompts.
Like... Really. Bad.
@@gamefromscratch Come on mate, say what you want. You're bad at... talking? :D
I agree. But it gives an idea of where things are going, like most of the AI out today. I wouldn't use any of them in my work (except maybe Midjourney). But holy cow, what's it gonna be like in one year?
Seriously underwhelmed. It’s a cool add-on, but your results were nothing close to mind-blowing. It seems like you were rushing to put out a video before anyone else instead of completing something interesting and then making a video of that.
Getting error here
Another tool I won't be able to use
The future is bright for people who wish to use it.
No.. way
16 minutes of nothing, waste of time.
Have you tried ChatGPT to generate Blender scripts?
AI is awesome? I dunno...give it maybe another few years.
lol so much whinin in the comments "iTs nOt lEgAl" "sToP sTeAlInG oThEr´z pEoPlE sTuFf"
Cool stuff