✅Check out Patreon for all my scene files, bonus videos, a whole course on car rendering or just to support this channel 🙂
patreon.com/JonasNoell
It's not the end, it's the dawn of a new approach for the best results.
Yeah, stuff will be changing, and probably changing fast. Curious to see what workflows will look like two years down the line.
very interesting stuff! thank you for the video, much appreciated. cheers.
I'm very positive about Stable Diffusion, it's very exciting, not like people who complain about it all the time. However, I'm trying to see how I can use it in the arch viz industry, and there's just not a lot it can do. If you ask it to draw a pretty house, yes, it can do that. If you ask it to draw a very specific house based on very specific architectural drawings, measurements and QS materials & surfaces, not at all! It's a toy at this moment in time. It can do great organic, general, abstract pictures, textures, etc., but not a lot of very specific tasks in the practical real world.
I don't think you can use it professionally as a one-button solution at the moment. For me it is a game changer because I can quickly try out new ideas and get references. It's basically a reference search engine on steroids which can give you an idea of how the final result could look. So for me at least it is an inspiration collection tool, because as you described it lacks the fine control and precise revisions required in daily production. So try to use it for what it's good for and we will see what the future brings :-)
I've been using it here in the office basically to find moods. If you have a reference image it works even better; it's perfect to bring to a meeting and decide the project direction, even though the building is not exactly how it should be.
Exactly this. I tried it today (tyDiffusion) after fiddling around with Stable Diffusion for a year. It's still got a way to go. But it's useful for making a scribble/reference, as Jonas said.
Use D5 Render with its AI assistant... it's great, I recommend it.
I am guessing this won't work for animation yet? It seems like it will treat each frame as a single picture, so it can't create waves or foam as if the ship is moving forward.
Amazing introduction to this wonderful tool ! Thanks a bunch
Glad it was helpful!
Hi, can you explain how to add new LoRAs? I have tried many ways and I get an error. Thanks
Thanks a lot! I think it can help in many parts of the image, especially in archviz where you always want great photorealistic surroundings. This will help and cut the time in half!
It's really good for exploring the possibilities of where your art can go, but in the end you have to do it yourself to reach that level. Otherwise you will end up cycling through options that never end.
Yeah that’s also how I see it at the moment. It’s a bit useless if you can’t combine it with the traditional way of doing things as it lacks control and precision and is just too random. But it will give you lots of ideas and inspirations which you can then utilize to build something new out of it.
Great basic training video, thank you. I just bumped into a problem with depth - it always sends in just a blank black viewport - no depth at all. Tried adding all the Max (Arnold, V-Ray) cameras, which did not help. Any idea how to solve this?
Great video as usual! I have a question: is your video sped up during the image generation phase (after you press the Generate Image button) ? I own an i9 13900KF + 4080 rtx gpu PC and it takes much longer than shown here. Is there any particular setting to tweak for gaining speed performance ? Thank you for answering. EDIT: I've already changed the sampler to a GPU one but the change in speed is almost unnoticeable...
Hi, yes, the generation parts of the video are edited to be roughly 5x faster. This one is on a 4090
@@JonasNoell thank you very much for answering! Keep rocking!!! ❤
AWESOME, KEEP IT UP😊😊😊
Thank you for the tutorial. Great content and I will play with it soon.
Great intro Jonas.
Thx a lot for this.
Can we create materials for our objects that look good if the object is rotated?
Like texture maps etc?
I tried a tennis ball on a sphere, but it looks bad when I rotate the sphere.
Thanks!
@JonasNoell is it my system, or is it just really slow to render in the viewport? Maybe you can show a timelapse or something to give an idea of the actual speed? Just tried the cat and it placed my model in front of the cat (behind - blurry).
I sped up the generations around 500%, but the speed depends on your GPU. I use a 4090 for reference, so if you use weaker hardware it will of course take longer. The resolution you choose also plays an important role. You could also choose a smaller resolution and use upscaling, which should be faster
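To put rough numbers on that last point: generation cost grows at least linearly with the pixel count (the attention layers scale worse), which is why generating small and upscaling is often the cheaper route. A back-of-the-envelope sketch, nothing tyDiffusion-specific:

```python
# Rough cost comparison by pixel count. Real timings depend on the
# model, sampler and GPU; this only shows how fast the work grows.
for w, h in [(512, 512), (768, 768), (1024, 1024), (2048, 2048)]:
    rel = (w * h) / (512 * 512)
    print(f"{w}x{h}: ~{rel:.0f}x the pixels of 512x512")
```

So a 2048px image carries at least ~16x the work of a 512px one, while an upscaler pass on the small image is usually much cheaper.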
It doesn't work on my system; I get an error about "torch not compiled with cuda enabled"
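For anyone hitting this: that message comes from PyTorch itself and usually means the backend ended up with a CPU-only torch build. A minimal diagnostic you can run with the Python environment the ComfyUI backend uses (the install path varies per machine, so that part is an assumption; the torch calls are standard):

```python
# Check whether this Python environment has a CUDA-enabled PyTorch.
import torch

print("torch version:", torch.__version__)           # e.g. "2.1.0+cu121" vs "2.1.0+cpu"
print("CUDA build:", torch.version.cuda)              # None on a CPU-only build
print("CUDA available:", torch.cuda.is_available())   # False reproduces the error above

if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```

If it reports a "+cpu" build, reinstalling a CUDA wheel of torch (matching your driver) into that environment is the usual fix.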
Thank you Jonas for the video... may I ask if there is an option to use an inpaint mask feature, where you can improve some parts locally, especially 3D people?
There doesn't seem to be an option for this (yet?) even though ComfyUI/SD supports it. I hope it will be just a matter of time, as this is the first release of the tool. This would also be my most requested feature. I would also like to have localized prompts for different Object IDs or masks, for example - also possible in ComfyUI but not available within tyDiffusion yet...
Are you able to export the baked textures with mesh?
It is not really baked, just camera projected. And yes you can export this.
@@JonasNoell thanks for the reply, I was thinking of creating a flat plane and projecting an AI-generated texture such as wooden planks or a stone wall onto it, then exporting that texture.
Wow, really game changing
"Tydiffusion comfyUI engine could not be started.Try running 3ds max as administrator." I see this box. I have this problem can’t generate.Please how can I solve this.
same problem
@@mehmetyigit2330 same here!
Have you tried running 3ds Max as an administrator? :-) If yes, try to get some support from tyFlow directly; I can't give you much technical support as it worked flawlessly for me
Oh and try upgrading your GPU drivers
Tried running as admin but still no success. Even though the models are downloaded, they don't show up under: Generate>Model>[none found]
Dreams come true
If I need to save the final result as a PNG file, what can I do?
Amazing. Thanks for this tutorial. :)
Thank you Jonas!
Interesting, how do you do inpainting through this?
When is MAYA ASSIST coming out ?
Nice Take
Please, which version of 3ds Max do you use?
2024
Amazing
Amazing..
tyDiffusion is awesome! Great video. It's so fun to try different looks for an image, and so fast!
Yeah, I've recently been going through all my recent projects and testing what the AI would have come up with. You can get a good feeling for its potential but also its limitations! 😃
Another amazing vid @JonasNoell! This is indeed a game changer for me. Trying it out for matching background landscapes instead of tediously modeling/lighting/texturing. Using this combined with other AI tools like Topaz upscaler or Magnific to add further enhancements... the possibilities are crazy! Clients will especially like this, and you can charge them an additional fee for design explorations or artistic styles.
Yeah, exactly what I was thinking as well - it should work really well for matte painting replacements, even with the possibility to project the result directly onto some simple geometry for parallax. It can definitely simplify a lot of things for which you would normally need high quality assets or matte painting skills to get decent results
@@JonasNoell hmmm, never thought of using it for parallax. That would be an amazing use for 3D orthographic projections, for say 3D floor plans! Gotta try it out.
A really interesting video - thanks.
Do you know why mine isn't working? When I press to generate an image it makes my viewport black, and I can't even fix that. It doesn't generate at all and I have the default settings. Does anybody know why?
Hey! Does anyone know why I can't make it work with viewport depth? Only color mode seems to work
I also can't use edge mode - "Cannot execute because node CannyEdgePreprocessor does not exist" - and I have the same models as in the video
Try checking the tyFlow support forums, I can't give you technical support
Great ❤ that helps me a lot to get kick-started 🎉
Does it work with free version of Tyflow ?
As indicated in the video: Yes
Is it true that the animation features only work with 24GB GPUs? If so, does the VRAM have to be on a single GPU, or would a 2nd GPU solve this?
I haven't heard or read anything about those requirements as of now, but I haven't really tried animation yet so I can't tell for sure. Where did you read this?
@@JonasNoell I can't find the thread where I read this, I think it was Tyson himself on the tyFlow forum.. the explanation was that to render animation you need to load multiple large AI models into VRAM..
@@pantov Ok interesting, as said I will try animation soon to see how this works. Though I have a GPU that has 24GB of VRAM, so if that's the requirement I probably wouldn't notice it. Maybe just make a post in the tyFlow forum if you want a reliable answer. 🙂
@@JonasNoell yeah, I was planning to do that a bit later, Tyson is flooded with forum posts at the moment :-) I've got 11GB on my workstation, but I've also got a render node that has 16GB with a slower GPU, haven't tried it there yet.. thanks for your reply, looking forward to your further tyFlow vids. cheers mate!
great tutorial
How do you uninstall tyDiffusion in 3ds Max?
great tutorial! Have you experimented with animation yet? When I get a prompt that looks good as an image but then run it through animation, it looks completely different. Have you experienced that? I'm so new to AI I'm probably being stupid ;)
Haven't checked out animation yet but will definitely do so. I can imagine making a more advanced 2nd tutorial that covers those features. This one here was mainly for the basics to get you started.
@@JonasNoell Nice one. I'll look forward to it :)
Amazing work! Sorry just so I understand - it looks like you can use this diffusion AI to make 2D images... but can you use it to export Video Clips, and use the 3D model to have multiple camera angles etc, cheers :)
Yes, you can do video, but there are issues with inconsistency and just general weirdness that will happen, so I don't think you can get something too productive out of it at the moment
Waiting for tyre track in mud material
Thank you!!!
so why not just run the render thru the ai? *sigh*
Because rendering and generative AI are two completely different things
What hardware components are in your computer?
An RTX 4090, but you can use much cheaper hardware. Also, I sped up the image generation around 500% in editing.
🤔👍
i am very interested in fluffy cats...
It's a real shame that you publish your videos mainly in English. There is far too little German content of this quality, and as a result many interested people can only access the techniques you present through workarounds.
Well, for me as a creator it makes more sense to publish content for the largest possible audience, and that is simply the English language. Besides, I would guess that Germans who do 3D and don't speak a word of English are a very, very small target group :-)
@@JonasNoell Well, hand on heart, there are probably 100 other creators covering the same topics, so the added value for the "consumer" approaches zero. It's also not about whether you are proficient in a language, but rather about which one you understand 100% immediately. There are plenty of examples of people who dare to serve the "niche" and produce content in other languages, and who are quite successful with it. Look at Stolz3D, for example, who makes his FreeCAD content almost exclusively in German and has built up a rather large community around it. I believe you could make a very grateful clientele happy with that too. 🙂
If you know ComfyUI and Max (well enough) you can probably do all of this without the plugin, for free 😊
Of course, it is just using ComfyUI in the background. It can't do anything that you couldn't already do with ComfyUI. The novelty is the convenience and direct integration making it frictionless to use.
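For the curious, this is roughly what such an integration does under the hood: ComfyUI exposes a small HTTP API on localhost, and any client can queue a workflow against it. A minimal sketch (port 8188 is ComfyUI's default; the workflow file name and node id are assumptions about your local setup):

```python
# Sketch: queue a workflow on a locally running ComfyUI instance.
# Export the workflow from ComfyUI via "Save (API Format)" first.
import json
import urllib.request

COMFYUI_URL = "http://127.0.0.1:8188"  # ComfyUI's default address

with open("workflow_api.json") as f:    # hypothetical exported workflow
    workflow = json.load(f)

# A client can tweak node inputs before queueing, e.g. the prompt text.
# The node id "6" is purely an example; ids depend on your graph:
# workflow["6"]["inputs"]["text"] = "a fluffy cat, photoreal"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request(COMFYUI_URL + "/prompt", data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    print(json.load(resp))  # contains a prompt_id you can poll for results
```

The plugin's value is that it builds the workflow, feeds in the viewport and depth data, and returns the image without you ever touching any of this.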
Stable Diffusion works in free version of tyFlow too.
I guess it's the end of V-Ray and Corona 😅
Blender already had this a year ago, it's not game changing
Interesting how people with a lack of knowledge can get blinded by an interpolation algorithm...
Can you enlighten me as to what I am missing?
Back in the day your rendering had to be based on your V-Ray masters, but now this AI changes the game. It's ridiculous how much time we used to spend on those damn V-Ray masters!
I don’t think that much has changed, as the tools in their current state don’t produce final, controllable and consistent results. You would still have to set this up traditionally.
10x
I think unwrapping is dead, at least
It is just camera projecting the image onto the model; there are lots of issues such as stretching and being dependent on the camera perspective. Unwrapping is not dead for sure, but I guess there will be AI unwrapping tools which will do the job in the future ;-)
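To illustrate the stretching issue: camera projection just reuses each vertex's screen-space position as its UV coordinate, so surfaces at grazing angles to the projection camera receive almost no texture area and smear as soon as you orbit away. A toy numpy sketch of the idea (illustrative only, not tyDiffusion's actual code):

```python
# Toy pinhole projection: UVs come from where vertices land on screen.
# An edge nearly parallel to the view direction gets a tiny UV footprint,
# which is exactly the stretching seen when leaving the projection camera.
import numpy as np

def perspective(fov_deg):
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    return np.array([[f, 0,  0, 0],   # square aspect for simplicity
                     [0, f,  0, 0],
                     [0, 0, -1, 0],   # z row irrelevant for UVs
                     [0, 0, -1, 0]])  # w = -z (camera looks down -Z)

def project_uvs(verts, proj):
    homo = np.hstack([verts, np.ones((len(verts), 1))])  # homogeneous coords
    clip = homo @ proj.T
    ndc = clip[:, :2] / clip[:, 3:4]                     # perspective divide
    return ndc * 0.5 + 0.5                               # NDC [-1,1] -> UV [0,1]

proj = perspective(45.0)
# Two edges of ~equal 3D length, 5 units in front of the camera:
facing  = np.array([[0.0, 0.0, -5.0], [1.0,  0.0, -5.0]])    # faces the camera
grazing = np.array([[0.0, 1.0, -5.0], [0.05, 1.0, -5.999]])  # almost along view axis

for name, edge in [("facing", facing), ("grazing", grazing)]:
    uv = project_uvs(edge, proj)
    print(name, "UV length:", round(float(np.linalg.norm(uv[1] - uv[0])), 3))
# facing ~0.24 vs grazing ~0.04: same mesh length, ~6x less texture.
```

Reprojecting from a second camera angle changes every UV, which is why the result only holds up near the original view.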
It's nothing but a piece of shit!!!!!!!!
Because it does not allow you to use your own checkpoint. It doesn't even allow you to deselect the models you don't want to download?!!!!!!!!!
Why should I have to download so many unnecessary things
Maybe have a cup of tea to chill out a bit? 😀
🤮🤮🤮
Why you puke? 😀
that clickbait title tho
Welcome to YouTube, you must be new here 😀
Cash cow for the developers, useless in a professional environment, still waiting for serious tools
Why is everyone always so sure that it is useless in a professional environment? In its current state it already has many use cases if you are creative about implementing it.
What are serious tools?
what's a "professional environment"
I really don't see anything game changing about this.
For me it is game changing. Before, I didn't use ComfyUI/SD for my daily work as it was too troublesome to export everything, leave your 3D software and deal with the hacky user experience of ComfyUI, so apart from some simple initial testing I didn't use it much. Now with tyDiffusion that is different: it is integrated so seamlessly that you would be stupid not to use it, at least during the concepting phase or when figuring out how you want the end result to look. It is not a one-button-final-result thing but more like an image search engine on steroids which can give you much better and faster ideas and references than traditional methods.
I agree. Not to mention the complexity of a single render setup. Also, there are other AI generators out there which do something similar.
Though I'm a paid tyFlow user, I cannot see too many use cases... maybe a viewport IPR and/or an animation rendering mode would be a real game changer.
@@JonasNoell I'm sorry Noell, but I can't agree with you there. If I wanted some Lovecraftian abstract approximation of life to know what something might maybe look like, I'd just sniff some shrooms. This is just the enshittification of your ability to imagine. I have yet to see anything spewed out by these scrape generators that I can honestly say is interesting to me. Plus there's the whole morality of using these generators in the first place. I hope you don't start putting out more content related to this.
I really struggle to see how you can NOT see any use case for this 😀 Did you watch the Release Trailer? It’s literally packed with Use Cases.
So you can't even see this being useful as an inspirational tool? I'm a full time lighting and shading artist, and on every project, every day, I'm confronted with a blank viewport filled with 3D models and some rough description of what the client wants. How would that not be useful to me? It's basically reference search on steroids. The client wants a pirate ship made out of cheese? Just see what comes out of the AI and how I would translate that into a 3D scene and shader. Good luck finding something like this on Google. I think you assume it is a solution where you just press a button and get something finished. It's not like this. I really struggle to see how you can see zero use cases for this. Nobody is forcing you to make trippy drug trip animations, you can use it for whatever you want. Use it for what's useful to you. I'm not saying it's the be-all and end-all of the traditional way of doing things; as of now it's a tool that can be used and should be used. And if that becomes a valuable tool for me I will of course continue to make videos about it, as I try to provide the most valuable content to my followers from a production perspective. How about you make a video about how crappy and useless it is, if that is your opinion? I would be interested to see your take on it.
Concept art is dead!! It's over!!
Yeah, for concepting, AI has a very strong use case that is hard to beat
AI is just a waste of time. ComfyUI does the same stuff. People are limited by models and they need to pay. PAY PAY PAY. AI in CGI is a no-go.
tyDiffusion uses ComfyUI in the backend, so yes, it is the same stuff. The innovation is the direct and seamless integration into the 3D software, which at least for me makes it accessible. I didn't use it before through ComfyUI as it was annoying, slow and hacky, but now through tyDiffusion I do, and I would be stupid not to. In CGI, NOT experimenting with it would be the real no-go. If you literally can't see any use cases for this and it is and always will be a waste of time for you, then I can't help you :-)
Maybe AI in CGI is a no-go, but CGI in AI is the best way to go. Imagine someday we can create a box on the ground and type "Bush", then create a cylinder, animate it to go past the bush and type "A male human wearing a suit". Then type for the whole scene: "A male human walking next to a bush".
I mean, using 3D objects and a camera to guide everything is the best way to have control over an AI image generator. If we want, we could create a more complex scene by using a low poly human model instead of a cylinder and actually animate him to do what you want. And if you want to go to the extreme, you could still make a complete model and animate it just like you do right now, and use AI simply to render the scene very fast.
Your job is to generate tons of tasteless CG shit? Then it definitely completes your workflow.
Have you ever worked in professional production? 😀
@@JonasNoell yes, and I know that tons of shit are the result of so-called "professional production"
In summary, spending years and years studying 3ds Max and many plugins is of no use; I lost years of study... It is best to put this program aside and just concentrate on studying how to generate prompts to achieve the same result in a second, without wasting time on 3D modeling, materials, lights, effects, etc. With a single click on an AI you get a spectacular result.
excellent explanation! thanks!
Thanks for your support :-)