I have never missed any of your videos because what you do is very practical and highly applicable to my work.
Thank you!
Maybe your best video yet! While not as technical as relighting, the philosophical aspect of why we are doing what we are doing is even more important than technique to make compelling images.
Thanks! It's been a while since I wanted to make a video like this one, because I think workflow design is the same as any design work. Having a philosophy and a course of action behind what you do is one of the most important things imo.
Your closing statement is not only brilliant, but spot on! "Stable Diffusion for professionals" indeed! 👏
I hope people really listen... really *grasp* ... what you're saying about discovery through experimentation... to figure out what these things do... and why this is important for real studio work. Too many people just "cut/paste" (grab a workflow, put in a new prompt). Great job!
Thank you for the super kind words!
I never miss any of your videos. Best ComfyUI knowledge on YouTube.
Thank you!
love your videos!
Thank you!
A very good example of how you can design "differently" with generative AI ... and must, if you really want to utilise the full potential - bravo :)
Your tutorial reminds me of an experiment in which I used the Lumetri scopes (Luma waveform) from Adobe Premiere as an image prompt. It would certainly be interesting to capture the "moving" live Luma waveform via screen capture node and link it to AnimateDiff or a real time generation workflow.
Exactly! The most interesting stuff we find when using new tech is very rarely found while playing it safe
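If anyone wants to try the waveform idea outside Premiere, the scope itself is easy to reproduce: for every pixel column, you plot the distribution of luma values vertically. A minimal sketch in Python (numpy + Pillow assumed; the function name is mine, not from any node pack):

```python
import numpy as np
from PIL import Image

def luma_waveform(frame: Image.Image, height: int = 256) -> Image.Image:
    """Render a Rec.709-style luma waveform: one scope column per image
    column, with brightness proportional to how many pixels hit that luma."""
    rgb = np.asarray(frame.convert("RGB"), dtype=np.float32) / 255.0
    # Rec.709 luma weights
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722], dtype=np.float32)
    h, w = luma.shape
    scope = np.zeros((height, w), dtype=np.float32)
    # Quantize luma values to scope rows (row 0 = brightest, like Lumetri)
    rows = ((1.0 - luma) * (height - 1)).astype(np.int32)
    for x in range(w):
        np.add.at(scope[:, x], rows[:, x], 1.0)
    # Log-compress the counts so sparse values stay visible
    scope = np.log1p(scope)
    scope = (scope / scope.max() * 255.0).astype(np.uint8)
    return Image.fromarray(scope, mode="L")

# luma_waveform(Image.open("frame.png")).save("waveform.png")
```

Save the result and feed it in as an image prompt, or regenerate it per frame for the AnimateDiff idea.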
Very insightful video, thank you very much. I will rethink all my workflows and see if there's anything to improve.
Always very interesting to listen to your findings, Andrea! I like your idea of taming noise injections and giving them some structure. One could even extend your concept to incorporate colors, too. From my point of view it is always more interesting to play around with (even copied) ideas, workflows, and new nodes than to go only with mainstream concepts. Keep up your professional work! 🙂
Thank you! Yes, depth doesn't care too much about colors, that's why I used the latent noise injection, but you can experiment with colors even more!
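For anyone who wants to experiment along those lines, the core of a latent noise injection is just blending a structured latent (e.g. a VAE-encoded color or pattern image) into the gaussian noise the sampler starts from. A minimal torch sketch of the general idea, not the exact nodes from the video; the function name and the renormalization choice are my own:

```python
import torch

def inject_structured_noise(structure_latent: torch.Tensor,
                            strength: float = 0.3,
                            seed: int = 0) -> torch.Tensor:
    """Blend a structured latent into the random noise a sampler would
    normally start from. strength=0 -> pure noise, 1 -> pure structure."""
    gen = torch.Generator(device=structure_latent.device).manual_seed(seed)
    noise = torch.randn(structure_latent.shape, generator=gen,
                        device=structure_latent.device,
                        dtype=structure_latent.dtype)
    mixed = (1.0 - strength) * noise + strength * structure_latent
    # Renormalize so the result keeps roughly unit variance for the sampler
    return mixed / mixed.std()
```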
Beautiful video and fantastic considerations on the philosophical/artistic side of the creative potential of these tools. Congratulations.
Subscribed.
Thank you so much, very kind of you!
For visualizing noise in Blender (1:32), use the Color output of the noise texture instead of Fac. That way the offset on each axis gets a slightly different value. Right now the offset is the same for every axis, in the direction of the vector (1, 1, 1).
It's been a hot minute since I worked with geo nodes in Blender. I plugged it in, debated looking up a guide as soon as I saw it wasn't displacing along the normals, and said "eh, it's just to visualize stuff, that's fine". But yeah, absolutely, offsetting each axis independently would be the correct way of doing it!
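For anyone curious why the Color output behaves differently, here's a toy numpy illustration (not Blender API code; the noise functions are stand-ins, purely for demonstration):

```python
import numpy as np

rng = np.random.default_rng(42)
points = rng.uniform(-1.0, 1.0, size=(1000, 3))  # stand-in for mesh points

def noise1(p):
    """Stand-in for the Fac output: one scalar per point."""
    return np.sin(p @ np.array([12.9898, 78.233, 37.719])) * 0.5 + 0.5

def noise3(p):
    """Stand-in for the Color output: three decorrelated scalars per
    point, made by sampling the same field at three offsets."""
    return np.stack([noise1(p), noise1(p + 19.19), noise1(p - 7.77)], axis=-1)

# Fac-style: every axis gets the same scalar -> offset along (1, 1, 1)
fac_offset = noise1(points)[:, None] * np.ones(3)

# Color-style: each axis gets its own value -> direction varies per point
color_offset = noise3(points)

print(np.allclose(fac_offset[:, 0], fac_offset[:, 1]))      # True
print(np.allclose(color_offset[:, 0], color_offset[:, 1]))  # False
```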
Why no SD3 video? Well, because it's not interesting to me production-wise, or even as a base for experimenting with production-related stuff. Apart from anything that can be said - and has been said - about SD3, I think it's too early both to take it into consideration for production-related tasks and to jump to conclusions about how good or bad a model it ends up being.
I'll probably talk about it when - and if - we get a complete set of controlnets, finetunes, and accessory modules like IPAdapter or IC-Light.
In the meantime there's so much stuff left to explore with 1.5 and XL, and there are so many great channels and videos that will cover SD3, that I don't think the lack of my voice on the matter will be missed.
That's why I follow your channel. Thanks!
I agree. And, apparently, there are some questionable issues with its license agreement as well 😒
The funny thing is I have a law degree (albeit from an Italian uni), so even if I'm in no position to give counsel on it, I'd be able to make a breakdown of the license agreement. But either way SAI should just release a simple statement explaining in layman's terms what they expect out of finetunes. Finetuners, coders, and community members in general are not corporations; they shouldn't need a legal team to understand what they can and can't do.
@fernandopain4824 thank you, I really appreciate the sentiment
Great tutorial... just like the way noise is used in TouchDesigner.
random noise is truly the gift that keeps on giving across all software
Coming from 3D animation, noise is also a common tool
Nice approach!
Sadly I'm getting an error:
Error occurred when executing ColorPreprocessor:
No module named 'controlnet_aux.color'
It might be because you're missing the auxiliary preprocessors. You can find them here: github.com/Fannovel16/comfyui_controlnet_aux
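And if you're blocked on the install and just need the hint image, the color preprocessor conceptually reduces the picture to a coarse grid of averaged colors. A standalone Pillow approximation (my own sketch, not the node pack's actual code):

```python
from PIL import Image

def color_grid(image: Image.Image, cells: int = 64) -> Image.Image:
    """Approximate a 'color' hint image: average the picture down to a
    small grid, then scale back up as hard-edged color blocks."""
    small = image.convert("RGB").resize((cells, cells), Image.BICUBIC)
    return small.resize(image.size, Image.NEAREST)

# color_grid(Image.open("input.png")).save("color_hint.png")
```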
Definitely not using ComfyUI any time soon. Not for me at all. Why waste so much effort making something with all those intricate and confusing entangled lines when I can get an almost identical result in seconds using Automatic?
Yes, ComfyUI gives a lot of control, apparently, but that control is not necessary to achieve great results with good prompting and other techniques. All the super innovative methods developed for ComfyUI that I've seen are easy to imitate with other tools, even in the command line, so I'll pass. I wish people could see it too, so they would focus on improving other tools instead of ComfyUI.
(Although, the idea of using custom noise to influence the generation is great.)
Well, that's easily said: I personally like node-based interfaces much more than standard web UIs and CLI. I spent a lot of time learning Houdini and Blender Geo Nodes, so it comes naturally to me, as it does to many others. I'm all for having different interfaces for different users, so I prefer having the option of choosing which one to use depending on the task and the kind of use I want to make of it.
Also, with ComfyUI I can spend time building the "perfect" environment for automating generations. That's something I could do to a degree with other UIs, but it's much easier in ComfyUI for me.
It all comes down to personal preferences, I think!
It's one thing to not use ComfyUI because it's not useful to you, but to claim it's a waste of time for everyone else is ridiculous.
I use it for animation, and there are a million things it's better at than Auto1111.
For example: access to experimental nodes, performing transformations on image maps, muting processes with boolean switches, generating proper looping frame interpolation, propagating single images to multiple inputs at the same time, doing multi-step upscaling, AnimateDiff outpainting & upscaling, quickly swapping between multiple inputs, isolating parameters to a single location, organizing complex workflows, etc.