Thank you so much for sharing this, incredible work really, and you are amazing for sharing it for free! I am trying to use your workflow, but art-venture is giving me some problems: whenever I load the workflow, it shows sdxlaspectratioselector and av_controlnetpreprocessor as missing. Anyone having the same issue?
hey, thanks for your feedback! I had some issues that forced a complete reinstall, so I ran into the same problem. The cause is not art-venture but the mixlabs nodes: since their latest update, art-venture breaks as soon as those nodes are installed. I'm working on an update and already have everything working again without the mixlabs nodes. I had to drop that package's limited floating slider controllers, but I'll add a note about it in the readme; negative values for e.g. ControlNet are not recommended anyway. The update will be released soon. Thanks to it, performance for me has increased by around 50% (though I noticed I still had xformers in use; PyTorch attention is the default and the way to go).
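If you're unsure which attention backend your ComfyUI install is actually using, the startup log prints it, and you can also force one explicitly. A hedged sketch of launch options, the flag names below match recent ComfyUI builds but are worth verifying against `python main.py --help` on your install:

```shell
# Force PyTorch cross-attention even if xformers is installed
# (verify exact flag names with: python main.py --help)
python main.py --use-pytorch-cross-attention

# Or simply keep xformers installed but disabled:
python main.py --disable-xformers
```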
@@Paul_Hansen thank you so much for answering! I subscribed to your channel so I can be notified as soon as you release the update. Again, thank you so much for sharing! Can't wait to try it out once it's fixed
@@bt-urq already published, follow me on Instagram or LinkedIn for the latest news :)
Looks like a lot of work has gone into the creation of this workflow. Congrats and massive thanks for making it accessible! I'm looking forward to learning more and eventually building similar workflows. 👍 I'm a huge believer in having local access to the tools we use. The less we rely on paid internet/cloud services, the better.
thank you, there is no place like 127.0.0.1 :) I have much more trust in opensource and local stuff too.
UPDATE n. 1: First, thank you so much Paul for releasing this huge work for free, it's amazing how this community can be. Secondly, since it might be useful for some people: I was able to run almost the whole workflow on my laptop with 16 GB RAM, a GeForce RTX 3070 (8 GB) GPU, and an AMD Ryzen 5800H CPU. I'm getting the same result as shown in the video, and it took just a few minutes. Only problem at the moment: it crashes at the VAE Decode (the one connected to the SamplerCustomAdvanced), so the workflow is not producing the "Image comparer SDXL/FLUX" and "Image comparer FLUX/Upscale" results for me. I tried changing the "upscaling by factor" node from 4.0 to 1.0, but it still crashes (also, I don't know why this node is orange). Btw, I just installed all the missing nodes together (and not one by one as suggested in Paul's CivitAI documentation) and had no problem with that. I will post updates with what I discover on a "low performance" laptop like mine, but the fact that all 13 models used here work on this laptop is already a miracle. Again, thank you so much Paul! I've been hyped about this release for two weeks! Great job
@@898guitarist898 that is some really awesome news to me! Thank you! Because the upscale factor is one of the major factors in how long a complete generation takes, I colored it orange.. meant to be a kind of "pay attention to this little thing here" sign :)
I was looking forward to this workflow, and now I can see that it's 80% settings and 20% actual workflow. It's made for people who do not want to change a single noodle and just want to add more free workflows to their hard drive that they will never actually use.
I am saying this with all respect; I am getting the same results with a very messy workflow and 80% fewer noodles and nodes.
But after all, respect for what you're doing: you're able to read the community very well, and capitalize on that.
Please don't get me wrong, this is an amazing workflow.
Tbh, I initially made this just for myself, and I use it for work, so you are most likely right that many people just want to have it. But if you take a look at the Reddit post in r/ComfyUI you might see some people interested in pulling it all apart and learning. That's my goal here.
Agree! It's a very good resource for many people
Amazing work! Can't wait for the workflow
Amazing work ! Thanks for sharing !
@@bricep5700 thanks for your feedback!
Great work! How do you hide lines/nodes?
@@studiodubai9748 once you switch to the new UI (Settings -> Comfy -> Menu (scroll down) -> Beta: Use new menu -> Top/Bottom), you'll see an eye icon in the lower right corner.
thanks for this. What is the PREVIZ ControlNet model? I've never seen that one before...
huggingface.co/lllyasviel/sd_control_collection/blob/d1b278d0d1103a3a7c4f7c2c327d236b082a75b1/diffusers_xl_depth_full.safetensors It's the same one I use with SDXL in the workflow, but for other models that might be used here you can quickly change the ControlNet just for that step. The previz showcase is demonstrated here: instagram.com/p/C32BYZlt7wX/ but it was built on mixlabs nodes, which currently cause problems and will therefore be removed in the next release.
Hi Paul!
Thank you so much for sharing your expertise and this fantastic workflow, it's incredibly helpful. I wanted to ask for some clarification regarding the multi-mask feature. Could you explain how you created it?
@@HerrUberkoly sure, I'm collecting feedback and will answer all the questions in another video for everyone
My good Paul, congratulations, this is mind-blowing, what a great workflow you made. When is it going to be available, or when are you going to start building this workflow for your subscribers? I've been digging for weeks into how to create something like this, but it's very complex and sometimes I get lost: too many checkpoints, new functions almost every day, and now Flux in between, I'm sure you know what I mean. I can do ControlNet and upscaling for my renders to add more realism, but I would like to be able to use a workflow like this.
@@alexflaco76 wow, first of all: thank you very much for your feedback! Much appreciated and of high value to me. I can totally relate regarding complexity and getting lost while building it: days of thinking about how to fix a behaviour that appeared after adding a "simple" new function, then finding some nodes that do exactly what you wanted. Basically I started with the "realtime Archviz with SDXL" workflow (you can see it in my YouTube Shorts) and kept adding things.. when Flux got released, I decided to build this. I initially wanted to release it when I hit 500 followers on Instagram, but I may have to reconsider and make it available earlier.. the next days will show.
@@Paul_Hansen I am sorry you are not reaching your target as soon as you would like. I will be looking forward to your updates and I'll keep sharing your work.
all good, I got what I wanted, and people like you helped a lot.. so, thank YOU!
thnx a lot!! this is a really powerful workflow
@@sekfrommars just released
Very cool workflow!
@@chrlw thank YOU for pointing me in the right directions, Brother ♥️
Hi Paul! What ComfyUI interface or theme are you using here? It looks way different from the original interface. Thanks!
hey, it's not a theme, just the new UI. You can enable it in the settings: comfyui-wiki.com/faq/how-to-enable-new-menu
What you produced is stunning, that's amazing! Does this workflow also work on a low-VRAM setup (like the 8 GB on my 3070)?
@@898guitarist898 thank you! I only tested it in my own environment, and it depends on the models you want to use. I'd say if you can run any Flux (GGUF) model, then this will run on your system. You may have to lower the resolution too.
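For 8 GB cards, ComfyUI's low-VRAM modes plus GGUF-quantized Flux checkpoints are the usual levers (and a tiled VAE decode often fixes exactly the VAE Decode crash reported above). A hedged sketch of launch options, flag names as in recent ComfyUI builds, so check `python main.py --help` for your version:

```shell
# Aggressively offload model weights to system RAM (helps 8 GB GPUs):
python main.py --lowvram

# If VAE Decode still runs out of memory, decoding on the CPU is
# slower but memory-safe (a tiled VAE decode node is another option):
python main.py --lowvram --cpu-vae
```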
at 3:30, you kind of gloss over the Load Images. I get that the first one is your 'main image' and the second one is depth. What is the third one? Looks like a clown map? Any guidance on how to do this in Blender? Do you literally create a bunch of emissive color materials, hand-assign them per object, and render that out? Or is it per material?
I don't know how to do this in Blender; in 3ds Max/V-Ray it is a render element where I can assign an ID to an object, face, or material and get one color per ID. We can use Cryptomatte for that too. It must be possible in Blender, maybe look for RGB mask creation
maybe this is for you blender.community/c/rightclickselect/WXbbbc/?category=compositing&sorting=hot
@@Paul_Hansen interesting, I wasn't aware of Cryptomatte, but it makes sense. Thanks for the fast reply and the pointer in the right direction!
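The "one flat colour per object ID" element discussed above can be sketched outside any DCC. A minimal, hypothetical helper (the function name and hue-spacing scheme are my own, not from the workflow): it spaces hues evenly around the colour wheel so each object gets a distinct flat colour, which is essentially what a clown/ID map encodes.

```python
import colorsys

def id_mask_colors(object_names):
    """Map each object name to a distinct flat RGB colour (0-255 per channel).

    Hues are spaced evenly at full saturation and value, so the rendered
    mask can later be split back into per-object masks by exact colour
    matching (the same idea as an object-ID render element).
    """
    names = sorted(object_names)
    n = max(len(names), 1)
    colors = {}
    for i, name in enumerate(names):
        r, g, b = colorsys.hsv_to_rgb(i / n, 1.0, 1.0)
        colors[name] = (round(r * 255), round(g * 255), round(b * 255))
    return colors

# Three objects land on three maximally separated hues:
palette = id_mask_colors(["wall", "floor", "sofa"])
# -> {'floor': (255, 0, 0), 'sofa': (0, 255, 0), 'wall': (0, 0, 255)}
```

In Blender specifically, assigning each object an emissive material with one of these colours and rendering without lighting effects would give an equivalent map, though Cryptomatte is the more robust route.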
Same as always: if you have a specific design in mind, AI doesn't understand what needs changing and what doesn't.
If you still need to Photoshop it later, you can Photoshop everything anyway..
I really appreciate your opinion here, thank you for your feedback! I've not yet found a method to completely get rid of my 3D environment and post-production, and I will most likely not give it away for free when I have one :) Tbh I don't want to convince you if the video didn't already, and this only saves about 50% of _MY_ time spent on an image, so maybe it's not of use to others. I can live with that :) cheers
I'd like to add that I had exactly the shown design in mind, btw ❤
Amazing job
thank you!
I'm your 290th follower on Instagram... only 110 to go until the workflow!
nice try ;)
Mindblowing.
i gave my best :) thanks
VERY NICE PAUL
thank you :)
great works
thanks
love it!
thank you :)
very nice!
YOU are very nice too! thanks for your comment
Impressive
thanks :)
You can do the same in D5 Render, 50x faster... and you can control everything in real time. For those who like doing it that way: how much time did you spend to get this final image?
@@DarkoBirnbaum thank you for your quality comment, much appreciated. D5 is another great tool, yes, but I don't think it's 50x faster than this. And I don't know why I should consider paying for something I can have for free, but maybe that's just me :) cheers
@@Paul_Hansen I would also add that the quality of D5 renders is much lower than what's shown here. I don't care about having a nice overpriced UI from D5 with animations and video production if the end result is mediocre :)
@@Paul_Hansen ok, not 50x but 10x... $38/month... the price is quite fair, and then you have AI at your disposal. I've been using 3ds Max and V-Ray since they were created... it's terrible what I went through with those programs... and then D5 Render... a complete refresh... I would recommend that software to anyone
@DarkoBirnbaum $38/month is $456/year, and tbh I do not see the benefit if you can achieve better results at lower cost. In my experience there is almost never a simple solution to a complex problem, even if I admire one too. V-Ray is complex because of its outstanding capabilities. I see people using Corona and getting instant good results too; the thing is that understanding V-Ray to its core is of great value, because that knowledge transfers to every single render engine out there.. and now AI comes along and changes everything. As I said, everybody should use whatever they like; there is no right or wrong as long as the result is what you want. I'm trying to get better results in less time, and this is my successful approach to it.
Look at 9:30: there you have 2 images to choose from, created in 20 seconds, so 10 seconds per image. These outputs are comparable to the D5 AI feature. Correct me if I'm wrong.
Let me say GYAAT
I'm considering to take that into my daily vocabulary, thank you :)
Are these paid "plugins"? Do we have to pay anything to use this?
@@jorgemendesalves527 not a single cent
i forgot electricity