Check out video sponsor LTX Studio and their FREE OPEN SOURCE AI Video model bit.ly/LTXVkamph
Flux ControlNet Guide and workflow here www.patreon.com/posts/flux-depth-canny-118065837
Do they not have OpenPose for Flux yet?
Thank you! Despite trying ComfyUI a few times, being on AMD I have been using the ZLUDA flavour, and yours is the first workflow that has worked.
Glad to hear it! I try not to overcomplicate workflows.
Thanks!
Thank you very much! Glad you liked the video 😊💫
Thanks!
Thank you so much, both for the support and the other kind comment 🌟🥰
@sebastiankamph I was trying to achieve something like that for 2 days straight and started to lose my mind. Your vid is a godsend, such a smart design. I'm the one who has to be thankful.
That's an incredibly helpful video. I integrated an Ultimate SD Upscaler and it works like a charm. Thank you for the great video and the provided links.
0:32 I must be giddy. That towels line really got me. 😂
Hey Seb, thanks for the videos. Maybe I missed it, but what do I drag into ComfyUI to get the workflow in there? Thanks.
A really useful video would be starting from scratch and explaining how you came up with this workflow. I'd love to know how one chooses nodes to build a workflow: the how and why, instead of just the what.
Amazing workflow, thank you!!
Thank you. Friendly tip :)
Prompt for a high-end cinema camera (e.g., ARRI ALEXA or RED EPIC) and a lens choice, 50mm for standard scenes - and that sort of thing :)
Nice! The results seem to be much better. The images look less burnt.
Glad you like it!
Thanks Bro! Always love your videos! Please make more! And I just made a USD 10 donation.
Hello, great video. Can I use it with a mask on the source image?
I would like to create depth maps for bas reliefs to carve on my CNC.
Is there any chance I could ask you to make a very basic video on going from a test or input image to a depth map, as basic as possible, then with it working maybe move on to more detailed settings?
I'm new to this and not all that computer-schooled, so just about everything here is strange to me, but I need this.
Nice job Seb, very cool. Strangely, I had a picture of a woman that didn't resemble the source image, and then an image made of string over the top that did resemble my source image.
I did encounter that once or twice with depth. But generating again or changing strength fixed that.
Great way to jailbreak the built-in ControlNets by using a double sampler with control over start and end steps... nice. However, when I run this workflow, even with the full versions of Flux, the output images are less realistic than just using Flux Dev with the standard sampling nodes. Is there any way to get better, more realistic-looking results? I have tried playing with steps, Flux guidance, LoRAs, schedulers and samplers... nothing seems to give me outputs that look as good as the base model on its own.
Thanks for your time
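For anyone trying to reproduce the two-pass trick described above, here is a minimal sketch in ComfyUI's API-format prompt JSON, written as a Python dict. The node ids, seed, and the 8/20 step split are placeholder assumptions, not values from the video: the first KSamplerAdvanced runs the ControlNet-conditioned Flux model for the early steps and hands its leftover-noise latent to a second pass on plain Flux Dev.

```python
# Sketch only: two chained KSamplerAdvanced nodes splitting one 20-step run.
prompt = {
    "10": {  # pass 1: Flux Canny/Depth model handles steps 0-8
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["1", 0],             # Flux control model (Canny or Depth)
            "add_noise": "enable",
            "noise_seed": 42,
            "steps": 20,
            "cfg": 1.0,
            "sampler_name": "euler",
            "scheduler": "simple",
            "positive": ["2", 0],
            "negative": ["3", 0],
            "latent_image": ["4", 0],      # empty latent
            "start_at_step": 0,
            "end_at_step": 8,
            "return_with_leftover_noise": "enable",
        },
    },
    "11": {  # pass 2: plain Flux Dev finishes steps 8-20
        "class_type": "KSamplerAdvanced",
        "inputs": {
            "model": ["5", 0],             # base Flux Dev model
            "add_noise": "disable",        # noise was already added in pass 1
            "noise_seed": 42,
            "steps": 20,
            "cfg": 1.0,
            "sampler_name": "euler",
            "scheduler": "simple",
            "positive": ["2", 0],
            "negative": ["3", 0],
            "latent_image": ["10", 0],     # leftover-noise latent from pass 1
            "start_at_step": 8,
            "end_at_step": 20,
            "return_with_leftover_noise": "disable",
        },
    },
}
```

Lowering the first pass's end_at_step (like the "canny steps to 6" mentioned in another comment) weakens the control and gives the base model more steps to restore realism.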
🤩 This is great! Sebastian Kamph, is there any way to combine the two functions, ControlNet and Redux/IPAdapter? 😘
I've run the depth and canny LoRAs side by side and find the results from the depth LoRA are much sharper. You can also combine them with Redux which is quite fun.
Interesting. My experience with the LoRAs was just lots of artifacts when zoomed in.
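If anyone wants to experiment with running both control LoRAs on one model, here is a minimal sketch in the same API-format style, assuming the stock LoraLoaderModelOnly node; the node ids, file names, and strengths are illustrative assumptions, not settings from the comment.

```python
# Sketch only: stacking the two official Flux control LoRAs on one base model.
prompt = {
    "20": {
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["1", 0],  # base Flux Dev model
            "lora_name": "flux1-depth-dev-lora.safetensors",  # assumed filename
            "strength_model": 0.85,
        },
    },
    "21": {  # chained on top of the depth LoRA's output
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["20", 0],
            "lora_name": "flux1-canny-dev-lora.safetensors",  # assumed filename
            "strength_model": 0.6,
        },
    },
}
```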
Is there a ControlNet model for the Flux system in Forge? 😔😔
Forge has yet to be updated for them, afaik.
@sebastiankamph 😔😔😔😔
Thank you, this seems interesting. I keep hoping there will be a way to use Flux's Canny and Depth at the same time without using the low-quality LoRA models.
You can use ModelMergeSimple.
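For context, ModelMergeSimple is a stock ComfyUI node that blends two loaded models with a single ratio, so in principle the full Canny and Depth models can be merged into one. An untested sketch follows; the node ids are placeholders, and how well a merged Canny+Depth Flux model actually behaves is not verified here.

```python
# Sketch only: merging the two full Flux control models with ModelMergeSimple.
prompt = {
    "30": {
        "class_type": "ModelMergeSimple",
        "inputs": {
            "model1": ["6", 0],  # e.g. flux1-canny-dev, loaded elsewhere
            "model2": ["7", 0],  # e.g. flux1-depth-dev, loaded elsewhere
            "ratio": 0.5,        # 0.5 = an even blend of the two models
        },
    },
}
```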
Is there any way to swap out the MiDaS depth map for Depth Anything? I'm not sure how the custom MiDaS Depth Map node is interacting, or whether it can be replaced.
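In principle, yes: both preprocessors take an image in and return a depth-map image out, so a Depth Anything node should slot into the same inputs the MiDaS node feeds. A hedged sketch, assuming the DepthAnythingPreprocessor node from the comfyui_controlnet_aux pack; the node ids, checkpoint name, and resolution are illustrative.

```python
# Sketch only: Depth Anything as a drop-in replacement for the MiDaS node.
prompt = {
    "40": {
        "class_type": "DepthAnythingPreprocessor",   # from comfyui_controlnet_aux
        "inputs": {
            "image": ["8", 0],                        # same source image input
            "ckpt_name": "depth_anything_vitl14.pth", # assumed checkpoint name
            "resolution": 1024,
        },
    },
}
```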
I'm a Patreon subscriber and I can't seem to find a workflow with everything in one, like infill, Redux, etc. Do you have an all-in-one workflow for Redux?
Does the new-ish ControlNet Union Pro node eliminate the need to get all the individual controls (canny, depth, etc.)?
No, this is a more recent and more robust version. However, as they're full-size models, they are different to work with.
Amazing, thanks
Mine crashes at 82% every time. Loading the second model, I get a Python error :(
I think I'm missing something obvious. Every time I try to go into the manager and select "install missing nodes" it comes up and says there are no missing nodes. However, there are a lot of "undefined" and red nodes. I'm not sure how to fix the missing nodes or red nodes. Please help!
I figured it out. My Manager "channel" on the left was set to "dev" instead of "default".
Where is the workflow? I followed the steps and downloaded everything. What do I drag and drop into Comfy???
Would not work for me. I had to disable the Flow control custom node to even get the models to show (I was previously using GGUF). Then, no matter whether I chose the Depth model or Canny, it always gave me Canny in the mask. Even after lowering the Canny steps to 6 while leaving Flux Dev at 20, all I got were horrendous images. Thanks for your work though, appreciate it.
Edit: I just found the switch lol. It crashed Python, but I now have depth.
😅
Where should I put the LoRA loader?
What do you call a RUclipsr who is happy making puns? A content creator.
I've never got around to using the Flux tools much but I think I'll give your workflow a go. Cheers.
Love Flux tools. Just a little resource heavy.
ControlNet on Forge, please.
awesome!
Thank you! Good to see you again my fellow viking.
We do, however, need to distinguish ControlNet from the Flux built-in ControlNet and nodes... just so we understand what we are talking about, right? So, ummm, Fluxnet?
Also... a Dad joke for you:
Wife: Did you put the bins out?
Husband: No, I'll do that now.
Wife: ...and the cat?
Husband: Well, I don't know if Tiddles is up to helping put the bins out, but I can ask!
Nowadays most of the videos are Flux. Is Stable Diffusion dead? Imo Flux is still not worth the heavy system resources if it can't generate NSFW.
As a base model, Flux has been beating SD lately. We're starting to see finetunes of it now as well.
👋 hi
Hello my friend!
Hate ComfyUI and its fkn nodes.
It's not comfy at all.
It has never worked for me; it takes hours to load a model. On the other hand, A1111 and Forge work instantly.
Waiting... and waiting... on ForgeUI to implement them.
Just say no to Flux anything.
Why would you do that?
Is there any reason why I can't change the image style via prompting? ... I uploaded a photorealistic image, applied Canny, and prompted to make a drawing-style image, but I'm only getting more photorealistic images as a result :/