I hope they support GGUF later; it's on the roadmap.
When I go into the Manager, the X-Labs custom nodes do not show up. I have the latest version of ComfyUI installed, and when I do a git pull it says it is already up to date.
In Comfy, open the Manager and on the left side where it says Channel, make sure it reads Channel: default and not something else. Yours might say Channel: dev or similar. If it does, change it to Channel: default.
Hello, if you are still having the same issue after trying @thehmc's method, try updating ComfyUI Manager manually. Go into the "ComfyUI/custom_nodes/" folder, open a terminal in the Manager's folder, and run "git pull". Restart ComfyUI, confirm the channel is default, then search for xlabs and see if this resolves it.
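For reference, a rough sketch of those steps as terminal commands, assuming the Manager folder is named "ComfyUI-Manager" (the folder name on your machine may differ depending on how it was installed):

# go into the Manager's folder inside custom_nodes (adjust the folder name if yours differs)
cd ComfyUI/custom_nodes/ComfyUI-Manager
# pull the latest version of the Manager
git pull
# then restart ComfyUI so the updated Manager is loaded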
I'm getting a "mat1 and mat2 are not compatible" error. Can you help me fix this?
Hello, can you try again after updating ComfyUI and all custom nodes and see if that helps?
It's so slow. I have a 3080 Ti with 12 GB VRAM and it's taking about an hour for one generation. Help!
Are you using the fp8 model? For reference, how long does it take for a normal Flux workflow without the IPAdapter, LoRA, or ControlNet?
So sorry to hear that. I am using the dev-fp8 model, and it only takes me 15 seconds on average to generate 1024x1024 images. I am fortunate to own a 4090 with 24 GB VRAM and 64 GB of regular RAM...
@@CodeCraftersCorner Well, for the normal Flux fp8 version it takes only 50 seconds. I don't know why this issue only arises when using the IPAdapter.
Hey, I'm getting this error while using the IPAdapter:
"mat1 and mat2 cannot be multiplied"
Can you help me with this issue?
This usually happens when you have a model mismatch. Double-check that you are using the correct models for the main checkpoint and the IPAdapter node.
This is great! Is it possible to combine the IPAdapter / reference image with ControlNet?
I tried it but did not get a good result. I will try again with the new models.
Is it possible to use a depth mask and Canny, both ControlNet models, on the same image? I want to create a realistic version of an architectural design I made with V-Ray.
Hello, you can give it a try. For now only depth and canny are supported.
I'm struggling to download the workflow; I don't see any download button at the GitHub link.
Hello, I have updated the link to point to the folder instead of the file. Once the page loads, there is a little download icon (a down-pointing arrow) next to the file name. Click on it and it should download. Sorry for the trouble.
@@CodeCraftersCorner Yes, I managed to download it, thanks.
@@CodeCraftersCorner Is it normal that the XLabs Sampler node takes so much time to execute? I have my models on an NVMe drive and ComfyUI on an HDD, with an RTX 3060 (12 GB VRAM).
@ZakariaNada Flux will generally take time to generate. For reference, these are my times when testing: on 16 GB VRAM, 2-3 minutes; on 24 GB VRAM, 42 seconds; on 4 GB VRAM, 40 minutes. I do not have a 12 GB card to test, but I am guessing it will take about 4-5 minutes. If it is taking longer, try moving ComfyUI to the NVMe drive. Another thing you can try is the schnell model, although you will not get the same quality.
I don't like Flux; updating all my workflows takes days. It's also annoying to write an extremely detailed prompt, hell naw.
I feel your pain. In my case, I found it helpful to use Claude AI or ChatGPT to write lengthier prompts for Flux. I also found an amazing node in a video by pixorama that lets you select from a dropdown of a couple hundred styles, which makes it a breeze to add the lengthy prompts that Flux does so well with.
Yes, there are so many updates that it's difficult to keep up.