Hello, the image input doesn't work. When I press generate, it generates what is written in the prompt parameter, not what is inside etn_loadimagebase64. What seems to be the problem?
I have built ComfyUI on Google Cloud and got the IP and port, but when I try to connect to TouchDesigner on my local computer, I find that TDComfyUI can load the workflow but cannot run it. There is no response after clicking generate. What should I do?
Bro, this is brilliant, but I have a Mac. Is it possible to do this with the M1?
I would also like to know the same!
Hi, when I put the .tox file into TD I get some errors, and after connecting the JSON file I can't see anything in the workflow dashboard. Is anything wrong? 🤔
just discovered your channel - this is my first project in touchdesigner and it's been awesome. looking forward to going through the rest, you're amazing keep it up!
dang, what happened to Bradley Cooper in the thumbnail?
Thanks for sharing so much knowledge! Is it possible to do all this when you have comfyUI working on cloud GPU services? If so would there be any difference in the workflow? My hardware handles touchdesigner but there's no way i could have decent performance with Stable Diffusion locally. Thanks in advance!
This is amazing! Thank you for putting the time into making this.
I have a question I'm hoping you might be able to answer; I'm getting a whole batch of errors any time I attempt a Queue Prompt.
KeyError: 'model.diffusion_model.input_blocks.0.0.weight'
Any thoughts would be deeply appreciated!
I figured it out. I just used a different model as the one I was using was causing errors!
Thanks for publishing this! I'm having a problem... In TDComfyui node, the parameters aren't showing in the "workflow" page... any ideas as to why? I've tried hitting "re-init" on TDComfyUi node, as described in the node's instructions. I get status Connected, then disconnected. Thanks again!
Hi, I just had the same problem. The issue was the format of the JSON: you have to use "Save (API Format)" in ComfyUI instead of the normal save.
Same issue here. I tried the 2022 version, re-init, and saving as API format, and I always get the same error in the DAT op in TD.
Same! @@mavro360
Try re-initing [TDComfyUI] before connecting the [workflow_api] op to the [TDComfyUI] input.
Try to find the safetensors in the Text DAT @@lee_sung_studio
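For anyone stuck on the same thing: a workflow saved with the normal Save button has a different JSON shape than one saved with "Save (API Format)", and only the latter works here. A quick sanity check you can run on your exported file (the key names reflect what API-format exports normally contain, but treat this as a sketch, not an official validator):

```python
import json

def looks_like_api_format(text: str) -> bool:
    """Rough check: an API-format export maps node ids to objects
    that each carry a 'class_type' and an 'inputs' dict. A normal
    save instead has top-level 'nodes'/'links' arrays for the UI graph."""
    data = json.loads(text)
    if not isinstance(data, dict) or "nodes" in data:
        return False  # UI-format save, not what TDComfyUI expects
    return all(
        isinstance(node, dict) and "class_type" in node and "inputs" in node
        for node in data.values()
    )

# Minimal examples of both shapes
api_style = '{"3": {"class_type": "KSampler", "inputs": {"seed": 1}}}'
ui_style = '{"nodes": [], "links": []}'
print(looks_like_api_format(api_style))  # True
print(looks_like_api_format(ui_style))   # False
```

If your file has a top-level `"nodes"` array, re-export it from ComfyUI with "Save (API Format)".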
Exactly the workflow I was looking for, thanks for compiling this for us!!
Thanks!!!
Tell me, which graphics card did you use for diffusion in TD, and what max FPS did you get? And maybe you could recommend a small, fast model for max FPS?
Thank you for the video! Though, why does only the DreamShaper model seem to work?
Thanks, great explanation!
Awesome vid. Very helpful. Been looking for something like this to hook Photoshop into ComfyUI.
Is the realtime webcam only working on Windows?
Should work on OSX also
Does anyone know of an easy way to integrate RunPod or another external host for Stable Diffusion into a workflow like this?
You can change the IP and port in the settings tab and connect to a remote server.
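For reference, ComfyUI's HTTP API is the same whether the server runs locally or in the cloud, so only the URL changes. A rough sketch of what the client side does (the `/prompt` endpoint is ComfyUI's standard queue endpoint; the host, port, and workflow contents here are placeholders):

```python
import json
import urllib.request

def build_prompt_request(host: str, port: int, workflow: dict) -> urllib.request.Request:
    """Build (but don't send) a request that queues an API-format
    workflow on a ComfyUI server at host:port."""
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Placeholder remote address -- replace with your cloud instance's IP/port.
req = build_prompt_request("203.0.113.10", 8188, {"3": {"class_type": "KSampler", "inputs": {}}})
print(req.full_url)  # http://203.0.113.10:8188/prompt
# Sending would be: urllib.request.urlopen(req)
```

One thing worth checking with a cloud GPU: the port must actually be reachable from your machine (firewall rule or SSH tunnel), otherwise the workflow loads but generate silently does nothing.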
How can I connect Tagtool to TouchDesigner?
Hi. Thank you for the tutorial. Is it possible to transfer a picture directly from OBS to Stable Diffusion?
This is insane, but the connection between TouchDesigner and ComfyUI keeps disconnecting. Is there a solution?
I get a broken workflow when loading it into TouchDesigner. Can you share your workflow?
Check that you exported in "API Format" from ComfyUI.
@@VJSCHOOL I am; it's something with the Load Image node. I found out I needed an older version of TouchDesigner: 2022.32660 is working.
@@Reddkomet Thanks so much, your reply helps a lot!!!
I enjoyed the video you uploaded. This is a very interesting lecture.
I have a question.
It felt like the final output video was cut off at every frame.
I would like to know if there is a way to make this part play more smoothly.
What are the Windows system requirements to run all this?
How do I make it as fast as the video you uploaded on Instagram?
Thank you so much for this tutorial 🎉
this is awesome, thanks
One issue I have: the seed stays the same... I need to generate a new seed for each updated image from Spout.
I wonder if there is a way to create different prompts and randomize them.
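Both of the last two comments can be handled by mutating the workflow dict before each queue. A sketch, assuming an API-format workflow where a KSampler node exposes a `seed` input and a CLIPTextEncode node exposes a `text` input (node ids and the prompt list here are made up; yours will differ):

```python
import random

def randomize(workflow: dict, prompts: list) -> dict:
    """Give every sampler a fresh seed and pick a random prompt,
    so each generation differs even when the input image repeats."""
    for node in workflow.values():
        if node.get("class_type") == "KSampler":
            node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
        if node.get("class_type") == "CLIPTextEncode":
            node["inputs"]["text"] = random.choice(prompts)
    return workflow

# Tiny stand-in workflow with hypothetical node ids "3" and "6"
wf = {
    "3": {"class_type": "KSampler", "inputs": {"seed": 0}},
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": ""}},
}
randomize(wf, ["a banana", "an angel"])
print(wf["3"]["inputs"]["seed"], wf["6"]["inputs"]["text"])
```

Call `randomize` right before queueing each frame and every generation gets a new seed (and, optionally, a new prompt).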
Amazing work! Is it possible for you to share this TouchDesigner workflow? Looks a bit tricky =)
What outstanding work! Thank you so much!
Thanks for this video about TD and ComfyUI. Do you know of other optimisations to reduce the delay with a webcam (choice of models, LCM LoRA, use of CUDA/DirectML in ComfyUI, ...)?
Looking for an answer on this too. Any luck, please?
You have no idea how much I was looking for this! You have my subscription.
I'm assuming you can probably just pipe the Load Image Base64 into ControlNet as well?
Load Image works like a pipe between ComfyUI and TouchDesigner, so you can connect it wherever you want. You can also set up multiple nodes to use multiple images at the same time.
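For anyone curious about the base64 route: the node just takes a base64-encoded PNG string, so any image source can feed it. A minimal sketch of the encoding side (the node name `ETN_LoadImageBase64` comes from the comfyui-tooling-nodes pack mentioned in the video; the node id "10" is hypothetical):

```python
import base64

def image_to_b64(png_bytes: bytes) -> str:
    """Encode raw PNG bytes into the base64 string that a
    Load Image (Base64) node expects in its 'image' input."""
    return base64.b64encode(png_bytes).decode("ascii")

# In practice you'd read the bytes from disk or grab them from a TD TOP;
# a tiny stand-in payload keeps the example self-contained.
fake_png = b"\x89PNG\r\n\x1a\n"  # just the PNG signature, for illustration
b64 = image_to_b64(fake_png)

# Patching a hypothetical node "10" in an API-format workflow dict:
workflow_patch = {"10": {"class_type": "ETN_LoadImageBase64", "inputs": {"image": b64}}}
print(b64)  # iVBORw0KGgo=
```

Since it's just a string input, you can have several of these nodes in one workflow and feed each a different image per generation.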
While trying the TouchDesigner step, generating the img2img banana still produces the same angel from the sample. Any idea how to fix this? Thx
Try changing the denoise parameter to something like 0.5.
Thanks, great explanation. My problem is how to build the SD API. I installed stable-diffusion-webui on my computer locally, but how do I connect the local SD with TD?
You should use another component for the A1111 webui:
github.com/olegchomp/TDDiffusionAPI
@@VJSCHOOL I have an urgent question for another video, can you help please?
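For context on the webui route discussed above: AUTOMATIC1111's stable-diffusion-webui exposes its own REST API (when launched with the `--api` flag) under `/sdapi/v1/...`, which is what a bridge component talks to instead of ComfyUI's `/prompt`. A rough sketch of building a txt2img call (the parameters shown are a small subset; treat the exact values as placeholders):

```python
import json
import urllib.request

def build_txt2img_request(base_url: str, prompt: str, steps: int = 20) -> urllib.request.Request:
    """Build (but don't send) a txt2img call for an A1111 webui
    instance started with the --api flag."""
    payload = json.dumps({"prompt": prompt, "steps": steps}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/sdapi/v1/txt2img",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Default local webui address; adjust if yours runs elsewhere.
req = build_txt2img_request("http://127.0.0.1:7860", "a banana, studio light")
print(req.full_url)  # http://127.0.0.1:7860/sdapi/v1/txt2img
# Sending: urllib.request.urlopen(req) -> JSON response with base64 images
```

That endpoint difference is why the TDComfyUI component can't drive a local webui directly and a separate component is needed.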
Stable Diffusion runs locally, right? I can't do that, because I have an AMD graphics card.
It can also be run on a remote server.
So helpful! Thank you!
awesome! keep going 👍
Hi, any idea why I can't see the Manager and Share buttons in ComfyUI?
I would also like to know the same!
Hi, thanks for the tutorial, but I'm facing a problem. When a new image begins generating, the output is replaced with black. I have tried versions 1.0.1 and 1.0.2 of TDComfyUI, but the result is the same.
There is a logDAT output (pink colored) where you can find the status of the generation, and maybe there are some errors there.
I'm having the same issue. Any chance you found the solution?
@@SebboThePotato Read the comment above. Without knowing what's happening, there's no way to help.
brilliant!
Does this work with SDXL Turbo?
Yes
Super cool, but the prompt execution time is too high. Does anyone know how to speed it up? I have an M1 Mac.
"broken workflow" ???