Super excited to try this! The tutorial starts out with the Stream Diffusion operator on the screen but I'm not sure how to get that operator. Do I install the repo somewhere and point to the directory in the parameters box?
You start the tutorial by adding the StreamDiffusionTD operator in TD; however, I don't have this in my TD installation. What do I do?
Get it from their Patreon.
Does this also work with macOS? M1
No, you need an NVIDIA graphics card.
Wow, this is amazing. Do I gain access to this operator when I join your Patreon?
Yes ! thank you !
@@dotsimulate I will join.
Hi there! I've previously gotten StreamDiffusion to work locally with conda instead of venv. What's the best way to mod this TOP? :) Or should I still use the venv process?
When is the Mac version coming? :) Are there compatibility issues?
Absolutely mind-blowing!! Getting 6 fps on my laptop 3070. Thank you so much ❤
me too
What about 2 GB of VRAM?
Amazing work and great tutorial. A small tip for your callbacks DAT, on the Common page of parameters if you switch 'Content Language' to Python then the DAT will use python syntax highlighting.
Dotsimulate InSession when?
Yessssss 100 %@@FunctionStore
❤Thank you !! I will make sure to set that when callbacks are created.
❤
Hello. I was enchanted by the possibilities of StreamDiffusion. I'm a complete newbie and I don't know how to do the installations prior to the tutorial. How can I access this operator? Do I need to subscribe to your Patreon to have access, or would I be able to use this tutorial? Thanks
It works great, but the resolution and details are very poor; you don't get good faces.
Lyell you've got a voice ready for public access. ALSO this looks amazing! For people who know TD and already have a competent workflow, but are lacking in knowledge of SD, tools like this immediately open the door and make it more accessible than it has any right to be. I think seeing SD generating images in real-time, within TD in a format I can immediately wrap my head around to a better degree, is a pull I haven't had with SD until now. Awesome work!
I don't quite understand how to import StreamDiffusion into TD. Really lost.
Incredible! Can this run on a Mac Studio with an M2 Max/Ultra?
How do you load up the operator in the first place ?
Thank god I'm not alone here. I have no idea how to even begin this process cuz he skipped right over step 1 as if TD is intuitive lol
Hey, when I start the stream with the pulse and my cmd comes up, it gets to the end of the code but doesn't get past 'preparing stream...'
Any idea why not? I've left it for over an hour and it's still the same. I have a good computer and a high-VRAM NVIDIA card. Really appreciate any help - thanks!!!
Is there any way to integrate your own LoRA/safetensors into this??
Really great! Got 1.5 fps with my GTX 1070 😅
Same setup, confirming a drop to 0.7/0.8 with live image segmentation running in parallel.
Is there any guide on how to add the StreamDiffusionTD operator to TouchDesigner? Sorry, I'm a newbie.
Drag it in from its location on disk after downloading. Just released a new version! (Available on my Patreon.)
I am super inexperienced, so how do I add the operator into the TouchDesigner network?
And now imagine an AI chatbot where the conversation is accompanied by visuals that display your adventures in real time. Impressive.
May I ask how you loaded the operator into the network at 1:16? ❤❤❤
Yeah, we need an even clearer tutorial on this, please!
You need to go to his Patreon, pay $5, and then download the operator.
Help please! I'm getting this error on the venv install (I have Python 3.1 installed to PATH, CUDA 12.1, and PyTorch 2.1):
ERROR: Could not find a version that satisfies the requirement torch==2.1.0 (from versions: none)
ERROR: No matching distribution found for torch==2.1.0'
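For anyone hitting the same pip error: "from versions: none" for a pinned torch usually means no wheel is published for your interpreter version, since torch 2.1.0 ships wheels for CPython 3.8 through 3.11 only. A quick sanity check, as a sketch:

```python
import sys

def torch_210_wheel_available(version_info):
    """torch 2.1.0 publishes wheels for CPython 3.8-3.11 only; on other
    interpreter versions pip reports '(from versions: none)'."""
    return (3, 8) <= tuple(version_info[:2]) <= (3, 11)

print("this interpreter can install torch 2.1.0:",
      torch_210_wheel_available(sys.version_info))
```

If this prints False, installing Python 3.10.x (the version the tutorial targets) and recreating the venv is the usual fix.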
1:17 How did you get this StreamDiffusionTD node?
Seems like you have to be a Patron
@@lukaspetersen6087 (((
@@lukaspetersen6087 Thanks
Amazing work. A quick question: can we connect this diffusion model in TouchDesigner to a depth camera, so the changes follow human motion?
he's done it! I know what I'll be doing tomorrow...
Hi, I saw a tutorial on getting StreamDiffusion running on a MacBook Pro M2 with MPS (ruclips.net/video/Js5-oCSX4tk/видео.html). I wonder if this is also possible within TouchDesigner. Any thoughts?
Will this ever be available for AMD graphic card users? :)
How do you load the operator? It's not showing in the Tab menu. I'm very new to TD.
How come at the beginning of the video we see almost 25 fps, but during the main tutorial it's 18 fps?
I got this error when I try to start streaming. Is this because I didn't pay the $5, or can it run with the free version?
NameError: Numpy is not available
Model load has failed. Doesn't exist.
How do you optimize it to get better FPS? Getting 1.4 FPS on a MacBook M1 Pro :)
Amazing! Thank you for sharing. Would this work on a Mac? Specifically MBP M2 Max 64 GB
Hi, when I hit "start stream" I get this error in my terminal window: raise LocalEntryNotFoundError(
huggingface_hub.errors.LocalEntryNotFoundError: Cannot find an appropriate cached snapshot folder for the specified revision on the local disk and outgoing traffic has been disabled. To enable repo look-ups and downloads online, pass 'local_files_only=False' as input.
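For context on that error: it comes from huggingface_hub's offline mode. When outgoing traffic is disabled (for example via the HF_HUB_OFFLINE environment variable) and no cached snapshot of the model exists yet, loading fails with exactly this message. A rough sketch of how the flag is read; the exact truthy values are an assumption based on huggingface_hub accepting values like "1", "ON", "YES", "TRUE":

```python
import os

def hf_offline_forced(env=os.environ):
    """Rough mirror of how huggingface_hub decides it may not hit the network."""
    return env.get("HF_HUB_OFFLINE", "").upper() in {"1", "ON", "YES", "TRUE"}

print("offline mode forced by environment:", hf_offline_forced())
```

If this prints True, clear the variable (or, as the error suggests, pass local_files_only=False where the pipeline is loaded) so the first run can download the model.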
Hey :) Would it be possible to do a tutorial on this topic for Mac? :) Kind regards
@dotsimulate Bro, where can I find or install the StreamDiffusionTD operator? I'm new to TouchDesigner.
Good day brother, did you figure it out yet? I'm also facing the same thing.
Yesssss Lyell! Amazing work and tutorial!! Would love to see you create more tutorials in the future!!
Does this work with Python 3.10.6?
Does this work on mac?
Beautiful work, great tutorial. I'm running at 6 FPS on average. May I ask how to load new model IDs?
Amazing!! How do I load a different model?
this is actually just the craziest thing i've ever seen
Nice job bro. Appreciate the information.
Thank you so much for sharing! I'm a big fan even after the first minute already! Sub
Hi! Does it work with current Python versions (3.12)?
3.10.9 is currently best supported. 3.10.11 as well. Other versions have caused conflicts for people.
@@dotsimulate Thank you
i think im in love..
❤
Wait, so the only way to do this is to subscribe to the Patreon? I went through the trouble of installing all the stuff only to realize the TD base featured in this clip doesn't exist in TD :(
Mac!!!
Hi, I'm wondering if this operator works on macOS?
same
Hey, I have a problem with my installation.
I have already installed the patch on a first computer, but I want to install it on a second, more powerful one (version 1.9).
I have a problem with the Python packages and I don't understand why.
Can you help me?
Every update I’ve released is an update to the TOX only. There is no reason to update the installation. Just copy / paste the basefolder value to the new operator and you are good to go!
@@dotsimulate thank you for your reply
I tried all weekend to install on the new computer but nothing works even though I did exactly the same thing.
I have a problem with python and the dependencies, and even reinstalling it all doesn't work
@@hoole3796 send me a message on discord or Patreon.
Hello, this is cool work. Does your plugin support TensorRT acceleration?
Yep! I've got an improved installation process with a separate step for TensorRT.
Getting 9 fps on my RTX A4000 at 512x512 resolution with the acceleration option set to none.
Where's the local cache? It took 5 GB and I want to clear the model.
Update: I found it. It's at "C:/Users/{username}/.cache/huggingface/hub/".
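That matches the default Hugging Face hub cache location. A sketch of how the default path is derived (the library also honors HF_HUB_CACHE and HF_HOME overrides; this only reimplements the default logic for illustration):

```python
import os

def default_hf_hub_cache(env=os.environ):
    """Default Hugging Face hub cache: ~/.cache/huggingface/hub, unless an
    environment variable relocates it."""
    if "HF_HUB_CACHE" in env:
        return env["HF_HUB_CACHE"]
    home = env.get("HF_HOME",
                   os.path.join(os.path.expanduser("~"), ".cache", "huggingface"))
    return os.path.join(home, "hub")

print(default_hf_hub_cache())
```

Deleting folders under that directory frees the space, but the models will be re-downloaded on the next stream start.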
this looks sooo amazing!
I'm trying to run it on my side. Everything seems to go right, but then it stops after "The config attributes {'skip_prk_steps': True} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file."
the streaming just never happens :( Anyone had the same issue?
I can't wait to test this! 😍
thanks for your help on discord! if anyone has the same issue, a simple TD restart solved it for me
@@olenatrifan7852 Quick note if others are stuck here: you have to save your current TD project and open it again. Restarting TD and then trying with a new TOX did not get any further than that point.
I know this might be a no-brainer, but I've tried to get a couple of SDXL models working, like FenrisXL and DreamshaperXL Turbo, and they don't work. Do I need to be using the Stability AI Hugging Face models? Or can I use my own ComfyUI?
sd-turbo is a 2.1-based model. It is not sdxl-turbo. Supported models are 1.5-based, because of the 1.5 LCM LoRA, plus sd-turbo. You can use a local model from a safetensors file if you use the full file path instead of a Hugging Face ID.
@@dotsimulate Awesome, I got 1.5 models going; starting up the stream is way faster now. I have a 4090 and can get pretty fast generations with SDXL, especially utilising the turbo LoRAs and such. Is there a way to do it with the StreamDiffusion TOX, or do I need to try connecting it to ComfyTD? :)
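To illustrate the model-reference tip in the reply above: the same parameter accepts either a Hugging Face repo id or a full local path to a .safetensors file. A tiny sketch of the distinction; the local file path below is hypothetical:

```python
def is_local_safetensors(model_ref):
    """Treat a reference ending in .safetensors as a local model file;
    anything else is assumed to be a Hugging Face repo id."""
    return model_ref.lower().endswith(".safetensors")

# A repo id vs. a (hypothetical) full local path:
print(is_local_safetensors("stabilityai/sd-turbo"))
print(is_local_safetensors(r"C:\models\my_15_model.safetensors"))
```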
I encountered an issue during installation. When I click the install/download button, a window prompts me that there is no network connection, even though my network is actually working fine. What could be the reason? My version is 0.1.9.
OK, so everything is set up, but I can't stream because of a "model is not found" error. I'm
unclear on that part; I also don't have a Hugging Face account. I'm a novice here.
Hey! Thank you very much! Just became a patron. I'm really excited. Does this work with Python 3.12, or only 3.10? I struggled to find the install file for the 3.10 version; I had to change it directly in the URL.
did you ever find out if it works on 3.12?
@@zpacetree In case it helps you: I tried all versions and it only worked for me with 3.10.
Hi! @dotsimulate, I am having issues with it. I try to install the venv and all requirements, and everything comes back as an error:
ERROR: Exception:
Traceback (most recent call last):
File "C:\TestRepostreamDiffusion\StreamDiffusion\venv\Lib\site-packages\pip\_internal\cli\base_command.py", line 160, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
Not sure what to do... I don't really understand it.
"Currently only working using windows + nvidia graphics card"
Does this mean other OS and graphics compatibility are actively in the works?
Any chance to use ControlNet? Or masks on the inputs to control some shapes in the generation?
Can definitely give it a sense of the shape with input image. Not quite fast enough to run with controlnet at this point for realtime. But definitely possible non realtime with ComfyUI.
ComfyUI is a whole process in itself. I saw in their demos that they were using Photoshop as an input for generation, drawing a base shape and sending it off to be generated.
Thanks for sharing. It is only possible with an NVIDIA card, though. I have an HP ProBook laptop with a Radeon card.
Is there any way of implementing it with an AMD graphics card?
wtf
This is incredible! Why wouldn't it work on Mac?
Yeees, finally! 😍 You're the best for putting this out! And the tutorial is well made too 🖤
I need a tutorial on how to change the model ID using the Hugging Face or Civitai website
Hi there, I'm having issues. I just upgraded my Patreon account and downloaded the file, but when I open it in TD the parameters box in the Set Up tab is empty. I already installed all that is needed, or so I believe. Can anyone please help me?
And I am exactly trying to do a firework show
It says my Python 3.1 executable is not found, but I added it to my PATH...?
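One way to debug "executable not found" despite editing PATH is to check which interpreter a shell actually resolves. As a sketch, from inside any Python you can launch:

```python
import sys

# Shows which interpreter is actually running and its version. Helpful when a
# Microsoft Store stub or a second install shadows the Python you added to
# PATH, or when PATH was edited but the terminal wasn't reopened afterwards.
print("executable:", sys.executable)
print("version:", ".".join(map(str, sys.version_info[:3])))
```

Note that a terminal (and TouchDesigner itself) only picks up PATH changes made after it was opened, so a restart after editing PATH is often the fix.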
Thank you so much for sharing this!! ❤ My dreams just came true..
Any chance to run that setup on a silicon Mac ?
Yes. It has been working as of a few versions ago. I haven't posted any update videos, but there have been nearly 10 updates since this.
I don't get where you got the StreamDiffusionTD node from?
Thanks for this! I am currently unable to run it; after the Initializing NDI streaming etc., it runs into:
D:\StreamDiffusion\StreamDiffusion\venv\lib\site-packages\diffusers\configuration_utils.py:135: FutureWarning: Accessing config attribute requires_safety_checker directly via 'StableDiffusionPipeline' object attribute is deprecated. Please access 'requires_safety_checker' over 'StableDiffusionPipeline's config object instead, e.g. 'scheduler.config.requires_safety_checker'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
Process Process-4:
and it goes on and on... What should I do? Thanks 🙏🙏🙏🙏🙏🙏
Awesome as usual, thank you so much for this!!
Just 2 questions:
Will it work with 2022 Touchdesigner Versions?
And do you, by any chance, also have a Gumroad page for purchasing, instead of Patreon?
No, it uses Custom Sequential Parameters (see the wiki; they open a lot of doors!), which are in 2023.Official only.
Thanks; when I asked, it was not in the description yet.
Hey! I tried, and it seems not to work with 2022 TD.
do you have link to your discord/patreon?
Can you tell me how to use ControlNet? Are there any tutorials? Thank you very much.
Thanks a lot for this amazing work on StreamDiffusion! Regarding the NDI source name issue: I had similar issues with the refresh on another project and realized that using 'Bind' instead of 'Reference' fixes the issue.
Will investigate. Thank you.
Once all the setup is done, how do I import StreamDiffusion into TouchDesigner?
Thank you so much for this! I may be lost right now, but how do I write the path to a local safetensors model? I can't seem to get it working.
This is so sick! Any indication when it will work on Mac? Does it already work on Mac?
very cool video.
I have a problem with the installation; I get an error for CUDA Python.
How can I resolve it?
Thanks a lot! But LoRA models don't work; how do you set this up?
Hey, is there a way to use this with inpainting, to affect only a specific area of the image?
Is there a branch that would work with an M1 MacBook?
fabulous tool design and great explanation in the video.
Is anything out of pocket aside from becoming a patron?
What are the specs of your workstation?
Thanks a lot for this! Running at 8 FPS with a 4070.
Is there an alternative for AMD systems?
This is pretty cool!
any advice on getting it running on a mac?
could you run this on a mac m3 pro?
I'd like to know too; I also have an M3.
Thank you for the video! Is image-to-image generation also possible? For example, a JPG file as input, and StreamDiffusion creates similar-looking images in real time?
Hi! Can you make a tutorial for Mac?
How can I double the resolution?
Thanks for the amazing work! But I have an issue: when TouchDesigner tries to open PowerShell, PowerShell crashes. Does anyone know the reason or a solution?
Thanks for the video!
Is it a must to download Python 3.10.9 + CUDA 11.8?
Since I work on a Mac Pro (Win 11 is installed) with a Vega 56, my question is: will AMD GPUs get support for this?
Hi, I'm unable to get through the start streaming step because cmd gives me this message as soon as it opens:
Traceback (most recent call last):
File "C:\Users amir\Desktop\TestInstall\streamdiffusionTD\main_ndi.py", line 20, in <module>
from utils.viewer import receive_images
ModuleNotFoundError: No module named 'utils'
and then it closes immediately; I can't get any output. I think I've followed every step accordingly, even adding Git to PATH, and installing the model with all of its features in a folder (on my desktop at first, just to try it out). I would love to try this awesome feature!!!
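A likely cause of the `No module named 'utils'` error above: `from utils.viewer import receive_images` only resolves when the folder containing the `utils/` package is on `sys.path`. Python normally puts the directory of the script being run there, but launching from a different location or a moved copy can break this. A sketch of a defensive fix (as used in some projects; not necessarily what main_ndi.py does):

```python
import os
import sys

# Prepend the directory of the running script to sys.path, so sibling
# packages like `utils/` import regardless of the shell's working directory.
script_dir = os.path.dirname(os.path.abspath(__file__))
if script_dir not in sys.path:
    sys.path.insert(0, script_dir)
```

In practice the simpler remedy is to make sure the stream is launched from the streamdiffusionTD folder that actually contains `utils/` next to `main_ndi.py`.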
Same thing for me: I get to the start streaming step, but I get the 'No module named tkinter' error. I would really like to know how to get past this error, please!!
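On the tkinter error: tkinter ships with the python.org Windows installer, but only if the optional "tcl/tk and IDLE" component was checked during installation, so some installs lack it. A quick check, as a sketch:

```python
def has_tkinter():
    """True if this interpreter was installed with tcl/tk support."""
    try:
        import tkinter  # noqa: F401  (stdlib, but optional on some installs)
        return True
    except ImportError:
        return False

print("tkinter available:", has_tkinter())
```

If it prints False, re-running the Python installer and enabling the tcl/tk option (then recreating the venv) is the usual fix.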
Great! How do I get/create the nvidia_smi container in TD?
Will this work with Python 3.12 or only 3.10?