Such an amazing overview and powerful approach with the independent time component. This is awesome ! thank you !
thanks man, appreciate your work ❤️
How do you show the actual result?
Is anyone else having trouble with Image-to-Image in SD_API? Text-to-Image works fine for me, but Image-to-Image does not seem to do anything. I don't see a progress bar in the terminal, for example.
I don't see the progress bar in the terminal and don't get the image, except when I generate the image from Microsoft Edge, which opens when I launch the webui. Can somebody help me with this?
Hello! Thanks for this tutorial.
To challenge myself I chose to create my own API with the tutorial you linked: ruclips.net/video/4khcLvGjoX8/видео.html
As a beginner I see there is a huge difference between the API from Interactive & Immersive HQ and the one from DotSimulate; I mean, there are a lot of parameters we don't have. On the other hand, the toe created in the tutorial is a BASE, like a closed box. How can we connect other operators to it?
Thanks.
did you ever figure this out?
Hi! Sorry but I can't figure out how to create a custom parameter for the resample CHOP like you have in your simpleresample component. In this case, it seems like I need to edit the FFT size in the AudioSpectrum CHOP? Sorry if this is such a noob question but I have no clue how to do that!
@elekktronaut I have the same question and have been following your tutorials. Do you have any other tutorial where you built the simple_resampler? Thanks
Did you ever figure this out? I am stuck on the same thing!
I'm looking for help on this part too any luck?
@@yaraalhusaini2551 yes! I simply went to his patreon to find the simple resample CHOP
ruclips.net/video/NJE48IVzNVc/видео.html 3:00
I'm a student. Where is the SD API in TouchDesigner? Please, I'd like details.
Hello, where can I download the simple_resample plug-in in the video? Thanks
Question: whenever I hit Launch WebUI, it doesn't launch the web UI in the command panel; it says invalid syntax error. Please help!
In case anybody ran into the same problem: I couldn't launch the webui from the API node around 19:20. I had installed the WebUI following the instructions "Install and Run on NVidia GPUs" - Automatic Installation - Windows (method 1). Then I deleted the sd.webUI folder and followed the instructions for "Windows (method 2)", first installing Python and then git, and it worked immediately.
BUMP!
Solved my issue thank you SO much!!!
YES FINALLY stable diffusion, thank you sooo much man!!!! Love your channel
Has anyone figured out how to have less change in between the individual frames, so the movie looks more fluent? All the parameters I tried didn't really work in transform and level.
did you figure this out? have the same question
@Schall-und-Rauch You should read up on ControlNets. They control the change between images.
I am stuck at the point of launching the SD Webui from TD. MacBook Pro M2 Max. The Webui launches fine from Terminal but when I put the path into the SD_API SD Webui Folder address in the API Settings, nothing launches when I press the Pulse button. Has anyone else had any similar issues or could point me in the right direction? Eternal thank you.
Hitting the same issue! Did you figure out how to get unblocked?
I believe that Streamdiffusion is only compatible with windows and NVIDIA cards; you might be SOL
also curious if you had any luck?
I met the same issue, did u figure it out?
@xiaotingtan3369 @leotromano @time_itself @clee6030 I never did figure it out; I just launch the webui from Terminal and then it operates as normal. I copied the launch command into a document and just copy-paste it when I need to use it.
Because I am trying this now with a newer version of dotsimulate's SD API: when I grab a null from the API it doesn't have a currentframe channel in it, just Streamactive, framecount, and fps. :( How do I go about this now?
Hi! Thank you for the great video :) I've completed all the steps and I generated 1000 frames over night, but I'm confused on how to create an mp4 file from the generated frames. What would you recommend that I do?
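One common approach (my suggestion, not something from the video) is to stitch the frames together with ffmpeg outside of TD. The sketch below only builds the command; the frame-name pattern is an assumption you'd adjust to match whatever your Movie File Out TOP produced.

```python
import subprocess

def ffmpeg_cmd(pattern, fps, out):
    """Build an ffmpeg command that stitches numbered frames into an mp4.

    `pattern` is a printf-style name like "frames/frame_%04d.jpg";
    adjust it to however your frames were actually named.
    """
    return [
        "ffmpeg",
        "-framerate", str(fps),   # playback rate of the input frames
        "-i", pattern,            # numbered image sequence
        "-c:v", "libx264",        # widely compatible H.264 encoding
        "-pix_fmt", "yuv420p",    # needed by many players (e.g. QuickTime)
        out,
    ]

cmd = ffmpeg_cmd("frames/frame_%04d.jpg", 24, "output.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Run it from the folder above your frames; at 24 fps, 1000 frames gives roughly a 41-second clip.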
Heyy, I got stuck on the scale instancing part of this tutorial. I followed your 'simple-resample' CHOP tutorial; however, I got stuck on what to specify for 'scale x' and 'scale y' under 'scale OP'.
I did some testing, and for me the Select TOP you put in to smooth the flickering ended up giving me more flickering. Sometimes the AI is also triggered by the very low opacity input and "jumps" back to generate some elements at the "old" positions of the blobs, or gets stuck at positions. I tried to set up a feedback loop which fades out the fed-back animation over time but couldn't figure out how.
Please help: python3.10 executable not found.
Why is the SD_API link not supported?
what do you mean?
Followed all steps, but the independent timeline still doesn't advance once an image is generated. The only difference I can see is that the newer version of SD_API doesn't have anything linked to the (current frame) parameter by default (within the SD API); your version, however, does have something linked there. Would you mind sharing what your expression is there?
As suggested by Bileam in another comment, I deleted the entire contents of chopexec2 and added:
def onValueChange(channel, sampleIndex, val, prev):
op('independent/local/time').frame +=1
return
Then I deactivated Off to On and On to Off and activated Value Change. At least the timeline moves on in frames now; unfortunately it jumps two at a time, but that seems good enough for me. Can you follow?
Hi, have you solved that? 😢 I have the same problem as you, and I also see the independent time value down below shows red, but in the tutorial it's white.
getting stuck at 5:06 when I change the rate to cookrate() the Rate field goes to 0.01 and turns red. Anyone else experiencing this?
I'm wondering how to make this using comfyui for TD?
This video is a gift and the best way to start 2024. Thank you. Do you know if the SD_API supports LoRAs?
when using image to image, how do you change the input resolution?
Great video!!!! Which graphics card are you using?
I need some help. I have recorded, but the recording almost speeds past the frames, and the sound is sped up, basically like a screech. Realtime is turned off.
How do I connect the local stable-diffusion-webui with TouchDesigner? I do not know how to build the SD-API shown in this video.
This one is big!! Awesome and easy solution to a pretty complicated problem❤
Please, can you continue this video by combining it with Kinect?
Thank you for this wonderful tutorial! There doesn't seem to be any currentframe channel, only one named framecount. The issue is that after every frame rendered, the value goes back to 0, meaning it is pulsing twice on every render, which also means the independant base is cooking twice for every Stable Diffusion cook. Do you have any ideas there?
super klas masterklas kvas
So I'm really close, but on the CHOP Exec 2 I keep getting the error Cannot find function named OnValueChange (project1/chopexec2). I've gone over the tutorial a couple of times and can't figure out what I'm doing wrong.
OnValueChange was deleted out by him, so it shouldn't be in yours anymore.
Hi! I need help. I don't know why, but when I press "Launch webUI" this message pops up:
Couldn't launch python
exit code: 9009
stderr:
Python was not found; run without arguments to install from the Microsoft Store, or disable this shortcut in Settings > Manage App Execution Aliases.
Launch unsuccessful. Exiting.
Press any key to continue . . .
Same here. Have you found a solution?
Which Windows requirements do I need to run all this?
You my friend are a legend! I followed your video to the letter and finally actually understand TD a lot more! Thank you
Awesome tutorial, thank you so much! I am not sure, but I think one thing people could stumble over is your spelling of "independent" as "independant", and then getting the reference in the code wrong at the end.
Got stuck on the simple resample CHOP; how do I create that?
it's a custom .tox you can find on my patreon but it's really just a resample chop. look at the example here: ruclips.net/video/NJE48IVzNVc/видео.html
@@elekktronaut thanks! found it!
Awesome tutorial, thank you!
For me the workflow fails to run smoothly though; it seems there is a problem with the frame-by-frame workflow. When I use SD to advance on currentframe, it doesn't always send the signal to play the independant timeline. When I hook up a timer to count up the currentframe, it updates as expected. Might be because I run on a Mac M1... Maybe it's some lag issue in TD. I will try this on a faster Windows PC soon.
Thanks! Interesting. There's another hacky way of doing this: put an Info CHOP on the texture coming out of SD_API and use the total_cooks channel instead of currentframe. Or maybe you've got to use a Trigger CHOP to make sure it's really hitting one; maybe you're dropping frames somewhere. Is realtime turned off?
@@elekktronaut lets gooo, turning off realtime fixed it. Also working on a faster setup works, but might drop frames. Thanks again!
@@nime1575 cool! frame drops shouldn't happen when realtime's turned off, that's literally why you turn it off 😌
@nime1575 I was having the same issue; thanks to your comment I figured out that I also hadn't turned off realtime. 🥲
I did everything, it works fine but I have a problem. When I activate Keep Generating the image input does not change but the frames advance, when I generate with the Generate Frame Pulse button it works. Where can I check?
NVM, turning realtime OFF fixed it!!!!! I need to read more lol
I've been waiting for this! You are a genius madman, thank you! 😄
Hello. Thank you so much for the video. For me, whenever I link the Audio CHOP to Moviefileout, it prevents the local timeline from advancing for some reason. And audio stops moving forward. Can you help?
Upon further inspection, this only seems to be happening with "Stop-frame Movie". The audio file works fine with other types of export.
When I change the settings to movie instead of stop-frame movie it's completely broken. How did you manage to make that run?
Awesome tutorial! I got everything working, but with the same settings as in the video I get runaway feedback where the background noise and any bright spots eventually go to white and start growing. Any suggestions on how to limit that, or maybe I missed some setting? Thanks!
I think I fixed it, I was over-sharpening. Thanks again!
I really appreciate your effort. Many and many thanks.
omg i know what im doing tonight! great job as always!
Everything works fine for me but the audio sounds laggy. I have the audio fixed to the timeline and realtime is off. Any ideas where to look for the error?
Same problem here
@francescbecerracabrera2837 A friend told me he had to adjust the animation speed of a noise to fix it, but I haven't touched the project since then.
gonna do this after work, thank you brother!
For some reason my frames aren't updating while recording... it records and updates a few frames correctly, then overlaps the frames without updating the base noise, and the audio glitches too. Any idea why that's happening?
I have the same problem; did you manage to fix it?
Same here, my audio basically glitches and generations just fly past.
Great tutorial, thanks!!
Thanks for this :) Strange, but for me the frame-by-frame seems to skip forward 2 frames at a time for some reason, and I seem to have duplicate frames in the final movie. Not an unfixable issue, as I can work around it by rendering individual frames and removing the duplicates, but I can't figure out why it's doing that yet...
I have the same problem, just wanna push your question up a bit.
Make sure your FPS is set to 24.
this is amazing, thank you! Found a way to trigger the next frame of the independent component that's maybe a bit simpler, since it can be run as a single script:
timeOp.par.play = 1
run("timeOp.par.play = 0", delayFrames=2)
Could you please add more information about your tip? Your syntax is different from what is shown in the video: op('independant/local/time').par.play = 1
Please give more information on where to set up the script. Thanks.
amazing tutorial!!! though I have a question - when the time is being triggered to play, instead of moving by just 1 frame, i think it jumps more (eg. from 00:01:13:13 to 00:01:59:01) . How can I fix that?
thanks! hmm that's odd. you can try a different approach someone on discord shared, which might be better anyways. so in the chop execute you'd use this expression: op('local/time').frame +=1
@elekktronaut So I deleted the entire contents of chopexec2 and added:
def onValueChange(channel, sampleIndex, val, prev):
op('independent/local/time').frame +=1
return
Then I deactivated Off to On and On to Off and activated Value Change. At least the timeline moves on in frames now; unfortunately it jumps two at a time, but that seems good enough for me.
I guess you're not on 24 fps, right? It's somehow connected to the fps, but I don't know why.
Hi Bileam, I followed every single step, but my independent timeline didn't seem to move. I put a Count CHOP after the Delay CHOP; it seems like every move was detected but made no difference to the Time COMP. I downloaded the exact same model you were using, but my flower is so plain, with no details like leaves, stems, and textures.
Maybe you were calling the Base component "independAnt" like he does and then wrote "independEnt" in the code, or the other way around? That was something I stumbled over...
At first it's not supposed to move.
Great video! waiting for pt 2
Sorry for naive question: would this in theory mean each generated frame is a stable diffusion token (so rendering would cost money, basically) ? Thank you :)
Yes if you're accessing a paid cloud backend, but afaik the automatic1111 is meant for local use on your own hardware, which is free as long as your hardware can handle it.
exactly. no tokens involved, this is running locally and there's no limit but your gpu. it's completely free (except for the patreon support for dotsimulate, but there's alternatives as well) :)
LETSSS GOOOOOOOOOOO
Thank you for sharing this 💖
this is incredible so much information here! thank you! *is working fine on mac m1
4 real ?!! OMGGG
@@louisfievet9341 for real!
@electromagneticgoldstar7903 Hi! I was wondering if you've had any problems? I did the whole GitHub process for Mac. I also got the famous sentence "To create a public link, set `share=True` in `launch()`", which I guess means everything is OK, like Bileam said. But unfortunately I can't create an image :( (I've got an M1 too.)
Damn, how did you do that? I'm trying to install AUTOMATIC1111 on my Mac, and when I try to run ./webui.sh in the terminal there's one message over and over again: "Stable diffusion model failed to load" :/
Obviously, you are lying. Please do not misguide people here
Hi, I'm a beginner to TouchDesigner, I love this tut, can you please explain what this simple-resample is?
me too
@BrightHeart-e3y It's just a Resample CHOP. Make sure to turn time slice off.
@soundswhile9529 Hi, I am new as well, and am wondering how to specifically change the number of samples using a Resample CHOP. When he types "60" into the "num samples" field at 8:33, I don't know how I would resample to 60 using a default Resample CHOP.
@soundswhile9529 thank you!!
Very very nice 👌
Do you know if this works on Mac?
It does, but image generation takes about 1:30 on my M1 at 25 samples.
King
nice technique! quick question: if using 24FPS video, why resample the audio to 60? as a sidenote: FlowFrames and Video2X are great free alternatives to Da Vinci and Topaz :)
thanks, also for the recommendation! the resampling defines the amount of circles for instancing :)
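To demystify what the Resample CHOP is doing here: it's essentially linear interpolation down to a fixed sample count (with time slicing off). A rough pure-Python equivalent, a conceptual sketch rather than TouchDesigner code:

```python
def resample(samples, n):
    """Linearly resample a list of values to exactly n samples,
    roughly what a Resample CHOP does with time slicing off."""
    if n == 1:
        return [samples[0]]
    step = (len(samples) - 1) / (n - 1)
    out = []
    for i in range(n):
        pos = i * step                         # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# e.g. squeeze a 512-bin audio spectrum down to 60 values,
# one per instanced circle
spectrum = [i / 511 for i in range(512)]
circles = resample(spectrum, 60)
```

So "num samples = 60" fixes the number of instances regardless of the spectrum's original resolution, which is why it's independent of the video's 24 FPS.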
@@elekktronaut ahhh, missed that bit (admittedly watching at 1.5x speed lol), thanks for clarifying!
There is also the Deforum AI video making extension for Automatic1111 which has a video upscaler and it does interpolation too 👍🏻
@@kiksu1 definitely. would be verrry interesting to extend the SD COMP to also support Deforum, Warp, and/or TemporalNet from within TD
Hello, I love your tutorials!
I have a question: if my Stable Diffusion takes 6 minutes to generate an image (I have an AMD GPU that can't use Nvidia features, and I've been researching that for one complete day lol), do you think it is still possible to follow your tutorial?
This one's been on my wishlist for a while now! 🥹 So happy to finally see a way to connect that webui with TD!!!