Sonic Flowers - TouchDesigner x StableDiffusion Tutorial 1

  • Published: 1 Oct 2024

Comments • 127

  • @dotsimulate
    @dotsimulate 1 year ago +26

    Such an amazing overview and a powerful approach with the independent time component. This is awesome! Thank you!

  • @metrognomed
    @metrognomed 1 year ago +4

    Is anyone else having trouble with Image-to-Image in SD_API? Text-to-Image works fine for me, but Image-to-Image does not seem to do anything. I don't see a progress bar in the terminal, for example.

  • @jarygergely2074
    @jarygergely2074 1 year ago +4

    I don't see the progress bar in the terminal and don't get the image, only if I generate the image from Microsoft Edge, which opens when I launch the webui. Can somebody help me with this?

  • @mobioboris1
    @mobioboris1 1 year ago +6

    Hello! Thanks for this tutorial.
    To challenge myself I chose to create my own API with the tutorial you linked: ruclips.net/video/4khcLvGjoX8/видео.html
    As a beginner I see there is a huge difference between the Interactive & Immersive HQ API and the one from DotSimulate; a lot of parameters are missing. Also, the .toe created in the tutorial is a BASE, like a closed box. How can we connect other operators to it?
    Thanks.

  • @tinglesound6621
    @tinglesound6621 8 months ago +7

    Hi! Sorry, but I can't figure out how to create a custom parameter for the Resample CHOP like you have in your simpleresample component. In this case, it seems like I need to edit the FFT size in the Audio Spectrum CHOP? Sorry if this is such a noob question, but I have no clue how to do that!

    • @soprano3317
      @soprano3317 8 months ago +5

      @elekktronaut I have the same question and have been following your tutorials - do you have any other tutorial where you built the simple_resampler? Thanks

    • @yaraalhusaini2551
      @yaraalhusaini2551 4 months ago +2

      Did you ever figure this out? I am stuck on the same thing!

    • @bridgetteteare3856
      @bridgetteteare3856 3 months ago +2

      I'm looking for help on this part too, any luck?

    • @tinglesound6621
      @tinglesound6621 2 months ago +1

      @@yaraalhusaini2551 Yes! I simply went to his Patreon to find the simple resample CHOP

    • @jiyunnam2864
      @jiyunnam2864 2 months ago

      ruclips.net/video/NJE48IVzNVc/видео.html 3:00

  • @seulkireadlim2869
    @seulkireadlim2869 7 months ago +1

    I'm a student. Where is the SD API in TouchDesigner? Please, I'd like more detail.

  • @胡图图-d9b
    @胡图图-d9b 1 month ago +1

    Hello, where can I download the simple_resample plug-in shown in the video? Thanks

  • @nico3144
    @nico3144 1 year ago +1

    Question: whenever I hit Launch WebUI, it doesn't launch the web UI in the command panel; it says invalid syntax error. Please help!

  • @Schall-und-Rauch
    @Schall-und-Rauch 9 months ago +1

    In case anybody ran into the same problem: I couldn't launch the webui from the API node around 19:20. I had installed the WebUI following the instructions "Install and Run on NVidia GPUs" - Automatic Installation - Windows (method 1). Then I deleted the sd.webUI folder and followed the instructions for "Windows (method 2)", first installing Python and then git, and it worked immediately.

    • @MikeHancho663
      @MikeHancho663 3 months ago

      BUMP!
      Solved my issue thank you SO much!!!

  • @Raul-ym4ly
    @Raul-ym4ly 1 year ago +4

    YES FINALLY stable diffusion, thank you sooo much man!!!! Love your channel

  • @Schall-und-Rauch
    @Schall-und-Rauch 9 months ago +2

    Has anyone figured out how to get less change between the individual frames, so the movie looks more fluid? None of the parameters I tried in Transform and Level really worked.

    • @abrandtneris
      @abrandtneris 9 months ago +1

      Did you figure this out? I have the same question.

    • @spacefordigitalvisualresea8031
      @spacefordigitalvisualresea8031 8 months ago

      You should read up on ControlNets. They control the change between images. @@Schall-und-Rauch

  • @peacekulture
    @peacekulture 8 months ago +2

    I am stuck at the point of launching the SD Webui from TD. MacBook Pro M2 Max. The Webui launches fine from Terminal, but when I put the path into the SD_API's SD Webui Folder field in the API Settings, nothing launches when I press the Pulse button. Has anyone else had similar issues, or could point me in the right direction? Eternal thanks.

    • @clee6030
      @clee6030 6 months ago +1

      Hitting the same issue! Did you figure out how to get unblocked?

    • @time_itself
      @time_itself 6 months ago

      I believe that StreamDiffusion is only compatible with Windows and NVIDIA cards; you might be SOL

    • @leotromano
      @leotromano 5 months ago

      also curious if you had any luck?

    • @xiaotingtan3369
      @xiaotingtan3369 3 months ago

      I hit the same issue, did you figure it out?

    • @peacekulture
      @peacekulture 2 months ago +1

      @@xiaotingtan3369
      @leotromano
      @time_itself
      @clee6030 I never did figure it out. I just launch the webui from Terminal and then it operates as normal. I copied the launch command into a document and just copy-paste it when I need it.

  • @AngelsEgg9
    @AngelsEgg9 4 months ago

    Because I am trying this now with a newer version of dotsimulate's SD API: when I grab a null from the API, it doesn't have a currentframe channel in it, just Streamactive, framecount, and fps. :( How do I go about this now?

  • @clee6030
    @clee6030 5 months ago

    Hi! Thank you for the great video :) I've completed all the steps and generated 1000 frames overnight, but I'm confused about how to create an mp4 file from the generated frames. What would you recommend?
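
A common way to stitch an exported frame sequence into an mp4 is ffmpeg. Below is a minimal Python sketch that just builds the ffmpeg command line; the frame-name pattern, fps, and output name are illustrative assumptions, not from the tutorial:

```python
# Hypothetical helper (not from the tutorial): build an ffmpeg command line that
# stitches a numbered image sequence into an mp4. Assumes frames were exported
# as frame_0001.png, frame_0002.png, ... in the working directory.
def ffmpeg_args(pattern="frame_%04d.png", fps=24, out="out.mp4"):
    """Return the ffmpeg argument list for encoding an image sequence."""
    return [
        "ffmpeg",
        "-framerate", str(fps),   # input rate: one image per frame
        "-i", pattern,            # printf-style pattern matching the files
        "-c:v", "libx264",        # H.264 video codec
        "-pix_fmt", "yuv420p",    # pixel format most players accept
        out,
    ]

# Run it with e.g. subprocess.run(ffmpeg_args(fps=24), check=True),
# or paste the printed command into a terminal.
print(" ".join(ffmpeg_args()))
```

Alternatively, TouchDesigner's own Movie File Out TOP (already used in the tutorial) can write an mp4 directly instead of single frames.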

  • @AnaMariaPires8915
    @AnaMariaPires8915 3 months ago

    Heyy, I got stuck on the scale instancing part of this tutorial. I followed your 'simple-resample' CHOP tutorial, but I got stuck on what to specify for 'scale x' and 'scale y' under 'Scale OP'.

  • @irgendwaer3000
    @irgendwaer3000 3 months ago

    I did some testing, and for me the Select TOP you put in to smooth the flickering ended up giving me more flickering. Sometimes the AI is also triggered by the very low opacity input and "jumps" back to generating some elements at the "old" positions of the blobs, or gets stuck at positions. I tried to set up a feedback loop which fades out the fed-back animation over time, but couldn't figure it out.

  • @大山SAN
    @大山SAN 5 months ago

    Please help: python3.10 executable not found.

  • @hecosmos1996
    @hecosmos1996 1 year ago +1

    Why is the SD_API link not supported?

  • @geoseatooo
    @geoseatooo 9 months ago +1

    Followed all the steps, but the independent timeline still doesn't advance once an image is generated. The only difference I can see is that the newer version of SD_API doesn't have anything linked to the Current Frame parameter by default (within the SD API); your version does have something linked there. Would you mind sharing what your expression is?

    • @Schall-und-Rauch
      @Schall-und-Rauch 9 months ago +3

      As suggested by Bileam in another comment, I deleted the entire contents of chopexec2 and added:
      def onValueChange(channel, sampleIndex, val, prev):
          op('independent/local/time').frame += 1
          return
      Then I deactivated Off to On and On to Off, and activated Value Change. At least the timeline moves on in frames now; unfortunately it jumps two at a time, but that seems good enough for me. Can you follow?

    • @visionz5776
      @visionz5776 7 months ago

      Hi, have you solved that? 😢 I have the same problem, and I also see the independent time value down below shows red, but in the tutorial it's white.

  • @KaleidoKurt
    @KaleidoKurt 5 months ago

    Getting stuck at 5:06: when I change the rate to cookrate(), the Rate field goes to 0.01 and turns red. Anyone else experiencing this?

  • @ArashBaqipur
    @ArashBaqipur 6 months ago

    I'm wondering how to make this using ComfyUI for TD?

  • @ysy69
    @ysy69 9 months ago

    This video is a gift and the best way to start 2024. Thank you. Do you know if the SD_API supports LoRAs?

  • @ThomasMYoutube
    @ThomasMYoutube 3 months ago

    When using image-to-image, how do you change the input resolution?

  • @mendoziya
    @mendoziya 4 months ago

    Great video!!!! Which graphics card are you using?

  • @istarothe
    @istarothe 5 months ago

    I need some help: I have recorded, but the recording basically speeds past the frames, and the sound is sped up, basically like a screech. Realtime is turned off.

  • @JiaCUI-gd3qu
    @JiaCUI-gd3qu 9 months ago

    How do you connect the local stable-diffusion-webui with TouchDesigner? I don't know how to build the SD_API shown in this video.

  • @BasEkkelenkamp
    @BasEkkelenkamp 1 year ago +1

    This one is big!! Awesome and easy solution to a pretty complicated problem❤

  • @simarimbunetaliasidabutar4476
    @simarimbunetaliasidabutar4476 8 months ago

    Please, can you continue this video, combining it with Kinect?

  • @SMAWxyz
    @SMAWxyz 8 months ago

    Thank you for this wonderful tutorial! There doesn't seem to be any currentframe channel, only one named framecount. The issue is that after every frame rendered the value goes back to 0, meaning it pulses twice on every render, which also means the independent base is cooking twice for every Stable Diffusion cook. Do you have any ideas there?

  • @dariayakubovska1877
    @dariayakubovska1877 6 months ago +1

    super klas masterklas kvas

  • @RussellKlimas
    @RussellKlimas 1 year ago

    So I'm really close, but on the CHOP Exec 2 I keep getting the error "Cannot find function named OnValueChange (project1/chopexec2)". I've gone over the tutorial a couple of times and can't figure out what I'm doing wrong.

    • @Schall-und-Rauch
      @Schall-und-Rauch 9 months ago

      OnValueChange was deleted by him, so it shouldn't be in yours anymore.

  • @brunotripodi
    @brunotripodi 8 months ago

    Hi! I need help. I don't know why, but when I press "Launch webUI" this message pops up:
    Couldn't launch python
    exit code: 9009
    stderr:
    No se encontró Python; ejecuta sin argumentos para instalar desde Microsoft Store o deshabilita este acceso directo en Configuración > Administrar alias de ejecución de la aplicación. [Python was not found; run without arguments to install it from the Microsoft Store, or disable this shortcut in Settings > Manage app execution aliases.]
    Launch unsuccessful. Exiting.
    Presione una tecla para continuar . . . [Press any key to continue . . .]

    • @L3K1P
      @L3K1P 5 months ago

      Same here. Have you found a solution?

  • @triangulummapping4516
    @triangulummapping4516 5 months ago

    What Windows requirements do I need to run all this?

  • @excido7107
    @excido7107 7 months ago

    You, my friend, are a legend! I followed your video to the letter and finally actually understand TD a lot more! Thank you

  • @Qwert_Zuiop
    @Qwert_Zuiop 1 year ago

    Awesome tutorial, thank you so much! I'm not sure, but I think people could stumble over your spelling of "independent" as "independant" and then get the referencing in the code wrong at the end.

  • @digitalflick
    @digitalflick 1 year ago +1

    Got stuck on the simple resample CHOP, how do I create that?

    • @elekktronaut
      @elekktronaut  1 year ago +5

      it's a custom .tox you can find on my patreon but it's really just a resample chop. look at the example here: ruclips.net/video/NJE48IVzNVc/видео.html

    • @digitalflick
      @digitalflick 1 year ago +1

      @@elekktronaut thanks! found it!
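
As the author says above, simple_resample is essentially just a Resample CHOP. For readers wondering what that operation actually does, here is a pure-Python sketch of linearly resampling a channel down to a fixed number of samples, the way the tutorial resamples the audio spectrum to 60 values for instancing. This is illustrative only; inside TouchDesigner you would simply use a Resample CHOP with Num Samples set to 60 and Time Slice off:

```python
# Conceptual stand-in for a Resample CHOP: linearly interpolate a channel
# (a list of floats) down or up to num_out evenly spaced samples.
def resample(samples, num_out):
    """Linearly resample a list of values to num_out evenly spaced values."""
    if num_out == 1:
        return [samples[0]]
    step = (len(samples) - 1) / (num_out - 1)
    out = []
    for i in range(num_out):
        pos = i * step                      # fractional position in the input
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

spectrum = [0.0] * 512                      # stand-in for an Audio Spectrum channel
print(len(resample(spectrum, 60)))          # 60 values -> 60 instanced circles
```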

  • @nime1575
    @nime1575 1 year ago +1

    Awesome tutorial, thank you!
    For me the workflow fails to run smoothly though; it seems there is a problem with the frame-by-frame workflow. When I use SD to proceed on currentframe, it doesn't always send the signal to play the independant timeline. When I hook up a timer to count up currentframe, it updates as expected. Might be because I run on a Mac M1... Maybe it's some lag issue in TD. I will try this on a faster Windows PC soon.

    • @elekktronaut
      @elekktronaut  1 year ago +3

      Thanks! Interesting. There's another hacky way of doing this: use an Info CHOP on the texture coming out of SD_API and use the total_cooks channel instead of Currentframe. Or maybe you've got to use a Trigger CHOP to make sure it's really hitting one; maybe you're dropping frames somewhere. Is realtime turned off?

    • @nime1575
      @nime1575 1 year ago +1

      @@elekktronaut lets gooo, turning off realtime fixed it. Working on a faster setup also works, but might drop frames. Thanks again!

    • @elekktronaut
      @elekktronaut  1 year ago

      @@nime1575 cool! frame drops shouldn't happen when realtime's turned off, that's literally why you turn it off 😌

    • @melihg.kutsal7566
      @melihg.kutsal7566 1 year ago +1

      @@nime1575 I was having the same issue; thanks to your comment I figured out that I also hadn't turned off realtime. 🥲

  • @apoca07
    @apoca07 1 year ago

    I did everything and it works fine, but I have a problem. When I activate Keep Generating, the image input does not change but the frames advance; when I generate with the Generate Frame pulse button, it works. What should I check?

    • @apoca07
      @apoca07 1 year ago +2

      NVM, turning realtime OFF fixed it!!!!! I need to read more lol

  • @kiksu1
    @kiksu1 1 year ago

    I've been waiting for this! You are a genius madman, thank you! 😄

  • @massakalli
    @massakalli 1 year ago

    Hello. Thank you so much for the video. For me, whenever I link the Audio CHOP to the Movie File Out, it prevents the local timeline from advancing for some reason, and the audio stops moving forward. Can you help?

    • @massakalli
      @massakalli 1 year ago +1

      Upon further inspection, this only seems to be happening with "Stop-frame Movie". The audio file works fine with other types of export.

    • @spacefordigitalvisualresea8031
      @spacefordigitalvisualresea8031 8 months ago

      When I change the setting to Movie instead of Stop-Frame Movie, it's completely broken. How did you manage to make that run?
      @@massakalli

  • @ph0enixr
    @ph0enixr 10 months ago

    Awesome tutorial! I got everything working, but with the same settings as in the video I get runaway feedback where the background noise and any bright spots eventually go to white and start growing. Any suggestions on how to limit that, or maybe I missed some setting? Thanks!

    • @ph0enixr
      @ph0enixr 10 months ago

      I think I fixed it, I was over-sharpening. Thanks again!

  • @HYBE02
    @HYBE02 1 year ago

    I really appreciate your effort. Many, many thanks.

  • @qde2
    @qde2 1 year ago

    omg i know what im doing tonight! great job as always!

  • @spacefordigitalvisualresea8031
    @spacefordigitalvisualresea8031 8 months ago

    Everything works fine for me, but the audio sounds laggy. I have the audio fixed to the timeline and realtime is off. Any ideas where to look for the error?

  • @Philemon888
    @Philemon888 1 year ago

    gonna do this after work, thank you brother!

  • @aryansingh5470
    @aryansingh5470 9 months ago

    For some reason my frames aren't updating while recording... it records and updates a few frames correctly, then overlaps the frames without updating the base noise, and the audio glitches. Any idea why that's happening?

    • @secilkurtulus9368
      @secilkurtulus9368 6 months ago

      I have the same problem, did you manage to fix it?

    • @istarothe
      @istarothe 5 months ago

      Same here, my audio basically glitches and generations just fly past.

  • @marcovioletvianello
      @marcovioletvianello 8 months ago

    Great tutorial, thanks!!

  • @NicholasCarn
    @NicholasCarn 10 months ago

    Thanks for this :) Strangely, for me the frame-by-frame seems to skip forward 2 frames at a time for some reason, and I seem to have duplicate frames in the final movie. Not an unfixable issue, as I can work around it by rendering individual frames and removing the duplicates, but I can't figure out why it's doing that yet...

  • @prismatic.visuals
    @prismatic.visuals 1 year ago +1

    this is amazing, thank you! Found a way to trigger the next frame of the independent component that's maybe a bit simpler, since it can be run as a single script:
    timeOp.par.play = 1
    run("timeOp.par.play = 0", delayFrames=2)

    • @vsevolodtaran4818
      @vsevolodtaran4818 1 year ago

      Could you please add more information about your tip? Your syntax is different from what is shown in the video: op('independant/local/time').par.play = 1

    • @rudolf_II
      @rudolf_II 1 year ago

      Please give more information on where to set up the script. Thanks.

  • @AnyaTran
    @AnyaTran 1 year ago

    Amazing tutorial!!! Though I have a question: when the time is triggered to play, instead of moving by just 1 frame, I think it jumps more (e.g. from 00:01:13:13 to 00:01:59:01). How can I fix that?

    • @elekktronaut
      @elekktronaut  1 year ago +2

      thanks! hmm, that's odd. you can try a different approach someone on discord shared, which might be better anyway. in the chop execute you'd use this expression: op('local/time').frame += 1

    • @Schall-und-Rauch
      @Schall-und-Rauch 9 months ago +1

      @@elekktronaut So I deleted the entire contents of chopexec2 and added:
      def onValueChange(channel, sampleIndex, val, prev):
          op('independent/local/time').frame += 1
          return
      Then I deactivated Off to On and On to Off, and activated Value Change. At least the timeline moves on in frames now; unfortunately it jumps two at a time, but that seems good enough for me.

    • @spacefordigitalvisualresea8031
      @spacefordigitalvisualresea8031 8 months ago

      I guess you're not on 24 fps, right? It's somehow connected to the fps, but I don't know why.
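
One plausible explanation for the "jumps two at a time" behaviour reported in this thread (an educated guess, not confirmed by the tutorial author): a CHOP Execute's onValueChange callback fires on every change of the watched channel, i.e. both when it rises 0 to 1 and when it falls back 1 to 0, so an unconditional `frame += 1` advances twice per render. A pure-Python mock, outside TouchDesigner, with the channel values and names invented for illustration:

```python
# Mock of a CHOP Execute onValueChange callback. Gating on the rising edge
# (val > prev) makes the counter advance once per render instead of twice.
frame = 0

def on_value_change(val, prev):
    """Advance the mock timeline only on the rising edge of the channel."""
    global frame
    if val > prev:          # rising edge only; ignore the 1 -> 0 fall
        frame += 1

# One render = the "frame done" channel rises, then falls again.
for val, prev in [(1, 0), (0, 1), (1, 0), (0, 1)]:
    on_value_change(val, prev)

print(frame)  # two simulated renders -> timeline advanced by 2, not 4
```

In TouchDesigner the same gating could be expressed inside the real callback, or by enabling only the Off to On event instead of Value Change.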

  • @HeLevi
    @HeLevi 1 year ago

    Hi Bileam, I followed every single step, but my independent timeline didn't seem to move. I put a Count CHOP after the Delay CHOP, and it seems like every move was detected, but it made no difference to the Time COMP. I downloaded the exact same model you were using there, but my flower is so plain, with no details like leaves, stems and textures.

    • @Qwert_Zuiop
      @Qwert_Zuiop 1 year ago +1

      Maybe you were calling the Base component "independAnt" like he is, and then wrote "independEnt" in the code, or the other way around? That was something I stumbled over...

    • @adrianarvidsson1384
      @adrianarvidsson1384 8 months ago

      At first it's not supposed to move.

  • @marcoaccardi
    @marcoaccardi 1 year ago

    Great video! waiting for pt 2

  • @connergriffith3601
    @connergriffith3601 1 year ago

    Sorry for the naive question: would this in theory mean each generated frame is a Stable Diffusion token (so rendering would cost money, basically)? Thank you :)

    • @aulerius
      @aulerius 1 year ago +3

      Yes, if you're accessing a paid cloud backend, but afaik automatic1111 is meant for local use on your own hardware, which is free as long as your hardware can handle it.

    • @elekktronaut
      @elekktronaut  1 year ago +3

      exactly. no tokens involved, this is running locally and there's no limit but your gpu. it's completely free (except for the patreon support for dotsimulate, but there's alternatives as well) :)

  • @iloveallvideos
    @iloveallvideos 8 months ago

    LETSSS GOOOOOOOOOOO

  • @Nanotopia
    @Nanotopia 1 year ago

    Thank you for sharing this 💖

  • @electromagneticgoldstar7903
    @electromagneticgoldstar7903 1 year ago +5

    this is incredible, so much information here! thank you! *it's working fine on mac m1

    • @louisfievet9341
      @louisfievet9341 1 year ago +1

      4 real ?!! OMGGG

    • @electromagneticgoldstar7903
      @electromagneticgoldstar7903 1 year ago +2

      @@louisfievet9341 for real!

    • @louisfievet9341
      @louisfievet9341 1 year ago

      @@electromagneticgoldstar7903 Hi! I was wondering if you've had any problems? I did the whole GitHub process for Mac and also got the famous sentence "To create a public link, set `share=True` in `launch()`", which I guess means everything is OK, like Bileam said. But unfortunately I can't create an image :( (Got an M1 too)

    • @mateuszsarapata
      @mateuszsarapata 1 year ago

      Damn, how did you do that? I'm trying to install AUTOMATIC1111 on my Mac, and when I try to run ./webui.sh in the terminal there's one message over and over again: "Stable diffusion model failed to load" :/

    • @bardoof
      @bardoof 5 months ago

      Obviously you are lying. Please do not mislead people here.

  • @baoqiancheng8224
    @baoqiancheng8224 10 months ago

    Hi, I'm a beginner with TouchDesigner. I love this tut; can you please explain what this simple-resample is?

    • @BrightHeart-e3y
      @BrightHeart-e3y 9 months ago

      me too

    • @soundswhile9529
      @soundswhile9529 9 months ago +5

      @@BrightHeart-e3y it's just a resample chop. make sure to turn time slice off

    • @blackleatherboots
      @blackleatherboots 8 months ago

      Hi, I am new as well, and am wondering how to change the number of samples using a Resample CHOP. When he types "60" into the "num samples" field at 8:33, I don't know how I would resample to 60 using a default Resample CHOP. @@soundswhile9529

    • @soprano3317
      @soprano3317 8 months ago

      @@soundswhile9529 thank you!!

  • @Data_Core_
    @Data_Core_ 1 year ago

    Very very nice 👌

  • @mustaTraceur
    @mustaTraceur 8 months ago

    Do you know if this works on Mac?

    • @ericgoldstein2051
      @ericgoldstein2051 6 months ago

      It does, but image generation takes about 1:30 on my M1 at 25 samples.

  • @Luigih12
    @Luigih12 1 year ago

    King

  • @TheGladScientist
    @TheGladScientist 1 year ago +3

    Nice technique! Quick question: if using 24 fps video, why resample the audio to 60? As a side note: FlowFrames and Video2X are great free alternatives to DaVinci and Topaz :)

    • @elekktronaut
      @elekktronaut  1 year ago

      thanks, also for the recommendation! the resampling defines the amount of circles for instancing :)

    • @TheGladScientist
      @TheGladScientist 1 year ago +1

      @@elekktronaut ahhh, missed that bit (admittedly watching at 1.5x speed lol), thanks for clarifying!

    • @kiksu1
      @kiksu1 1 year ago +1

      There's also the Deforum AI video-making extension for Automatic1111, which has a video upscaler and does interpolation too 👍🏻

    • @TheGladScientist
      @TheGladScientist 1 year ago +3

      @@kiksu1 definitely. would be verrry interesting to extend the SD COMP to also support Deforum, Warp, and/or TemporalNet from within TD

  • @pierreleveille515
    @pierreleveille515 9 months ago

    Hello, I love your tutorials!
    I have a question: if my Stable Diffusion takes 6 minutes to generate an image (I have an AMD GPU that can't use Nvidia features, and I've been researching that for a complete day lol), do you think it is still possible to do your tutorial?

  • @Markdood88
    @Markdood88 1 year ago

    This one's been on my wishlist for a while now! 🥹 So happy to finally see a way to connect that webui with TD!!!