TEMPORALKIT - BEST EXTENSION THAT COMBINES THE POWER OF SD AND EBSYNTH!

  • Published: 20 Jun 2024
  • This is a tutorial on how to install and use TemporalKit for Stable Diffusion Automatic 1111. This extension uses Stable Diffusion and Ebsynth.
    HOW TO SUPPORT MY CHANNEL
    -Support me by joining my Patreon: / enigmatic_e
    _________________________________________________________________________
    SOCIAL MEDIA
    -Join my discord: / discord
    -Instagram: / enigmatic_e
    -Tik Tok: / enigmatic_e
    -Twitter: / 8bit_e
    - Business Contact: esolomedia@gmail.com
    _________________________________________________________________________
    TemporalKit: github.com/CiaraStrawberry/Te...
    7-zip: 7-zip.org/download.html
    Ciara: / ciararowles1
    Ciara Tutorial: • TemporalKit + Ebsynth ...
    Tokyojab: / tokyojab
    TroubleChute: • How To: Download+Insta...
    Install SD
    • Installing Stable Diff...
    Install ControlNet
    • New Stable Diffusion E...
    Chapters
    0:00 Intro
    0:47 What is TemporalKit?
    1:59 Installing TemporalKit
    5:17 Settings
    10:51 IMG2IMG
    14:17 Exporting
    15:05 Ebsynth
    16:58 Experimenting
    18:48 Longer Videos

Comments • 260

  • @cynbot7814 • 1 year ago +5

    Thanks for all your hard work making these tutorials - always excited to see your vids when they come out!

  • @My123Tutorials • 1 year ago

    I've been waiting so long for a solution that makes this process a bit easier and more reliable. Thanks for sharing, man! 🙏😊

  • @dreamzdziner8484 • 1 year ago

    Since you posted the news on Twitter I have been checking here frequently for this tutorial video. Finally we got the consistency we were looking for 🙂 Thanks mate 🙏

  • @Samy91700 • 1 year ago +1

    As always, an amazing tutorial! Thanks for your good work

  • @TroubleChute • 9 months ago +1

    Did not expect to see myself. Thanks for the shoutout!
    I've been busy messing around with vid2vid; everything I've used so far either brings way too much of the original video through (SD-CN-Animation) or creates something incredibly flickery. Busy following this guide :)

    • @enigmatic_e • 9 months ago +1

      Hey!! Thanks for the tut, it helped me out so much. Hopefully this tut is helpful for you. I may need to update it since a lot of people say they’ve recently had a lot of issues.

  • @Daxviews • 1 year ago +11

    For anyone wondering why the Temporal-Kit tab isn't showing up in the web UI:
    You have to install moviepy, too. Just had the issue... after installing moviepy, everything worked fine.

    • @pastuh • 1 year ago

      I think it's auto-installed after you reload the cmd window.

    • @cyril1111 • 1 year ago

      Thanks dude, I had the issue too!

    • @daffertube • 1 year ago

      I just had to close and restart the CMD window.

  • @milan.reiter • 8 months ago

    The video I was looking for, thank you!!

  • @trippyvortex • 1 year ago

    I appreciate that you include the keyboard shortcut tidbits (and the tutorial in general)

  • @theairchitect • 1 year ago

    Always fun watching your tuts! ❤❤😍😍

  • @DSHGOA • 1 year ago

    Love you man! Funny, interesting, very helpful!

  • @karim_yourself • 1 year ago +6

    Great video! If you select a part of your prompt, hold Ctrl, and press arrow up or down, you can directly change the weights of the keywords in your prompt :)

  • @iiLikePiez • 1 year ago

    great video, very thorough and helpful, thanks!

  • @artprasert6577 • 10 months ago

    So cool! Thanks for the great tutorial.

  • @user-ni2we7kl1j • 1 year ago +28

    I wonder if it's possible to increase the consistency between frame groups by including previously generated frames which are masked on img2img step
    For example, let's say we extract 2x2 groups of frames from the video and include 2 previous frames:
    1) We stylize the first group:
    1 2 => *1 *2
    3 4 => *3 *4
    // Numbers are frame indices, "*" means stylized frame
    2) Append two last frames to next group:
    *3 *4 => *3 *4 // *5 *6
    7 8 => *7 *8
    3) Repeat for each group
    *7 *8 => *7 *8
    9 10 => *9 *10
    11 12 => *11 *12
    In the end we end up with one 2x2 group and several 3x2 groups, which are (hopefully) more temporally coherent than regular 2x2 groups
    I would like to try this myself, but my PC is a potato that can barely handle 768x768 generation, and you obviously need a lot more power to do this trick with several ControlNETs :(
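The overlapping-group idea above can be sketched in a few lines. This is only an illustration of the grouping scheme the commenter describes, not TemporalKit's actual code; `overlapping_groups` and its parameters are hypothetical names.

```python
# Sketch (hypothetical, not TemporalKit's implementation): build img2img batches
# of frame indices where every batch after the first re-includes the last
# `overlap` already-stylized frames as context, so the diffusion pass sees them
# and (hopefully) stays temporally coherent across group boundaries.
def overlapping_groups(num_frames, group_size=4, overlap=2):
    groups = []
    start = 0
    while start < num_frames:
        # prepend the tail of the previous group as stylized context frames
        context = groups[-1][-overlap:] if groups else []
        body = list(range(start, min(start + group_size, num_frames)))
        groups.append(context + body)
        start += group_size
    return groups
```

For a 12-frame clip in 2x2 groups with 2 context frames, `overlapping_groups(12, 4, 2)` yields `[[0, 1, 2, 3], [2, 3, 4, 5, 6, 7], [6, 7, 8, 9, 10, 11]]` — one 2x2 group followed by 3x2 groups, exactly as described.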

    • @aitz3vil • 1 year ago +1

      I think you're just talking about "Border Key Frame", right?

    • @heyitsjoshd • 1 year ago +1

      Yes, this works. This is how the multi frame renderer A1111 script works. One thing to add is you should regenerate the initial frames with this process too.

    • @mik3lang3lo • 1 year ago +1

      Great reasoning. I would like to add that if you prepare a LoRA with your character, you will achieve great consistency there too.

    • @strangelaw6384 • 7 months ago

      If this works, I wouldn't know why. Running through img2img removes style information from the original image. The style before and after stylizing should not be dependent unless you're using a low denoise... which goes against your original intention of stylizing the image.
      On the other hand, I think you can use the same extra noise for every img2img pass to further improve consistency.

  • @Injaznito1 • 1 year ago

    Nice tutorial! Thanx!!

  • @digital_magic • 1 year ago

    awesome video 🙂

  • @cowlevelcrypto2346 • 1 year ago

    Dude ! Awesome explaining.. :P

  • @agnesslovehealz • 11 months ago

    Yess love

  • @lenny_Videos • 1 year ago

    great video

  • @aicontroversy • 1 year ago

    Thank you for this! I literally started an img2img batch last night for a video and woke up to this new TemporalKit. I was having the same issue last night with ControlNet producing the canny and openpose images, throwing my numbering sequence off. Anyway, that batch is now obsolete thanks to this new TemporalKit! The power of AI, evolving so fast! If you find out how to stop the depth output from ControlNet please let me know, and thanks again for this tutorial!

  • @user-nc2hs4rp7l • 1 year ago

    Perfect tutorial!!

  • @rickardbengtsson • 1 year ago

    Very cool

  • @royalsero848 • 11 months ago

    Hey chef, thanks for the very detailed, awesome video. Can we use inpaint also? Because in the batch header we have a field under the in/out directory named: "Inpaint batch mask directory (required for inpaint batch processing only)".

  • @unreal_unit • 1 year ago

    Great tool. It's like EbSynth Utility, but without masks, and it's a bit faster.

  • @RonnieMirands • 1 year ago +2

    I will confirm this, but if you use a good denoiser after this, the software will interpret these variations as noise and will improve things a lot. The DaVinci Resolve deflicker will polish it a lot as well :)

    • @enigmatic_e • 1 year ago +3

      Yes! Please let me know, I would love to find a solution to this.

    • @matbeedotcom • 1 year ago

      interesting, it could introduce blurriness but with some of these new AI implementations of sharpening and upscaling it could be a moot point

  • @macronomicus • 1 year ago +1

    Try putting more frames in the spritesheet, but then use the ControlNet tile upscaler to make it huge and stylize in one step. Is there a limit to the size EbSynth can work with? Even if the ending video size is still smallish, you could perhaps upscale the video at the end with some other software. GLHF 😜

  • @planktonfun1 • 11 months ago +2

    There's a lot that could be improved here:
    1. In ffmpeg you can extract keyframes per scene change instead of per frame count (fewer keyframes = faster process)
    2. You can use a photo enhancer to enhance low-quality grids (meaning you can fit more tiles = more consistency)
    3. Lastly, enhance the video with another AI for quality and fps
    4. Yeah, it's tedious, but the result is nice
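Step 1 above can be done with ffmpeg's `select` filter. A small helper to build the command line, as a sketch: `scene_keyframe_cmd` is a hypothetical name, and the 0.3 scene-change threshold is an assumption to tune per clip.

```python
# Sketch (hypothetical helper): build an ffmpeg command that extracts one frame
# per detected scene change instead of every Nth frame. Fewer keyframes means
# less manual work in EbSynth.
def scene_keyframe_cmd(video_in, out_pattern, threshold=0.3):
    return [
        "ffmpeg", "-i", video_in,
        # keep only frames whose scene-change score exceeds the threshold
        "-vf", f"select='gt(scene,{threshold})'",
        # variable frame rate output: don't duplicate frames to fill gaps
        "-vsync", "vfr",
        out_pattern,
    ]

# e.g. subprocess.run(scene_keyframe_cmd("input.mp4", "keys/%04d.png"), check=True)
```

Raising the threshold yields fewer, more distinct keyframes; lowering it catches subtler cuts.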

    • @david_ce • 10 months ago

      Could you please suggest software that can be used for each of these steps? I'm new to this and it would help greatly.

    • @RiiahTV • 10 months ago

      Let's see your results bro, where can I find them?

  • @wm.wallace • 1 year ago +4

    Suggestion for higher quality - there's an extension called Tiled VAE or something similar, it lets you generate high res images by splitting up the pictures into tiles. Haven't tried it using this method but it could help

    • @Chrono.. • 1 year ago +3

      It is actually a new model within the ControlNet 1.1 extension. It was released about a week ago and can, with the help of the Ultimate SD Upscaler, give an image not only a much higher resolution but also much more detail.

    • @The_Art_of_AI_888 • 1 year ago

      @@Chrono.. Can it keep up the consistency, or do the details change with every pic?

    • @merodiro • 1 year ago

      @@Chrono.. These are different things. Tiled VAE can work similarly to Ultimate SD Upscaler with ControlNet, but it can also work independently and allow you to generate images at a higher resolution than you usually can with your card without getting an OOM error.

    • @Chrono.. • 1 year ago

      @@merodiro So you're saying that the sole purpose of Tiles is to give the ability to upscale, on cards with less vram? That is, if I have a good video card, Tiles isn't necessary?

    • @ViensVite • 1 year ago

      @@Chrono.. Tiles don't mean your graphics card is bad; it's just splitting, let's say, a mountain into pieces. You still go faster with a better card; it actually improves render times by a lot, at least in 3D renders :)

  • @ekke7995 • 1 year ago

    It's amazing! I've been stuck for the past 40 days. It looks like a lot has happened, but not enough on video consistency using Stable Diffusion. What GPU are you using? I've got an RTX 3060 12 GB but struggle with the limited VRAM. I want to add another RTX 3060 12 GB but don't know if it will work. Any advice? Also, my video clips are between 4-8 seconds, with the longest clip around 20 seconds. Have you been successful over a longer time? My idea is to do a full style transfer with a very high CFG scale, between 0.6 and 1. I was able to get reasonable coherence, but not as good as these short clips from you; however, I was able to maintain consistency over 20 seconds. I did it about 2 months ago, so a lot has changed.

    • @enigmatic_e • 1 year ago

      I currently use RTX3080 10gb and it's pretty good. I do run into issues when I start adding controlnet and make the resolution high.

    • @ekke7995 • 1 year ago +1

      @@enigmatic_e I'm excited and want to test TemporalKit. But even before starting I can already tell the frustration ahead. I'd say the problems I run into are less about A1111 and more about the AMD system. I use a Ryzen 7 CPU and motherboard, so there are always problems with CUDA, drivers, and ffmpeg.

  • @BigBadJerBear • 1 year ago +2

    Interesting. It looks like I was running into issues because of the dimensions of what I was working with, that and using too many images in a grid.
    I was pretty sure I had my dimensions matched up correctly, but I'm wondering if I need to start with a 512x768 or square grid to work with in this method. I know Stable Diffusion does some weirdness if those ratios aren't followed.

    • @strangelaw6384 • 7 months ago

      Typically, you want 512x512 for SD 2.0 or below. Bigger should work fine. A different aspect ratio should work fine until the ratio exceeds around 1.3. HOWEVER, if you're running tiled images, none of what I said is guaranteed to apply anymore.

  • @marcovegano7913 • 1 year ago

    Great tutorial! To solve the quality problem, wouldn't it be possible to take the grid of images, upscale them individually with low denoise, and then run them through TemporalKit? Just an idea. Great video anyway!

    • @enigmatic_e • 1 year ago

      Yeah, there has to be a method to make the quality better. I'll be messing around with it more.

  • @Yic17Gaming • 1 year ago

    You've got to increase the strength of your prompt. So instead of just "cyberpunk robot", put "(cyberpunk robot)" to give it more strength, or even stronger, "(((cyberpunk robot)))". The strongest you can go, I think, is "(((cyberpunk robot:1.5)))".
    Anyway, nice video. I'm gonna go have fun with TemporalKit now. 😃

  • @daffertube • 1 year ago +1

    15:50 You lost me... are you hitting a button here? It doesn't create outputs for me.
    Edit: For some reason the first keyframe was named 0002. It needed to be renamed to 0001 before the Synth button would start the processing work.
    Edit 2: The list of keyframes isn't showing up. You jump-cut and say "it creates all these outputs", but that doesn't happen on my end and I can't see how you did it.

  • @lioncrud9096 • 1 year ago

    This is very interesting; however, I really hope EbSynth evolves to support more keyframes. This can become tedious very fast for anything beyond 3-5 seconds.

  • @ppn7 • 1 year ago

    Hi, do you think EbSynth will be able to do more than 20 keyframes? It's a mess for now to do long videos manually 😅

    • @enigmatic_e • 1 year ago +1

      I hope so. But there's the workaround that I mention at the end of the video.

  • @niftydegen • 1 year ago +4

    Respect for suggesting other channels. That's the way, people pulling each other up. Other channels will never mention another channel and will delete any reference to another channel in comments. Insecure, and a long-term fail.

    • @iamYork_ • 1 year ago

      Yeah, enigmatic is one of the good ones... One of the main reasons I stopped making tutorials was that so many other channels were repackaging my techniques and taking credit for them... digital copycats... Not much can be done, but I'm always happy to see people give credit and help out other creators...

    • @enigmatic_e • 1 year ago +1

      Thanks. Yea I believe in giving credit where credit is due.

    • @matbeedotcom • 1 year ago +1

      Yep, it's in fact smart to do, as it seeds connections to other creators in the search history, which means you're more likely to be surfaced in suggested videos, as they have a similar audience. Keep it up, refer other creators, and it absolutely pays off.

  • @envoy9b9 • 1 year ago

    Great video. Will there be a part 2 for longer videos explaining the split video setting?

    • @enigmatic_e • 1 year ago +1

      I explain it at the end of the video at around 18:48.

    • @envoy9b9 • 1 year ago

      @@enigmatic_e ty ty

  • @CMD424582 • 1 year ago

    Hey man, thanks for the video!! Do you know why EbSynth says my GPU is unavailable under the advanced tab? I'm running an Ubuntu cloud machine with an A100, so the GPU shouldn't be an issue.

    • @enigmatic_e • 1 year ago

      oh man, I don't know. I have no experience with cloud machines. Sorry

  • @StrongzGame • 1 year ago

    Hmmm, what if we used Topaz on the grid first, and then again on the grid after?

    • @enigmatic_e • 1 year ago

      I haven't tried that, but it would be interesting to test.

  • @beyou3015 • 1 year ago

    I have the same problem in my output with the ControlNet preview images. How did you solve this problem?

  • @EranMahalu • 1 year ago

    Thanks for the tutorial. I tried it but got really weird frames in the "frames" folder (all weird, gray and pixelated). What am I doing wrong?

  • @wkrmbm5097 • 1 year ago

    An additional step that could help with consistent img2img is to turn ON "Apply color correction to img2img results to match original colors."

    • @pastuh • 1 year ago

      I would skip this...
      If you want to transform a forest into a hell forest, it will be impossible; everything will stay green.

  • @RoboMagician • 1 year ago

    When I dragged a 16x9 video into TemporalKit, the video covered the text in the UI entirely, making it impossible to change any settings. Is this a bug with 16x9 videos in TemporalKit?

  • @jbiziou • 1 year ago

    Wonderful videos!! Thank you for the great stuff. I too am having the same issue as another commenter: my frames are all the same when I hit Run. Any idea?

    • @jbiziou • 1 year ago

      Got it to work. I had to make sure my video was mp4 (and at 24 fps); it did not like 23.976.

    • @enigmatic_e • 1 year ago

      Good to know.

  • @wndrflx • 1 year ago

    When I recombine, I get a blank crossfade video file. Everything works up to that point, and EbSynth made all the frames and folders. The input video looks good. I made sure Automatic1111, ffmpeg, and EbSynth were all up to date. Any ideas?

  • @alekmoth • 1 year ago

    Could you use the many frames in a plate but then upscale?

  • @randomgameplay523 • 7 months ago

    Good video. However, when I run an img2img batch it makes only one image and I get this error:
    IndexError: list index out of range
    Any idea how to fix it?
    I tried: limiting input frames to 20 or less, enabling split video, setting the sides or keyframes lower, and none worked.
    Edit: when disabling ControlNet it works, but that kind of defeats what I wanted to do. Now I don't get the desired results. It also doesn't come in a 2x2 grid anymore; I now have inconsistent frames, as the grid input is seen as one image.
    ------------
    I wanted to test it anyway and tried putting the frames and such into EbSynth, however the window goes off screen at the bottom and there's no way to scroll, so I can't run it.

  • @ganemonster • 1 year ago

    Success with your tutorial 🥳. Maybe if the quality is bad we can enhance the video. Thank you!

    • @My123Tutorials • 1 year ago

      DaVinci Resolve Studio 18.5 Beta now has an AI video upscaling feature.
      You have to own the Studio version, but it's worth it anyway if you create video content regularly.

    • @ganemonster • 1 year ago +1

      @@My123Tutorials thanks 🙏

  • @robojobot77 • 1 year ago

    The Shao Kahn laugh lol

    • @enigmatic_e • 1 year ago

      so happy someone caught that! 😂

  • @matbeedotcom • 1 year ago +7

    I finally had the eureka moment at 0:50: so THAT'S how it works. I totally didn't understand why it compiles a grid, but then I realized the seed and diffusion work in the same pass, so the output per grid will be extremely close.

    • @matbeedotcom • 1 year ago

      The initial noise, from what I understand, is 64x64, and the area (512x512 etc.) is then filled from that noise/tensor shape.

  • @eyevenear • 1 year ago +1

    Can I ask what GPU you use?

  • @MrSwsw2 • 1 year ago

    Thank you for the amazing tutorial! Everything is working, but for some reason after recombining it produces a very low-quality mp4 file (500 KB file size), while the separate shots in the output folders have decent quality.
    How do I fix this?

  • @jacintduduka9137 • 1 year ago +5

    You can always upscale the low quality image, can you not?

    • @redot9914 • 1 year ago +2

      U can

    • @Jarod45 • 10 months ago

      Yeah, it doesn't always work that well though

  • @herroeswm • 1 year ago

    I did everything like you, but ControlNet does not work with a batch; that is, it does 1 frame as it should and then does not pick the next one in the list. This only applies to ControlNet. How do I fix it?

  • @BUEN0FILMS • 1 year ago

    Does anyone know if it's possible to install ffmpeg on RunPod? I've downloaded TemporalKit; everything works except the output looks like a TV satellite losing-signal effect. I'm assuming it's because I didn't install ffmpeg correctly.

  • @joaopaulodepretto2993 • 1 year ago

    My EbSynth is not loading the keyframes when I drag in the keys folder. Any idea?

  • @-Belshazzar- • 1 year ago +1

    That was a really great tutorial! The only thing I would say is that for a lot of the things I see people do with this, like a toon style or something, it's just easier to fire up After Effects and apply a filter. I wish you had succeeded in making it a robot; now that is not something you can filter in After Effects...

  • @rasalimohr • 1 year ago

    After installing and reloading the UI, I had to close the command prompt and relaunch the bat file to get the Temporal-Kit tab to show. It's there now.

    • @enigmatic_e • 1 year ago

      👍🏽

    • @kianma8381 • 9 months ago

      I did the same. I even restarted my computer, but it still doesn't show up...

  • @muaddib01 • 1 year ago

    Nice! Does it work in Google Colab?

  • @pastuh • 1 year ago

    It would be great if I could prepare a prompt for each frame that will be generated using the style.
    Right now it looks like you need to go one by one.

  • @Sfuentez098 • 1 year ago

    Hi! Love the videos. EbSynth is not working for me; I get an error, something about missing a file 0001 or something like that.

  • @huytunguyen9522 • 9 months ago

    [HELP] I use TemporalKit, and when I reach the "Ebsynth_process" step after pressing "Prepare Ebsynth," I don't see any files in the Keys folder; it's completely empty. What could be the issue?

    • @FirdausHayate • 9 months ago

      I tried renaming the images from the output folder (the "0and0" names); it worked for me.

  • @SuperDao • 1 year ago

    Maybe you could make it as consistent as possible and afterwards upscale the result with an AI tool like Topaz Labs?

    • @enigmatic_e • 1 year ago

      Mmm not sure, haven’t tried it

  • @shiccup • 1 year ago +1

    What happened between 15:57 and 15:58? My EbSynth says the naming is off. How do I fix that?

    • @enigmatic_e • 1 year ago

      Did you make sure to click on ebsynth mode and batch?

    • @MondoMurderface • 1 year ago

      @@enigmatic_e Yes, what did you skip?

  • @GreenGecko93 • 10 months ago +1

    Hi! Does anyone know if it's possible to use this with a Google Colab notebook? I have to use Google Colab Pro since my GPU doesn't meet the required 16 GB VRAM for Stable Warpfusion.

  • @ganemonster • 1 year ago

    Need your negative prompts as my default 😅

  • @lucho3612 • 1 year ago

    When I put my input image of 4 slides into img2img, it generates only one image, not 4. How can I make it generate 4 results?

    • @erende44 • 1 year ago

      controlnet canny

  • @vikramchary9778 • 1 year ago +1

    Can you please help fix the problem here? EbSynth is not working for me; it's not showing the keyframes, and the directory shows in the frames tab and keys tab but not in the project directory.

  • @NeuroFauna • 1 year ago

    ☺ Thanks

  • @vfxfortalcontato53 • 1 year ago

    EbSynth is not working for me; I get an error, something about missing a file 0001 or something like that.

  • @cowlevelcrypto2346 • 1 year ago

    There are so many extensions now for Auto1111, how do you decide which ones you need? I tried loading them all and bogged everything down.

    • @enigmatic_e • 1 year ago

      Yeah, I think you just have to choose the ones that make sense for the kind of stuff you want to create.

    • @matbeedotcom • 1 year ago +1

      You can use vladmandic's fork of a1111 and disable checking for updates/etc, it should speed up your launch time

  • @Han-ds9yy • 1 year ago +3

    When I drag the keys folder into EbSynth, it won't automatically set batches for me, and the number of keyframes is less than 20. What's wrong?

    • @enigmatic_e • 1 year ago +1

      I would just try to see if there's an update that might fix the issue. Other than that, I'm just not sure why it's not working for you, sorry.

    • @Han-ds9yy • 1 year ago

      @@enigmatic_e Well, thank you for your reply anyway

    • @erende44 • 1 year ago

      select "split video" in EBsynth settings tab (Temporal Kit)

  • @nighttime3673 • 1 year ago

    For some reason, I don't generate any keys for my Temporal Kit. Did I miss a step somewhere? I have the frames and I have the output.

  • @DorothyJeanThompson • 1 year ago

    I'm running into the issue where it says "Missing frame 0001". Anyone know of a fix? I tried to rename 2 to 1; that did not work, it only created one out folder with 1 image. I also copied 2 and renamed it, but still no luck.

    • @KratomSyndicate • 11 months ago

      Same issue. I got all the way into this video, and then it was like "oh yeah, download EbSynth". I did it exactly like shown and none of the images populate in EbSynth; clicking any button just gives an error, "0001.png etc. not found", but the folder and file location are good and the filename is correct.

  • @kianma8381 • 9 months ago

    I have a problem that I can't find the way to fix on the internet... my TemporalKit extension tab doesn't show up :) Please help me, I'm losing my mind.

  • @Aaisn • 10 months ago

    Why did my "Run All" button in EbSynth disappear when I dragged in the keys?

  • @DanielSimon-em2pe • 11 months ago +1

    Hey, does anyone know why Ebsynth won't list the outputs automatically? Or is that supposed to be a manual process? I have a lot of keyframes, it would take loads of time to set the ranges.

    • @DanielSimon-em2pe • 11 months ago

      at 16 mins

    • @enigmatic_e • 10 months ago

      @@DanielSimon-em2pe did you make sure to click split video?

    • @kartikashri • 10 months ago

      The same happened to me. Did you figure out how to automate it?

    • @DanielSimon-em2pe • 10 months ago

      Thank you for the reply @@enigmatic_e! I combed through the EbSynth buttons, but I cannot find "split". I have the sequences in the right place and everything is in order. There is a cut before you say "then it's gonna create all the outputs" and I lose track, because manually filling in all that info is not optimal. I left this workflow, but I'm getting good results with the TemporalNet ControlNet model combined with one or two other ControlNet tabs.

  • @FIRUFILMS • 10 months ago

    Hello, I tried to install TemporalKit and I got an error, and my A1111 no longer opens. I've tried and tried and can't find the solution to this error: ModuleNotFoundError: No module named 'tqdm.auto'

  • @china100 • 8 months ago +1

    I have been trying to install Temporal Kit to Stable Diffusion but when I install and update in the browser I get the tqdm error and can no longer run Stable Diffusion, unless I delete Temporal Kit from my extension folder and delete the venv folder completely. Does anyone have a solution for this? Or know a reason why it is happening? I can see online I am not the only one who has had this issue.

  • @AndyDeighton • 1 year ago

    You can set the sides dimension to 1 instead of 2, 3 etc...

  • @zerosad6873 • 11 months ago

    Can this be used on mac?

  • @leshiy2780 • 1 year ago +1

    Hi all!
    I have a problem. When I put my video (25 fps) in INPUT and hit Run, I keep getting an error. Please tell me how to solve this problem?

  • @m_sha3er • 1 year ago

    Well done! How can I stop the ControlNet output files from being saved?

    • @enigmatic_e • 1 year ago +1

      Settings / ControlNet / check "Do not append detect map to output".

    • @m_sha3er • 1 year ago

      Thanks bro🙏🏻

  • @rudimind • 1 year ago

    Why is my EbSynth box too big? I can't make it smaller, and I can't see the Run All button...

  • @xaiyeon_xiuzhen • 1 year ago

    Tyvm for the video :D Love it, but what about this error when trying to run? I have webUI version: v1.2.1  •  python: 3.10.6  •  torch: 2.0.1+cu118  •  xformers: N/A  •  gradio: 3.29.0  •  checkpoint: cf489251a5
    TemporalKit error:
    , line 86, in __init__
    '-r', '%.02f' % fps,
    TypeError: must be real number, not NoneType

  • @dkamhaji • 11 months ago

    I'm getting issues when batch-processing img2img from the sequence created by the preprocessing stage of TemporalKit.
    The sequence defaults to this: 0and0.png and then 1and0.png, and it looks like batch expects a normal file sequence like name_001.png.
    So when I run the img2img batch, it's skipping certain frames.
    Has anyone here figured out how to fix this, or a temporary fix for it?

  • @keeppushin2592 • 1 year ago

    What about for Mac?

  • @kewk • 1 year ago +1

    Anyone figure out what was causing the controlnet images to be saved as well?

    • @kewk • 1 year ago +3

      NVM, I figured it out: in the A1111 settings, go to ControlNet and check "do not append detectmap to output".

    • @ppn7 • 1 year ago +1

      @@kewk Thanks, you saved me from trouble!

  • @teemumathias • 1 year ago +1

    Anyone know what the other tabs in TemporalKit do (the warp tabs)? Also, does TemporalKit use TemporalNet at some point? I was wondering if using 1 side has any effect at all other than just preparing for EbSynth. I'm having issues when the faces are further away and I can't raise the resolution of the grids enough (VRAM).

  • @DrunkenKnight71 • 9 months ago +1

    Oh well, this sucks... Stable Diffusion was working fine; I installed TemporalKit, restarted, and I get the error ImportError: cannot import name 'auto' from 'tqdm'. I don't have a clue how to fix it, so I guess I'll now have to delete and reinstall everything.

  • @v0id_mg • 1 year ago +1

    I watch your videos and I'm terribly jealous of those who have enough video memory. My laptop only has 4 GB and it can't do anything.

    • @enigmatic_e • 1 year ago

      Sorry to hear that. I would say to save up and invest, but the way AI is moving, someday 4 GB may be all you need. Who knows.

    • @matbeedotcom • 1 year ago

      cheaper to use cloud servers for a couple years than to buy a 4090

    • @judgeworks3687 • 1 year ago

      This lady (Prompt Muse) has a good overview and tutorial on running Stable Diffusion in the cloud (using RunPod) when you have a garbage computer. Good luck. ruclips.net/video/--Z03wbDp_s/видео.html

    • @Gryphan • 1 year ago

      Same here.

  • @uisato_ • 1 year ago

    I've been struggling to make it work properly for a while now. Last results got a tiny bit better, but still getting weird un-consistent outputs: ruclips.net/video/HL6bdxTaHBQ/видео.html
    Any thoughts on what could be happening here?

    • @enigmatic_e • 1 year ago

      What was your resolution? I think when you put resolutions into SD it might not align things precisely, causing TemporalKit to not split the grid correctly. Try making the video 1424x800 for 16:9. Let me know if it works better.

    • @uisato_ • 1 year ago

      ​@@enigmatic_e You were right, sir. I was using 1024 when I should have been using 1080. Thanks!
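The resolution advice in this exchange can be automated. As a sketch only (an assumption about why the grid split fails, not TemporalKit's documented behavior): snap each side to a multiple of both SD's 8-pixel latent stride and the grid side count, so the grid divides cleanly. `snap_resolution` is a hypothetical helper.

```python
# Sketch (assumption): snap a target size so each side is divisible by 8
# (SD latent stride) and by the grid side count, so a grid of frames splits
# back into equal tiles without rounding drift.
def snap_resolution(width, height, grid_side=2, stride=8):
    step = stride * grid_side
    snap = lambda v: max(step, round(v / step) * step)  # nearest multiple, never 0
    return snap(width), snap(height)
```

For example, a 1423x799 source snaps to 1424x800, the 16:9-ish size suggested in the reply above.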

  • @camprey • 1 year ago +5

    I don't know if anyone else is getting this error, but every time I click "Run", after it's done, the frames in the "input" folder are all exactly the same; it ignores the rest for some reason. And inside the target folder there's an input_video.mp4 which is a video of the same frame, as if it were frozen.

    • @jbiziou • 1 year ago +3

      Got it to work, I had to make sure my video was mp4, ( And at 24 fps ), it did not like 23.976.

    • @camprey • 1 year ago

      @@jbiziou darn it, that's probably it. Mine was 23.976 as well. Thank you!

    • @cynbot7814 • 1 year ago +1

      Thanks! It also has the same problem with 29.97 fps.

    • @simonbronson • 1 year ago +1

      cheers - 24fps works!

  • @vettorazi • 1 year ago

    For some reason EbSynth isn't creating the keyframes (15:56). What am I doing wrong?

    • @m3dia_offline • 11 months ago

      Facing the same issue here as well, and once I Run All, it says the keys don't start from 0001 :/

    • @user-yj3mf1dk7b • 10 months ago

      Probably too many files; try deleting some.
      I've been using EbSynth Utility; it splits everything into a max of 18 frames.

  • @GuillermoCatalano • 1 year ago

    I don't know why my first keyframe is called keys003, and because of it I get a "keys0001 missing" error in EbSynth.

  • @pastuh • 1 year ago

    10:28 Strange that no one has managed to store previously generated data as a hint for future generations...

  • @ManilMopas • 1 year ago +1

    can anybody make a long video and share the result?

  • @eyevenear • 1 year ago

    Btw guys, we're fundamentally one step from having Gen-1-level generations for free (well, provided you can afford a GPU, but I think that's way better than not having one and paying for memberships for life).

  • @royalsero848 • 11 months ago

    I get a "list index out of range" error every time.

  • @lazygamer6191 • 1 year ago

    How much VRAM does a PC need to do this?

    • @enigmatic_e • 1 year ago

      I think it works with as low as 4 GB.