TEMPORALKIT - BEST EXTENSION THAT COMBINES THE POWER OF SD AND EBSYNTH!
- Published: 20 Jun 2024
- This is a tutorial on how to install and use TemporalKit for Stable Diffusion Automatic 1111. This extension uses Stable Diffusion and Ebsynth.
HOW TO SUPPORT MY CHANNEL
-Support me by joining my Patreon: / enigmatic_e
_________________________________________________________________________
SOCIAL MEDIA
-Join my discord: / discord
-Instagram: / enigmatic_e
-Tik Tok: / enigmatic_e
-Twitter: / 8bit_e
- Business Contact: esolomedia@gmail.com
_________________________________________________________________________
TemporalKit: github.com/CiaraStrawberry/Te...
7-zip: 7-zip.org/download.html
Ciara: / ciararowles1
Ciara Tutorial: • TemporalKit + Ebsynth ...
Tokyojab: / tokyojab
TroubleChute: • How To: Download+Insta...
Install SD
• Installing Stable Diff...
Install ControlNet
• New Stable Diffusion E...
Chapters
0:00 Intro
0:47 What is TemporalKit?
1:59 Installing TemporalKit
5:17 Settings
10:51 IMG2IMG
14:17 Exporting
15:05 Ebsynth
16:58 Experimenting
18:48 Longer Videos
Thanks for all your hard work making these tutorials - always excited to see your vids when they come out!
I've been waiting so long for a solution that makes this process a bit more easy and reliable. Thanks for sharing, man! 🙏😊
Since you posted the news on twitter I have been checking here frequently for this tutorial video. Finally we got the consistency we were looking for🙂Thanks mate🙏
As always, an amazing tutorial! Thanks for your good work
Did not expect to see myself. Thanks for the shoutout!
Busy messing around with vid2vid; everything I've used so far brings way too much of the original video through (SD-CN-Animation), or creates something incredibly flickery. Busy following this guide :)
Hey!! Thanks for the tut, it helped me out so much. Hopefully this tut is helpful for you. I may need to update it since a lot of people say they’ve recently had a lot of issues.
For anyone wondering why the Temporal-Kit tab isn't showing up in the Web-ui:
You have to install moviepy, too. Just had the issue...after installing moviepy, everything worked fine.
I think it's auto installed after you reload cmd
thanks dude, i had the issue too!
I just had to close and restart the CMD window.
The video I was looking for, thank you!!
I appreciate that you include the keyboard shortcut tidbits (and the tutorial in general)
always fun watch your tuts! ❤❤😍😍
Love you man! Funny, interesting, very helpful!
Great video ! If you select a part of your prompt, hold control and press arrow up or down, you can directly change the weights of the keywords in your prompt :)
🤯 I gotta try that!
great video, very thorough and helpful, thanks!
So cool! Thanks for the great tutorial.
I wonder if it's possible to increase the consistency between frame groups by including previously generated frames which are masked on img2img step
For example, let's say we extract 2x2 groups of frames from the video and include 2 previous frames:
1) We stylize the first group:
1 2 => *1 *2
3 4 => *3 *4
// Numbers are frame indices, "*" means stylized frame
2) Append two last frames to next group:
*3 *4 => *3 *4 // *5 *6
7 8 => *7 *8
3) Repeat for each group
*7 *8 => *7 *8
9 10 => *9 *10
11 12 => *11 *12
In the end we end up with one 2x2 group and several 3x2 groups, which are (hopefully) more temporally coherent than regular 2x2 groups
I would like to try this myself, but my PC is a potato that can barely handle 768x768 generation, and you obviously need a lot more power to do this trick with several ControlNETs :(
I think you're just talking about "Border Key Frame", right?
Yes, this works. This is how the multi-frame renderer A1111 script works. One thing to add: you should regenerate the initial frames with this process too.
Great reasoning. I would like to add that if you prepare a LoRA with your character you will achieve great consistency there too
If this works, I wouldn't know why. Running through img2img removes style information of the original image. The style before and after stylizing should not be dependent unless you're using a low denoise... which goes against your original intention of stylizing the image.
On the other hand, I think that you can use the same extra noise for every img2img to further improve consistency.
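The overlap scheme worked through in the thread above can be sketched in a few lines of Python. This is only an illustration of the grouping logic (the function name and group sizes are mine, not part of TemporalKit); the actual img2img/ControlNet stylization pass is out of scope:

```python
# Illustrative sketch of the overlap idea from the comment above:
# split a frame sequence into fixed-size groups, and prefix each
# group after the first with the last `overlap` frames of the
# previous (already stylized) group, so they act as fixed context.

def group_with_overlap(frames, group_size=4, overlap=2):
    groups = []
    i = 0
    while i < len(frames):
        chunk = frames[i:i + group_size]
        if groups:
            # carry the tail of the previous group as fixed context
            chunk = groups[-1][-overlap:] + chunk
        groups.append(chunk)
        i += group_size
    return groups

# Frames 1..12 give one plain 2x2 group followed by 3x2 groups,
# matching the worked example in the comment.
groups = group_with_overlap(list(range(1, 13)))
```

In the real workflow the carried frames would be masked out during the img2img step so only the new frames get restylized, as the commenter describes.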
Nice tutorial! Thanx!!
Thanks for checking it out!
awesome video 🙂
Dude ! Awesome explaining.. :P
Thank you!
Yess love
great video
Thank you for this! I literally started an img2img batch last night for a video and woke up to this new TemporalKit. I was having the same issue last night with ControlNet producing the canny and openpose images, throwing my # sequence off. Anyway, that batch is now obsolete thanks to this new TemporalKit! The power of AI! Evolving so fast! If you find out how to stop the depth output from ControlNet please lmk, and thanks again for this tutorial!
Perfect tutorial!!
Very cool
hey chef, thanks for the very detailed awesome video. Can we use inpaint also? Because in the batch header we have a field under the in/out directories named: Inpaint batch mask directory (required for inpaint batch processing only)
Great tool, it's like EbSynth Utility, but without masks, and it's a bit faster.
I will confirm this, but if you use a good denoiser after this, the software will interpret these variations as noise and will improve a lot. The DaVinci Resolve deflicker will polish a lot as well :)
Yes! Please let me know, I would love to find a solution to this.
interesting, it could introduce blurriness but with some of these new AI implementations of sharpening and upscaling it could be a moot point
Try putting more frames in the spritesheet, but then use the ControlNet Tile upscaler to make it huge & stylize in one step. Is there a limit to the size Ebsynth can work with? Even if the ending video size is still smallish, you could upscale the video at the end with some other software perhaps. GLHF 😜
there's a lot that could be improved here
1. in ffmpeg you can extract a keyframe per scene instead of by frame count (fewer keyframes = faster processing)
2. you can use a photo enhancer to enhance low-quality grids (so you can fit more tiles = more consistency)
3. lastly, enhance the video with another AI for quality and fps
4. yeah it's tedious but the result is nice
Could you please name software that can be used for each of these steps? I'm new to this and it would help greatly
LETS SEE YOUR RESULTS BRO WHERE CAN I FIND?
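For point 1 in the list above, ffmpeg's `select` filter with the built-in `scene` score can extract one frame per scene change instead of every Nth frame. A minimal sketch, assuming `ffmpeg` is on PATH; the 0.3 threshold and the output filename pattern are my guesses to tune:

```python
# Build an ffmpeg command that keeps only frames whose scene-change
# score exceeds a threshold, so scene starts become the keyframes.
# Assumes ffmpeg is installed; threshold 0.3 is a starting guess.
import subprocess

def scene_keyframe_cmd(video, out_pattern="key_%04d.png", threshold=0.3):
    return [
        "ffmpeg", "-i", video,
        "-vf", f"select='gt(scene,{threshold})'",
        "-vsync", "vfr",  # keep only the selected frames
        out_pattern,
    ]

# To actually run it (uncomment with a real input file):
# subprocess.run(scene_keyframe_cmd("input.mp4"), check=True)
```

A higher threshold yields fewer keyframes; for slow pans you may need to lower it or fall back to fixed-interval extraction.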
Suggestion for higher quality - there's an extension called Tiled VAE or something similar, it lets you generate high res images by splitting up the pictures into tiles. Haven't tried it using this method but it could help
It is actually a new model within the ControlNet 1.1 extension, it was released about 1 week ago and can, with the help of the Ultimate SD Upscaler, make an image not only have a much higher resolution, but also much more details.
@@Chrono.. can it keep up with the consistency? Or do the details change in every pic?
@@Chrono.. These are different things. Tiled VAE can work similar to Ultimate SD Upscaler with controlnet but it can also work independently and allow you to generate images with higher res than you can usually generate with your card without getting OOM error
@@merodiro So you're saying that the sole purpose of Tiles is to give the ability to upscale, on cards with less vram? That is, if I have a good video card, Tiles isn't necessary?
@@Chrono.. tiles don't mean your graphics card is bad; it's just splitting, let's say, a mountain into pieces. You still go faster with a better card; actually it will increase render times by a lot, at least in 3D renders :)
It's amazing! I've been stuck for the past 40 days. It looks like a lot has happened, but not enough on video consistency using Stable Diffusion. What GPU are you using? I've got an RTX 3060 12gb but struggle with the limited VRAM. I want to add another RTX 3060 12gb but don't know if it will work? Any advice? Also my video clips are between 4-8 seconds, with the longest clip around 20 seconds. Have you been successful over a longer time? My idea is to do a full style transfer with a very high CFG Scale between 0.6 and 1. I was able to get reasonable coherence but not as good as these short clips from you; however, I was able to maintain consistency over 20 seconds. I did it about 2 months ago, so a lot has changed.
I currently use RTX3080 10gb and it's pretty good. I do run into issues when I start adding controlnet and make the resolution high.
@@enigmatic_e I'm excited and want to test Temporal Kit. But even before starting I can already tell the frustration ahead. I'd say the problems I run into are less about A1111 and more about the AMD system. I use a Ryzen 7 CPU and motherboard. So there are always problems with CUDA, drivers and FFMPEG.
Interesting, it looks like I was running into issues because of the dimensions of what I was working with, that and using too many images in a grid.
I was pretty sure I had my dimensions matched up correctly, but I'm wondering if I need to start with a 512x768 or square grid to work with in this method. I know Stable Diffusion does some weirdness if you don't follow those ratios.
Typically, you want 512x512 for SD2.0 or below. Bigger should work fine. Different aspect ratio should work fine until the ratio exceeds around 1.3. HOWEVER if you're running tiled images, none of what I said applies for sure anymore.
Great tutorial! To solve the quality problem, wouldn't it be possible to take the grid of images, upscale them individually with low denoise, and then run them through Temporal Kit? Just an idea. Great video anyway!
Yea, there has to be a method to make the quality better. I'll be messing around with it more.
You gotta increase the strength of your prompt. So instead of just cyberpunk robot, put (cyberpunk robot) to give it more strength. Or even stronger, like (((cyberpunk robot))). The strongest you can go, I think, is (((cyberpunk robot:1.5))).
Anyway, nice video. I'm gonna go have fun with Temporal Kit now. 😃
15:50 you lost me... are you hitting a button here? It doesn't create outputs for me.
Edit: For some reason the first keyframe was named 0002. It needed to be renamed to 0001 before the Synth button would start the processing work.
Edit 2: the list of keyframes isn't showing up. You jump cut and say "it creates all these outputs", but that doesn't happen on my end and I can't see how you did it.
did you figure it out? I'm having the same issue :(
having same issue as well. no output folders
this is very interesting; however, I really hope EBSynth evolves to support more keyframes. This can become tedious very fast for anything beyond 3-5 seconds.
Hi, do you think EBsynth will be able to do more than 20 keyframes? It's a mess right now to do long videos manually 😅
I hope so. But there's the workaround that I mention at the end of the video.
respect for suggesting other channels. That's the way: people pulling each other up. Other channels will never mention another channel and will delete any reference to another channel in the comments. Insecure, and a long-term fail.
yeah, enigmatic is one of the good ones... One of the main reasons I stopped making tutorials was because so many other channels were repackaging my techniques and taking credit for them... Digital copycats... Not much can be done, but I'm always happy to see people give credit and help out other creators...
Thanks. Yea, I believe in giving credit where credit is due.
Yep, it's in fact smart to do, as it seeds connections to other creators in the search history, which means you're more likely to be surfaced in suggested videos, as they have a similar audience. Keep it up, refer other creators, and it absolutely pays off.
great video, will there be a part 2 for longer videos explaining the split video setting?
I explain it at the end of the video at around 18:48.
@@enigmatic_e ty ty
Hey man, thanks for the video!! Do you know why EBSynth is saying my GPU is unavailable under the advanced tab? I'm running an Ubuntu cloud machine with an A100, so GPU shouldn't be an issue.
oh man, I don't know. I have no experience with cloud machines. Sorry
hmmm, what if we used Topaz on the grid before, and then again after?
I haven't tried that, but it would be interesting to test
I have the same problem in my output with the ControlNet preview images. How did you solve this problem?
thanks for the tutorial, I tried it but got really weird frames in the "frames" folder (all weird, gray and pixelated) - what am I doing wrong?
An additional step that could help with consistent img2img is to turn ON "Apply color correction to img2img results to match original colors."
I would skip this..
If you want to transform a forest into a hell forest, it will be impossible. Everything will stay green.
when I dragged a 16x9 video into TemporalKit, the video covers the text in the UI entirely, making it impossible to change any settings. Is this a bug with 16x9 videos in TemporalKit?
Wonderful videos!! Thank you for the great stuff. I too am having the same issue as another commenter: my frames are all the same when I hit Run. Any idea?
Got it to work, I had to make sure my video was mp4, ( And at 24 fps ), it did not like 23.976.
Good to know.
When I recombine I get a blank crossfade video file. Everything works up to that point and Ebsynth made all the frames and folders. The input video looks good. I made sure Automatic1111, ffmpeg, and Ebsynth were all up to date. Any ideas?
Could you use the many frames in a plate but then upscale?
good video, however when I run an img2img batch it makes only one image and I get this error:
IndexError: list index out of range
any idea how to fix?
I tried: limiting input frames to 20 or less, enabling split video, setting the sides or keyframes lower, and none worked.
edit: when disabling ControlNet it works, but that kind of defeats what I wanted to do. Now I don't get the desired results. It also doesn't come out in a 2x2 grid anymore; I now have inconsistent frames, as the grid input is seen as one image
------------
I wanted to test it anyway and tried putting the frames and such into Ebsynth, however the window goes off screen at the bottom and there's no way to scroll, so I can't run it
Success with your tutorial 🥳. Maybe if the quality bad we can enhance the video. Thank you
DaVinci Resolve Studio 18.5 Beta now has an AI video upscaling feature.
You have to own the studio version but it's worth it anyway when you create video content regularly.
@@My123Tutorials thanks 🙏
The Shao Khan laugh lol
so happy someone caught that! 😂
I finally had the eureka moment at 0:50 - so THAT'S how it works. I totally didn't understand why it compiles a grid, but then I realized the seed and diffusion are working in the same pass, so the output will be extremely consistent within a grid
The initial noise size, from what I understand, is 64x64, and then the area (512x512 etc) is filled with the noise/tensor shape
Can I ask you what gpu you use?
Rtx 3080 10gb
Thank you for the amazing tutorial! Everything is working, but for some reason after recombining it shows a very low quality mp4 file (500kb file size). But the separate shots in the output folders have decent quality.
How do I fix this?
You can always upscale the low quality image, can you not?
U can
Yeah, it doesn't always work that well though
I did everything like you, but my ControlNet does not work with a batch; that is, it does 1 frame as it should, and then does not select the next one in the list. This only applies to ControlNet. How do I fix it?
Does anyone know if it's possible to install FFMPEG on RunPod? I've downloaded TemporalKit, and everything works except the output looks like a TV satellite losing-signal effect. I'm assuming it's because I didn't install FFMPEG correctly.
my Ebsynth is not loading the keyframes when I drag in the keys folder, any idea?
That was a really great tutorial! The only thing I would say is that for a lot of the things I see people do with this, like a toon style or something, it's just easier to fire up After Effects and apply some filter. I wish you had succeeded in making it a robot; now that is not something you can filter in After Effects...
After installing and reloading the UI, I had to close the command prompt and relaunch the bat file to get the Temporal-Kit tab to show. It's there now.
👍🏽
i did the same. I even restarted my computer, but it still doesn't show up...
Nice! Does it work in Google Colab?
It would be great if I could prepare a separate prompt for each frame that will be generated using the style.
Right now it looks like you need to go one by one.
Hi! Love the videos. Ebsynth is not working for me; I get an error, something about a missing file 0001 or something like that.
[HELP] I use TemporalKit, and when I reach the "Ebsynth_process" step after pressing "Prepare Ebsynth," I don't see any files in the Keys folder; it's completely empty. What could be the issue?
i tried renaming the images in the output folder named ''0and0'' and such... it worked for me
Maybe you can make it as consistent as possible and afterwards upscale the result with an AI tool like Topaz Labs?
Mmm not sure, haven’t tried it
what happened between 15:57 and 15:58? My Ebsynth says the naming is off; how do I fix that?
Did you make sure to click on ebsynth mode and batch?
@@enigmatic_e Yes, what did you skip?
Hi! Does anyone know if it's possible to use this with a Google Colab notebook? I have to use Google Colab Pro since my GPU doesn't meet the required 16 GB VRAM for Stable Warpfusion.
Need your negative prompts as my default 😅
when I put my input image of 4 tiles into img2img, it generates only one image, not 4. How can I make it generate 4 results?
controlnet canny
Can you please help fix the problem here? In Ebsynth it is not working for me: it is not showing the keyframes, and the directory is showing in the frames tab and keys tab but not in the project directory
same here..
☺ thanks
You're welcome 🙂
Ebsynth is not working for me; I get an error, something about a missing file 0001 or something like that.
There are so many extensions now for Auto1111, how do you decide which ones you need? I tried loading them all and bogged everything down.
yea i think you just have to choose the ones that make sense for the kind of stuff you want to create.
You can use vladmandic's fork of a1111 and disable checking for updates/etc, it should speed up your launch time
When I drag the keys folder into Ebsynth, it won't automatically set batches for me, and the number of keyframes is less than 20. What's wrong with that?
I would just try to see if there's an update that might fix the issue. Other than that I'm just not sure why it's not working for you, sorry.
@@enigmatic_e Well, thank you for your reply anyway
select "split video" in EBsynth settings tab (Temporal Kit)
For some reason, Temporal Kit doesn't generate any keys for me. Did I miss a step somewhere? I have the frames and I have the output.
me too...
I'm running into the issue where it says "Missing frame 0001". Anyone know of a fix? I tried to rename 2 to 1; that did not work, it only created one out folder with 1 image. I also copied 2 and renamed it but still no luck.
same issue. I got all the way through this video and then it was like "oh yeah, download Ebsynth". I did it exactly as shown and none of the images populate in Ebsynth; clicking any button just errors with 0001.png etc. not found, but the folder and file location are good and the filename is correct.
I have a problem that I can't find the way to fix on the internet... my TemporalKit extension tab doesn't show up :) Please help me, I'm losing my mind.
Why did my "Run all" button in Ebsynth disappear when I dragged in keys?
Hey, does anyone know why Ebsynth won't list the outputs automatically? Or is that supposed to be a manual process? I have a lot of keyframes; it would take loads of time to set the ranges.
at 16 mins
@@DanielSimon-em2pe did you make sure to click split video?
the same happened to me. Did you figure out how to automate it?
thank you for the reply @@enigmatic_e! I combed through the Ebsynth buttons, but I cannot find "split". I have the sequences in the right place and everything is in order. There is a cut before you say "then it's gonna create all the outputs" and I lose track, 'cause manually filling in all that info is not optimal. I left this workflow, but I'm getting good results with the TemporalNet ControlNet model combined with one or two other ControlNet tabs.
Hello, I tried to install TemporalKit and got an error, and now my A1111 no longer opens. I've tried and tried and can't find the solution to this error: ModuleNotFoundError: No module named 'tqdm.auto'
I have been trying to install Temporal Kit to Stable Diffusion but when I install and update in the browser I get the tqdm error and can no longer run Stable Diffusion, unless I delete Temporal Kit from my extension folder and delete the venv folder completely. Does anyone have a solution for this? Or know a reason why it is happening? I can see online I am not the only one who has had this issue.
I have the same issue
You can set the sides dimension to 1 instead of 2, 3 etc...
Can this be used on mac?
Hi all!
I have a problem. When I put my video(25fps) in INPUT and hit run, I keep getting an error. Please tell me how to solve this problem?
me too. :( someone please help
Well done! How can I stop the ControlNet output files?
Settings / ControlNet / check "do not append detect map to output".
Thanks bro🙏🏻
why is my EBSynth box too big? I can't make it smaller, and I can't see the Run All button...
tyvm for the video :D luv it, but what about this error when trying to run: I have webUI version: v1.2.1 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: N/A • gradio: 3.29.0 • checkpoint: cf489251a5
Temporal Kit error:
line 86, in __init__
'-r', '%.02f' % fps,
TypeError: must be real number, not NoneType
I'm getting issues when batch processing img2img from the sequence created by the preprocessing stage of Temporal Kit.
The sequence defaults to this: 0and0.png and then 1and0.png, and it looks like batch expects a normal file sequence like name_001.png.
So when I run the img2img batch it's skipping certain frames.
Has anyone here figured out how to fix this, or a temp fix for this?
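One possible workaround for the naming mismatch described above is to copy the frames into a zero-padded sequence before running the batch. This is a sketch under the assumption that the files really are named like "0and0.png"; the regex, target pattern, and function name are mine, so adjust as needed:

```python
# Copy "<n>and0.png" frames into a zero-padded "frame_0001.png"
# sequence that img2img batch mode will pick up in numeric order.
# The "<n>and<m>.png" pattern is taken from the comment above.
import re
import shutil
from pathlib import Path

def renumber(src_dir, dst_dir):
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    frames = []
    for p in src.glob("*.png"):
        m = re.fullmatch(r"(\d+)and\d+\.png", p.name)
        if m:
            frames.append((int(m.group(1)), p))
    # sort numerically so 10and0.png comes after 2and0.png
    for i, (_, p) in enumerate(sorted(frames), start=1):
        shutil.copy(p, dst / f"frame_{i:04d}.png")
    return len(frames)
```

Copying (rather than renaming in place) leaves the original files untouched in case TemporalKit's later steps still expect them.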
what about for mac?
Anyone figure out what was causing the ControlNet images to be saved as well?
NVM, I figured it out: in the A1111 settings go to ControlNet and check "do not append detectmap to output".
@@kewk thank you, you saved me from trouble!
Anyone know what the other tabs do in TemporalKit (the warp tabs)? Also, does TemporalKit use TemporalNet at some point? I was wondering if using 1 side has any effect at all other than just preparing for EBSynth. I'm having issues when the faces are further away and I can't raise the resolution of the grids enough (VRAM)
oh well this sucks... Stable Diffusion was working fine; I installed TemporalKit, restarted, and I get the error ImportError: cannot import name 'auto' from 'tqdm'... I don't have a clue how to fix it, so I guess I'll now have to delete and reinstall everything
I watch your videos and I'm terribly jealous of those who have enough video memory. My laptop only has 4 GB and it can't do anything.
Sorry to hear that. I would say to save up and invest, but the way AI is moving, someday 4gb will be all you need, who knows.
cheaper to use cloud servers for a couple years than to buy a 4090
This lady (Prompt Muse) has a good overview and tutorial on running Stable Diffusion in the cloud (using RunPod) when you have a garbage computer. Good luck. ruclips.net/video/--Z03wbDp_s/видео.html
same here
I've been struggling to make it work properly for a while now. The latest results got a tiny bit better, but I'm still getting weird, inconsistent outputs: ruclips.net/video/HL6bdxTaHBQ/видео.html
Any thoughts on what could be happening here?
What was your resolution? I think that when you put resolutions into SD it might not align things precisely and cause TemporalKit to not split the grid correctly. Try making the video 1424x800 for a 16:9. Let me know if it works better.
@@enigmatic_e You were right, sir. I was using 1024 when I should have been using 1080. Thanks!
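A guess at why the resolution mattered in the exchange above: TemporalKit has to slice the generated grid back into equal tiles, so each grid dimension must divide evenly by the side count, and SD itself prefers dimensions that are multiples of 8. A small sanity check under those assumptions (the function is mine, not part of the extension, and TemporalKit's real constraints may differ):

```python
# Check whether a grid resolution splits cleanly into sides x sides
# tiles whose sizes SD can handle (multiples of 8). Assumption-based
# sketch, not TemporalKit's actual validation logic.
def check_grid(width, height, sides):
    problems = []
    if width % sides or height % sides:
        problems.append(f"{width}x{height} does not split into {sides}x{sides} tiles")
    elif (width // sides) % 8 or (height // sides) % 8:
        problems.append("per-tile size is not a multiple of 8")
    return problems

# 1424x800 (the 16:9 size suggested above) passes for a 2x2 grid:
assert check_grid(1424, 800, 2) == []
```

An empty list means the size should at least split evenly; a non-empty list names what to adjust before generating.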
I don't know if anyone else is getting this error, but every time I click on "Run", after it's done, the frames in the "input" folder are exactly the same; it ignores the rest for some reason. And inside the target folder there's an input_video.mp4, which is a video of the same frame, as if it were frozen
Got it to work, I had to make sure my video was mp4, ( And at 24 fps ), it did not like 23.976.
@@jbiziou darn it, that's probably it. Mine was 23.976 as well. Thank you!
Thanks! It also has the same problem with 29.97 fps.
cheers - 24fps works!
for some reason Ebsynth isn't creating the keyframes (15:56). What am I doing wrong?
Facing the same issue here as well, and once I Run All, it says the keys don't start from 0001 :/
probably too many files. Try deleting some.
I've been using Ebsynth Utility; it splits everything into a max of 18 frames.
I don't know why my first keyframe is called keys003, and because of it I get a "keys0001 missing" error in EbSynth
10:28 strange that no one has managed to store previously generated data as a hint for future generations..
can anybody make a long video and share the result?
btw guys, we are FUNDAMENTALLY one step from having GEN-1 level generations for free (well, provided you can afford a GPU, but I think that's way better than not having one and paying for memberships for life)
I get the "list index out of range" error every time
How much VRAM does the PC need to do this?
I think it works with as low as 4gb