Stable Cascade - Local Install - SUPER EASY!
- Published: 14 May 2024
- Stable Cascade Local Install Guide - Super Easy Guide for Pinokio and ComfyUI. Run Stable Cascade locally today with this super easy tutorial.
#### Links from my Video ####
python_embeded\python.exe -m pip install -r
twitter.com/PurzBeats
www.youtube.com/@PurzBeats/streams
stability.ai/news/introducing-stable-cascade
pinokio.computer/
github.com/kijai/ComfyUI-DiffusersStableCascade
#### Join and Support me ####
Buy me a Coffee: www.buymeacoffee.com/oliviotu...
Join my Facebook Group: / theairevolution
Join my Discord Group: / discord
AI Newsletter: oliviotutorials.podia.com/new...
Support me on Patreon: / sarikas
00:00 intro
01:12 Stable Cascade Pinokio AI Install Guide
04:08 ComfyUI Stable Cascade Install Guide
👋
Dude, you can also just run 'pip install -r requirements.txt' in your 'DiffusersStableCascade' folder.
REVISED ENTRY: Did some testing on a 4070 Ti Super 16GB.
Model took 13.5 GB VRAM after doing a 512x512 image. [GPU memory reading from Task Manager]
512x512: 8.62s [13.5GB]
640x640: 9.3s [13.6GB]
768x768: 9.44s [13.6GB]
896x896: 9.6s [14.2GB]
1024x1024: 10.1s [14.6GB]
1152x1152: 10.5s [15.0GB]
1280x1280: 11.3s [15.5GB]
1408x1408: 13.5s [15.9GB]
1536x1536: 21.8s (overflowed VRAM into GPU shared memory) [16.4GB]
1664x1664: 31.6s [16.9GB]
1792x1792: 43.2s [17.5GB]
1920x1920: 56.4s [18.2GB]
2048x2048: 78s [18.8GB]
2176x2176: 93-130s [19.2GB] (times were highly variable over 5 runs with different styles and prompts)
Too slow for me.
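As a quick sanity check on those numbers, here's a small Python sketch that normalizes a few of the quoted timings to seconds per megapixel; the jump past 1536x1536 is where the overflow into shared memory shows up. The timings are copied from the benchmark above, nothing else is assumed.

```python
# Timings copied from the 4070 Ti Super benchmark above (side length -> seconds).
timings = {512: 8.62, 1024: 10.1, 1280: 11.3, 1536: 21.8, 2048: 78.0}

def secs_per_megapixel(side: int, secs: float) -> float:
    """Normalize a generation time by the image area in megapixels."""
    return secs / (side * side / 1e6)

for side, secs in timings.items():
    print(f"{side}x{side}: {secs_per_megapixel(side, secs):.1f} s/MP")
```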
Something is wrong: my RTX 4070 12GB is generating VERY slowly, every iteration taking about 500 seconds... What could be the reason?
@@modzha2011 There are other comments mentioning the same thing. One of the issues is that this initial, unoptimized research model takes up 13GB of VRAM before generating anything; that could cause shared-memory swapping to system RAM, or even swapping to disk if you don't have enough free RAM available due to other apps running.
I have 64GB of system RAM and noticed Comfy uses up to 20GB at times for other SD models outside of VRAM.
I would also do a clean installation of your NVIDIA graphics driver. I got the 4070 Ti Super this past Sunday and did a wipe and clean install using version 551.23, though newer ones might work.
Otherwise I would wait for Comfy's official release, which rumour suggests is coming very soon. Remember this is an experimental research model, not at all optimized for most consumer systems.
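A quick way to check whether that footprint even fits on your card, sketched in Python with PyTorch (which ships with every ComfyUI install); the 13 GB figure is just the number reported in this thread, not an official requirement:

```python
# Hedged sketch: compare free VRAM against the ~13 GB footprint reported above.
MODEL_FOOTPRINT_GB = 13.0  # figure quoted in this thread, not an official spec

def fits_in_vram(free_gb: float, needed_gb: float = MODEL_FOOTPRINT_GB) -> bool:
    """True if the model should fit without spilling into shared memory."""
    return free_gb >= needed_gb

try:
    import torch
    if torch.cuda.is_available():
        free_b, total_b = torch.cuda.mem_get_info()  # returns (free, total) bytes
        free_gb = free_b / 1024**3
        print(f"free VRAM: {free_gb:.1f} GB")
        if not fits_in_vram(free_gb):
            print("expect shared-memory swapping and long generation times")
    else:
        print("no CUDA device detected - generation would fall back to CPU")
except ImportError:
    print("PyTorch not installed in this environment")
```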
@@glenyoung1809 Thank you for your answer, you reassured me :) I'll wait for the release, and until then I'll calmly continue to use SDXL models
@@modzha2011 SD 2.1 is better for a 12GB card, XL is very vram heavy
Thank you, sir, very helpful.
Bro you are awesome for this!
It can do 512x512 and 768x768. It can also do 2048x2048 without problems. In the console version, any resolution that's a multiple of 8x works, e.g., 1920x1080 works perfectly.
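The multiple-of-8 rule this comment describes can be written as a one-line check (this mirrors the comment's claim; the model may have additional constraints):

```python
def is_valid_resolution(width: int, height: int, multiple: int = 8) -> bool:
    """Check the multiple-of-8 rule described in the comment above."""
    return width % multiple == 0 and height % multiple == 0

print(is_valid_resolution(1920, 1080))  # True: both sides divisible by 8
print(is_valid_resolution(1000, 999))   # False: 999 is not a multiple of 8
```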
Is there anywhere we can use it online without downloading it locally?
@@mattahmann Online versions use the diffusers library, which is broken for most resolutions. (Other than that, I don't know about any in particular, I use it offline. Sorry.)
Würstchen and a sausage dancing, I got your reference...
I'm here sending some love to Germany and Austria too
Hold my Bratwurst!
So you just ignore Switzerland? 😢😅
@@darki0022 Nope, Switzerland is life, Switzerland is love, Switzerland is heaven on earth
Many many many many thanks!!! 😃
Watched a video just a few hours ago showing a web demo, but it said this hasn't been released openly. It's awesome if this is out now.
I probably watched the same video as you, was it MattVidPro by any chance?
Hooray for Purz!
I'll test it sooner or later, but to start I'm testing it on Hugging Face Spaces, and it seems like if you add 'made by Dall-E 3' to the prompt you get a much better output overall
Dear friend, thank you for the video! But I would like to see a comparison in quality and speed, for example with solutions from A1111 or Comfy. And also, is there any hope for working inpaint/outpaint functionality?
After trying it for architecture, I can say it is quite promising! It reads patterns and geometry way better than SD. Still some small issues with patterns of course, but we are getting closer to something big! With ControlNet and stuff... man, this tool will be a killer!
thank you for this🎉🎉🎉
Wonderful! I have an RTX 4070 with 12 GB VRAM and the extension installed for A1111-Forge. I can generate a 2048x1024 pixel image in 17 seconds! However, on A1111 (no Forge), the Stable Cascade extension runs slowly.
Same GPU, but through Pinokio the thing generates extremely slowly, feels like it's using the CPU
Very cool! I think Stable Cascade will be another big leap, kind of like SDXL. I think I'll wait for ComfyUI's native implementation instead of having it as a custom node for now, but very cool video!
It is expected to officially land in ComfyUI this weekend.
thanks for covering pinokio, Olivio! 🎉
Mate thank you🎉
Happy to help
That pinokio application is amazing for people that are not amazing at software like myself! tysm for showing that
They should at least add a digital signature to their .exe so it doesn't trigger Windows Defender. Seems like amateur hour to me.
It's not exactly a finished product! I had a lot of problems and gave up with it.
I wasn’t ready for that beard coloring 😮
Finally I was able to run it; updating all dependencies to the latest versions is what helped...
On Windows, 4090 + 13900K, Pinokio works fine, but ComfyUI shows a diffusers error looking for it on the C: drive, while all my AI stuff is on the E: drive. Any suggestions? Tried many times to solve it but no success.
Olivio, can you please make a tutorial on how to train a Lora on this? They provide a guide but I am sure you can explain it way better :)
I would love this
Great platform, Pinokio! Are there ways to start it in a low-VRAM mode, similar to automatic1111? I tried installing Instant-ID and it gives me CUDA out of memory. Furthermore, image generation with Stable Cascade is veeeeeery slow
How slow we talkin?
Can I use only method one? I mean, just installing Pinokio? And may I use images generated by Cascade for commercial purposes?
Olivio, what did you use to make your end screen animation???
New node seems broken on Linux, at least for now,
ComfyUI-DiffusersStableCascade module for custom nodes: Failed to import diffusers.pipelines.stable_cascade.pipeline_stable_cascade because of the following error (look up to see its traceback):
cannot import name 'List' from 'typing_extensions'
I already have typing_extensions, so it's not clear what the fix is.
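One thing worth checking, as a guess based on the import error above rather than an official fix: older releases of typing_extensions don't re-export List, and upgrading the package inside the same Python environment ComfyUI uses (pip install -U typing_extensions) commonly resolves this class of error. A small sketch to see what your installed copy actually exports:

```python
# Sketch: check whether the installed typing_extensions re-exports List.
# If it doesn't, upgrading it (pip install -U typing_extensions) inside the
# same Python environment ComfyUI uses is a common fix - an assumption based
# on the import error above, not an official recommendation.
try:
    import typing_extensions
    has_list = hasattr(typing_extensions, "List")
    print("typing_extensions exports List:", has_list)
except ImportError:
    has_list = None
    print("typing_extensions is not installed here")
```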
Man, I'm running an M1 Max Mac chipset here; I want to use this but can't seem to get it to work at the moment
Has it worked with controlnets or other node yet?
is there a way to download the files and models for a1111 etc?
Running Stable Cascade from Pinokio offers the option 'WiFi - Local Sharing', but for this to work it obviously needs to allow public sharing first. In the terminal it says: To create a public link, set `share=True` in `launch()`. But I couldn't find out yet WHERE the heck this needs to be changed. Any hint?
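For what it's worth, that flag goes inside the launch() call of whatever Gradio script the app runs (I don't know Pinokio's exact file layout, so where that script lives is an assumption). As a sketch, this is the textual change you're looking for:

```python
# Hedged sketch: the edit is to the app's Python script - find the existing
# .launch() call and add share=True to it. This helper only shows the rewrite
# on a string; the real change is made by editing the script in a text editor.
import re

def add_share_flag(source: str) -> str:
    """Turn a bare .launch() call into .launch(share=True)."""
    return re.sub(r"\.launch\(\)", ".launch(share=True)", source)

print(add_share_flag("demo.launch()"))  # demo.launch(share=True)
```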
Can I use it in Automatic1111? Thx for your videos
Just a thought here, but does this software work offline, or is it another AI website? I would rather it worked offline.
Not working over here. Installed everything as in your instructions, but I receive an IMPORT FAILED for the DiffusersStableCascade module. Also tried updating everything, and uninstalling the node from the manager and reinstalling it. Any suggestions?
Here is a fix for import Failed.
go to C:\Users\YOURUSERNAME\.cache\huggingface\hub\models--stabilityai--stable-cascade\snapshots\f2a84281d6f8db3c757195dd0c9a38dbdea90bb4\decoder\
open config.json
On line 45 change "c_in": 4, to "in_channels": 4,
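If you'd rather not hand-edit the JSON, the same rename can be done programmatically. The sketch below works on a plain dict; point json.load/json.dump at the decoder config.json path above to apply it to the real file.

```python
# Sketch of the fix above: rename the decoder config's "c_in" key to
# "in_channels". Works on a dict; load/dump the real config.json to apply it.
def rename_key(cfg: dict) -> dict:
    if "c_in" in cfg:
        cfg["in_channels"] = cfg.pop("c_in")
    return cfg

print(rename_key({"c_in": 4}))  # {'in_channels': 4}
```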
How much VRAM do I need to use this locally? I just have an RTX 4060 with 8GB VRAM; it is slow and makes my PC freeze. I use the latest ComfyUI, which has Cascade.
Just wait, Stable Cascade will be implemented in Comfy itself soon.
"Very fast" ? Depending of what is "fast" for you because for me, to create 1 image it takes more 1 hour with Stable Cascade ! Something I never experienced before with Stable Diffusion (in this last one, it only takes few seconds)
Unfortunately the ComfyUI version doesn't work on Mac:
Error occurred when executing DiffusersStableCascade:
BFloat16 is not supported on MPS
The ComfyUI version doesn't work well on AMD. The demo version (installed with Pinokio) does work though.
I can only see an error message in the box on the site, and the error 'previewer is not defined' keeps appearing in Pinokio. Is there any solution?
Same error here on 3 different systems.
Greetings, what about graphics cards? Does it work using an AMD GPU? Thanks
Nope. It needs a solution that supports DirectML. Stable Cascade does download the DirectML dependencies, but crashes and spits errors when rendering.
i tried installing the custom node... but it failed to import :(
Hi, can you help me solve this error: "module diffusers has no attribute StableCascadeUnet"? I installed Cascade in Stable Diffusion but got this error after installing all the models, on Windows 11.
Does Pinokio work with an Arc A770?
Using a 16GB 4060 Ti, with default settings, it takes 14.4 seconds to generate an image, 37.8s at max resolution. The model is using all the VRAM, which would explain why it is so slow on GPUs with less VRAM.
Love the videos, Olivio! Can we get more details about Forge UI and the extensions?
yes, i will create a video about some of the included extensions soon
@@OlivioSarikas you're the best !!!
How do you install ComfyUI in Pinokio? Also, how do you run Cascade through ComfyUI in Pinokio? Would love to see a vid on that!
Is there a way to stop it from downloading, installing, and drawing from my main C drive? I run ComfyUI from an alternate SSD, but the node puts everything on my other drive and makes it really slow.
Use symbolic links
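In Python terms the symlink suggestion looks like this (paths are placeholders; on Windows, creating directory symlinks needs admin rights or Developer Mode, or you can use the equivalent `mklink /D` command instead):

```python
# Hedged sketch of the symlink idea: keep the heavy model folder on the fast
# drive and link the expected location to it. Paths are placeholders.
import os

def link_models(real_dir: str, expected_dir: str) -> None:
    """Create expected_dir as a symlink pointing at real_dir."""
    os.makedirs(real_dir, exist_ok=True)
    if not os.path.lexists(expected_dir):
        os.symlink(real_dir, expected_dir, target_is_directory=True)
```

For example, link_models(r"E:\models", r"C:\ComfyUI\models") (hypothetical paths) would let downloads land on E: while the node still finds them at the old location.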
Great video. One question: how did you create the animation at the end of the video? I'm very interested in creating a fictional character like that. Thanks
with a software called Adobe Animate. this is actually done with the free demo
I got this error:
Error occurred when executing DiffusersStableCascade:
BFloat16 is not supported on MPS
how do you use face swap?
So this would be a more basic version of Automatic1111?
PC minimum requirements?
Pinokio doesn't work for me; it only detects my CPU.
ComfyUI detects my GPU, downloaded the required model files, and worked on the first attempt.
Hello, I follow your channel every day, and I can't call myself an expert; I'm already on SDXL with ComfyUI. Could you please tell me if Stable Cascade is a new model, or a new Stable Diffusion to focus on? Thank you
it's a new model and a new method
I have an AMD GPU, so what do I do at the point where you choose run_nvidia_gpu.bat?
Can you run Cascade in Forge UI?
But does it support LoRAs or similar stuff? Is it uncensored? Does it have any ControlNet or similar? Because if it doesn't, it will be another curious but not popular tool.
30GB Pinokio Cascade installation... extremely slow, 5 min for a simple prompt (3060/12GB)... heavily censored! Just uninstall.
Is it normal for a RTX 3080 to be taking several minutes to create an image?
Copy path? I don't have this option in Windows 10...
Does anyone know where the output folder is in Pinokio?
I am getting CUDA out of memory on my NVIDIA 1060 6GB VRAM card for Stable Cascade
Walter White --"Cowboy House?" 🤣🤣🤣
Can I run this with 2 gigabytes of video memory??
Well, I got it working at least but I grew a beard in the time it took to generate an image
I downloaded it yesterday and tried it but it was taking too long to be worth it. So i uninstalled it. Looking forward to fine tuned models.
I'm getting this error:
Error: User did not grant permission. at C:\Users\home\AppData\Local\Programs\Pinokio\resources\app.asar\node_modules\sudo-prompt-programfiles-x86\index.js:553:29 at ChildProcess.exithandler (node:child_process:430:5) at ChildProcess.emit (node:events:513:28) at maybeClose (node:internal/child_process:1091:16) at ChildProcess._handle.onexit (node:internal/child_process:302:5)
It failed because my app.asar is a file (51MB) and not a directory. Does anyone else have this issue, and how do I fix it?
SageMaker Studio Lab Tutorial?
I just got Forge running.. what's the benefit of this, besides text? thanks! : )
Forge uses SD 1.5 and SDXL models. This is the newest model from Stability AI, Cascade. I'm sure Forge/A1111 will be updated to run Cascade soon enough.
Doesn't work. It opens, but when I try to run the prompt, all I see is an error text in the center where the picture should be; there's an AttributeError. Same thing with the Pinokio version.
I'm getting a CUDA error with this with my 6GB VRAM 😥
Does it run on a low or mid-end PC like SD 1.5 does?
no absolutely not
everything seems to have worked, except it doesnt display any image when i queue the prompt
After installing I get the error message: "Runtime Error. Failed to import transformers.models.clip" - No module named 'transformers.models'. I can't start the application. What went wrong?
Installed Pinokio, it runs but seems VERY slow here (even though I have a very fast pc). Any idea why?
I would guess that your VRAM is the problem. How much do you have?
EDIT: I've just seen that many people said it takes up to 13-16GB VRAM, so if you are below that, it might really be the cause.
@@darki0022 I have a RTX3080
Does someone have tips on fixing this? ERROR: Invalid requirement: 'Files\\SD\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-DiffusersStableCascade\\requirements.txt'
Hint: It looks like a path. File 'Files\SD\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-DiffusersStableCascade\requirements.txt' does not exist.
Fooocus is better overall I think. but idk it depends on the user.
No luck... I think I'm going to uninstall and start over from scratch... but I have seen others say it doesn't work through Pinokio as well
Any idea why it takes so long to generate? Stable diffusion didn't take nearly this long...
same here
Installed it with Pinokio and have a 3080; any idea why it takes like 20 minutes to generate?
Are you sure you don't have anything open in the background? Maybe comfyui still running?
@@OlivioSarikas yeah, 100%. I know that from InstantID, which takes a lot to run, so I tend to make sure nothing is running when generating
@@mendthedivide Are you sure it's running on GPU and not on CPU render?
@@Steamrick interesting, not sure how to check or what the default would be when installing with Pinokio, idk
@@mendthedivide use Task Manager to see whether the GPU or the CPU is under a big load?...
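Besides Task Manager, you can ask PyTorch directly which device it sees (a sketch; it assumes the PyTorch that ships with your ComfyUI/Pinokio environment is the one on the path):

```python
# Sketch: report whether a CUDA GPU is visible to PyTorch. If this prints
# "cpu", the slow generations are almost certainly running on the CPU.
def pick_device() -> str:
    try:
        import torch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "unknown (PyTorch not installed)"

print("render device:", pick_device())
```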
How do I add a LoRA when running Cascade, please?
I got an error : Error occurred when executing DiffusersStableCascade:
module 'comfy.model_management' has no attribute 'unload_all_models'
update your comfyui
Wonderful answer! Thank you for the help!
@ I ran into this issue today too:)
After completing the install process and trying to start and use the Web UI, I get the following error: NameError: name 'previewer' is not defined. Did you mean: 'Previewer?'
Can anyone help? This is after installing via Pinokio
Yeah, Pinokio failed to install Cascade on mine. It complained about an old Python version already on my machine for another program. Eh, I will wait for A1111 support and finetunes to roll in.
Same here
Hmm, when I click the download button for the Windows version (after finding it with Discover), it thinks for a second, then Pinokio just crashes completely... twice so far.
any word on Forge being able to run it?
Not that I heard of. But it was only released yesterday
it just got semi-added to comfyUI :)
I only have 12GB of VRAM. Will it work with it?
Yes.
Everything installed, but it refused to work, because I have ComfyUI on a separate disk and this node uses the system paths. So sad.
It works like a charm as a Forge extension (I have Forge installed on an external SSD).
Where is info for a111?
So how is the NSFW potential?
Huge... 🙄
How do I install this in a Colab notebook?
probably we need community performance hacks to run this as fast as it should.
There are models for ComfyUI
I can't do it, I get the error: 0.0 seconds (IMPORT FAILED) 😟
As of now, both ways do not work. Pinokio says app not found, while ComfyUI says it's deprecated.
The opposite of faster - on my PC it runs around 6-10x slower than SDXL. I'm running through the stand-alone demo app, though...
Requires min 20GB of hard disk? Is it really worth it... yet? 🤔
Nope... 3 hours, installed, nothing works... uninstall... done
Not working for directml
You should have mentioned what the difference is between this and Forge UI
It's over for dalle and Midjourney
Thanks for all your effort and nice videos! Just one question.. what is "koopi"? Do you mean to say "copy" when you say "koopi"?
it's called an accent lol, yes, copy