after ups and downs most of the time fixing error messages, I spent about 5h to make it work. Great vid, easy to follow
After installing Homebrew, the terminal output will include an instruction to add brew to your PATH. It's not shown in this video because he has already installed brew, but you need to do it.
Thank you
I don't understand how to do it. Care to explain? I'm typing in the commands it's telling me to run, but I get the error message "-bash: syntax error near unexpected token `)'"
@@mohammedsarmadawy362 There are two commands under "==> Next steps: - Run these two commands in your terminal to add Homebrew to your PATH:". Copy the first one and hit enter, then copy the second one and hit enter. After that, you can continue with: brew install cmake....
@@TienW626 where do the commands begin and end
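For anyone unsure where the commands begin and end: the two "Next steps" commands usually look like the lines below. Copy them from your own terminal output rather than from here, since the profile path contains your username; /opt/homebrew is the Apple Silicon location (Intel Macs use /usr/local), so that path is an assumption.

```shell
# First command: append Homebrew's environment setup to the zsh login profile.
# (~/.zprofile and /opt/homebrew are Apple Silicon defaults - an assumption here.)
(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> ~/.zprofile

# Second command: apply it to the current terminal session.
eval "$(/opt/homebrew/bin/brew shellenv)"
```

After those two, `brew` should be found and the `brew install cmake ...` step can continue.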
This video didn’t work for me. It says "error: can't generate metadata" at the end. I don't know if I did something wrong, but I followed along with the video exactly and it still didn't work.
learning IT is so fun when there is this brilliant voice explaining everything clearly and not a random indian tech support agent with the thickest accent you will ever hear
follow your leader ;)
Hi, Im getting this error RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
yes me too what can done??
me too.
Same
Very timely. We're doing an artist residency using AI generated videos. Exactly what I needed. Thank you so much!
nice can i have the info of the residency? curious
I hardly ever comment but you are a legend my friend. This saved me hours, thank you!
Outstanding tutorial, thank you. Installs and runs on MacBook Pro M1 with stable-diffusion-v1-5.
THAAAAANK You!!!
I tried to instal for 4 hours until I found your video!!
Hero!
what type of mac do you have?
@@digitalpabs 2020 Intel
I have a Mac and had an issue setting up Stable Diffusion. I finally got it installed from the terminal, but the first time I tried to generate an image I got this error:
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
Time taken: 0.74s
I even edited the launch command to disable half precision and updated the Python version, but no luck. Is there any other way?
Got the same problem
i have an error, can u advise how to fix it? NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
did u figure this out?
I have same problem, have you found a solution by any chance?
i have been struggling with installing SD. THANK YOU VERY MUCH, I DID IT
Thank you so much ! Those are very clear instructions, I was able to do it. Hopefully it will become simpler in the future, but I guess we're still early adopters
I can't install python@3.10. Command not found. Mac pro M1, Ventura.
me too
me too
It said error can’t generate metadata for me at the last step
- Run these two commands in your terminal to add Homebrew to your PATH:
(echo; echo 'eval "$(/opt/homebrew/bin/brew shellenv)"') >> /Users/vilijam/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
Hi, help? It's not working, I'm getting an error: NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
Time taken: 0.49s
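Not a definitive fix, but the NansException text above names its own workarounds. A sketch of the command-line route, with flag names taken straight from the message (webui-user.sh / ./webui.sh are the standard AUTOMATIC1111 launcher files, which this thread assumes):

```shell
# Pass the flags the error message suggests: --no-half avoids half precision;
# --disable-nan-check only hides the check rather than fixing the cause.
export COMMANDLINE_ARGS="--no-half --disable-nan-check"
# ./webui.sh   # then launch as usual from the same terminal
```

The alternative the message mentions is enabling "Upcast cross attention layer to float32" under Settings > Stable Diffusion in the web UI.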
Hi! I have this error when I push generate image: AttributeError: 'NoneType' object has no attribute 'lowvram'. Do you know how to fix it? Thanks!!
To create a public link, set `share=True` in `launch()`.
Startup time: 120.1s (import torch: 3.8s, import gradio: 3.8s, import ldm: 0.7s, other imports: 3.4s, setup codeformer: 0.2s, load scripts: 1.5s, load SD checkpoint: 105.1s, create ui: 1.1s, gradio launch: 0.3s).
Error completing request
Arguments: ('task(eprn50rcg0itid4)', 'pool', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0) {}
Traceback (most recent call last):
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
processed = process_images(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 486, in process_images
res = process_images_inner(p)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 625, in process_images_inner
uc = get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, p.steps, cached_uc)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/processing.py", line 570, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/prompt_parser.py", line 140, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 229, in forward
z = self.process_tokens(tokens, multipliers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 254, in process_tokens
z = self.encode_with_transformers(tokens)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/modules/sd_hijack_clip.py", line 302, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 811, in forward
return self.text_model(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py", line 378, in forward
hidden_states = self.layer_norm1(hidden_states)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
return F.layer_norm(
File "/Users/user/stable-diffusion-webui/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
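A workaround several commenters in this thread ended up needing for the 'Half' errors: run without half precision. A sketch, assuming the standard webui-user.sh config file from the AUTOMATIC1111 repo (generation gets slower, since everything runs in float32):

```shell
# Add this line to stable-diffusion-webui/webui-user.sh, or export it in the
# terminal before launching. --no-half disables fp16; --precision full forces fp32.
export COMMANDLINE_ARGS="--no-half --precision full"
```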
Dear TroubleChute, the terminal doesn't go ahead when I write "brew install cmake protobuf rust python@3.10 git wget". Can you help me? Thank you
I'm having the same problem.
Hi, thank you for this! Very helpful! Everything went smoothly, but when I tried to generate the image, it said ""LayerNormKernelImpl" not implemented for 'Half'" in the terminal and failed. How can I fix this?
i have the same problem as you, i don't know how to fix it
me too, please anyone..
Me too
hi there me too.
@@jeremieclimaco9946 does one of you find the issue ?
i am stuck at a stage where i had used the browser link and then it did something that led the terminal to get stuck at "Model loaded in 5.5s (calculate hash:..... "
When I put the command: "brew install cmake protobuf rust python@3.10 git wget", it says "zsh: command not found: brew". Any idea how to fix that?
cd /opt/homebrew/bin/
PATH=$PATH:/opt/homebrew/bin
cd
touch .zshrc
echo export PATH=$PATH:/opt/homebrew/bin >> .zshrc
Run the commands in that order in the terminal: you'll be adding Homebrew to your PATH, creating the missing .zshrc file, and exporting the path into that new file.
Now you should be able to use brew.
@@MayurGawkar you're my hero thanks
when i type in "cd stable-diffusion-webui/" i get no such file or directory. Any ideas?
same :(
me tooo :(
me too
Check if the folder name matches - another way of opening it would be to locate the folder in Finder and right click > Services > New Terminal at Folder 👍🏽
@@bonny-james thank you, it finally worked :)
add "export PATH=/opt/homebrew/bin:$PATH" to your shell profile if you get the error "zsh: command not found: brew" after installing brew
thanks
just what i needed thanks!
It says "error: metadata generation failed" at the end. Any idea how I can fix this? Using a Mac M1
@@quackyman796 same!
What about if you get "Error completing request"? Thoughts?
When I try to generate something, a window pops up saying that Python closed unexpectedly, and the program aborts in the terminal
gave an error at the stage of launching the web interface:
Installing torch and torchvision
ERROR: Could not find a version that satisfies the requirement torch==2.0.1 (from versions: none)
ERROR: No matching distribution found for torch==2.0.1
WARNING: You are using pip version 20.2.3; however, version 23.2.1 is available.
You should consider upgrading via the '/Users/kupyasha/stable-diffusion-webui/venv/bin/python3 -m pip install --upgrade pip' command.
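"from versions: none" combined with such an old pip usually means the virtualenv was built with an incompatible Python. One hedged recovery sketch (the install path is taken from the log above; adjust it to your own):

```shell
# Delete the broken virtualenv so the launcher rebuilds it against the
# brew-installed python3.10. Harmless if the folder doesn't exist.
rm -rf "$HOME/stable-diffusion-webui/venv"
# ./webui.sh   # re-running the launcher recreates the venv from scratch
```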
Can I use Radeon Vega eGpu with it?
Great vid but, when i want to generate an image this happens:
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
What does this mean how can i fix it?
great tutorial, the only thing is its missing controlnet
It keeps saying "Stable diffusion model failed to load" at the very last step. I did everything the same as you. What am I doing wrong??
Everything worked so far but when I try to generate the pool it says
"RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'"
can somebody tell me what the problem is?
idk why my SD won't generate :( it just says RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
Thanks a lot. But my 14" M1 Pro runs faster without the optimization part
When i tried to run webui.bat, there is an error that comes up:
ERROR: Could not find a version that satisfies the requirement torch==1.12.1 (from versions: 2.0.0)
ERROR: No matching distribution found for torch==1.12.1
fr
me too (((( HELP
did you find how to solve this problem?
I'm getting this error after installing and running a prompt: RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. I am a complete beginner and don't know how to fix this
I have the same error
@@Luxeduardo same thing
def what I was searching for. Thx so much bro
How fast is Stable Diffusion on a Mac M2? Which desktop GPU is it equivalent to in terms of speed?
Thanks for your very clear tutorial. It seems the lowvram/medvram commandline args don't change much.
I appreciate your video. I had no clue what I was doing, but your video helped me install everything. My only question is: how do I know I'm running the latest version, which is 1.0? I've been looking for how to update this on a Mac, but so far it's PC only.
thank you so much!
this is amazing
Got through, but in the web browser UI I get an error (RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'). Is there a particular problem with that? I have an M1 Pro...
not working for 3d is there a certain setup for it?
After completing the install process I don't get the local URL. The last line is "Model loaded in 64.6s (calculate hash: 32.3s, load weights from disk: 12.1s, create model: 6.5s, apply weights to model: 11.2s, apply half(): 2.3s)." Anyone with the same issue?
I have the same problem, did you solve it?
I too have the same problem :(
@@dias8837 mine showed up before these lines for some reason.
Is this still the most up to date version?
Got a message saying "command not found: brew" what should I do about that? Thanks in advance!
same here, can you fix it?
did you figure it out bud?
@@saideeprai29 Nooope lol, not super savvy with this kinda stuff so I had to set it aside
Do you have a tutorial on how to install deforum? Thanks so much for this video!
Hi! Thank you for your video, it was great! However I am encountering an issue:
RuntimeError: MPS backend out of memory (MPS allocated: 4.14 GB, other allocations: 2.33 GB, max allowed: 6.80 GB). Tried to allocate 1012.50 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
No idea where I have to change the value to 0, do you know?
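The value goes into an environment variable, not a file. A sketch using exactly the variable the error names, keeping the message's own caveat that disabling the limit may destabilize the system:

```shell
# 0.0 disables the MPS memory upper limit, as the error message suggests.
export PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0
# ./webui.sh   # launch from this same terminal so the variable is inherited
```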
Thanks mate! Such a nice tutorial. I'm gonna check all your videos now, and of course donate!
Hello, can this work on an intel mac as well?
did you discover if it works?
I tried on a 2018 Intel Mac mini (i5, 32GB). It worked, but too slowly, at around 22s/it. One option, though, is to install Windows with Boot Camp and use an eGPU.
At 3:33 you say to go back to the terminal and type "cd stable-diffusion-webui/".
When I run the command I get this back:
"cd: no such file or directory: stable-diffusion-webui/" - I've followed every step up to this point, so I'm not sure what I'm doing wrong. Could you please advise?
did you solve it?
When I tried to hit "generate" after installation, Python quits unexpectedly. Any way to solve this issue? I'm using macOS Ventura 13.2.1
Following, for anyone who can resolve this. I'm having the same issue - a runtime error, unfortunately, even though I have plenty of RAM
thank you for the video. I am using an Apple M2 Max, and Deforum in Stable Diffusion is not working - maybe you can help? I get this report: "'NoneType' object has no attribute 'sd_checkpoint_info'. Before reporting, please check your schedules/init values. Full error message is in your terminal/cli."
well i got stuck at "brew install cmake protobuf rust python@3.10 git wget". it says command not found
Solved my own problem. Copy and paste each of these, hitting enter after each step:
1) nano ~/.zshrc
2) type export PATH="/opt/homebrew/bin:$PATH", then Control+X, then Y, then Enter to save
3) source ~/.zshrc
4) brew --version (check the version)
5) brew install cmake protobuf rust python@3.10 git wget
@@PHUKU thx from thailand!!
I tried, but it didn't work..
@@PHUKU thank you sooo much brother!!!!!!!
internet speed a flex !
Received this error when entering the URL in the browser and adding a prompt, after completing everything:
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'
Can someone advise?
Same, I have the same problem :( Is yours an M1 Mac too?
Mine is an iMac
Warning: /opt/homebrew/bin is not in your PATH.
Instructions on how to configure your shell for Homebrew
can be found in the 'Next steps' section below.
hello, please help!
Thanks so much for this. I was able to follow all the way through. I've since downloaded 2 new models and dropped them into the Stable-diffusion folder. What are the steps, or is there a video, for adding new models?
Nevermind. I figured it out.
@@whiplashtv what did you do lol we're doing the same thing rn
@@bossmachine File drop from desktop to that folder - it took a while but it stuck
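For others landing on this thread: dropping the file into the checkpoints folder is the whole procedure. A sketch assuming the default install path, with model.safetensors standing in for whatever file you actually downloaded:

```shell
# Ensure the checkpoints folder exists, then copy the downloaded model in.
MODELS_DIR="$HOME/stable-diffusion-webui/models/Stable-diffusion"
mkdir -p "$MODELS_DIR"
# cp ~/Downloads/model.safetensors "$MODELS_DIR/"   # placeholder file name
ls "$MODELS_DIR"
```

After copying, the refresh button next to the checkpoint dropdown in the web UI should pick the new model up.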
When I try to download the Stable Diffusion 1.5 model from Hugging Face, the download speed does not get above 200 bits and it gives an ETA of 6 hours; unfortunately it stops downloading after about 60 MB and says there was a timeout. This is crazy, as I have super fast broadband. Looks like I won't be using this!! The Homebrew bit all went swimmingly.
i can not get past the brew install... section, i keep getting an error message. i am on an M2 MacBook Air
4:45 I'm stuck at this stage. There's an error that goes like "[notice] A new release of pip is available: 23.0.1 -> 23.1.2 [notice] To update, run: pip install --upgrade pip" and then "raise RuntimeError(message) RuntimeError: Couldn't clone Taming Transformers." How do I fix this??
me too any idea¿
@@joel33famara61 im about to use this technique hope @troublechute can help us
Compared to PlaygroundAI, on the Mac MiniStudio Max the images take around 30 seconds to generate, which means it is just shy of 2 to 3 times slower compared to the web. I don't think this is super slow.
Way faster now with the Mac optimised models
@@someghosts which models do you mean ?
Doesn't work. My terminal doesn't find the command brew
can you do one video on uninstalling as well - the proper way of doing that? thank you
This was super helpful! Thanks for sharing!
I accidentally closed the web UI URL - where can we get the URL to run it again?
You solved my problem
Thank youuu 🎉
You made my life much easier!!
super helpful, thanks for it :))
i got an error that says "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"
Me too
did you find the fix for this ?
@@zachorsinelli2142 No, I tried a lot of variations, but none of them worked
I'm working on an Intel macOS machine
did you find a fix? having the same error
Did anybody figure out this error: "RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'"
genius, thanks for the tutorial
Does everything work?
@@gerychdo yes
Thank you so F**king much for this. You've saved me so much time and headaches.
Lora list problem / models list problem.
Problem: the Lora model list is empty.
Solution: check the folder paths the program searches for files in; by default it's username/stable-diffusion-webui-master. Even if you put all the files in your own folder and run the program from there, the program may still look for files in the root directory.
My story:
I followed this instruction, including steps I didn't fully understand, and fell into a trap because of that partial understanding.
At the step where you show how to launch the webui from the terminal, I understood that the path can be different, so I used my own path, downloads/stable-diffusion-webui-master, and everything started perfectly. But there was a problem: I didn't see the Lora models that I had downloaded and added to the folder, and no instructions on the internet helped, until I found out through the browser's inspect option that the folder path the program searches for Lora files in is different from the folder where I saved them. It turned out there was another stable-diffusion folder in the root directory, and the program didn't care where I launched it from - it looked for the files there.
This helps a lot, thank you very much~!
hey guys, do you use it on a Mac? Does it make sense to use it on a Mac, or is it better to have Windows or Linux?
I installed everything and it works, but I closed the terminal. How can I launch Stable Diffusion again?
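A sketch of relaunching after the terminal is closed, assuming the install location used in the video; once it starts, the local URL (typically http://127.0.0.1:7860) is printed again:

```shell
SD_DIR="$HOME/stable-diffusion-webui"    # assumed install location
if [ -x "$SD_DIR/webui.sh" ]; then
  cd "$SD_DIR" && ./webui.sh             # relaunch the web UI
else
  echo "webui.sh not found in $SD_DIR - adjust the path to your install"
fi
```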
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
I get the same error
@@ytmundocripto I found a solution - I needed to enter two commands in the terminal
@@blackfoxai which commands?
@@blackfoxai which command?v
@@ytmundocripto which commands?
ERROR: No matching distribution found for numpy==1.26.2 (from -r requirements_versions.txt (line 16))
WARNING: You are using pip version 19.2.3, however version 24.0 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
I have trouble with the Deforum extension, especially with FFMPEG. Can u do a video about it? I think several users on Mac will have this issue
Thanks for the video. Another question, about installing additional models: can I add them to the models folder while the terminal and/or browser UI is still running, or should I quit out of it, add the models to the folder, and then restart?
when i type cd stable-diffusion-webui and hit enter, nothing happens... is this normal?
Does it take a long time to install the whole thing? Mine was stuck at "Textual inversions loading"
thanks for this unique program! now i can generate my anime characters!
I have a lot of libraries to download - can I set this whole folder up on my external hard drive?
A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
WHAT DOES THIS MEAN?? How do I do this?
i had the same problem, cannot proceed
@@avocadopictures9706 Do you have a link you can share?
Settings > Stable Diffusion > Enable option "Upcast cross attention layer to float32".
This worked for me.
@@kyni87 Thank you!! life saver
@@kyni87 where do i type that? how do i get to settings? i don't see it
Thank you for the video.
But, how can I install ControlNet?
Hi, I finished the whole tutorial, but when I copy the URL provided in the terminal, it won't open in the browser. Any suggestions?
Thank you for the video
I'm trying to download it, but it shows "Torch not compiled with CUDA enabled" 😳 Does anyone know what it means and how to fix it?
Are you using tensorflow or do you have an nvidia gpu?
Mine works, but I have to keep the terminal open. Is this supposed to happen? I'm sure I didn't do something right. It says that closing the terminal will terminate running python.
Hi! Thanks for the video! But I have a problem generating images. TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead. How do I fix it? Please help.
Please make a tutorial on how to train your own Lora on a Mac ❤
I got an error
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
pleeease help me fix it
Thank you, thank you and thank you.
I don't quite understand the tab that writes automatically - it stops and I can't get past it. I need help. Thank you
Thanks brother, I did it.
how do i uninstall all of these files? i no longer want this and i'm not sure how to delete everything
please help
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
I don't have CUDA - am I stuck?
after typing "brew install cmake protobuf rust python@3.10 git wget" I get an error message saying "command not found"
good morning,
i have problems installing after Homebrew... i have a Mac mini M1.
can you help? thanks
how would you uninstall it if you didn't want it anymore?
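Since a few people asked about uninstalling: the webui lives entirely inside its own folder, so removal is mostly deleting that folder. A sketch under the assumption of the default paths from this tutorial - double-check before running, since rm -rf is irreversible:

```shell
# Removes the web UI, its venv, and any downloaded models in one step.
rm -rf "$HOME/stable-diffusion-webui"
# Optionally also remove the brew-installed dependencies from the video:
# brew uninstall cmake protobuf rust python@3.10 wget
```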
Hello, I've tried googling for this problem but didn't find any answers. I did everything as shown in the tutorial but when I click the generate button I get this error message: "Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead."
I am using a .safetensor model and I'm trying it on a 13 inch m2 MacBook Pro.
Has anyone had similar problems, or does anyone maybe know how I might solve it?