How to convert Civitai models to ONNX! AMD GPUs on Windows can use tons of SD models!

  • Published: 19 Nov 2024

Comments • 119

  • @richkell1653
    @richkell1653 11 months ago +5

    Managed to optimize a different Civitai model and it works perfectly! Jumped from 2-3 it/s to 12.36 it/s!!! You SIR do ROCK!!! Keep the vids coming :)

    • @FE-Engineer
      @FE-Engineer  11 months ago

      That’s awesome! Don’t know why the first one didn’t work. Might try deleting the folders for that model and trying again.

  • @goolom
    @goolom 10 months ago +1

    Your humor and facts are 100% correct haha. I tried all those options and nope, waste of time. I'm glad I found this. I hope you make more content like this, you're good at it

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Ugh, I know. It was so annoying trying one thing after another and manually trying to fix strings of problems just to end up saying even if it can be fixed and used it’s just not worth the effort. :-/

  • @drdray0876
    @drdray0876 1 year ago

    Great informational video! I also appreciate how you caught the disk usage. Appears that the Olive method is a pain vs dual booting

    • @FE-Engineer
      @FE-Engineer  1 year ago

      Definitely. Anyone who is ok with dual booting and using it in Linux right now absolutely should.

    • @duladrop4252
      @duladrop4252 11 months ago

      @@FE-Engineer When I have my new computer I will definitely follow your Linux guide and dual boot my computer...

  • @Kierak
    @Kierak 11 months ago +1

    I'm getting this epic error: AttributeError: 'ONNXStableDiffusionModel' object has no attribute 'lowvram' when I press the optimize button.

    • @FE-Engineer
      @FE-Engineer  11 months ago

      Yes. This is both an epic error and something that seems to have broken in the code. I’m looking at what happened.

  • @matijakos2772
    @matijakos2772 4 months ago

    Where does the ONNX model exist on our PC? I would like to try importing the ONNX file in another program (Houdini). Should it work?
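Editor's note: the converted model is written somewhere under the web UI's models directory, but the exact subfolder varies between versions of the DirectML fork, so treat any path as an assumption and just search for the exported files. A stdlib-only sketch:

```python
from pathlib import Path

def find_onnx_models(root: str) -> list[Path]:
    """Recursively collect every .onnx file under the given directory."""
    root_path = Path(root)
    # A missing root simply yields no results rather than raising.
    return sorted(root_path.rglob("*.onnx")) if root_path.is_dir() else []

# Example: scan the web UI install (this path is an assumption; adjust to yours).
for model in find_onnx_models("stable-diffusion-webui-directml/models"):
    print(model)
```

Any `.onnx` file this turns up is a standard ONNX graph, so a program with an ONNX importer should at least be able to load it, though SD models are split into several graphs (text encoder, UNet, VAE) rather than one file.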

  • @rudolfaeschlimann6959
    @rudolfaeschlimann6959 10 months ago

    man I was clueless and wasting time. Thank you very much!

    • @FE-Engineer
      @FE-Engineer  10 months ago

      You are welcome. It actually took me way too long to figure out how to do it and get it to work correctly. It is not the most friendly setup. I’m glad it helped! Thank you for watching!

  • @baheth3elmy16
    @baheth3elmy16 8 months ago +2

    How do you get Olive installed?

  • @michaelbuzbee5123
    @michaelbuzbee5123 10 months ago

    Asked a question on one of your other videos that has been answered here. Kept looking and found it, but I keep getting an error 'NoneType' object has no attribute 'lowvram'. Even when I followed the how-to for getting Dreamshaper off Hugging Face. Any suggestions?

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Ugh, sorry to be this way. I have a recent video explaining how to fix problems in Automatic1111 DirectML. This lowvram problem came up for a lot of people.
      This video has it, towards the end. Just skip forward to it.
      ruclips.net/video/mKxt0kxD5C0/видео.htmlsi=oNAqQLqrvmyCm28N

  • @Vardigard
    @Vardigard 7 months ago

    I have the latest Automatic1111 fork for DirectML, and there are no ONNX or Olive tabs in the web UI.

    • @FE-Engineer
      @FE-Engineer  7 months ago

      Ah yes. They went away now. I will have to update the video and description. Sorry about that.

  • @rikaa7056
    @rikaa7056 10 months ago

    I liked and subscribed, you helped me tons. You are not like those fake channels that talk BS

    • @FE-Engineer
      @FE-Engineer  10 months ago

      I try to provide reasonably easy paths forward for people to get things up and running.

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Thank you for watching and supporting my channel. It means the world to me!

  • @jinxPad
    @jinxPad 11 months ago

    nice, simple concise tutorial, thanks again!

    • @FE-Engineer
      @FE-Engineer  11 months ago

      You are welcome! Glad it helped :)

  • @Teardropbrut
    @Teardropbrut 10 months ago

    The washed out picture is usually a symptom of not using a VAE. Some models have VAE baked in, some do not. I noticed that you don't have visible the option to choose what (separately downloaded) VAE to use. I have enabled the UI element in settings and I usually have vae-ft-mse-840000-ema-pruned.safetensors selected.

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Does the VAE work with ONNX?

    • @FE-Engineer
      @FE-Engineer  10 months ago

      My understanding was that the VAE did not work with ONNX format…

  • @Leviathan-kp8mz
    @Leviathan-kp8mz 1 year ago +1

    About the washed out look, I don't think ONNX is baking in the VAE?

    • @FE-Engineer
      @FE-Engineer  1 year ago +1

      That is absolutely possible. Unfortunately, if you are running on Windows with Olive, setting a VAE does not work or function properly. :-/ I tried that.

  • @Gamer4Eire
    @Gamer4Eire 8 months ago

    ONNX doesn't appear as a tab in SD and it is not a runtime argument for SD either, what am I missing?

    • @FE-Engineer
      @FE-Engineer  8 months ago

      Nothing. The code changed. It’s weird now. Stay tuned. I have a new video coming out about this that will be better overall.

  • @ElPinoles17
    @ElPinoles17 10 months ago

    I didn't understand well, should I use the "Modelname [Optimized]"? Or keep using the one that I downloaded from Civitai? I'm asking because in the video you still use the "Modelname" instead of the one with "[Optimized]"

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Once optimized they will both work the same way. It will reference the one that is optimized anyway.

  • @_JustCallMeRex_
    @_JustCallMeRex_ 10 months ago

    Hello. I would like to ask something about the optimization process.
    I tried following the instructions in the tutorial video and, for some reason, this error keeps popping up.
    "AttributeError: 'NoneType' object has no attribute 'lowvram'"
    What does this mean? And how do I fix it?

    • @FE-Engineer
      @FE-Engineer  10 months ago

      I made a video recently (12/2023) about errors coming up for folks with Automatic1111 DirectML. One of them is that lowvram error during optimization.
      In the video I show how to get around that error. It’s a hacky fix but it does work.
      ruclips.net/video/mKxt0kxD5C0/видео.htmlsi=Z432ctBls2kEFSOS

  • @markdenooyer
    @markdenooyer 11 months ago

    I have been having an issue where ONNX says that it only supports 77 or so tokens and that it is truncating the rest of the prompt. I used to use huge long prompts when I was using an NVIDIA card, but now that I have this RX 7900 GRE I am limited to a short prompt? I must be missing something.

    • @FE-Engineer
      @FE-Engineer  11 months ago

      So I saw that as well last time I was fiddling around with my install on Windows. I think this is something unique to running DirectML and ONNX. When I run it on Linux I don’t run into that, since I am using ROCm. While it’s definitely a hassle, if you want to use more of the features, running it on Linux right now is a significantly better option overall.
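Editor's note: the 77-token ceiling comes from the CLIP text encoder SD 1.5 uses; its context length is fixed at 77 tokens (75 usable plus start/end markers), and the ONNX/Olive export bakes that static shape in, so the chunking trick the web UI normally uses for long prompts can't apply. A toy pure-Python sketch (not the real CLIP tokenizer) of what the truncation warning means:

```python
CONTEXT_LEN = 77  # CLIP's fixed context: 75 prompt tokens + start/end markers

def truncate_prompt(tokens: list[str]) -> tuple[list[str], list[str]]:
    """Split a token list into the part the model sees and the part dropped."""
    usable = CONTEXT_LEN - 2          # reserve slots for start/end tokens
    return tokens[:usable], tokens[usable:]

# Toy example: a "prompt" of 100 single-word tokens.
tokens = [f"word{i}" for i in range(100)]
kept, dropped = truncate_prompt(tokens)
print(len(kept), len(dropped))  # 75 25
```

Everything past the 75th token is silently ignored, which is why long NVIDIA-era prompts suddenly behave differently.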

  • @andythedeane
    @andythedeane 10 months ago

    How do I add embeddings for the ONNX models? I have them in the folder but it doesn't show them in Automatic1111

    • @FE-Engineer
      @FE-Engineer  10 months ago

      You can’t use them with ONNX. At least not yet as far as I am aware.

  • @andre_tech
    @andre_tech 9 months ago

    Why can't I just use the Civitai models normally like before? After fixing that CUDA core error I can only use ONNX models on Automatic1111?

    • @FE-Engineer
      @FE-Engineer  9 months ago +1

      Remove --onnx from your start command if you don’t want to use ONNX. But there is a big performance hit.
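Editor's note: on the DirectML fork the start command is usually set in webui-user.bat. A sketch of the two configurations being discussed, assuming the `--onnx --backend directml` flags mentioned later in this thread; check your fork's own help output:

```bat
:: webui-user.bat -- with ONNX/Olive enabled (faster, but limited features)
set COMMANDLINE_ARGS=--onnx --backend directml

:: ...or without ONNX (more features work, but a big performance hit)
set COMMANDLINE_ARGS=--backend directml
```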

  • @Caglarknk
    @Caglarknk 9 months ago

    I installed the "juggernautXL_v8Rundiffusion" and "dreamshaperXL_turboDpmppSDE" models and optimized with Olive - Optimized checkpoint.
    I always see "AssertionError:". How can I load models? And I can't see the DPM++ 2M SDE sampling method. I can see 6 models.
    Help me please. Can't we use these models without optimizing them?

    • @FE-Engineer
      @FE-Engineer  9 months ago

      Can’t use SDXL.
      Can’t use those samplers on ONNX.

    • @Caglarknk
      @Caglarknk 9 months ago

      @@FE-Engineer Is there any possibility to remove ONNX from my computer with an AMD processor, to use this sampler and these models? I want to be able to operate on all models. Can you help if there is?

  • @julianrioux4134
    @julianrioux4134 10 months ago

    The speed increase I saw was crazy, 1920 by 1080 images in 30 secs, but at the moment it's way too limiting. The measly 77 tokens is awful, you can't use LoRAs and textual inversions, and even merging has a negligible effect. And like most of you, I've found optimizing to be a crap shoot. Insights that may help some people: as FE stated, it uses a crap ton of memory, I've seen 19 GB of disk space and 8 GB pagefiles. If you try to put in a VAE, it will fail; leave VAE Source Subfolder as "vae". Changing image size also fails. Clearing out the ONNX Model tab also seems to help, as I've seen that referenced during a fail as well. Lastly, thank you kindly FE-Engineer for your tutorials. I'll be following your guide to Linux with ROCm when I get around to buying another HD, but I suspect I'll be sorely underwhelmed with rendering speeds after experiencing Olive. Like you I have a 7900 XTX and think perhaps I should have spent another grand and given in to Nvidia's price gouging.

    • @FE-Engineer
      @FE-Engineer  10 months ago

      My speeds with ROCm on Linux are roughly 17-19 it/s on a normal 512x512 with SD 1.5 models.
      Obviously size changes and SDXL change things. But overall, even compared to Olive, I found the slight performance hit to be 100% worth it to be able to just use everything the way it was initially built and intended to be used.
      I think you will be pleasantly surprised.
      And once we get ROCm 6 with PyTorch for ROCm 6, I think it will be even faster.
      ROCm 6 is out now. PyTorch is still not really using ROCm 6 though, from what I have seen.

  • @memzz3670
    @memzz3670 11 months ago +1

    I keep getting an AssertionError when I try to optimize, any ideas?

    • @FE-Engineer
      @FE-Engineer  11 months ago

      Someone else I saw had a similar-sounding issue. They found out it was eating all of their RAM during optimization. Might try using the lowram flag?

  • @krisshietala2119
    @krisshietala2119 1 year ago +1

    You talk about optimized model results but you chose the unoptimized model to generate from the checkpoint list.

    • @FE-Engineer
      @FE-Engineer  1 year ago +1

      Neat story: once you optimize it, it does not matter. Even though it shows unoptimized and optimized, both entries point to the same files.

  • @ashureg1354
    @ashureg1354 11 months ago

    Great video! Question: can we convert LoRAs and VAEs to ONNX too? I can't seem to figure it out.
    Gonna switch to Linux on my main gaming PC soon. But I have to back everything up and reinstall, so that's gonna happen after Christmas is done.
    Thanks :)

    • @FE-Engineer
      @FE-Engineer  11 months ago

      So, can it be done? Yes.
      I went down a huge rabbit hole trying to convert things to ONNX.
      My general view is: don’t bother.
      Using ROCm on Linux, even if you have to set that all up from scratch, is significantly easier than trying to convert stuff to ONNX, and then you don’t need to convert at all.
      If you do heavy AI and specifically want to get deep into ONNX explicitly, then sure. For any casual user, don’t bother.
      For reference, I spent pretty much every moment not at work for 2 days trying to find a straightforward and easy way to convert to ONNX. Everything I found was busted; the only one that worked was the way I did it in the video. :-/

    • @FE-Engineer
      @FE-Engineer  11 months ago

      Thank you so much for the kind words :)

  • @Nubinator
    @Nubinator 9 months ago

    I keep getting "FileNotFoundError: [Errno 2] No such file or directory: 'footprints\\unet_gpu-dml_footprints.json'"

    • @FE-Engineer
      @FE-Engineer  9 months ago

      If it is an SDXL model, those will not work.

    • @Nubinator
      @Nubinator 9 months ago

      @@FE-Engineer It was a Stable Diffusion model, which I've used before without ONNX

    • @FE-Engineer
      @FE-Engineer  9 months ago

      I’m having trouble using Automatic on Windows without ONNX. So I’m not sure, but something might be going on.

  • @ragnarlothbrok367
    @ragnarlothbrok367 11 months ago +1

    How do I use an upscaler with that? No matter what I select it doesn't upscale at all

    • @FE-Engineer
      @FE-Engineer  11 months ago

      You can use the upscaler from the extras tab.

    • @FE-Engineer
      @FE-Engineer  11 months ago

      If you want inpainting and the AI upscalers you need to use ROCm on Linux to get all the fancy stuff

  • @void-qy4ov
    @void-qy4ov 11 months ago

    Great tut :) but with all the things needed for converting models... I switched to Linux to run ROCm

    • @FE-Engineer
      @FE-Engineer  11 months ago

      Yeah, I generally use my Linux one as well. I mostly made it because several people asked how to convert Civitai models specifically.

    • @FE-Engineer
      @FE-Engineer  11 months ago

      And thank you for the kind words.

  • @vexillen1877
    @vexillen1877 1 year ago

    I'm on a 6700 XT. It seems to work for now, but do you think I should use any running arguments? I was using a lot of arguments while I was using the default version.

    • @FE-Engineer
      @FE-Engineer  1 year ago

      Not if you are using DirectML and ONNX. When ROCm is available on Windows and AMD GPU users can get the full feature set, then those arguments will come back.

  • @geraltofvengerberg3049
    @geraltofvengerberg3049 10 months ago +1

    ADetailer isn't working, any way to get it working?

    • @FE-Engineer
      @FE-Engineer  10 months ago

      ONNX has a lot of limitations currently. I have not used ADetailer personally, so unfortunately I don’t know what the problem is. My guess is it probably has to do with either DirectML or the ONNX format more than anything else.

    • @geraltofvengerberg3049
      @geraltofvengerberg3049 10 months ago

      Thanks for the answer, but the render preview isn't working either, just like in your video

    • @geraltofvengerberg3049
      @geraltofvengerberg3049 10 months ago

      OK, I'm pretty much pissed. SD was working fine even without ONNX, now I installed W11 and it's not anymore. Gotta install Linux and dual boot to it

  • @NeedaSolutionTV
    @NeedaSolutionTV 10 months ago

    Man, I love your channel. Do you know photorealistic SD models which work with ONNX? I couldn't find any out there. SD 1.5 always gives me pics with bad faces; particularly the eyes look horrible, but the rest of the character looks amazing. Only the eyes are kinda cringe. Would be glad if you could suggest some photorealistic models

    • @FE-Engineer
      @FE-Engineer  10 months ago +1

      Dreamshaper is probably one of the best in my opinion. It is consistently good and pretty realistic overall.
      If you go ROCm on Linux you can do SDXL, which opens up a lot of possibilities.

    • @NeedaSolutionTV
      @NeedaSolutionTV 10 months ago

      Do you have a guide or video for setting up ROCm, what it is or means? Is it possible to set it up on Windows? @@FE-Engineer

    • @FE-Engineer
      @FE-Engineer  10 months ago

      ROCm is not available on Windows yet to let you run AI.
      ROCm is basically AMD's version of Nvidia's CUDA.
      For right now it really only runs AI on Linux. It should be coming to Windows "soon", but we have also been waiting for quite a while. I check the GitHub progress, and it does look really close, but no idea when it will be up and running.
      So if you wanted to really run ROCm on Linux, I have a video about making a dual boot PC. Then a video of installing ROCm on Linux.
      I also will likely have a new video showing installing the newest version of ROCm up soon.

    • @FE-Engineer
      @FE-Engineer  10 months ago

      Also, as a random side note, I have found Euler and Euler A (Euler Ancestral) tend to produce slightly better results generally for faces and eyes. It’s still wonky sometimes. But some of the other samplers really give me weird results more than I would like.

    • @NeedaSolutionTV
      @NeedaSolutionTV 10 months ago

      Thanks, I have experienced the same, Euler is my favourite. But what I have noticed is that if I generate pics with 1.5 of only the face and upper body there's no problem with eyes, but if I start full body images the eyes and particularly the faces start looking horrible, I dunno why @@FE-Engineer

  • @uxot
    @uxot 2 months ago

    Why don't I have the ONNX and Olive tabs???

    • @FE-Engineer
      @FE-Engineer  2 months ago

      Updated code; read the video description

  • @MarkSokolov0
    @MarkSokolov0 3 months ago

    Is there any way to do this without access to Stable Diffusion?

    • @FE-Engineer
      @FE-Engineer  3 months ago

      Yes. But most of the ways I found to do it were either very complicated or did not work. I initially wanted to cut Stable Diffusion out entirely and just convert model types. It was not very straightforward, especially on AMD cards.

  • @lememz
    @lememz 11 months ago +1

    AttributeError: 'NoneType' object has no attribute 'lowvram'

    • @FE-Engineer
      @FE-Engineer  11 months ago

      Something has changed and been broken recently on this version of Automatic1111 DirectML. I’m looking into it. For now the alternatives that work are either Shark from Nod AI, or ComfyUI.

    • @TrippyRiddimKid
      @TrippyRiddimKid 10 months ago

      @@FE-Engineer Maybe a tutorial for ONNX on ComfyUI? I really wanna run ONNX turbo models, as I only have a 5600 XT and still can't get it working with 1111

  • @nomanqureshi1357
    @nomanqureshi1357 10 months ago

    thanks a lot

    • @FE-Engineer
      @FE-Engineer  10 months ago

      You are very welcome! Thanks for watching!

  • @BerndbobBrotkopf
    @BerndbobBrotkopf 1 year ago

    Thank you for your tutorial.
    Every time I try to optimize a Civitai model, I get an AssertionError. Does anyone know how to fix this? (RX 7900 XTX GPU, Win11, Automatic1111 webui)

    • @FE-Engineer
      @FE-Engineer  1 year ago +1

      I saw this with a few models that were missing some config file inside them.
      For other ones you absolutely have to make sure the other tabs don’t have any information in them. Reload the UI and try again. I found most Civitai models seemed to work without issues. Some were missing the VAE and did not function properly without it (looked washed out).
      Which model?

    • @BerndbobBrotkopf
      @BerndbobBrotkopf 1 year ago

      @@FE-Engineer It's almost all models, Stable Diffusion and Stable Diffusion XL. I got epicphotogasm to work on the third try, but only at 512px.
      Have you tried installing SD on WSL?
      I've tried installing it with ROCm, but haven't got it to work yet.

    • @BerndbobBrotkopf
      @BerndbobBrotkopf 11 months ago

      @@FE-Engineer I tried photopedia

  • @4MERSAT
    @4MERSAT 11 months ago

    It's much easier for me to use Shark. Yes, you are limited to one LoRA, but you can use different combinations of resolutions, not only 512x512. For my 6700 XT, the generation speed is the same as Olive.

    • @FE-Engineer
      @FE-Engineer  11 months ago +1

      Are you on Windows? I tried Shark on Linux with ROCm and it was not working properly and I could not get around all the errors.

    • @Stupid_Rabbit
      @Stupid_Rabbit 10 months ago

      Bit late, but it does not have to be just 512x512; by default it can be any multiple as well, such as 1024x512. This is because those are the values used when you optimize the model; if you want, you can optimize it with other values and go by multiples of those instead

    • @algroyp3r
      @algroyp3r 10 months ago

      @@FE-Engineer Same, I tried using Shark on Windows a few months ago, and it was just throwing random errors.

  • @jeff6928
    @jeff6928 9 months ago

    assert conversion_footprint and optimizer_footprint
    AssertionError
    Edit:
    Fixed
    Cleared cache and fixed it. Also my SSD where my OS is located is running very low on space. Don't know why, because my root folder is on my HDD

    • @FE-Engineer
      @FE-Engineer  9 months ago +1

      Hard to say. If they are on different drives it shouldn’t use much. But I’m not sure how it might cache and where that might end up temporarily.

    • @jeff6928
      @jeff6928 9 months ago

      Hey @@FE-Engineer! Thanks for the response! I found a way of making SD work for my 6800 XT combining your fixes, with very few steps. What I did: I extracted SD from git into my custom directory and followed your steps for fixing "RuntimeError: no CUDA GPUs are available". After that SD worked for me normally, except when trying to upscale I was getting a not-enough-VRAM error. I fixed that with these command-line args:
      --use-directml --opt-sub-quad-attention --medvram --disable-nan-check --no-half --precision full --no-half-vae --opt-split-attention-v1 --autolaunch --listen --use-cpu interrogate gfpgan bsrgan esrgan scunet codeformer
      After that everything worked smoothly and with no problems. My it/s seems to be low at only 2 to 2.38 max. However, it's still reasonably fast. I can't remember if I encountered any other errors, but I think this is all I had to do. I installed Python 3.10.6, Git and the latest AMD GPU drivers. Thanks for your help, it really helped making it finally work.

    • @FE-Engineer
      @FE-Engineer  9 months ago

      You are welcome, I am glad you got it working.
      For me, usually I don’t have to put --no-half. But for some of the SD 2.1 models I do have to use --no-half.
      Just be aware that using --no-half, for me at least, cuts my performance in half I think. So you might want to test. If you can use it properly without needing --no-half you might get better performance. And maybe you can use SD 1.5 based models at least with better performance.

    • @jeff6928
      @jeff6928 9 months ago +1

      @@FE-Engineer Oh nice, thanks for that info. I will test --no-half and see what it does. Even though my it/s is low, the time it takes for images to render is still reasonably fast. But that it/s is still pretty low for a 6800 XT in my opinion.

  • @zerohcrows
    @zerohcrows 10 months ago

    keep getting an AssertionError 😓

    • @FE-Engineer
      @FE-Engineer  10 months ago

      What does it say?

    • @zerohcrows
      @zerohcrows 10 months ago

      @@FE-Engineer Ended up uninstalling everything and just following your fix guide, and it's working now. Taking a very long time to optimize with Olive though.

    • @FE-Engineer
      @FE-Engineer  10 months ago

      The optimization process is absurdly expensive. RAM, hard drive space, and CPU absolutely go bonkers. It is REALLY hard on a computer to optimize into ONNX. :-/

    • @zerohcrows
      @zerohcrows 10 months ago

      @@FE-Engineer Do you know if I'd be able to share my ONNX folder with my friend? Or would he have to do it himself?

  • @furkanbezci3520
    @furkanbezci3520 4 months ago

    There is no Olive tab in my webui 😂

    • @FE-Engineer
      @FE-Engineer  4 months ago

      See the video description. Updated videos. The code changed

  • @kopros1679
    @kopros1679 10 months ago

    Great video, appreciate your work. I still get an error though, to which I haven't found any solution. After like 20-50 seconds I get "TypeError: StableDiffusionPipeline.__init__() got an unexpected keyword argument 'text_encoder_2'". Got any idea of what I can do?

    • @FE-Engineer
      @FE-Engineer  10 months ago +2

      This sounds like SDXL? I have not been able to get SDXL to work with ONNX.

    • @FE-Engineer
      @FE-Engineer  10 months ago +2

      You are welcome! Thanks for watching!

    • @kopros1679
      @kopros1679 10 months ago

      You are right, I chose a different model, CyberRealistic, which says it's SD I think, and after following your steps again it worked! @@FE-Engineer

  • @WiRaR
    @WiRaR 3 months ago

    Are you joking??? No friking Olive nor ONNX tab is here!!!!! HOOOOWWWW?????

  • @rikaa7056
    @rikaa7056 10 months ago

    I get KeyError: 'time_embed.0.weight'

    • @FE-Engineer
      @FE-Engineer  10 months ago

      What were you doing? I’ve never seen that error before?

    • @rikaa7056
      @rikaa7056 10 months ago

      @@FE-Engineer How much RAM do you need to convert? I looked at my task manager, I have 16 GB of DDR4 RAM, and it ate all of it and gave that error. How much RAM do you have?

  • @lurkmoar4
    @lurkmoar4 11 months ago

    Great tutorial, managed to convert a few models successfully. I'm now running into a problem: after I launch SD and try to convert a model, nothing really happens and I get the following error message. "AssertionError: No valid accelerator specified for target system. Please specify the accelerators in the target system or provide valid execution providers. Given execution providers: ['DmlExecutionProvider']. Current accelerators: [...]". I have the feeling I am not launching SD correctly. I use "webui.bat --onnx --backend directml" from an Anaconda prompt in the SD directory; if I just use the shell to launch "webui.bat --onnx --backend directml" without running Anaconda I don't even see the ONNX and Olive tabs. Any pointers or solutions would be appreciated, and thanks for your videos

    • @FE-Engineer
      @FE-Engineer  11 months ago

      This is strange. I have not heard of anyone else having this problem.
      My first suggestion would be to do a full reboot of your computer. Exit out of everything and reboot. After you reboot, run your startup script from Anaconda and see if it works then.
      Also, I have like a 2 minute video about creating a startup script so that you can just double-click on a file and it starts stable diffusion for you.
      ruclips.net/video/vKIqd5FDLn0/видео.htmlsi=BTN_lmgD8YdXuZ6Z

    • @PointlessClip
      @PointlessClip 9 months ago

      I ran into the exact same error, but unchecking "Safety Checker" directly above the convert button fixed it. Hope it works for you too :)
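Editor's note: the assertion quoted above is Olive checking that a requested execution provider matches an accelerator it actually detected; when detection comes back empty, the assert fires even though the provider name itself is valid. A toy pure-Python sketch (not Olive's actual code) of that kind of check:

```python
def validate_providers(requested: list[str], detected: list[str]) -> list[str]:
    """Keep only the requested providers that match a detected accelerator."""
    valid = [p for p in requested if p in detected]
    assert valid, (
        "No valid accelerator specified for target system. "
        f"Given execution providers: {requested}. Current accelerators: {detected}"
    )
    return valid

# Succeeds when the requested provider was detected...
print(validate_providers(["DmlExecutionProvider"], ["DmlExecutionProvider"]))
# ...but raises the AssertionError above when detection returns [].
```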