How to Install ComfyUI in 2023 - Ideal for SDXL!

  • Published: 7 Jun 2024
  • In this ComfyUI tutorial I show how to install ComfyUI and use it to generate amazing AI-generated images with SDXL! ComfyUI is especially useful for SDXL, as poor old Automatic1111 can have a hard time with it - especially if you try to use the refiner! Also works great for Stable Diffusion 1.5!
    It's really easy to install - especially with this video as a guide. It gives a level of freedom you've never had before, so why not give it a try?
    Works best on Linux, but also works on MS Windows and even Mac (apparently)!
    == Links! ==
    stability.ai/blog/stable-diff...
    github.com/comfyanonymous/Com...
    huggingface.co/stabilityai/st...
    huggingface.co/stabilityai/st...
    comfyanonymous.github.io/Comf...
    github.com/SytanSD/Sytan-SDXL...
    7-zip.org/
    Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    == Increase your knowledge with Stable Diffusion Playlists! ==
    * General - ruclips.net/p/PLj...
    * Dreambooth - • Stable Diffusion Dream...
    * Textual Inversion - • Stable Diffusion Textu...
    #Sdxl #nocode #comfyui
  • Science

Comments • 216

  • @pedrogorilla483
    @pedrogorilla483 10 months ago +34

    It’s getting a lot of attention now after SDXL, hopefully there will be many improvements to the user experience in the coming months. It’s truly a powerful tool. Although I love A1111, it has grown into a software mess and feels like a lot of things are put together with duct tape. I think they’ll have to redo the entire backend from scratch at some point. It keeps getting harder to maintain and every new update breaks a lot of things.

    • @pedrogorilla483
      @pedrogorilla483 10 months ago +1

      By the way, thanks for the video Rodent! You’re my favorite SD content creator! Your genuine excitement about the technology really shows.

    • @NerdyRodent
      @NerdyRodent  10 months ago +8

      I can’t wait to see what happens 2 papers down the line 😉

    • @Cara.314
      @Cara.314 6 months ago

      ComfyUI is already becoming the same thing, I feel. They really need to take a step back and establish sound core nodes that mimic what the community uses most, so we can avoid needing to install hundreds of custom nodes to make it usable. I'm starting to see abandoned custom node libraries get replaced with a branch and cause strange conflicts, and I can see how having different custom node packs leads to lots of utility nodes that do the same thing.
      I would love to be able to build a simple UI from the graph: all it does is take flagged node properties and lay them out in a side panel or something, consolidating all the key values in the graph into an easy-to-work-with interface for iterating on prompts and settings.

  • @omarbromar
    @omarbromar 4 months ago +2

    Had to re-install Comfy; the video was a pretty good refresher! Thanks!

  • @daniwanicki
    @daniwanicki 10 months ago +17

    I enjoy the flexibility with ComfyUI. I can press one button and it will generate, refine, auto fix face and hands, remove background, then upscale all in one go. I can also queue hundreds of images in the background and not have to worry about my computer crashing while browsing the web. I'm impressed!

  • @Luxcium
    @Luxcium 3 months ago

    This is what I hope to see more of. I understand that you have an awesome voice and a charismatic accent, but I'm also very curious about everything there is to know about ComfyUI, if you're willing to do more videos exploring this amazing UI.

  • @Mimeniia
    @Mimeniia 10 months ago +1

    Wow, that Sytan workflow is insane. Thanks for the vid, Nerdy.

  • @agaviani
    @agaviani 8 months ago

    We love you. Thank you for teaching and sharing! Consistent style and a consistent character in many different poses is the most interesting topic.

    • @NerdyRodent
      @NerdyRodent  8 months ago +1

      Then I trust you've checked out my "Instant LoRA" videos? ;)

  • @Puckerization
    @Puckerization 10 months ago +24

    Learning ComfyUI just takes a little more perseverance than your normal attention deficit threshold. Once you pass that threshold, the basics all click into place. I like the fact that the images created are a record of the workflow and settings. Drag an image into the UI and it reproduces the node workflow with all the parameters.

    • @pedrogorilla483
      @pedrogorilla483 10 months ago +2

      Yeah, totally worth putting in the time. To my surprise it was very intuitive for me and I learned it after a couple of days playing with it. I think I'll stick with it. I hope some tools like deforum can be ported to comfy.

    • @c0nsumption
      @c0nsumption 10 months ago +2

      100. Not for nothing, I really like ComfyUI because it forces you to see what's happening under the hood, which gives you a ton more control.
      The only thing I don't like about it is extension support. The reality is that Auto1111's extensions are INSANE. Like amazing. If the community puts as much effort into ComfyUI… I genuinely believe that Auto1111 wouldn't keep up.
      Could you imagine? Using something like Deforum and having control over every frame in the nodes, forcing temporal coherence 🥲

    • @ateafan
      @ateafan 10 months ago +3

      Strangely, ComfyUI was so much more intuitive for me, and I am very much a beginner. I think you are correct that once you understand the basic flow and terms, and what a few of the basics do, it becomes so much better and you feel much more in control.

    • @c0nsumption
      @c0nsumption 10 months ago +2

      @ateafan that's why I wish Auto1111 extensions were fully migrated to ComfyUI. Like the amount of control you'd have would be so dope. Imagine easily upscaling frames or manipulating them for Deforum, animatediff, roop, etc. :O
      Like animatediff is amazing, but upscaling takes FOREVER. In ComfyUI you could easily rip the image up into its frames and upscale each image individually instead of trying to do the whole sheet at once.
      I wish I wish I wish. Gotta be honest, the community's negativity towards adapting is really annoying considering how much easier A.I. has made the creation of art.

    • @lpnp9477
      @lpnp9477 8 months ago

      @c0nsumption how do you do a for loop on images in a batch? I'm struggling
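The "images are a record of the workflow" behaviour mentioned above works because ComfyUI writes the node graph as JSON into PNG text (tEXt) chunks, which the UI reads back on drag-and-drop. A stdlib-only sketch of pulling such a chunk out of a PNG; the tiny PNG and the `workflow` payload here are hand-built for the demo, not a real render:

```python
import struct, zlib, json

def png_text_chunks(data: bytes) -> dict:
    """Walk the PNG chunk stream and collect tEXt key/value pairs."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG"
    out, pos = {}, 8
    while pos < len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, val = data[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

def chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk with its CRC."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

# Hand-built stand-in for an image ComfyUI saved (key name assumed):
graph = json.dumps({"1": {"class_type": "KSampler", "inputs": {"seed": 42}}})
png = (b"\x89PNG\r\n\x1a\n"
       + chunk(b"tEXt", b"workflow\x00" + graph.encode("latin-1"))
       + chunk(b"IEND", b""))

print(json.loads(png_text_chunks(png)["workflow"])["1"]["class_type"])  # KSampler
```

This is also why re-saving an output through an image editor can strip the workflow: editors often drop unknown text chunks.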

  • @Kusari55
    @Kusari55 7 months ago +1

    Much appreciated! I'm a complete newb and was able to follow this tutorial!

  • @JanBadertscher
    @JanBadertscher 10 months ago +1

    You helped me learn to use conda environments properly thanks to your first SD videos. Now I'd like to help you out too and recommend mamba to you! It's a one-to-one drop-in replacement for conda, so you just substitute all "conda" with "mamba" and get all the benefits: a faster native C++ implementation of conda with parallel downloading, multithreading and faster dependency solving.
    Honestly, just start using it and never look back at conda :)

    • @kishirisu1268
      @kishirisu1268 22 days ago

      After this comment I tried mamba (miniforge3): initially generation was 30 sec, with mamba 27 sec (on my potato PC). Not a really big improvement, but it works.
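The swap really is one-to-one: mamba accepts conda's command-line syntax. A command fragment as a sketch (the env name `comfyui` and the PyTorch channel spec are illustrative assumptions, not from the video):

```shell
# Same flags conda would take; only the executable name changes.
mamba create -n comfyui python=3.10
mamba activate comfyui
# e.g. installing PyTorch the usual conda way, just faster to solve:
mamba install pytorch torchvision pytorch-cuda=12.1 -c pytorch -c nvidia
```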

  • @remco805
    @remco805 10 months ago +10

    It's "modular" and that looks very promising to me, I love that the workflows are customizable

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yup, you can do a lot!

  • @brgtubedev001
    @brgtubedev001 7 months ago +1

    Brilliant introduction to image gen in ComfyUI! Honestly, I've looked at all the other comparable videos for a starting point, and this is the most comprehensive for the installation and basic-functions phase of learning. I've got the ComfyUI ControlNet tutorial open in my next tab.
    On that note, can I offer some feedback regarding the in-video link you placed at the end of the video that leads to the AudioGen AI video? My understanding is that YouTube allows linking to the next video to encourage "bingeability", so users click through and keep watching once a video ends. Would it make more sense to link to another ImageGen tutorial after this one? It was sort of disjointed for me seeing the AudioGen link, because I wouldn't mind watching another tutorial on ComfyUI ImageGen, but I don't do anything with audio, and I found this video by searching for ImageGen tutorials, so why would I continue on to an AudioGen tutorial... My apologies if this comes off as too blunt. I really like your channel and have been following for about half a year now, so I just thought I should share my thought process and search intent, since you are using the YouTube in-video link feature.

  • @ScottLahteine
    @ScottLahteine 10 months ago +5

    Among the many SDXL options available for Windows / macOS / Linux, ComfyUI is one of the most optimal. At the moment (on macOS M1 Ultra) I've got ComfyUI, InvokeAI 2.3.5 and 3.0.1, Automatic1111 v1.5.1, Apple CoreML Diffusion, DiffusionBee, and MochiDiffusion. Automatic1111 is the fastest of the bunch at 3.4 it/s for a 512x512 image, but ComfyUI is a close second with 3.0 it/s. Both of them are pretty efficient with memory usage, so they should work fine on more modest systems. We're lucky to have so many great Stable Diffusion options now. What a time to be alive!

    • @razoraz
      @razoraz 9 months ago

      Have you tried Draw Things? It's the only one as far as I know that has integrated the Apple CoreML diffusion code to speed things up (correct me if I'm wrong!). Negatives with that one: it's basically an iPad app, which makes the interface a little annoying on the Mac, and you can't point it to models you've already downloaded. On the other hand, it's simple to install and use, which is great for a lot of non-tech people.

  • @bigbo1764
    @bigbo1764 9 months ago +2

    I started by learning on ComfyUI, it was annoying at first, but the customization and resources are unmatched. The barrier to entry is also very low, although it is pretty easy to get carried away with upscaling and refining and make a workflow that takes forever to run.

    • @MikevomMars
      @MikevomMars 8 months ago

      Exactly my experience - all users that are disappointed with the poor performance of slow A1111 should try it. A1111 URGENTLY needs a complete overhaul.

  • @Maria_Nette
    @Maria_Nette 10 months ago +7

    I'm quite comfy with the node-based UI! Gotten used to it from doing a lot of work in Autodesk 3ds Max lol.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yeah, if you’re used to blender and other node based systems then you’re golden 😉

  • @notDreadful
    @notDreadful 10 months ago +15

    It does indeed work better; the speeds in ComfyUI are insane.
    But that UI is too much for me and for the average SD enjoyer.
    I'll probably just stick with 1.5 until Automatic1111 catches up.
    Unfortunately, on the other hand, I've personally seen the power of SDXL and I'm eager for things to move along quicker haha
    Amazing video Nerdy Rodent!

    • @NerdyRodent
      @NerdyRodent  10 months ago +6

      It’s not too bad if you just use the workflows made by the community. Just ignore the spaghetti and focus on prompting 😉

    • @S4f3ty_Marc
      @S4f3ty_Marc 10 months ago +1

      I've been testing Auto1111 and Comfy for the last few days; maybe it's due to having a 4090, but the difference in speed between them for me is minimal:
      Auto1111 1024x1024: 6.1 it/s
      ComfyUI 1024x1024: 6.6 it/s
      For Auto1111 I use the latest 1.5.1 with --opt-sdp-attention
      No xformers - with xformers it's about half the speed :)

    • @NerdyRodent
      @NerdyRodent  10 months ago

      @S4f3ty_Marc same as me then, only I was using the default xformers for both

    • @Definesleepalt
      @Definesleepalt 10 months ago +1

      What I can recommend is to set up the preset in 1111 and generate a very basic 1 x 1 image, then drag and drop it into ComfyUI. Then you can mess around with things.

    • @mirek190
      @mirek190 10 months ago +1

      @S4f3ty_Marc but you know A1111 only uses the base SDXL model, and you then have to move the generated output to img2img and use the refiner? That is not the proper workflow for SDXL, as the base model should hand over data that still contains noise, going straight (no picture yet) to the refiner.

  • @AlexanderBukh
    @AlexanderBukh 10 months ago

    Figured out how to use it a couple of days ago, but still, a very good video, thanks for making it.
    One thing I wanted to do but saw here first - present and save two images (or more) at once, say with or without the refiner/lora/parameters/whatever.

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Yup, Sytan did a good workflow there!

  • @Kelticfury
    @Kelticfury 10 months ago +1

    I tried it but it is really a huge pain in the arse. You showed me a new way.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      It’s easy to use, fiddly to make your own new thing that no one has ever done before 😉

  • @Puckerization
    @Puckerization 10 months ago +12

    ROOP works better in ComfyUI (make sure to separately install the restore-faces nodes). You can add many ROOP nodes in a daisy chain and assign them to different figures... OR use multiple ROOPs of the same subject's images (left, right, older, younger) and assign them to the same figure, and you'll get a mix with better details.

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 10 months ago

      I actually used mtb custom node for this. Coz Roop wasn't working for me at all.

    • @Puckerization
      @Puckerization 10 months ago

      @musicandhappinessbyjo795 That's strange it didn't work for you. Installing MTB has bricked my ComfyUI installation... twice!

    • @musicandhappinessbyjo795
      @musicandhappinessbyjo795 10 months ago

      @Puckerization Yes, it had a lot of bugs initially, but now most of them are fixed.
      But I wouldn't recommend using it. I think Roop and MTB don't go hand in hand.

    • @Puckerization
      @Puckerization 10 months ago +1

      @musicandhappinessbyjo795 Yes, I think you're right. This happened to me two days ago. I spent hours trying to fix my installation, and the errors appeared to be a Roop vs MTB conflict.

  • @xmattar
    @xmattar 10 months ago +24

    Fun fact: I run SD 1.5 on 512 MB of VRAM

    • @NerdyRodent
      @NerdyRodent  10 months ago +3

      Win! 👍

    • @synthoelectro
      @synthoelectro 10 months ago +1

      that's insane, wow, didn't know it goes that low, and I thought my 4GB VRAM was pushing it.

    • @timmygilbert4102
      @timmygilbert4102 10 months ago

      GPU ram?

    • @IllD.
      @IllD. 10 months ago

      What resolution?

    • @xmattar
      @xmattar 10 months ago

      @IllD. 512

  • @yanbaraban4850
    @yanbaraban4850 6 months ago

    @NerdyRodent
    Can you provide a workflow for the rat img?

  • @pathworker2010
    @pathworker2010 5 months ago +1

    Once you have an idea of the basic workflow, it does get easier. Plus, the pinouts are labelled with suggested nodes to connect - that makes life a lot easier. The rest of the learning curve involves getting an understanding of how the different nodes interact with each other.

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      😀 you may be interested in ComfyUI Workflow Creation Essentials - Make and Edit Your Own! - ruclips.net/video/VM9snsuoqBc/видео.html

    • @pathworker2010
      @pathworker2010 5 months ago

      @NerdyRodent Thanks for this, I've added the link to my to-do list. ;-)

  • @wholeness
    @wholeness 10 months ago +1

    What about ControlNet and all the plugins Automatic has?

    • @NerdyRodent
      @NerdyRodent  10 months ago +2

      Expect to see a load more SDXL stuff in the future, such as SDXL control nets!

    • @harnageaa
      @harnageaa 10 months ago

      no control net yet for sdxl, u gotta wait months for that.

    • @lionhearto6238
      @lionhearto6238 10 months ago

      @harnageaa dang, why months?

    • @Sylfa
      @Sylfa 10 months ago

      Comfy has control nets, but afaik control nets don't work with SDXL until it's updated, regardless of front end.

  • @hilbrandbos
    @hilbrandbos 8 months ago

    You can set Automatic to hold multiple models loaded as well, a big timesaver. But I do like the flow of Comfy; you just don't have to switch tabs for txt2img or img2img all the time.

    • @NerdyRodent
      @NerdyRodent  8 months ago +1

      At least automatic seems to be able to render more than four images without crashing now!

  • @kalisticmodiani2613
    @kalisticmodiani2613 10 months ago +3

    Somebody should do the spaghetti and hide it in a nice interface. We'd call it Comfy1111

  • @spiffingbooks2903
    @spiffingbooks2903 10 months ago

    I agree that once one gets used to it, Comfy is the better interface. Sadly my laptop only has 4GB of VRAM, so even Comfy is on the slow side. I also have a Google Colab account, but I'm a bit perplexed as to how that works. If using locally I can download the refined models from Civitai and put them in my models folder. How do I do this in Colab? Where do I put the various refined models that I might download? Is it all supposed to be kept in Google Cloud?

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yup, google drive is your cloud storage device

  • @randomlikeu
    @randomlikeu 10 months ago +1

    letsgo!

  • @phillcorpe
    @phillcorpe 10 months ago +1

    It would be great to see a Comfy workflow that replicates Auto1111. txt2img is easy, but how do you move on to img2img, ControlNet functions, inpainting, upscaling?

    • @Puckerization
      @Puckerization 10 months ago +2

      You just add the ControlNet, inpainting and upscaling nodes to your node graph and link them up. In fact you can use the Add Image Inpainting node for both txt2img and img2img. Inpainting in ComfyUI is less frustrating because you can zoom, move and resize the inpainting window/image to suit your needs... only downside is there is no undo feature... yet.

  • @stevebruno7572
    @stevebruno7572 10 months ago

    Been using A1111. Can you set Comfy to save all the images it generates during its workflow for troubleshooting?

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Yup, just like in the examples shown!

  • @gulfblue
    @gulfblue 10 months ago +1

    Loved the video. Can you clarify a small part for me? How did you download your upscalers into that folder? I git cloned the repository into my upscalers folder, and it's not a .pth, so ComfyUI can't retrieve it...

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      I used my web browser, clicked the various links and downloaded the pth files that way

  • @uk3dcom
    @uk3dcom 10 months ago +1

    I would love it if we could group and hide the nodes, exposing just the relevant sliders like in a Blender3D nodes setup. I think that would overcome many people's fear of complexity.

    • @lambgoat2421
      @lambgoat2421 10 months ago

      there is a github repo that does that. I forget what it's called though.

  • @reezlaw
    @reezlaw 10 months ago

    Will it agree with my fresh CUDA version? That was always a problem with Oobabooga, for example; that and Python versions always make me prefer running these things in a container. I love that Oobabooga maintains a docker-compose.yml - what's easier than this?
    git clone whatever
    cd whatever
    docker compose up --build
    I wish Automatic1111, Comfy etc. all did that

    • @NerdyRodent
      @NerdyRodent  10 months ago

      That’s why I use anaconda 😉

  • @mariozappini7784
    @mariozappini7784 10 months ago +2

    I have a laptop with 16 GB of RAM and a 2070 with 8 GB of VRAM, and ComfyUI has been a saviour. I have to close pretty much everything else, but I can do 1024x1024 images in about 40-90 seconds with base and refiner and do batches of up to 10 images, while with Auto1111 I could barely do 1 with just the base model.
    SDXL is great, but man, it's resource intensive. With 1.5 I could open up 10 Chrome tabs and browse the web while Auto1111 generated batches of 40+ images - it was sooo good - but I understand that SDXL is a much more intensive AI to use.

    • @NerdyRodent
      @NerdyRodent  10 months ago +2

      I’m sure someone once said that size doesn’t matter, when it seems it really does… 😉

  • @camprey
    @camprey 10 months ago

    Is there a way to efficiently uninstall Automatic1111? The only reason I don't want to install ComfyUI at the moment is because Automatic1111 has been taking a lot of storage space; there are some not-so-obvious folders where it has stored a lot of gigabytes, and I feel like I haven't even found the rest of them. (Also, does ComfyUI take as much space as Automatic1111?)

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Just delete the environment and sd directory

    • @camprey
      @camprey 10 months ago

      @NerdyRodent Thank you! I'll definitely look that up!

  • @runebinder
    @runebinder 1 month ago

    When doing this on Windows, is there a way to create a batch file to autolaunch it? PowerShell in the Windows 11 Terminal doesn't recognise the commands and I have to open the Anaconda PowerShell Prompt.

    • @NerdyRodent
      @NerdyRodent  1 month ago

      I have a desktop icon to launch it, so I imagine such a thing can be done on Microsoft windows too!
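On the batch-file question above: the reason plain PowerShell doesn't recognise `conda` is that the "Anaconda Prompt" shortcut runs an activation script first. A launcher sketch that does the same from a `.bat` file (all paths and the env name `comfyui` are hypothetical; adjust them to your own install):

```bat
@echo off
rem launch_comfyui.bat - hypothetical paths, match them to your setup.
rem Calling activate.bat initialises conda, which is what the
rem Anaconda Prompt shortcut normally does for you.
call "%USERPROFILE%\anaconda3\Scripts\activate.bat" comfyui
cd /d "%USERPROFILE%\ComfyUI"
python main.py
pause
```

Double-clicking the file (or a shortcut to it) then launches ComfyUI in one step.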

  • @titanitis
    @titanitis 10 months ago

    And that is why we have InvokeAI!!

  • @KyleDornez
    @KyleDornez 10 months ago

    Would there be any issue if I tell ComfyUI to pilfer the Automatic1111 folders for models and upscalers? I mean just loading them from those folders. Because my HDD is not made of rubber >.

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      That’s exactly what the yaml file is for 😉

    • @sirtimatbob
      @sirtimatbob 5 months ago

      @NerdyRodent You're so patient for explaining exactly what your video just went over.
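For anyone searching for that yaml: ComfyUI ships an `extra_model_paths.yaml.example` you can rename to `extra_model_paths.yaml` and point at an existing Automatic1111 install, so models are shared instead of duplicated. A sketch under assumptions - the `base_path` and exact subfolder names below are illustrative and should be matched to your own A1111 layout:

```yaml
# extra_model_paths.yaml (in the ComfyUI folder) - hypothetical paths.
a111:
    base_path: /home/me/stable-diffusion-webui/
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    upscale_models: models/ESRGAN
    embeddings: embeddings
    controlnet: models/ControlNet
```

Restart ComfyUI after editing and the A1111 checkpoints should appear in the checkpoint loader's dropdown.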

  • @theshuriken
    @theshuriken 5 months ago

    Sir, can you please make a short video on how to install and use the IPAdapter? I am new to this and lost on how to install that module, please.

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Drop me a DM on www.patreon.com/NerdyRodent and I can guide you through it!

  • @v33tay
    @v33tay 25 days ago

    I have tried the portable version, but I miss installing packages via the CLI - I really feel unCOMFY without it. So is the Python 3.10.6 version more relevant to use for all kinds of image generation? Because as I understand it, the portable one uses 3.11.*

    • @NerdyRodent
      @NerdyRodent  25 days ago

      Yes, the portable download does cause people a lot of issues!

  • @DreamingAIChannel
    @DreamingAIChannel 10 months ago

    I cannot use the same Primitive with "steps" and "start_at_step" at the same time! How did you do that? I mean, I wrote a custom node for myself because of that LOL

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yeah, they have to be the same type of thing for the primitive to connect, by default

    • @DreamingAIChannel
      @DreamingAIChannel 10 months ago

      @NerdyRodent Oh OK! Thanks for the answer! I'll keep using my custom node then 🤣

  • @andremonteiro1506
    @andremonteiro1506 10 months ago

    Is the problem in Auto1111 only with low-VRAM systems?
    I've got an RTX 4090 and wonder if it's worth learning this cumbersome app.

    • @Sylfa
      @Sylfa 10 months ago

      I believe even the XL addon for Auto1x4 runs through the base model, converts it to pixel format, then converts it back to latent image and runs it through the refiner. With ComfyUI it doesn't run the image through two conversions before refining it, letting it work like it's intended to work.
      ComfyUI has some UX issues, but if it's the spaghetti look that's bothering you then I can heartily recommend learning how to work with it. It's used in everything from Blender, Meshroom, Natron, Unity, Unreal, and so on. It's essentially just a flowchart.
      If it's that you need more understanding of how the AI works, yeah I get you. But you can do things like making composite images where you join multiple prompts together and specify *where* in the image you want the subjects to be.
      I don't know ComfyUI well enough to do it myself, yet, but you should be able to setup an image where you describe several characters in high detail and where in the image they should be. Like the basic "red ball on blue box" which SD can't do properly on its own would be relatively easy. It should also be possible to make much larger images that join together nicely, though it would likely turn into a behemoth of a node setup, but at least you *can* do it.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yes, systems with less than 32GB RAM can struggle on A1111 apparently

  • @StewardGarcia
    @StewardGarcia 10 months ago +2

    Today I downloaded ComfyUI because I can't load the SDXL base model in A1111 due to Out Of Memory (my laptop just has 4 GB of VRAM; it's an RTX 3050), but with ComfyUI I can load the base model without problems. I don't need to set --lowvram to load and play with the model - normal settings - and I can generate a 1024x1024 image at 20 samples in 56 seconds, without refiner :(

    • @Sylfa
      @Sylfa 10 months ago

      It automatically would use the lowvram setting in your case, the flags are only necessary if it doesn't pick the right option automatically for some reason. You can absolutely use the refiner setup, unfortunately it'd certainly add a lot to the generation time so you might want to find a good base image first and *then* load the settings for that into the base+refiner setup.

  • @-nufzy-
    @-nufzy- 10 months ago

    I've got a problem: ModuleNotFoundError: No module named 'safetensors'. How can I fix it? I completely followed the guide. I'm running Windows with an AMD GPU.

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Make sure you’ve run the appropriate Windows AMD install commands.

    • @-nufzy-
      @-nufzy- 10 months ago

      @NerdyRodent Solved. But now the ckpt_name is undefined even though there are ckpt files in the models/checkpoints folder.

  • @DoozyyTV
    @DoozyyTV 9 months ago

    What are the VAE files? Do you need them?

    • @NerdyRodent
      @NerdyRodent  9 months ago +1

      No, they’re built in

  • @natsuschiffer8316
    @natsuschiffer8316 10 months ago

    Using Anaconda and creating an environment, then having to activate that environment each time I need to restart Comfy, would be so hard for me. I personally don't use it with an environment.
    Is there an easier way of doing it? I would love to catch up on those missing it/s if possible.

    • @Sylfa
      @Sylfa 10 months ago

      You can create a new text file, put the commands line by line in there, then save it with a .bat extension. Then you can just run the bat file instead. If you have the bat files from the installer then just put the name of the one you'd be running at the end instead of the python command.

    • @natsuschiffer8316
      @natsuschiffer8316 10 months ago

      @Sylfa Thanks, makes sense.
      Would that gain me it/s? Or decrease the chance of future problems?

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Just the same as with anything, you could set up a desktop launcher icon. That way it is one click rather than the 2 steps of open terminal, run script. Personally, I'd set "Terminal=true" so you have something to close when you want to terminate the app.
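On Linux, that launcher idea can be sketched as a `.desktop` file. The env name and paths are hypothetical; `bash -ic` starts an interactive shell so conda's init from your `.bashrc` is loaded, and `Terminal=true` gives you a window to close, as mentioned:

```ini
# ~/.local/share/applications/comfyui.desktop - hypothetical paths
[Desktop Entry]
Type=Application
Name=ComfyUI
Exec=bash -ic "conda activate comfyui && python ~/ComfyUI/main.py"
Terminal=true
```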

  • @TapTwice
    @TapTwice 10 months ago

    Can you make an inpainting tutorial?

  • @Ray2kay6HG
    @Ray2kay6HG 9 months ago

    My queue prompt isn't showing - how do you make it visible?

    • @NerdyRodent
      @NerdyRodent  9 months ago +1

      To be honest, I wouldn’t know how to turn it off!

  • @aceathor
    @aceathor 10 months ago +1

    Is it the same thing as in InvokeAI?

    • @NerdyRodent
      @NerdyRodent  10 months ago +3

      InvokeAI can run SDXL too, but it’s a different program

  • @Xaddre
    @Xaddre 10 months ago

    Do you have a Discord channel or something that I could join, or is there a Discord channel that you frequent where you find this information? Or maybe a subreddit where I can find the kind of information you do?
    Edit: P.S. I really like that you use Anaconda for your tutorials, as it allows for much, much more flexibility and fewer Python version conflicts if you use it right!!
    P.P.S. Could you make a video about your Linux installation and setup? You do a lot of the things I like to do. I dual boot Windows and Linux (Windows for gaming and specific programs that don't support Linux or support it poorly, and Linux for most other things), and I really want to know what addons you use and why, what distro you use and why, etc. That information is harder to find than it should be. Sure, some people will say what they use, but they never explain why, so I can't really know if it would work for my use case, and a lot of that information is outdated.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Sorry, no discord or Reddit I’m afraid! Maybe one day… But yes, Windows is for gaming - especially on things like Destiny 2 or other games with “anti-cheat”. They can see Linux is superior and get scared 😉

  • @scarletblaze
    @scarletblaze 10 months ago

    I can't get it to work - like 99% fragmentation. It's OK with SD 1.5. I'm guessing not enough GPU memory. Hopefully it'll work on the next update.

    • @Sylfa
      @Sylfa 10 months ago +1

      Try it with the --cpu flag, it'd take *forever* but if it still doesn't generate properly then it's something else that is causing the issue.

  • @pragmaticcrystal
    @pragmaticcrystal 10 months ago +1

    👍

  • @dogme666
    @dogme666 10 months ago

    Automatic1111 has a refiner addon - is it not working the same way?

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      I’ve tried three refiner add-ons so far, and all of them are more leaky than a sieve 😉

    • @dogme666
      @dogme666 10 months ago

      @NerdyRodent Thanks! I'll work on both then. I have to say I'm having quite good results with one of the addons on Automatic1111; it's relatively fast and I get way, way better outcomes than any previous model (with an RTX 3060 12GB). Also thanks for your videos! You are the king of all rodents, and I've learned a lot from you!

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Whatever works best for you is the way forward! Personally I'm rather liking the nodes so far as I can do whatever I like

  • @technostartups
    @technostartups 5 months ago

    What are the system requirements to use this?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      As with anything AI, the best OS to use is Linux. On top of that, you’ll get the best experience with at least 8GB VRAM, though it is possible to use lower-end cards.

  • @subn0rma1
    @subn0rma1 10 months ago +1

    I want to switch. How straightforward is it to fully uninstall automatic1111?

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Very straightforward. For me I’d just remove the conda environment and delete the directory!

    • @subn0rma1
      @subn0rma1 10 months ago +1

      @NerdyRodent I don't know what a conda environment is or how to remove it. I, like most people, don't know much about git or Python or anything involving typing stuff into a cmd box. I just followed a video tutorial to get it all installed. I have no idea what random bits and bobs this installation process has added to my computer. Maybe you can make a video on how to properly uninstall Automatic1111? I think you'd get a lot of views from that one!

    • @Sylfa
      @Sylfa 10 months ago

      @@subn0rma1 If you *really* want to be sure to remove all the bits, then you can delete the Auto4x1 folder and then uninstall Python in Windows. But you'll need most of the same things for ComfyUI, so when you install that you'll simply reinstall most, if not all, of what you already had installed.
      Unless you really, really want to, I'd simply remove the Auto4x1 folder and leave it at that. Just don't forget to move the models out of A4x1 first, since Comfy can use them as well; that saves you from downloading them again.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      @@subn0rma1 it's only two commands: conda env remove --name sd2 ; rm -rf ~/github/sd2
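      Spelled out as a sketch of those same two steps (the env name sd2 and the path ~/github/sd2 are from my setup and will differ on yours):

```shell
# Hedged sketch: the two uninstall commands wrapped in a helper,
# so the env name and checkout path are explicit arguments.
uninstall_a1111() {
  local env_name="$1" repo_dir="$2"
  conda env remove --name "$env_name"  # drop the Python environment
  rm -rf -- "$repo_dir"                # delete the checkout itself
}

# Example invocation (adjust both arguments to your own install):
# uninstall_a1111 sd2 ~/github/sd2
```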

  • @rioharta526
    @rioharta526 9 months ago

    Hi Rodent, I only have 6 GB of VRAM. How do I set up ComfyUI for low VRAM?

    • @NerdyRodent
      @NerdyRodent  9 months ago

      That should work just fine 😉

  • @flonixcorn
    @flonixcorn 10 months ago +3

    Been using ComfyUI for 4 days now. The only thing that annoys me is that there isn't an easy fix for eyes; I had to try out so many node setups, and I'm just using a 1.5 model to fix the eyes.

    • @Slav4o911
      @Slav4o911 10 months ago

      Yes, that's why I think SDXL is not uncensored... so there's no actual reason for me to use it. I don't know why people think it's better; you can add as much detail to SD 1.5 as you need or want, and it also won't generate ugly images. The problem with censored models is that they either generate bad images or lack fine-tuning. For example, Midjourney generates great images, but you can't fine-tune or control them at all.

    • @c0nsumption
      @c0nsumption 10 months ago +1

      @@Slav4o911 Are you a bot? He's talking about eyes. Also, there are already uncensored models and LoRAs.
      In order to say 1.5 is good, you don't have to say SDXL is bad. They are both good. 1.5 has had a lot of community development, but from working with SDXL for just a few days: even with the few checkpoints and LoRAs available, for the most part it blows 1.5 away. In a few months SDXL WILL completely dwarf 1.5, making it irrelevant, especially when ControlNet or Stability's version of ControlNet is supported.
      It literally JUST came out. 1.5 was crap when it was first out too. It's a blueprint for all of us to take initiative and create new models, tools, and workflows.

  • @drawmaster77
    @drawmaster77 10 months ago +1

    Inpainting is still kind of off, and that's what I'm most interested in. SDXL doesn't have an inpainting model yet, and the older models are kind of meh. I tried their inpainting example, and the image didn't look good. Also, having to use an external app to create a masked image is a deal-breaker for me, since when I'm editing something I have to make dozens of modifications.

    • @NerdyRodent
      @NerdyRodent  10 months ago +3

      Have you tried right-click, Mask Editor?

    • @drawmaster77
      @drawmaster77 10 months ago

      @@NerdyRodent You're right, Nerdy, I didn't see it. Still, I'd love an SDXL inpainting model and workflow.
      Also, Automatic1111 had inverse inpaint, which was very nice when, for example, I just want to keep the face but change everything else.
      Ideally I'd love to see a fast-iteration inpaint workflow, where I could generate a result, then inpaint certain portions of it, and do it over again. Not sure if something like this would be technically possible with a node-based editor.

    • @sirtimatbob
      @sirtimatbob 5 months ago

      Have you found something like this? Have you looked again recently? @@drawmaster77

  • @audiogus2651
    @audiogus2651 10 months ago

    A1111 is great for me and SDXL. There's just an unfortunately long load time for SDXL models. Aside from that, it has all the workflow goodness I've come to appreciate. "Cozy" does not seem aptly named to me: great for nerds who want a peek under the hood, but I want to get mad fast results with tons of img2img control and parameter iteration. Cozy seems great for hardcore diffusion tech lords developing apps etc. who need to document things for engineers to implement. All good, but by no means a universal swish on that dawg.

    • @mirek190
      @mirek190 10 months ago

      A1111 has no proper workflow for SDXL at all... the base model is supposed to hand the refiner a partially denoised, still-noisy latent, and you can't do that under A1111.

    • @audiogus2651
      @audiogus2651 10 months ago

      @@mirek190 cannot do what?

  • @lpnp9477
    @lpnp9477 8 months ago

    Why can't we load LoRAs from prompts? I know that's not how it's done, but that option is much better: that way a LoRA can come from a wildcard, and you can use XY grid search-and-replace to test the LoRAs at different strengths. Not in Comfy.
    Also, minor things, but there's no indication in the UI of ETA or time taken (you have to open the console), no indication of token count, prompt weighting is broken, and there's no LoRA selection gallery or metadata, and no image gallery.
    Honestly, I like the node method, but it's missing a UX pass or eight.

    • @NerdyRodent
      @NerdyRodent  8 months ago

      I just like the way it can make more than 4 images without crashing ;)

    • @lpnp9477
      @lpnp9477 8 months ago

      Has that happened to you in A1111 or Vlad? Because it never has for me, whereas I've had the sampler stop doing anything, with no error messages, in the middle of a gen on multiple occasions in Comfy. @@NerdyRodent

  • @rosdiosman-deeoz
    @rosdiosman-deeoz 9 months ago

    Hi Rodent, how do I set up ComfyUI for mid VRAM?

    • @NerdyRodent
      @NerdyRodent  9 months ago +1

      Command-line option: --lowvram makes it work on GPUs with less than 3GB of VRAM (it's enabled automatically on GPUs with low VRAM)
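      As a minimal sketch of picking that flag (the 3GB threshold is from the ComfyUI help text; the VRAM figure below is illustrative):

```shell
# Build the ComfyUI launch command, adding --lowvram only when the
# card has under 3GB of VRAM (the threshold the flag targets).
VRAM_MB=2048   # illustrative value; check yours with nvidia-smi
CMD="python main.py"
if [ "$VRAM_MB" -lt 3072 ]; then
  CMD="$CMD --lowvram"   # trades speed for lower VRAM use
fi
echo "$CMD"   # prints: python main.py --lowvram
```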

    • @rosdiosman-deeoz
      @rosdiosman-deeoz 9 months ago

      Thanks @@NerdyRodent

  • @kishirisu1268
    @kishirisu1268 23 days ago

    It is very hard to believe, but Comfy works on an AMD GPU with 4GB of VRAM on Windows: 30 sec per image. Automatic hardly even starts...

  • @kalibtv8001
    @kalibtv8001 6 months ago

    I want to change from Automatic1111 to ComfyUI because Automatic is very slow.

    • @NerdyRodent
      @NerdyRodent  6 months ago

      ComfyUI is certainly a lot faster for me!

    • @kalibtv8001
      @kalibtv8001 6 months ago

      Have you tried ComfyBox? I need a good inpainting extension. What is the alternative? @@NerdyRodent

  • @NewPhilosopher
    @NewPhilosopher 10 months ago

    I run SDXL in Visions of Chaos.

  • @Definesleepalt
    @Definesleepalt 10 months ago +1

    SDXL on Automatic1111 struggles hard... I have a 4070 and 64GB of DDR4 RAM, and I can't generate anything but 1:1 images; if I try 16:9 it crashes nearly every time... ComfyUI, on the other hand, just works... lol

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Sure does!

    • @Elwaves2925
      @Elwaves2925 10 months ago

      I haven't had any of those issues on a 3060 with 12GB VRAM and 32GB RAM. It takes about 8-10 secs to do 1024x1024 without the refiner, which is basically useless anyway. Same goes for portrait or landscape at 896x1152.

    • @Sylfa
      @Sylfa 10 months ago

      @@Elwaves2925 Don't write off the refiner if all you've done is run it through Auto4x1; it's not being used the way it's intended there. It's like saying acrylic paint is useless because you're using it to write a novel.

    • @Elwaves2925
      @Elwaves2925 10 months ago

      @@Sylfa I've used it in ComfyUI and Invoke too. It is being used in A1111 the way it's intended if you get the refiner extension.
      The refiner has its uses for the base model, no arguments there, but I've already moved off that and onto custom models, where it isn't needed IMO. My results are better without it. 🙂

  • @drawmaster77
    @drawmaster77 10 months ago +2

    That looks awesome, though it requires an AI scientist to understand what all these nodes do lol

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Time to dig in then, right? 😉

    • @drawmaster77
      @drawmaster77 10 months ago

      @@NerdyRodent Yes! I know what I'm doing this weekend haha

  • @OliNorwell
    @OliNorwell 10 months ago

    It did make me smile that their "simple option" requires the user to go off and download and install 7-Zip. Surely, for the simplest of users, just packaging it in a zip that Windows natively supports would have made more sense. Yeah, it's less efficient, but it's literally supposed to be the simple option for newbies.

    • @NerdyRodent
      @NerdyRodent  10 months ago +4

      I think with anything new in AI, it's safe to assume that people should have at least the basic computer skills from school. If a user struggles to unzip a file, that is going to be the least of their problems!

  • @LouisGedo
    @LouisGedo 10 months ago

    👋

  • @Elwaves2925
    @Elwaves2925 10 months ago +2

    I disagree that it works better, but it's all subjective. It's certainly faster and has better memory management, which is great if you need it. However, there's a refiner extension for A1111 which makes swapping models redundant, and the refiner really isn't needed once you move off the base SDXL. I've stopped using both; the results are excellent, and it cuts down generation time a lot.
    Where A1111 wins out for me is custom filenames with the model and prompt in them. The gallery extension is also a plus. Each to their own, though.

    • @mirek190
      @mirek190 10 months ago

      A1111 has no proper workflow for SDXL at all... the base model is supposed to hand the refiner a partially denoised, still-noisy latent, and you can't do that under A1111.

  • @peterpui7219
    @peterpui7219 10 months ago +1

    Wrong! SDXL runs best with InvokeAI

  • @MisterWealth
    @MisterWealth 10 months ago +2

    I want to drive the car, not learn how to build it. lol

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Not a bad analogy; this is indeed exactly like learning to drive the car. Of course, if you needed to build it, then you'd have to learn Python and a whole variety of other things!

  • @tvanime6747
    @tvanime6747 7 months ago

    On an AMD GPU? What would that be like? On Windows, obviously

    • @NerdyRodent
      @NerdyRodent  7 months ago

      For using AMD, Linux would be a better choice

    • @tvanime6747
      @tvanime6747 7 months ago

      @@NerdyRodent Damn 🤕 so then it's not possible on Windows with AMD

  • @jimmyTimtam
    @jimmyTimtam 9 months ago

    SDXL for me works much better in A1111 vs ComfyUI.

  • @Potts2k8
    @Potts2k8 19 days ago

    You lost me at 5:20 🤔

    • @NerdyRodent
      @NerdyRodent  19 days ago +1

      That option is only if you already have existing models. Because you don’t, you can simply skip the optional changes!

  • @MuradBeybalaev
    @MuradBeybalaev 9 months ago

    You don't seem to get that what you describe as "anything but comfy" is the comfy part for people who know what they're doing. I find node-based interfaces as comfy as it gets when I'm not busy coding. Web UI controls only exist to annoy me.

  • @usama57926
    @usama57926 4 months ago

    It looks so complicated...

    • @NerdyRodent
      @NerdyRodent  4 months ago +2

      Looks can be deceiving 😉

    • @usama57926
      @usama57926 4 months ago

      @@NerdyRodent 😂

  • @bentp4891
    @bentp4891 10 months ago

    It looks hateful

    • @fredbred1092
      @fredbred1092 6 months ago

      I think I just broke my hand

  • @fr0zen1isshadowbanned99
    @fr0zen1isshadowbanned99 9 months ago

    Comfy is awful! They needlessly complicate every minute detail, and the generations look worse.
    The time I've wasted on that shitty UI already... just .py errors, incompatibilities, and hidden/excluded features.
    It could be so easy to get results comparable to A1111, but for months now they can't seem to figure out that face restore and model upscaling are essential features...

  • @tuurblaffe
    @tuurblaffe 10 months ago

    I don't want to diss Automatic1111, but ComfyUI for me feels a lot easier to work with to get the final images I need, as I can visualize what I'm working on, and that on its own does a lot. For example, upscaling in latent space or upscaling an image; also, not needing to include LoRA tags etc. in my prompts is a huge advantage for me: just load a LoRA and set the settings. Want to apply something later on in the image? Just put it later in the flow. Multi-model mega-images are nice as well. The only downside I noticed is that screwing up your environment is quite easy.

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Yup, Comfy is very configurable!