ComfyUI Tutorial Series: Ep05 - Stable Diffusion 3 Medium

  • Published: 6 Feb 2025
  • Welcome to episode 5 of our ComfyUI tutorial series! In this video, you'll learn how to run Stable Diffusion 3 Medium on ComfyUI, including where to find the models, which ones are better, and how they compare to the SDXL model. I'll walk you through the setup process, model downloads, and detailed comparisons to help you achieve the best results. Enjoy the tutorial!
    Download sd3_medium_incl_clips
    civitai.com/mo...
    Download sd3_medium_incl_clips_t5xxlfp8
    civitai.com/mo...
    Download sd3_medium_incl_clips_t5xxlfp16
    civitai.com/mo...
    Or Download from Hugging Face
    huggingface.co...
    Settings for KSampler
    Steps: 28
    CFG: 4.5
    Sampler: DPM++ 2M
    Scheduler: SGM Uniform
    Download the workflow from Discord
    / discord
    Look for the ai-resources channel
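
For anyone wiring this up by hand, here is a minimal sketch of how those KSampler settings look in ComfyUI's API-format workflow JSON, written as a Python dict. The node IDs and the linked node references are hypothetical placeholders for a typical text-to-image graph; only the sampler values come from the list above.

```python
# Hedged sketch: ComfyUI API-format KSampler node with this episode's
# settings. The node ID ("3") and the linked node references
# ("1", "4", "5", "6") are hypothetical placeholders.
ksampler_node = {
    "3": {
        "class_type": "KSampler",
        "inputs": {
            "model": ["1", 0],         # e.g. a CheckpointLoaderSimple node
            "positive": ["4", 0],      # positive CLIPTextEncode node
            "negative": ["5", 0],      # negative CLIPTextEncode node
            "latent_image": ["6", 0],  # EmptySD3LatentImage node
            "seed": 42,
            "steps": 28,                 # Steps: 28
            "cfg": 4.5,                  # CFG: 4.5
            "sampler_name": "dpmpp_2m",  # DPM++ 2M
            "scheduler": "sgm_uniform",  # SGM Uniform
            "denoise": 1.0,
        },
    }
}
```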

Comments • 69

  • @pixaroma
    @pixaroma  6 months ago +1

    If you have any questions you can post them in the Pixaroma Community Group facebook.com/groups/pixaromacrafts/
    or the Pixaroma Discord Server discord.gg/gggpkVgBf3

  • @christiandeleo7737
    @christiandeleo7737 3 months ago +2

    Awesome video... didn't know the difference between SDXL and SD3! Thank you also for explaining the nodes, copy, paste, and the various commands. Most things are obvious, but some really aren't! Will continue watching your vids! Thanks again! ❤

  • @OllyV
    @OllyV 2 months ago +2

    Great series of videos, much appreciated :)

  • @hasanduger3259
    @hasanduger3259 7 days ago

    What a great tutorial 🌟

  • @vnM89
    @vnM89 4 months ago

    Following this is a treat because you pick up the pace of the explanations, and the flow gets faster the more we progress into the series. Thanks again!

  • @brucenunez01
    @brucenunez01 2 months ago

    Thank you for sharing your findings on the 3 models and creating that workflow; it has expanded my mind on ComfyUI's capabilities.

    • @pixaroma
      @pixaroma  2 months ago

      You are welcome; there is a new version now for the 3.5 release :)

  • @bhargavzantye
    @bhargavzantye 28 days ago +1

    Completed another tutorial.
    Great knowledge!

  • @higherleveling
    @higherleveling 5 months ago +1

    my fav series.

  • @DezorianGuy
    @DezorianGuy 5 months ago +2

    Excellent tutorial as always!

  • @jennifertsang6572
    @jennifertsang6572 6 months ago +1

    Another great video. I am always amazed at what can be created.

  • @TheWayneFang
    @TheWayneFang 10 days ago

    Good test for SDXL and SD3, thanks for sharing.

  • @GenoG
    @GenoG 5 months ago +6

    I wasn't going to watch this one because I don't really care about SD3, however the A/B/C testing that you set up is VERY HANDY for my kind of work, so again another helpful video!! FI DOLLA!! 😀 And, I still don't care about SD3!

    • @pixaroma
      @pixaroma  5 months ago +2

      Thank you. Yeah, since Flux was launched many don't care about SD3 anymore 😁

  • @SumoBundle
    @SumoBundle 6 months ago +2

    Very useful tutorial as usual. Keep up the good work.

    • @pixaroma
      @pixaroma  6 months ago

      Thanks ☺️

  • @makadi86
    @makadi86 6 months ago +1

    Significant progress, and a start to seeing what can be achieved with ComfyUI.

  • @GenoG
    @GenoG 5 months ago

    Thanks!

    • @pixaroma
      @pixaroma  5 months ago

      Thank you so much ☺️

  • @MGomaa-hr3ct
    @MGomaa-hr3ct 5 months ago

    Amazing content ❤

  • @samussong1486
    @samussong1486 1 month ago

    Cool! Next ep!

  • @ahmedrefaat9012
    @ahmedrefaat9012 6 months ago +1

    Great content and incredible explanation skills!
    Would you please create videos on:
    1. IPAdapter, the differences between its versions, and what the configuration parameters mean in each case
    2. Face-related workflows: character creation, grouping different people into a single image, etc.

    • @pixaroma
      @pixaroma  6 months ago +1

      Thanks for the suggestions, I will see what I can do. ControlNet is planned for a future episode, and I will see what I can do with IPAdapter. I am taking it more slowly because I still need to learn it and get a basic understanding in order to be able to explain it, and some of this stuff is just complex :)

  • @anhthonguyen8675
    @anhthonguyen8675 1 month ago

    I am following your tutorials, thank you for your awesome series. Just curious what graphics card you are running.

  • @selfhosted
    @selfhosted 10 days ago

    What tool are you using to generate the voiceovers? Sounds very real.

    • @pixaroma
      @pixaroma  10 days ago

      I use the ElevenLabs website.

  • @Fayrus_Fuma
    @Fayrus_Fuma 6 months ago +1

    Thank you so much.
    It's a pity, but for my use case (character creation) neither SDXL nor SD3 is suitable yet.
    I have tried making things with these models: furniture, apartments, buildings, bathrooms (very bad), showers (many attempts), and much more.
    Alas, I will have to stick with other models for now. Too bad SD3 still has the arm problem.

    • @pixaroma
      @pixaroma  6 months ago +1

      Let's hope they fix it soon. I know it can be frustrating; so far no AI can do perfect characters.

    • @Fayrus_Fuma
      @Fayrus_Fuma 6 months ago

      @@pixaroma What about Midjourney? I've seen comments saying it makes normal hands, but I'm skeptical, because there are a lot of liars on the internet. And I'm sick of people who say: what's the problem? Just create your own LoRA and use it.

    • @pixaroma
      @pixaroma  6 months ago +1

      I don't use Midjourney anymore because it is too expensive. I already have ChatGPT, which is quite useful for text and can sometimes do OK illustrations, but it is limited. But you can try it for a month and see if it does what you need. I tested a lot of AI tools for a month 😀 just to see if they do what I need. But many have problems with hands, or animal feet, or extra tails, etc.

  • @LeadrosXrG
    @LeadrosXrG 2 months ago

    thx

    • @pixaroma
      @pixaroma  2 months ago

      You are welcome; there is a new version now, 3.5: blog.comfy.org/sd3-5-comfyui/

  • @konnstantinc
    @konnstantinc 6 months ago

    cool!

  • @luniar5190
    @luniar5190 2 months ago

    Hello, the tutorials you posted really helped me expand my knowledge in this area, thanks! I also have a question. I got an idea from this episode and tried to run 4 models at the same time. On my computer, one of these models is enough to slow my system down a lot, and when I use all 4 together, without any LoRAs, after setting the primitives, text inputs, and seeds, it crashes immediately. Could I have done something wrong, or is it just natural that it crashes?

    • @pixaroma
      @pixaroma  2 months ago +1

      You need a lot of VRAM to run multiple models. Just use one at a time, do your tests, then switch and test the other one; some models are so big that the system will struggle even with one model (see the sketch after this thread for a one-model-at-a-time approach).

    • @luniar5190
      @luniar5190 2 months ago +1

      @@pixaroma Now I see, thank you very much for the answer.
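
A minimal sketch of the one-model-at-a-time approach described above, using Hugging Face diffusers rather than ComfyUI. The model IDs are the public repo names; the SD3 repo is gated, so this assumes you have accepted the license and logged in.

```python
# Hedged sketch: run several checkpoints sequentially instead of
# loading them all at once, freeing VRAM between models.
import gc
import torch
from diffusers import DiffusionPipeline

prompt = "a photo of a red fox in the snow"

for model_id in [
    "stabilityai/stable-diffusion-3-medium-diffusers",  # gated repo
    "stabilityai/stable-diffusion-xl-base-1.0",
]:
    pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
    pipe.to("cuda")
    image = pipe(prompt, num_inference_steps=28, guidance_scale=4.5).images[0]
    image.save(f"{model_id.split('/')[-1]}.png")
    # Release the pipeline before loading the next checkpoint
    del pipe, image
    gc.collect()
    torch.cuda.empty_cache()
```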

  • @Artyom_A2S
    @Artyom_A2S 6 months ago

    Thanks for the creativity! Will you be able to redo the "Game design with Stable Diffusion Forge and Photoshop" video, but on the new version of Stable Diffusion?

    • @pixaroma
      @pixaroma  6 months ago +1

      I can do it in ComfyUI once I advance with the episodes and it has everything it needs. I still need to cover ControlNet, upscaling, and other things before I can recreate that.

    • @Artyom_A2S
      @Artyom_A2S 6 months ago

      @@pixaroma If you do this, it will be very cool

  • @arentol7
    @arentol7 5 months ago

    Not to give you a hard time, just something that occurred to me that you could have done: at 10:24, when you deleted the 2nd and 3rd workflows, you could have just deleted the 3rd, and for the 2nd deleted only the EmptySD3LatentImage and replaced it with an Empty Latent Image, then changed the checkpoint to Juggernaut. That would have been faster and easier.

    • @pixaroma
      @pixaroma  5 months ago

      Thanks. Many times I just record portions and explore, and find out later that there was an easier way :) I didn't want to redo it, since the recording and editing already take a lot of time.

  • @ahmedrefaat9012
    @ahmedrefaat9012 6 months ago

    Also, would you please increase the frequency of the videos (e.g. 3 per week 😍)?

    • @pixaroma
      @pixaroma  6 months ago +3

      I would like that, but YouTube doesn't bring me enough earnings to give up my other design projects, so one per week for now, maybe two if I get more time.

  • @cybernetic-ransomware1485
    @cybernetic-ransomware1485 1 month ago

    Can three of them share the same Empty Latent Image node?

    • @pixaroma
      @pixaroma  1 month ago +1

      Yes, they can. The empty latent image is just a blank starting image used to kick off generation; if the seed is different the result is different, and if the prompt is different the result is different, so yes, you can use the same empty latent image.

    • @cybernetic-ransomware1485
      @cybernetic-ransomware1485 29 days ago

      @@pixaroma But will this always be the same cached image, or is it generated uniquely on each call? I am wondering about the reliability of repeating an image with the same seed and model. You know, curiosity 🙂
      I'm referring here to Ep02, where you connected the Empty Latent Image output to the VAE Decoder and Preview Image.

    • @pixaroma
      @pixaroma  29 days ago

      @@cybernetic-ransomware1485 Same model + same seed + same KSampler settings + same prompt will give you the same image. As soon as you change any of those, things will start to look different; the difference can be subtle or big depending on what you change. The empty latent image is just a blank canvas the sampler fills with seed-derived noise so it can start generating something. Change the prompt, change the seed, change the model: all of those will give you different results. Put the seed on random and you get a different result each time, even with the same model, prompt, and settings.
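
A tiny PyTorch sketch of the determinism described above (illustrative only, not ComfyUI's internals): the empty latent is a blank tensor, and the sampler's starting noise is derived from the seed, so a fixed seed always reproduces the same noise.

```python
# Hedged sketch: same seed -> identical starting noise -> identical image
# (given the same model, prompt, and sampler settings).
import torch

def starting_noise(seed: int, shape=(1, 4, 64, 64)) -> torch.Tensor:
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

empty_latent = torch.zeros(1, 4, 64, 64)  # what an empty latent node outputs

a = starting_noise(42)
b = starting_noise(42)
c = starting_noise(43)
print(torch.equal(a, b))  # True: same seed reproduces the same noise
print(torch.equal(a, c))  # False: a different seed gives different noise
```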

  • @baheth3elmy16
    @baheth3elmy16 6 months ago

    👍

  • @NadjibIxe
    @NadjibIxe 6 months ago

    Should we use ComfyUI or SwarmUI? 🙏

    • @pixaroma
      @pixaroma  6 months ago

      I am using ComfyUI, but you can use whatever fits you best; it depends what you need it for and what is easier for you. I have been using A1111, then Forge, then ComfyUI, and it seems the first to get updated and the most active is ComfyUI, which is why I try to learn it step by step.

    • @NadjibIxe
      @NadjibIxe 6 months ago

      @@pixaroma I was a bit confused by these different frontends. I'm going to start with ComfyUI, since Swarm is still new and in beta. Thank you so much for all the videos you provide us with.

  • @emr550m
    @emr550m 13 days ago

    Did you share anywhere which GPU model you have and how much RAM? :)

    • @pixaroma
      @pixaroma  13 days ago

      RTX 4090 with 24GB VRAM, and 128GB system RAM

    • @emr550m
      @emr550m 13 days ago +1

      @@pixaroma Thanks, I heard it in the next video. Great series 👏

  • @kaiserinvictoria4897
    @kaiserinvictoria4897 6 months ago

    Hi man, I have a little off-topic question:
    how can I use your styles.csv file in ComfyUI?

    • @pixaroma
      @pixaroma  6 months ago +1

      I am testing a solution this week; I will make a video once I get it to work.

  • @mrrubel8841
    @mrrubel8841 1 month ago

    SD3 for food, illustration, etc.
    SDXL for animals, people, etc.

  • @MoonEight
    @MoonEight 3 months ago

    what's your hardware setup?

    • @pixaroma
      @pixaroma  3 months ago +2

      My PC:
      - CPU: Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700), box
      - GPU: GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard: GIGABYTE Z790 UD, Intel Socket LGA 1700
      - RAM: 128GB Corsair Vengeance, DIMM, DDR5 (4x32GB), CL40, 5200MHz
      - SSD: Samsung 980 PRO, 2TB, M.2
      - SSD: WD Blue, 2TB, M.2 2280
      - Case: ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler: Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU: Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - OS: Microsoft Windows 11 Pro 64-bit English USB P2, Retail
      - Tablet: Wacom Intuos Pro M

  • @Fenkreg
    @Fenkreg 5 months ago

    BTW, to improve the video you could switch Windows to dark mode; since Comfy is dark it would be easier on viewers, with fewer "flashing lights" ^^"

  • @Adreitz7
    @Adreitz7 6 months ago

    This is more surface-level than I was looking for, just an intro to Comfy and a first look at SD3, though I was interested to hear that the SD3-specific empty latent has a reason for existence.
    I was hoping to find a good SD3-only upscaling workflow. The model output tends to break when going over 1MP or using aspect ratios beyond ~1.5:1, so direct generation at high resolution or a hires-fix-like workflow is not possible. I've done some experimenting with tiled workflows, but ran into a lot of problems. The SD Ultimate Upscale custom node works with the SD3 tile ControlNet, but it doesn't seem to add much visual information and can mess up details, as well as producing inconsistency between the tiles. I tried three different custom nodes for the Mixture of Diffusers and MultiDiffusion tiling algorithms, but none of them worked with the tile ControlNet (each tile was being conditioned on the entire first-stage generation, leading to repeated ghosting all over the second-stage output), and they added hallucinations when used without the ControlNet. This is keeping me from using SD3 effectively, as I'm not content with 1MP output.
    Also, is this video AI-narrated, or did you just heavily edit your voice recording? It just feels off.

    • @pixaroma
      @pixaroma  6 months ago +3

      I am still using SDXL mostly for more complex stuff; SD3 is still new and doesn't integrate well with many extensions. I just hope the promised update will make things better. If you watch the other episodes you will see I am a designer and I try to present things in simple ways. As for the empty latent, if you understand code you can look at the code of the empty latent node and compare it with the code of the SD3 empty latent; mostly it is similar, but it has some formulas that are different, which I didn't understand much. As for the voice, I write the text and use AI to get the voice; I just struggled more this episode, since the platform I use has a bug and I had to generate 3-4 times to get a decent result. I am also looking at more solutions for upscaling, but probably I will not use SD3, rather an SDXL workflow.
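
For reference, a paraphrased sketch of the difference between the two empty-latent nodes, not the actual ComfyUI source: the 16-channel count matches SD3's published VAE, while the 0.0609 shift constant is an assumption worth checking against the current code.

```python
# Hedged sketch of the two nodes' core logic. SD 1.x/SDXL latents
# have 4 channels; SD3 latents have 16. The 0.0609 value is SD3's
# published VAE shift factor (assumed here to be what the node uses).
import torch

def empty_latent_image(width: int, height: int, batch_size: int = 1):
    return torch.zeros(batch_size, 4, height // 8, width // 8)

def empty_sd3_latent_image(width: int, height: int, batch_size: int = 1):
    return torch.ones(batch_size, 16, height // 8, width // 8) * 0.0609

print(empty_latent_image(1024, 1024).shape)      # torch.Size([1, 4, 128, 128])
print(empty_sd3_latent_image(1024, 1024).shape)  # torch.Size([1, 16, 128, 128])
```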

    • @vault382
      @vault382 2 months ago

      What tool or software do you use for the AI voice? I wish I could use my own voice as a reference for AI text-to-speech, but I'm not sure which free open-source platform is best.