ComfyUI Tutorial Series: Ep05 - Stable Diffusion 3 Medium

  • Published: 26 Dec 2024

Comments • 52

  • @pixaroma
    @pixaroma  4 months ago

    If you have any questions, you can post them in the Pixaroma Community Group facebook.com/groups/pixaromacrafts/
    or on the Pixaroma Discord Server discord.gg/gggpkVgBf3

  • @christiandeleo7737
    @christiandeleo7737 1 month ago +1

    Awesome video... I didn't know the difference between SDXL and SD3! Thank you also for explaining the nodes, copy, paste, and the various commands. Most things are obvious, but some really aren't! I will continue watching your vids! Thanks again! ❤

  • @brucenunez01
    @brucenunez01 28 days ago

    Thank you for sharing your findings on the 3 models and creating that workflow; it has expanded my mind about ComfyUI's capabilities.

    • @pixaroma
      @pixaroma  28 days ago

      You are welcome; there is a new version of it now for SD 3.5 :)

  • @vnM89
    @vnM89 2 months ago

    Following this is a treat because you pick up the pace of the explanations, and the flow gets faster the more we progress into the series. Thanks again!

  • @OllyV
    @OllyV 1 month ago +1

    Great series of videos, much appreciated :)

  • @samussong1486
    @samussong1486 13 days ago

    Cool! Next ep!

  • @SumoBundle
    @SumoBundle 4 months ago +2

    Very useful tutorial as usual. Keep up the good work.

    • @pixaroma
      @pixaroma  4 months ago

      Thanks ☺️

  • @jennifertsang6572
    @jennifertsang6572 4 months ago +1

    Another great video. I am always amazed at what can be created.

  • @higherleveling
    @higherleveling 3 months ago +1

    my fav series.

  • @makadi86
    @makadi86 4 months ago +1

    Significant progress, and a start at seeing what can be achieved with ComfyUI.

  • @DezorianGuy
    @DezorianGuy 3 months ago +1

    Excellent tutorial as always!

  • @GenoG
    @GenoG 4 months ago +5

    I wasn't going to watch this one because I don't really care about SD3, however the A/B/C testing that you set up is VERY HANDY for my kind of work, so again another helpful video!! FI DOLLA!! 😀 And, I still don't care about SD3!

    • @pixaroma
      @pixaroma  4 months ago +1

      Thank you! Yeah, since Flux was launched, many people don't care about SD3 anymore 😁

  • @GenoG
    @GenoG 4 months ago

    Thanks!

    • @pixaroma
      @pixaroma  4 months ago

      Thank you so much ☺️

  • @MGomaa-hr3ct
    @MGomaa-hr3ct 3 months ago

    Amazing content ❤

  • @ahmedrefaat9012
    @ahmedrefaat9012 4 months ago +1

    Great content and incredible explanation skills.
    Would you please create videos on:
    1. IPAdapter, the differences between its versions, and what the configuration parameters mean per use case
    2. Face stuff: character creation, grouping different people into a single image, etc.

    • @pixaroma
      @pixaroma  4 months ago +1

      Thanks for the suggestions, I will see what I can do. ControlNet is planned for a future episode, and I will see what I can do with IPAdapter. I am taking it more slowly because I still need to learn it and get a basic understanding in order to be able to explain it, and some stuff is just complex :)

  • @Fayrus_Fuma
    @Fayrus_Fuma 4 months ago +1

    Thank you so much.
    It's a pity, but for my use case (character creation) neither SDXL nor SD3 is suitable yet.
    I have tried to make furniture, apartments, buildings, bathrooms (very bad), showers (many attempts), and much more with these models.
    Alas, I will have to keep using other models. Too bad SD3 still has an arm problem.

    • @pixaroma
      @pixaroma  4 months ago +1

      Let's hope they fix it soon. I know it can be frustrating; so far no AI can do perfect characters.

    • @Fayrus_Fuma
      @Fayrus_Fuma 4 months ago

      @pixaroma What about Midjourney? I've seen comments saying it makes normal hands, but I'm skeptical, because there are a lot of liars on the internet. I'm also sick of people who say: what's the problem? Create your own LoRA and use it.

    • @pixaroma
      @pixaroma  4 months ago +1

      I don't use Midjourney anymore because it is too expensive. I already have ChatGPT, which is quite useful for text and can sometimes do OK illustrations, but it is limited. But you can try it for a month and see if it does what you need. I tested a lot of AIs for a month 😀 just to see if they do what I need. But many have problems with hands, animal feet, extra tails, etc.

  • @LeadrosXrG
    @LeadrosXrG 29 days ago

    thx

    • @pixaroma
      @pixaroma  29 days ago

      You are welcome; there is a new version now, 3.5: blog.comfy.org/sd3-5-comfyui/

  • @Artyom_A2S
    @Artyom_A2S 4 months ago

    Thanks for the creativity. Will you be able to redo the video "Game design with Stable Diffusion Forge and Photoshop", but with the new version of Stable Diffusion?

    • @pixaroma
      @pixaroma  4 months ago +1

      I can do it in ComfyUI once I advance with the episodes and it has everything it needs. I still need to cover ControlNet, upscaling, and other things before I can recreate that.

    • @Artyom_A2S
      @Artyom_A2S 4 months ago

      @pixaroma If you do this, it will be very cool.

  • @luniar5190
    @luniar5190 20 days ago

    Hello, the tutorials you posted really helped me expand my knowledge in this area, thanks! I also have a question: I got an idea from this episode and tried to run 4 models at the same time. For my computer, even one of these models is enough to slow down my system a lot, and now when I use all 4 together, without any LoRAs, after setting the primitives, text inputs, and seeds, it crashes immediately. Could I have done something wrong, or is it just natural that it crashes?

    • @pixaroma
      @pixaroma  20 days ago +1

      You need a lot of VRAM to run multiple models. Just use one at a time, do your tests, then switch and do the other one; some models are so big that your system will struggle even with a single model (see the sketch after this thread).

    • @luniar5190
      @luniar5190 19 days ago +1

      @pixaroma Now I see, thank you very much for the answer.
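To illustrate the reply above: every checkpoint loaded into the graph must fit in GPU memory at the same time, plus headroom for sampling activations. Below is a minimal sketch of the budgeting idea, assuming PyTorch with a CUDA device; the model names and sizes are hypothetical round numbers, not measurements:

```python
import torch

def free_vram_gb() -> float:
    """Return the free VRAM on the current CUDA device, in GB."""
    free_bytes, _total_bytes = torch.cuda.mem_get_info()
    return free_bytes / 1024**3

# Hypothetical checkpoint footprints in GB. SDXL-class checkpoints are
# roughly 6-7 GB on disk and need extra headroom while sampling.
checkpoints = {
    "sd3_medium": 5.5,
    "sdxl_base": 6.9,
    "juggernaut_xl": 6.9,
    "dreamshaper_xl": 6.9,
}

needed = sum(checkpoints.values())
free = free_vram_gb()
if needed > free:
    # This is the crash scenario from the comment: four models together
    # exceed the card's VRAM, so run them one at a time instead.
    print(f"Need ~{needed:.1f} GB but only {free:.1f} GB free; run the models sequentially.")
```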

  • @konnstantinc
    @konnstantinc 4 months ago

    cool!

  • @kaiserinvictoria4897
    @kaiserinvictoria4897 4 months ago

    Hi man, I have a little off-topic question:
    how can I use your styles.csv file in ComfyUI?

    • @pixaroma
      @pixaroma  4 months ago +1

      I am testing a solution this week; I will make a video once I get it to work (a rough version of the idea is sketched after this thread).
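One common approach, sketched below under the assumption that the file follows the Automatic1111 styles.csv layout (name, prompt, negative_prompt columns, with an optional {prompt} placeholder). The function names are illustrative, not an existing ComfyUI node; in practice a custom node would feed the returned strings into the positive and negative CLIP Text Encode inputs:

```python
import csv

def load_styles(path: str) -> dict[str, tuple[str, str]]:
    """Load styles from a CSV with name,prompt,negative_prompt columns
    (the Automatic1111 styles.csv convention)."""
    styles = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            styles[row["name"]] = (row.get("prompt") or "", row.get("negative_prompt") or "")
    return styles

def apply_style(styles: dict, name: str, user_prompt: str) -> tuple[str, str]:
    """Merge the user's prompt into the chosen style. A1111-style rows may
    contain a {prompt} placeholder; otherwise the style text is appended."""
    positive, negative = styles[name]
    if "{prompt}" in positive:
        positive = positive.replace("{prompt}", user_prompt)
    elif positive:
        positive = f"{user_prompt}, {positive}"
    else:
        positive = user_prompt
    return positive, negative

# Hypothetical usage:
# styles = load_styles("styles.csv")
# pos, neg = apply_style(styles, "cinematic", "a castle at sunset")
```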

  • @NadjibIxe
    @NadjibIxe 4 months ago

    Should we use ComfyUI or SwarmUI? 🙏

    • @pixaroma
      @pixaroma  4 months ago

      I am using ComfyUI, but you can use whatever fits you best; it depends on what you need it for and what is easier for you. I have been using A1111, then Forge, then ComfyUI, and it seems the first to get updated and the most active is ComfyUI, which is why I try to learn it step by step.

    • @NadjibIxe
      @NadjibIxe 4 months ago

      @pixaroma I was a bit confused by these different frontends. I'm going to start with ComfyUI, since Swarm is still new and in beta. Thank you so much for all the videos you provide us with.

  • @MoonEight
    @MoonEight 1 month ago

    what's your hardware setup?

    • @pixaroma
      @pixaroma  1 month ago +1

      **My PC:**
      - CPU: Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700), box
      - GPU: GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard: GIGABYTE Z790 UD, Intel Socket LGA 1700
      - RAM: 128 GB Corsair Vengeance DIMM DDR5 (4x32GB), CL40, 5200MHz
      - SSD: Samsung 980 PRO, 2TB, M.2
      - SSD: WD Blue, 2TB, M.2 2280
      - Case: ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler: Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU: Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - OS: Microsoft Windows 11 Pro 64-bit English USB P2, Retail
      - Tablet: Wacom Intuos Pro M

  • @arentol7
    @arentol7 4 months ago

    Not to give you a hard time, just something that occurred to me that you could have done. At 10:24 when you deleted the 2nd and 3rd workflows, you could have just deleted the 3rd, then for the second just deleted EmptySD3LatentImage and replaced it with an Empty Latent Image. Then change your checkpoint to Juggernaut. Would have been faster and easier.

    • @pixaroma
      @pixaroma  4 months ago

      Thanks. Many times I just record portions and explore, then find out later that there was an easier way :) I didn't want to redo it, since the recording and editing already take a lot of time.

  • @ahmedrefaat9012
    @ahmedrefaat9012 4 months ago

    Also, would you please increase the frequency of the videos (e.g. 3 per week 😍)?

    • @pixaroma
      @pixaroma  4 months ago +3

      I would like that, but YouTube doesn't bring me enough earnings to be able to give up other design projects, so one per week for now, maybe two if I get more time.

  • @Fenkreg
    @Fenkreg 3 months ago

    BTW, one way to improve the videos: you can switch Windows to dark mode. Since Comfy is dark, it would be easier on the viewers, fewer "flashing lights" ^^"

  • @baheth3elmy16
    @baheth3elmy16 4 months ago

    👍

  • @Adreitz7
    @Adreitz7 4 months ago

    This is more surface-level than I was looking for; just an intro to Comfy and first look at SD3, though I was interested to hear that the SD3-specific empty latent has a reason for existence.
    I was hoping to find a good SD3-only upscaling workflow. The model output tends to break when going over 1MP or using aspect ratios over ~1.5:1, so direct generation at high resolution or a hiresfix-like workflow are not possible. I've done some experimenting with tiled workflows, but ran into a lot of problems. The SD Ultimate Upscale custom node works with the SD3 tile controlnet, but it doesn't seem to add much more visual information and can mess up details, as well as having inconsistency between the tiles. I tried three different custom nodes for Mixture of Diffusers and MultiDiffusion tiling algorithms, but none of them worked with tile controlnet (each tile was being controlled with the entire first stage generation, leading to a repeated ghosting all over the second stage output) and they added hallucinations when used without controlnet. This is keeping me from being able to effectively use SD3, as I'm not content with 1MP output.
    Also, is this video AI narrated or did you just highly edit your voice recording? It just feels off.

    • @pixaroma
      @pixaroma  4 months ago +3

      I am still using SDXL mostly for more complex stuff; SD3 is still new and doesn't integrate well with many extensions. I just hope the promised update will make things better. If you watched the other episodes you will see I am a designer, and I try to present things in simple ways. As for the empty latent: if you understand code, you can look at the code of the empty latent node and compare it with the code of the SD3 empty latent; it is mostly similar, but there are some formulas that are different, which I didn't understand much (see the sketch after this thread). As for the voice, I write the text and use AI to get the voice; I just struggled more this episode, as the platform I use has a bug and I had to generate 3-4 times to get a decent result. I am also looking at more solutions for upscaling, but I will probably not use SD3, rather an SDXL workflow.

    • @vault382
      @vault382 1 month ago

      What tool or software do you use for the AI voice? I wish I could use my voice as a reference for AI text-to-speech, but I'm not sure what free open-source platform is best.
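For anyone curious about the node difference discussed in this thread, here is a minimal sketch of what the two empty-latent nodes roughly compute, paraphrased from ComfyUI's node code around the time of this episode; the exact constant is an implementation detail and may have changed since:

```python
import torch

def empty_latent(width: int, height: int, batch_size: int = 1) -> torch.Tensor:
    """Standard Empty Latent Image: SD1.x/SDXL VAEs use 4 latent
    channels at 1/8 resolution, and the empty latent is all zeros."""
    return torch.zeros([batch_size, 4, height // 8, width // 8])

def empty_sd3_latent(width: int, height: int, batch_size: int = 1) -> torch.Tensor:
    """EmptySD3LatentImage: SD3's VAE has 16 latent channels, and the
    empty latent is filled with a small constant shift (0.0609 in the
    ComfyUI source this paraphrases) rather than zeros."""
    return torch.ones([batch_size, 16, height // 8, width // 8]) * 0.0609
```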