AI Angel Gallery
  • Videos: 12
  • Views: 122,892
ComfyUI EP07: How to use LoRA in ComfyUI [Stable Diffusion]
Unlock a whole new level of creativity with LoRA!
Go beyond basic checkpoints to design unique...
- Characters
- Poses
- Styles
- Clothing/Outfits
Mix and match different LoRAs to get the ultimate masterpiece!
Bulma (Dragon Ball) LoRA: civitai.com/models/89013/bulma-dragon-ball
Heart Hands / Pose LoRA : civitai.com/models/131294/heart-hands-pose
Negative Prompt : text, watermark, NSFW, (worst quality, low quality, normal quality:1.4)
Positive Prompt 1 : (high quality:1.3), (photo realistic:1.2), dragon ball, bulma
Positive Prompt 2 : (high quality:1.3), (photo realistic:1.2), dragon ball, blmpony, aqua hair, hair ribbon, braided ponytail, pink shirt, belt, scarf, pink skirt, clothes writing, brown g...
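A note on the weight syntax in these prompts: (phrase:1.3) is the standard ComfyUI/Stable Diffusion emphasis notation, where the number scales the attention given to that phrase (above 1 emphasizes, below 1 de-emphasizes, plain text stays at 1.0). A small made-up example, not taken from the video:
    (high quality:1.3), portrait, (photo realistic:1.2), (blurry:0.7)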
Views: 32,635

Videos

ComfyUI EP06-SP: ControlNet Preprocessors auxiliary models [Stable Diffusion]
4.2K views · 9 months ago
In episode 06, I recommended using the custom nodes called Fannovel16 ControlNet Preprocessors, which might not have been the best advice. But no worries: just uninstall the old one and switch to Fannovel16's newer pack, ComfyUI's ControlNet Auxiliary Preprocessors (WIP). Doing this should get you sorted, and your existing workflow will still work right away. One thing to note: the new one doesn't...
ComfyUI EP06: Using ControlNet in ComfyUI to Control Results as Desired [Stable Diffusion]
28K views · 9 months ago
Ever wondered how to master ControlNet in ComfyUI? Dive into this video and get hands-on with controlling specific AI Image results. You'll learn how to play with these controls: - Human Pose - Background Depth - Tile for Ultimate SD Upscale Get the hang of these, and you'll be tweaking AI images like a pro in no time! Whether you're a newbie or an expert, this clip's for you. Don't miss out; c...
ComfyUI EP05: UpScale AI Image to get High Quality result [Stable Diffusion]
8K views · 9 months ago
Learn how to upscale AI images using ComfyUI in this easy tutorial. Dive in and make your images look stunning! Checkpoint Model AI Angel Mix (V3) : civitai.com/models/104608/ai-angel-mix Positive Prompt: portrait shot , beautiful young asian idol in coffee shop (high quality:1.3), (photo realistic:1.2) Negative Prompt: text, watermark, NSFW, (worst quality, low quality, normal quality:1.4) Ups...
ComfyUI EP04 : (Smart) Inpaint with ComfyUI [Stable Diffusion]
22K views · 9 months ago
"Want to master inpainting in ComfyUI and make your AI Images pop? 🎨 Join me in this video where I'll take you through not just one, but THREE ways to create inpaint masks! 🖌️ 1. Crafting the mask area by hand and DIY your own mask area 2. Automatically detecting the mask region in a selected area with Impact Pack Custom Nodes 3. Creating the mask using text prompts through ClipSeg Custom Nodes...
ComfyUI EP03 - Part4/4 : ComfyUI Manager and How to Update ComfyUI [Stable Diffusion]
3.5K views · 9 months ago
ComfyUI Manager : github.com/ltdrdata/ComfyUI-Manager Git Download Link : git-scm.com/downloads ComfyUI Git Pull command : git pull github.com/comfyanonymous/ComfyUI.git 🚀 Welcome to this special ComfyUI video tutorial! In this episode, I will take you through the techniques to create your own Custom Workflow in Stable Diffusion ComfyUI, which is a great foundation for more advanced topics in m...
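For reference, the "Git Pull command" above is just the standard way to update a cloned repo. A minimal sketch, assuming ComfyUI was originally installed by cloning the GitHub repo into a local ComfyUI folder:
    cd ComfyUI    # the folder cloned from github.com/comfyanonymous/ComfyUI
    git pull      # fetch and merge the latest ComfyUI changes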
ComfyUI EP03 - Part3/4 : Pre-Diffusion Workflow [Stable Diffusion]
1.9K views · 9 months ago
🚀 Welcome to this special ComfyUI video tutorial! In this episode, I will take you through the techniques to create your own Custom Workflow in Stable Diffusion ComfyUI, which is a great foundation for more advanced topics in my future videos. There are 4 Parts in this episode: Pt1✅ Discover easy and effective ways to tailor your Workflow to your preferences. Pt2✅ Learn how to create "IMG2IMG wo...
ComfyUI EP03 - Part2/4 : Image to Image Workflow [Stable Diffusion]
3.7K views · 9 months ago
🚀 Welcome to this special ComfyUI video tutorial! In this episode, I will take you through the techniques to create your own Custom Workflow in Stable Diffusion ComfyUI, which is a great foundation for more advanced topics in my future videos. There are 4 Parts in this episode: Pt1✅ Discover easy and effective ways to tailor your Workflow to your preferences. Pt2✅ Learn how to create "IMG2IMG wo...
ComfyUI EP03 - Part1/4 : Start from Blank Workflow [Stable Diffusion]
2.7K views · 9 months ago
🚀 Welcome to this special ComfyUI video tutorial! In this episode, I will take you through the techniques to create your own Custom Workflow in Stable Diffusion ComfyUI, which is a great foundation for more advanced topics in my future videos. There are 4 Parts in this episode: Pt1✅ Discover easy and effective ways to tailor your Workflow to your preferences. Pt2✅ Learn how to create "IMG2IMG wo...
ComfyUI EP02: AI Image Generation Process (Details of K-Sampler Node) [Stable Diffusion]
4K views · 9 months ago
In this video, we'll demystify the core functionalities of each basic node in ComfyUI, enabling you to fully comprehend their unique values and operations. By the end, you'll not only gain the ability to generate images that match your ideal vision, but also a deeper insight into the AI-driven image creation process. Be sure to watch until the end for a comprehensive understanding of the f...
ComfyUI EP01: Create your first AI Image [Stable Diffusion]
12K views · 9 months ago
In this tutorial, you will learn how to create your very first AI image using Stable Diffusion ComfyUI. This powerful tool is exceptionally fast and requires less GPU power compared to other tools like automatic1111. Follow along to discover the fascinating process of crafting your AI-generated image! Web Article : www.thepexcel.com/comfyui-ep01/ Link from Clip ComfyUI GitHub : github.com/comf...
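For context, a minimal sketch of a manual ComfyUI install from that GitHub repo, assuming git and Python are already set up (the exact steps shown in the video may differ; the full repo URL is the one given in the EP03 Part 4 description above):
    git clone https://github.com/comfyanonymous/ComfyUI.git   # fetch the ComfyUI source
    cd ComfyUI
    pip install -r requirements.txt                           # install Python dependencies
    python main.py                                            # start the local ComfyUI server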

Comments

  • @Adinap_ · 6 days ago

    You teach really well! I spent ages watching foreign channels, but this Thai channel explains things in much more detail, haha. Thank you so much!

  • @soloplayer8260 · 7 days ago

    I followed you here from the Thep Excel website. I've been watching since the webui videos up to this one, and I'll admit it was really confusing. I watched it over and over many times before I could place and connect the nodes myself. ComfyUI lets me generate images with SDXL, which webui couldn't do; my machine would either slow down badly or just freeze. I'm on a 3070 Ti with only 8GB of VRAM. With ComfyUI + SDXL, when it loads the checkpoint I can see it using around 7GB of VRAM, very close to the limit, but webui + SDXL maxes out at 8GB and immediately slows down or hangs. I can use it at a decent level now, but I'm stuck on one problem: when I use SDXL and try to upscale with tile, it doesn't work, although it does work with SD1.5.

  • @rosetyler4801 · 14 days ago

    Thank you. This was helpful.

  • @morgansouren · 19 days ago

    Thanks a lot! You're a very kind person!

  • @shareeftaylor3680 · 21 days ago

    Can you please upload your JSON workflow for this? I found out how to use ComfyUI on Android, but it's hard to navigate the UI. Thanks. I'm trying to figure out how to make a GitHub page. / I figured out how to navigate ComfyUI: you can zoom out and link nodes, just tap and drag 😊

  • @Xavi-Tenis · 21 days ago

    thanks!

  • @cyberspider78910 · 23 days ago

    This is the gold standard... many videos in one video, actually... again, a great, great video... but sir, what is your hardware configuration for generation?

  • @cyberspider78910 · 23 days ago

    This is superb!! I want to give you 10 likes, actually, but YouTube limits it to one. Keep up the good work. Sir, can you let me know your hardware specification for this type of work?

  • @ronsvfx5033 · 25 days ago

    That was amazing!!! BIGLOVE!!

  • @thanagridseesuae · 1 month ago

    Can this be done in sd as well?

  • @salomevsn3724 · 1 month ago

    thank you so much

  • @achen0319 · 1 month ago

    It really helps! Thanks.

  • @promisesfdc456 · 1 month ago

    Excellent - thanks

  • @user-pp4wg7kx4l · 1 month ago

    Is there a Thai-dubbed version so I don't have to read subtitles?

  • @seanwilson7098 · 1 month ago

    you sound soy

  • @kroanosm617 · 1 month ago

    Can you do a video on making an image clearer/sharper and more detailed? Sometimes I get an image where I like the general look and poses, but I am not able to get a high-quality version of it without major weird changes.

  • @santisook333 · 1 month ago

    When I press Queue Prompt, how do I make it run only one KSampler? When I press it, both of them run.

    • @AIAngelGallery · 1 month ago

      Press Ctrl+M to mute, or Ctrl+B to bypass, the nodes you don't want to run.

  • @shekharkumar377 · 1 month ago

    Thank you

  • @HuTrzy · 1 month ago

    Great material, thanks a lot!

  • @articraftic · 1 month ago

    I was watching Latent Vision but the way he teaches things is very advanced. You really helped me understand things from the very basics.

    • @AIAngelGallery · 1 month ago

      The Latent Vision channel is a very good one, but his explanations go pretty deep.

  • @user-gn9oe3je8b · 1 month ago

    You’re a great teacher! So sad you haven’t done more videos in the last 6 months 😢 I’ll keep subscribed and hope you do more. Thanks so much for what you’ve already done!

    • @AIAngelGallery · 1 month ago

      I will be back soon, with tons of new tips & tricks!

    • @PENTAGAMEId · 1 month ago

      @AIAngelGallery I'll wait, I'll really wait until you come back, sir.

  • @antmax · 1 month ago

    I've been playing with Reactor and now trying to add detail to the smooth airbrushed faces. This series is really helping me, though the face restore models are driving me nuts. One adds skin detail but wonky eyes, the other has good eyes but less skin pores. Anyway, on with the series, see if you introduce better ways :). Your tutorials have really explained some basic core concepts of how AI works that too many others skim through without explanation. Really appreciate that.

  • @rafa-lk6lf · 2 months ago

    Is there any way I could use the "confusing" traditional ComfyUI method to do a LoRA stack / multiple LoRAs? I don't like the Efficiency method 💀

  • @ArtiCast · 2 months ago

    Good!

  • @focus678 · 2 months ago

    What are the specs of the computer you're using? It generates so fast.

  • @Eisenbison · 2 months ago

    Did this video REALLY need to be 22 minutes long?

  • @benchee7783 · 2 months ago

    Love your explanation. I'm a newbie... it's detailed and understandable enough. Thanks.

  • @valen98421 · 2 months ago

    Why do my images turn out opaque? :(

  • @ahmetneder1 · 2 months ago

    Hello, is it possible to turn a 2D character into a PSD file, with each body part as a separate image file? How can we do that?

  • @icejust9195 · 2 months ago

    How can I turn on the progress image on the KSampler node (box)?

  • @Yingphotos · 2 months ago

    The Efficiency nodes tell me I need ComfyUI version 2.0++

  • @Yingphotos · 2 months ago

    This is a really good clip, practicing from easy things up to progressively harder ones. I kept repeating it until I had it memorized. Thank you!

  • @Yingphotos · 2 months ago

    I'm just starting to learn; it's easy to understand. Please make lots more clips!

  • @imbodo1549 · 2 months ago

    Oh my God. This is the easiest basic comfyui tutorial channel for me to understand. I was able to understand each step and how to connect the nodes together. Thank you.

  • @fredrik241 · 2 months ago

    Thank you for showing the several methods of masking and inpainting!

  • @Yingphotos · 2 months ago

    Easy to understand.

  • @sudabdjadjgasdajdk3120 · 2 months ago

    Does he go over how to use a reference image?

  • @user-nd7hk6vp6q · 2 months ago

    In the first example, what if I want to do image-to-image using ControlNet, e.g. to recreate the same or a similar image but with a different pose? How would I do that? Please help.

  • @fejesmarketing6736 · 2 months ago

    Absolutely awesome video! Thank you!

  • @FabiVFX · 3 months ago

    Great tutorial, I was struggling to understand how ControlNet works but now it's very clear. Thanks for sharing your knowledge!

  • @MustafaAAli-uv2sx · 3 months ago

    Thank you for a great, great tutorial.

  • @jbcJohn10v10 · 3 months ago

    This video is from over 5 centuries into the future. Time travel is real, people!

  • @santiagomoreno4669 · 3 months ago

    I know the video is old, but maybe you know. Thanks to you I just discovered the Efficient nodes, and I was trying them out a bit when I found a problem. With the UltriumV7SDXL model, the efficient pipeline gives a pattern image that looks like noise, while the standard ComfyUI nodes give a proper result. Do you know if it's a known issue or if I am doing something wrong? Btw, good video! I subscribed :)

  • @RAZVRATKA · 3 months ago

    Thanks, bro

  • @TheRMartz12 · 3 months ago

    Amazing video, very clear

  • @the_meridian · 3 months ago

    New person question: When you have all of this hooked up to your main workflow and you hit "Queue Prompt" it runs through the whole thing instead of going to the mask subroutine, screwing everything up. I didn't see you bypass anything so how do you avoid that?

  • @EmosGambler · 3 months ago

    Thanks mate! Easy way to set up LoRA!

  • @perfekta_unlimited · 3 months ago

    Hello, when I add the image in order to get the nodes that were used to generate it, it's not working for me. Maybe I'm doing something wrong?

  • @zoemorn · 3 months ago

    Sometimes inpainting through SetLatentNoiseMask generates very poorly compared to using VAE for Inpainting; poorly meaning that though it stays in the mask, what it generates is not logically good or desired, whereas VAE for Inpainting seems to 'understand' better what is intended or desired. Have you seen this, or do you have advice? In your example at 6:51, using SetLatentNoiseMask works great, so I don't know why I sometimes get poor results unless it is the model I am using. Do some models work poorly with inpainting, and can you share your experience on that? Thank you for your channel

  • @kamizama9177 · 3 months ago

    Do any Thai people use this?