ComfyUI Tutorial Series Ep 23: How to Install & Use Flux Tools, Fill, Redux, Depth, Canny

  • Published: 25 Dec 2024

Comments • 147

  • @pixaroma
    @pixaroma  A month ago +7

    Free workflows are available on the Pixaroma Discord server in the pixaroma-workflows channel discord.gg/gggpkVgBf3
    You can now support the channel and unlock exclusive perks by becoming a member:
    pixaroma ruclips.net/channel/UCmMbwA-s3GZDKVzGZ-kPwaQjoin
    Check my other channels:
    www.youtube.com/@altflux
    www.youtube.com/@AI2Play

    • @jonrich9675
      @jonrich9675 A month ago

      Does it have to be installed on ComfyUI, or does Forge work as well?

    • @pixaroma
      @pixaroma  A month ago +1

      @jonrich9675 I don't think the Forge team has updated the interface to support it yet; that usually takes days or weeks. Only ComfyUI offers day-one support. That was one of the reasons I switched to ComfyUI: it was taking too long to get to use new technologies.

    • @jonrich9675
      @jonrich9675 A month ago

      @@pixaroma Bummer. I prefer Forge due to how easy it is to use. Thanks for the info.

    • @jonrich9675
      @jonrich9675 A month ago

      @@pixaroma Also, can you do a Flux Dev OpenPose video? I've seen almost nothing on YouTube, only depth and canny.

    • @pixaroma
      @pixaroma  A month ago +1

      It used to work, but recently it stopped working; I'm not sure what happened, some update or something. I had old workflows that did work, and now it sometimes works if I mention the pose, but most of the time it doesn't, so I'm not sure what happened.

  • @JoelB71
    @JoelB71 A month ago +10

    A knowledgeable person who actually knows how to put together a proper tutorial! Fantastic stuff. Thanks for putting this together.

    • @pixaroma
      @pixaroma  A month ago +1

      Glad it was helpful 🙂

  • @SebAnt
    @SebAnt A month ago +7

    Thank you 🙏 So much exciting new content in this episode - it is like drinking from a firehose!!

    • @pixaroma
      @pixaroma  A month ago +1

      Thank you so much SebAnt, it was a busy week 😁

  • @robboburgers
    @robboburgers 29 days ago +1

    By far some of the absolute best AI instructional videos on YouTube. Thank you for your amazing efforts.

    • @pixaroma
      @pixaroma  29 days ago

      Thank you ☺️

  • @frankiberlin
    @frankiberlin 29 days ago

    Thank you so much!
    I am deeply impressed by how well and how clearly you structure and explain all the steps, so that even installations can be done cleanly.
    Your videos, this channel, and your offerings on Discord, as far as I have been able to study them, stand out from the rest.
    I admire how much time you spend explaining these new technologies to the world and offering them for free.
    My hat is off to you! 🎩
    Thanks & regards. 😊

    • @pixaroma
      @pixaroma  29 days ago +2

      Thank you so much ☺️

  • @i_viceroy6598
    @i_viceroy6598 17 days ago

    Flux Redux is a great tool for animation :) Also, great job on this page! It's very helpful and informative on Flux.

  • @iamdihan
    @iamdihan A month ago

    Love the format of your channel, and I always recommend it to anyone learning SD. Thank you for not putting workflows behind paywalls; I hope your generosity in turn rewards you for the effort. You and Latent Vision are at the top.

    • @pixaroma
      @pixaroma  A month ago

      Thank you so much, yeah, I like Matteo's videos too :)

  • @Uday_अK
    @Uday_अK A month ago +2

    Thank you for making such an informative and detailed guide; your hard work is truly appreciated! 🙏✨

    • @pixaroma
      @pixaroma  A month ago +2

      Thank you Uday ☺️

  • @nacho8049
    @nacho8049 27 days ago

    Your videos are the best! You explain everything so clearly. Thanks for your amazing work!

  • @philippeheritier9364
    @philippeheritier9364 A month ago +1

    This tutorial is excellent and surgically precise.

  • @GenoG
    @GenoG 20 days ago

    As always, I come for 2 things and leave with 10 great ideas!! Thank you!! 😀

    • @pixaroma
      @pixaroma  20 days ago

      Thank you ☺️

  • @hatsworld2008
    @hatsworld2008 27 days ago

    Thank you very much, I was stuck for a day. Your video really helped.

  • @Redemptionz2
    @Redemptionz2 A month ago +1

    This is a very good tutorial channel.

  • @59Marcel
    @59Marcel A month ago

    Brilliant workflow, and well explained. Thank you.

    • @pixaroma
      @pixaroma  A month ago

      Thanks Marcel ☺️

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 28 days ago

    Thanks for sharing... very well explained, well done!

    • @pixaroma
      @pixaroma  28 days ago +1

      Thank you ☺️

  • @RamonRodgers
    @RamonRodgers A month ago

    I was hoping you were going to do this. Thank you!

    • @pixaroma
      @pixaroma  A month ago

      Hope you enjoyed it ☺️

  • @SumoBundle
    @SumoBundle A month ago

    Amazing one. Thanks for the workflows

  • @ckhmod
    @ckhmod 7 days ago

    Found out on this build with a 3090 that, for the Flux Depth part, using weight_dtype fp8_e4m3fn with Flux Guidance 4.0 and leaving everything else the same produces some quality photorealistic results.
    Hope this helps! Thanks again for the tutorials.
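
    For reference, the settings above map roughly onto ComfyUI's API-format workflow like this; this is a sketch, not a full workflow, and the checkpoint file name and the "3" node id are placeholders for whatever your graph actually contains:

    ```python
    # Sketch of the tip above as a ComfyUI API-format fragment (a Python dict).
    # UNETLoader and FluxGuidance are stock ComfyUI node class types; the
    # checkpoint name and the "3" node id are placeholders.
    fragment = {
        "1": {
            "class_type": "UNETLoader",
            "inputs": {
                "unet_name": "flux1-depth-dev.safetensors",  # placeholder file
                "weight_dtype": "fp8_e4m3fn",  # reduced precision, less VRAM
            },
        },
        "2": {
            "class_type": "FluxGuidance",
            "inputs": {
                "guidance": 4.0,           # the value suggested above
                "conditioning": ["3", 0],  # placeholder: your prompt encoder node
            },
        },
    }
    ```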

  • @timmyng9560
    @timmyng9560 20 days ago +1

    Hi pixaroma, thanks for your effort.
    I'm just wondering what the difference is between these official models and the other models you mentioned before?
    Like:
    Ep19, Flux Dev Q8 INPAINT OUTPAINT
    Ep14, Flux Dev Q8 GGUF with ControlNet

    • @pixaroma
      @pixaroma  20 days ago +1

      They use different methods to do similar things. Some models are bigger than others, so they might not work on all computers if you don't have enough VRAM, and in some cases some are better than others. For example, with these tools you can only use the models from this episode, but with the method from Ep19 you can use SDXL or different Flux models that are smaller than the Fill model. For ControlNet, this episode uses LoRAs, while Ep14 uses a separate ControlNet model, so they are different technologies for achieving similar things; like in many software packages, you can do the same thing in different ways and have to see what works for you. All come with advantages and disadvantages: some of these Flux tools need a high Flux guidance that might not work well if you want to build a more complex workflow and combine them with other models, and since some models are so big, you might not even be able to run them together with other models, like combining Fill with ControlNet, in some cases.

  • @alexamer-oy1dy
    @alexamer-oy1dy 21 days ago

    Thank you

  • @rajvora2876
    @rajvora2876 A month ago

    Hey, this is a great video! Keep it up!!!

  • @farhang-n
    @farhang-n A month ago

    thanks a lot

  • @Darkwing8707
    @Darkwing8707 A month ago +1

    There are a couple of ways to control the style transfer strength. The easiest is with KJNodes' Apply Style Model Advanced node. The other is to use ConditioningSetTimestepRange or ConditioningSetAreaStrength and combine the conditionings (see the sketch after this thread).

    • @pixaroma
      @pixaroma  A month ago

      Does it work with the KSampler? Or does it need the other workflow, like the one using the full dev model?

    • @Darkwing8707
      @Darkwing8707 A month ago +1

      @@pixaroma It should work fine with the regular KSampler. I also just found the Advanced Reflux control nodes, which look like they may be even better.
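
      For the ConditioningSetTimestepRange route mentioned at the top of this thread, a minimal sketch in ComfyUI API format follows; the "redux" and "prompt" node ids are placeholders for your style-model and text-encoder outputs, and the 0.3 split point is just a starting value to tune:

      ```python
      # Minimal sketch: weaken Redux style transfer by restricting the style
      # conditioning to the early timesteps and letting the plain prompt
      # conditioning cover the rest. "redux" and "prompt" are placeholder ids.
      fragment = {
          "10": {  # style conditioning active for the first 30% of steps
              "class_type": "ConditioningSetTimestepRange",
              "inputs": {"conditioning": ["redux", 0], "start": 0.0, "end": 0.3},
          },
          "11": {  # plain prompt conditioning covers the remaining 70%
              "class_type": "ConditioningSetTimestepRange",
              "inputs": {"conditioning": ["prompt", 0], "start": 0.3, "end": 1.0},
          },
          "12": {  # combined result feeds the sampler's positive input
              "class_type": "ConditioningCombine",
              "inputs": {"conditioning_1": ["10", 0], "conditioning_2": ["11", 0]},
          },
      }
      ```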

  • @telosa9487
    @telosa9487 A month ago +3

    Are you planning on making a similar setup walkthrough for SD3.5?
    SD outpainting is the bane of my existence: the generations never blend in well with the original image, and ComfyUI is so messy with file directories that I will run out of space long before figuring out the right combination of the nearly infinite models out there 🤕

    • @pixaroma
      @pixaroma  A month ago +3

      Not sure; SD3.5 still doesn't get me better images than Flux. I was hoping for a fine-tuned version to come along, like happened with SDXL, to fix some anatomy mistakes.

    • @telosa9487
      @telosa9487 28 days ago +4

      @@pixaroma I see; it might take them the better part of a year, looking at the intervals between major releases, but I see your point.
      It would still be nice to have the option to switch to SD3.5, since its imperfections have their own charm that leaves some room for creative freedom in concept art.

  • @newwen2102
    @newwen2102 A month ago

    So cool!!!! Thanks a lot sir, you are the best.

    • @pixaroma
      @pixaroma  A month ago

      Thank you ☺️

  • @CrustyHero
    @CrustyHero A month ago

    Does the Flux inpaint model work with the turbo LoRA?

    • @pixaroma
      @pixaroma  A month ago +1

      Well, it didn't give me an error when I tried Turbo Alpha, but the result was not so great; it looked like when I generate without the LoRA at 8 steps. With or without the LoRA at 8 steps I got slightly pixelated artifacts on the mask, so I'm not sure it has an effect. You can just reduce the steps of the normal model to be a little faster, so instead of 20 try 16 or so; at 8 steps the image degrades. Maybe I didn't combine some nodes right, but I would have gotten an error, I guess.

  • @patriksuchy-be5ie
    @patriksuchy-be5ie 29 days ago

    Hi, do you plan to create a video about the PuLID Flux workflow on Flux Dev using ComfyUI? Thanks for your reply!

    • @pixaroma
      @pixaroma  29 days ago

      I am not doing tutorials for any tools that use InsightFace, so no PuLID, Roop, FaceID, etc. Some YouTubers got copyright strikes, it is not available for commercial use, and it causes a lot of dependency problems when you try to install it. I am sure new technology will appear that doesn't use InsightFace, or maybe the new desktop ComfyUI can fix that somehow to avoid any problems.

  • @tyata.1999
    @tyata.1999 8 hours ago

    Is fp16 required? Can we download the t5xxl fp8 one?

    • @pixaroma
      @pixaroma  8 hours ago +1

      I think it works, but I didn't test it.

    • @tyata.1999
      @tyata.1999 6 hours ago

      @@pixaroma Sure, I'll test it.

  • @buda3d2007
    @buda3d2007 12 days ago

    How much VRAM do you think you need for the first inpaint node?

    • @pixaroma
      @pixaroma  12 days ago +1

      I don't know, maybe 16; I think it has similar requirements to the full dev, the original one, so if you can run that, you can probably run this too.

  • @AndreyJulpa
    @AndreyJulpa 25 days ago

    Hey pixaroma, I hope you had a great vacation. I wanted to ask one more thing: is there a way in the Fill model to control what exactly it infills? Example: I want to change the clothes on a model to match an exact example; is this possible? Maybe by connecting IPAdapter or something like that?

    • @pixaroma
      @pixaroma  25 days ago

      I haven't tried it yet, and I am not using IPAdapter, but if I find a way I will do a tutorial.

  • @AndreyJulpa
    @AndreyJulpa A month ago

    Hey pixaroma, I think you're the best when it comes to new workflows and reviews of new tools. I have a couple of questions but wasn't sure where to ask them.
    1. I have an interior scene, and I’d like to change the lighting to different times of day like night, morning, etc. Is that possible to do?
    2. I have a cream tube, and I want to place it against a beautiful background in a way that doesn’t look photoshopped but keeps all the labels intact.
    Do you have any reviews or workflows that cover something like this?

    • @pixaroma
      @pixaroma  A month ago +1

      You can try with a ControlNet, but it will not be identical; you will have some differences, so you get similar interiors but some things will be different, like maybe a vase in one becomes a jar in the other, and so on. As for the cream tube, you can use Flux Fill and inpaint everything else, just not the tube, so you change the background without touching the tube. But I have to do some experiments when I get some time, maybe using the node that removes the background to get a clean mask so we can inpaint only the background more accurately; I need more time to test it and it wasn't a priority.

    • @AndreyJulpa
      @AndreyJulpa A month ago

      @pixaroma Thank you for the answer. The thing with the tube is that I want the lighting on the tube to change as well, like shadows cast onto it. I think this is a little too difficult. But I will join your Discord channel; I see there is so much useful information!

    • @pixaroma
      @pixaroma  A month ago +1

      @AndreyJulpa Inpaint the background first, then run it through image-to-image to get a variation of it, but that will probably change the text and whatever else you have; maybe a combination of Photoshop with AI, not sure.

    • @pixaroma
      @pixaroma  A month ago

      Do a search on these words; it's something new and might work for what you need. Search: "In-Context LoRA"

  • @AInfectados
    @AInfectados A month ago

    Can you modify the inpaint one to use the "Inpaint Crop and Stitch" nodes?

    • @pixaroma
      @pixaroma  A month ago

      I didn't try; I'm not sure if they work well together, and I can't test right now, so you'd have to try it.

    • @AInfectados
      @AInfectados A month ago

      @@pixaroma 🥲

  • @hannibal911007
    @hannibal911007 A month ago

    Thanks again for this useful guide. I noticed that the models provided by Black Forest are very large; why should we switch to those when there are some alternatives like Flux IPAdapter?

    • @pixaroma
      @pixaroma  A month ago +1

      Depends on the PC configuration. I test them all, keep only the ones I am happy with, and delete the rest, so for some systems it isn't worth it. I use dev Q8, for example, because it works OK for me; probably in a few days or weeks smaller models will appear, so we can use those if they work OK. So far I like Flux Fill, so I will use that; the Canny LoRA also works nicely, and the Redux model is small.

  • @Michael_H_Nielsen
    @Michael_H_Nielsen 29 days ago

    What hardware do you have?

    • @pixaroma
      @pixaroma  29 days ago

      My PC:
      - CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
      - GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
      - Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
      - 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32GB), CL40, 5200MHz
      - SSD Samsung 980 PRO, 2TB, M.2
      - SSD WD Blue, 2TB, M2 2280
      - Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
      - CPU Cooler Corsair iCUE H150i ELITE CAPELLIX Liquid
      - PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
      - Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
      - Wacom Intuos Pro M

  • @denisvisker2851
    @denisvisker2851 A month ago

    Hello! I'm using the Flux Tools inpaint in ComfyUI and it works perfectly! But it has some shift in saturation: the final image saves with a little less saturation, I think a 5-7% drop. It would be nice to have the final result untouched (update: I used a GGUF Flux for the original).

    • @pixaroma
      @pixaroma  A month ago +1

      I guess in some cases it still doesn't work perfectly; it's only the first version. Let's hope they improve it and we get better inpainting.

  • @lucadamico6248
    @lucadamico6248 A month ago

    Could this inpaint workflow work with a ControlNet to guide what is generated?
    Let's say you have a specific rocket toy in mind, adding a line drawing or image reference (canny, depth etc.)?

    • @pixaroma
      @pixaroma  A month ago

      I didn't try; that complicates the workflow a little, and I'm not sure if it will work or how to connect the right nodes, but give it a try and let me know if you can make it work.

  • @AndreyJulpa
    @AndreyJulpa A month ago

    So cool! Thank you! Just tested it, and you really need a GPU with 16 GB to run it (4070 Ti Super or 4080).

    • @pixaroma
      @pixaroma  A month ago

      Yeah, they are quite big; not sure what the minimum is, but I think it's similar to the full Flux model.

  • @Scitcat1
    @Scitcat1 A month ago

    Thank you! So if I understand it right, the only FLUX model I need is the 23 GB Fill model? For the sake of saving storage?

    • @pixaroma
      @pixaroma  A month ago +1

      If you want inpainting only, then the Fill model, and of course the CLIP ones if you don't have them. Or wait, maybe someone will make them smaller.

    • @Scitcat1
      @Scitcat1 A month ago

      @@pixaroma So the Fill one is not the "regular + inpaint" option in one file?

    • @pixaroma
      @pixaroma  A month ago +1

      @@Scitcat1 It doesn't include the CLIP models, so you need those separately; you can see the nodes in the video or download the workflows from Discord.

    • @Scitcat1
      @Scitcat1 A month ago

      @@pixaroma OK, thank you very much!

  • @UmarandSaqib
    @UmarandSaqib A month ago

    nice!

  • @AndreyJulpa
    @AndreyJulpa 28 days ago

    One more question: while using Fill, items are a little bit blurry. Is there a way to make them sharper?

    • @pixaroma
      @pixaroma  28 days ago +1

      Make sure the image is not bigger than 2 megapixels; sometimes that helps. Test with 1024x1024 px images and see if it's still blurry (see the resize helper after this thread).

    • @AndreyJulpa
      @AndreyJulpa 28 days ago

      @@pixaroma It helps a bit. How do you think raising the sampling steps affects the quality of the infilled object?

    • @pixaroma
      @pixaroma  28 days ago +1

      @@AndreyJulpa You'd have to play around with it; I didn't have much time to test since it's only a few days old, and I am on vacation now, so maybe more tests when I'm back.
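
      A small helper for the resize advice in this thread, assuming plain Python with Pillow outside ComfyUI; the file path in the example is a placeholder:

      ```python
      # Downscale an image so it stays under ~2 megapixels before inpainting.
      import math
      from PIL import Image

      def cap_megapixels(path: str, max_mp: float = 2.0) -> Image.Image:
          img = Image.open(path)
          mp = (img.width * img.height) / 1_000_000  # size in megapixels
          if mp > max_mp:
              scale = math.sqrt(max_mp / mp)  # equal factor keeps aspect ratio
              img = img.resize((round(img.width * scale), round(img.height * scale)),
                               Image.LANCZOS)
          return img

      # Example: cap_megapixels("render.png") before feeding the image to Flux Fill.
      ```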

  • @DarthChrisB
    @DarthChrisB 28 days ago

    12:55 I think they meant that the restyling workflow with 1 image + 1 prompt is available through their API, but it still only uses Redux.

    • @pixaroma
      @pixaroma  28 days ago

      Yeah, I think so, but I saw some Advanced Reflux nodes that give a little more control over the prompt.

  • @multiform.
    @multiform. A month ago

    How much VRAM is needed?

    • @pixaroma
      @pixaroma  A month ago

      Probably the same as the full dev model, since it is the same size; it's quite new, so not many people have had time to test it. I have 24 GB of VRAM.

  • @freakyninjaman3
      @freakyninjaman3 3 days ago

    I downloaded the Flux1 Fill file and put it in my UNET folder, but I don't see it as a selectable option after I restart. I only see the Flux1 gguf file. Do you know why this might be?

    • @pixaroma
      @pixaroma  3 days ago

      Not sure; do you maybe have other nodes that could interfere? Some people had a problem with GGUF models not showing after installing the flow control node; maybe it's the case that some node is in conflict, not sure.

    • @pixaroma
      @pixaroma  3 days ago

      Also check this post; maybe someone will post an update. It seems to be a recent problem: github.com/comfyanonymous/ComfyUI/issues/6165

  • @studio_20x30
    @studio_20x30 A month ago

    Thank you so much, sensei. Please do a tutorial on archviz where we can enhance the realism of our renders using Flux.

    • @pixaroma
      @pixaroma  A month ago

      I will see what I can do, but usually with ControlNet you can already do that with canny or depth.

    • @studio_20x30
      @studio_20x30 A month ago +1

      @@pixaroma I seem to struggle with this; with ControlNet I cannot keep the texture unchanged. I could not find a good tutorial that is not complex to understand. Please help us architects!

  • @Cserror1
    @Cserror1 A month ago

    Does ComfyUI work on Mac? It's kinda difficult...

    • @pixaroma
      @pixaroma  A month ago

      I saw people using it, but some had problems with it; it needs to be installed in a certain way, I think, similar to Linux, but I can't help there.

  • @nekola203
    @nekola203 A month ago

    What's the new resource monitor?

    • @pixaroma
      @pixaroma  A month ago

      You go to Manager, then Custom Nodes Manager, and install the node called crystools; restart ComfyUI and it will appear.

    • @nekola203
      @nekola203 A month ago

      @pixaroma I have crystools, but it won't show up after the new UI changes.

    • @pixaroma
      @pixaroma  A month ago

      @@nekola203 Go to Settings (the gear wheel), look on the left for crystools, then on the right where it says Position (floating not implemented yet), make sure it says Top, and check that the other buttons there are not deactivated.

    • @nekola203
      @nekola203 A month ago

      @@pixaroma Tried all that, it's not working. Thanks anyway.

    • @pixaroma
      @pixaroma  A month ago

      @@nekola203 I have ComfyUI on 2 PCs and it works on both; maybe try a clean install of ComfyUI.

  • @wpoole10
    @wpoole10 27 days ago

    Great series so far! I've watched all the videos and caught up to this one. I was wondering if it's possible to set up user accounts with a username and password. I'm trying to configure it for my kids to use, but I want to restrict their ability to install or delete anything. Is this feature available?

    • @pixaroma
      @pixaroma  27 days ago

      I haven't seen anything like that; maybe you can find a custom node that does it, since there are hundreds of nodes, but I am not aware of any.

  • @AndreyJulpa
    @AndreyJulpa A month ago

    I've spent more time experimenting with Flux Fill and discovered a significant issue. If you want to modify a small detail in a large image, like replacing 3D people in an exterior visualization, the results often lack quality. Invoke solves this problem by allowing you to isolate and inpaint only the specific area of the image, preventing unnecessary generation over the entire scene. Is there a way to address a similar issue in ComfyUI?

    • @pixaroma
      @pixaroma  A month ago +1

      Maybe you can combine it with the Crop and Stitch nodes like I did in episode 19, though I haven't tried yet; that takes a crop, modifies it, and puts it back into the big image (see the sketch after this thread). Also make sure your image is not too large, because Flux can do 2 MP images at most.

    • @AndreyJulpa
      @AndreyJulpa A month ago +1

      @@pixaroma Crop and stitch works, thank you!
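
      For reference, a conceptual sketch of what the crop-and-stitch approach does, written in plain Pillow rather than the actual custom nodes; inpaint_fn stands in for a Flux Fill pass on the crop:

      ```python
      # Conceptual sketch of crop-and-stitch: inpaint only a padded crop around
      # the masked region, then paste the result back, so a small edit in a
      # large image keeps full detail. Not the actual ComfyUI nodes.
      from PIL import Image

      def crop_inpaint_stitch(image: Image.Image, mask_box, inpaint_fn, pad: int = 64):
          left, top, right, bottom = mask_box  # bounding box of the masked area
          box = (max(left - pad, 0), max(top - pad, 0),
                 min(right + pad, image.width), min(bottom + pad, image.height))
          crop = image.crop(box)
          fixed = inpaint_fn(crop)  # placeholder for the inpainting pass
          out = image.copy()
          out.paste(fixed.resize(crop.size), (box[0], box[1]))
          return out
      ```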

  • @mahdi-binaa
    @mahdi-binaa 24 days ago

    Hey my friend, can you tell me which AI tool you use for your voice on the videos?!

    • @pixaroma
      @pixaroma  24 days ago

      ElevenLabs dot io

  • @yapyh2872
    @yapyh2872 A month ago

    In your opinion, what is the best way to remove something unwanted in an image, e.g. an object, using these kinds of tools and without Photoshop?

    • @pixaroma
      @pixaroma  A month ago +2

      I still use the Photoshop remove tool :D You can use inpainting in ComfyUI and prompt for what should be in the image: if there is a bird in the sky and you want to remove the bird, prompt for sky, or put a cloud in its place, or prompt for another bird; maybe it looks better. You just replace what you don't like with something else; the spot is never empty, there must be something there, a white background or whatever, since we don't generate with transparency, so prompt for that. If it still doesn't work, paint over the area with a color similar to the background and then try inpainting again.

    • @yapyh2872
      @yapyh2872 26 days ago

      @@pixaroma Nice idea. Thanks for sharing.

  • @MrDebranjandutta
    @MrDebranjandutta A month ago

    Hi, how do I train jewellery as a Flux LoRA, and then use that LoRA (like a necklace) to inpaint with?

    • @pixaroma
      @pixaroma  A month ago +1

      I think you need photos of that necklace from different angles on different backgrounds. I used Tensor Art, for example, to train a person or a style, but I haven't tried with an object yet. I saw somewhere that someone trained some sneakers, so it should work theoretically; I was able to inpaint a face onto a different photo.

    • @MrDebranjandutta
      @MrDebranjandutta A month ago

      @pixaroma I just have the hi-res pics of the products on a bust from different angles. With SDXL it was never accurate, but using FluxGym I trained it to good accuracy. It works as a LoRA, but since there are no reference pics of models wearing it, size mismatches can happen. Hence I was wondering if I can use the trained LoRA and inpaint over an accurate mask. Also, most pics it generates are from the nose down, since there are no people in the training images.

    • @pixaroma
      @pixaroma  A month ago

      @@MrDebranjandutta I have never done something like that, so unless you try different things I'm not sure what will work or not, since with AI everything is random :)

  • @ob3ythee.t.128
    @ob3ythee.t.128 14 days ago

    Yeah, this keeps crashing, only 12 GB of VRAM :( Is there any way to make it work? For Flux Fill.

    • @pixaroma
      @pixaroma  14 days ago +1

      Only if you find a smaller version; I saw a Flux Fill fp8 online, but it might need a different workflow.

    • @ob3ythee.t.128
      @ob3ythee.t.128 13 days ago

      Ah, fair enough, yeah, I found the fp8 version; no worries, thanks for replying. I've got a workflow set up for it and am testing it.

  • @aysenkocakabak7703
    @aysenkocakabak7703 26 days ago

    Does flux1-dev-fp8 also work with LoRAs? Or must it be the full fp16 version? Did anyone try that? Thank you.

    • @pixaroma
      @pixaroma  26 days ago

      Theoretically it should work; just make sure it has the right nodes, since the loader is different from the GGUF one (see the sketch after this thread).

    • @aysenkocakabak7703
      @aysenkocakabak7703 26 days ago

      @@pixaroma Oh sure, thank you :)

    • @aysenkocakabak7703
      @aysenkocakabak7703 26 days ago

      @@pixaroma Also, do you think that after discovering Flux Tools I should still try ControlNet Union Pro? I somehow thought that it replaces Union Pro.

    • @pixaroma
      @pixaroma  26 days ago

      @@aysenkocakabak7703 You can try both and see what works best; maybe some are faster depending on your PC. Stick with what works; none is perfect, but we use what we have.
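
      On the loader difference mentioned above, a sketch of the two API-format fragments: an fp8 safetensors file loads with the stock UNETLoader node, while a GGUF file needs the UnetLoaderGGUF node from the ComfyUI-GGUF custom node pack; both file names are placeholders:

      ```python
      # The two loaders side by side; file names are placeholders.
      safetensors_loader = {
          "class_type": "UNETLoader",  # stock ComfyUI loader
          "inputs": {"unet_name": "flux1-dev-fp8.safetensors",
                     "weight_dtype": "fp8_e4m3fn"},
      }
      gguf_loader = {
          "class_type": "UnetLoaderGGUF",  # from the ComfyUI-GGUF pack
          "inputs": {"unet_name": "flux1-dev-Q8_0.gguf"},
      }
      ```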

  • @nuwanchandrasirigamage8130
    @nuwanchandrasirigamage8130 A month ago

    Great tutorial... but unfortunately Flux is not for commercial use... RealVisXL V5.0 can be used commercially. Can you please make a tutorial for it, especially nature, animal, and human images? Thank you.

    • @pixaroma
      @pixaroma  A month ago +1

      You can use the images you generate with the model (the output) for commercial work; what you can't do is use the model itself commercially, like charging people money to use the model on your server.

  • @devnull_
    @devnull_ A month ago +1

    Uses a mask to generate a mask, lol :D

    • @pixaroma
      @pixaroma  A month ago

      Who, where, when? 😂

  • @tfozo
    @tfozo A month ago

    This guy.

  • @JackC-d9x
    @JackC-d9x A month ago +1

    This one didn't work for me, but I think it's because of my machine. It's still a great tutorial, though.

    • @pixaroma
      @pixaroma  A month ago +1

      It needs a lot of VRAM, just like the full dev model, so it's possible it won't work; maybe try the LoRA versions or Redux, those are smaller.

    • @JackC-d9x
      @JackC-d9x A month ago +1

      @@pixaroma Thanks for the reply! For now I'm going to stop and wait a bit. I'll keep an eye on the channel for possible updates.

  • @Vanced2Dua
    @Vanced2Dua A month ago

    Please do a tutorial on installing MagicQuill.

    • @pixaroma
      @pixaroma  A month ago

      From what I saw on Reddit, people say it's not for commercial use; I will check it out, but it looks like an inpainting method.

  • @tubeflix31
    @tubeflix31 29 days ago

    Artificial stupidity needs a lot of space.

    • @pixaroma
      @pixaroma  29 days ago

      They are getting faster over time and will need less space, or cheaper hard drives will appear :) But they are big; the smarter it is, the more it needs. Imagine the size of the ChatGPT model 😀

  • @srikantdhondi
    @srikantdhondi 23 days ago

    My PC has: Total VRAM 8192 MB, total RAM 32637 MB
    pytorch version: 2.5.1+cu124
    Set vram state to: NORMAL_VRAM
    Device: cuda:0 NVIDIA GeForce RTX 3050 : cudaMallocAsync
    Even flux1-schnell-fp8.safetensors based workflows are not working on my PC; ComfyUI keeps reconnecting and pausing. Any suggestions how to fix this issue?

    • @pixaroma
      @pixaroma  23 days ago

      You don't have enough VRAM to run those models; they are too big for your video card. If they make smaller GGUF models like Q4, maybe then, but even those need like 12-16 GB of VRAM; Flux needs a lot of VRAM, unfortunately.