Free workflows are available on the Pixaroma Discord server in the pixaroma-workflows channel discord.gg/gggpkVgBf3
You can now support the channel and unlock exclusive perks by becoming a member:
pixaroma ruclips.net/channel/UCmMbwA-s3GZDKVzGZ-kPwaQjoin
Check my other channels:
www.youtube.com/@altflux
www.youtube.com/@AI2Play
Does it have to be installed in ComfyUI or does Forge work as well?
@jonrich9675 I don't think the Forge team has updated the interface to support it yet; it usually takes days or weeks. Only ComfyUI offers day 1 support, which was one of the reasons I switched to ComfyUI, because it was taking too long to be able to use new technologies.
@@pixaroma bummer. I prefer forge due to how easy it is to use. Thanks for the info
@@pixaroma Also, can you do a Flux Dev OpenPose video? I've seen almost nothing on YouTube, only depth and canny.
It used to work, but recently it stopped working; not sure what happened, maybe some update or something. I had old workflows that did work, and now it sometimes works if I mention the pose, but most of the time it doesn't, so not sure what happened.
A knowledgeable person who actually knows how to put together a proper tutorial! Fantastic stuff. Thanks for putting this together.
Glad it was helpful 🙂
Thank you 🙏 So much exciting new content in this episode - it is like drinking from a firehose!!
Thank you so much sebant, it was a busy week 😁
By far some of the absolute best Ai instructional videos on RUclips. Thank you for your amazing efforts.
Thank you ☺️
Thank you so much!
I am deeply impressed by how clearly and how well-structured you explain all the steps, so that even the installations can be done cleanly.
Your videos, this channel, and your offerings on Discord, as far as I have been able to study them, stand out from the rest.
I admire how much time you spend explaining these new technologies to the world and offering them for free.
My hat is off to you! 🎩
Thanks & regards. 😊
Thank you so much ☺️
Flux Redux is a great tool for animation :) also great job on this page! It's very helpful and informative on Flux.
Love the format of your channel, and I always recommend it to anyone learning SD. Thank you for not putting workflows behind paywalls, and I hope your generosity in turn rewards you for the effort. You and Latent Vision are at the top.
Thank you so much, yeah I like Matteo's videos also :)
Thank you for making such an informative and detailed guide-your hard work is truly appreciated! 🙏✨
Thank you Uday ☺️
Your videos are the best! You explain everything so clearly. Thanks for your amazing work!
Thanks ☺️
This tutorial is excellent and surgically precise.
As always, I come for 2 things and leave with 10 great ideas!! Thank you!! 😀
Thank you ☺️
Thank you very much, Was stuck for a day. Your video really helped.
This is a very good tutorial channel.
Brilliant work flow and well explained. Thank you.
Thanks marcel ☺️
Thanks for sharing..very well explained, well done!
Thank you ☺️
I was hoping you were going to do this. Thank you!
Hope you enjoyed it ☺️
Amazing one. Thanks for the workflows
Found out on this build with a 3090 that for the Flux Depth part, using weight_dtype: fp8_e4m3fn with Flux Guidance: 4.0 and leaving everything else the same will produce some quality photorealistic results.
Hope this helps! Thanks again for the tutorials.
Hi pixaroma, thanks for your effort.
I'm just wondering what the difference is between these official models and the other models you mentioned before, like:
Ep19, Flux Dev Q8 INPAINT OUTPAINT
Ep14, Flux Dev Q8 GGUF with ControlNet
They use different methods and do similar things; some are bigger than others, so they might not work on all computers if you don't have enough VRAM, and in some cases some are better than others. For example, with these tools you can only use the models from this episode, but with the method from Ep19 you can use SDXL or different Flux models that are smaller than the Fill model. For ControlNet, this episode uses a LoRA, while Ep14 uses a different model with ControlNet, so they are different technologies for achieving similar things; like in many programs, you can do the same thing in different ways and have to see what works for you. All come with advantages and disadvantages: some of these Flux tools need a high Flux guidance, which might not work well if you want to do a different, more complex workflow and combine it with other models, and since some models are so big, you might not even be able to run them together with other models, like combining Fill with ControlNet and so on in some cases.
Thank you
hey this is agreat video! Keep it up!!!
thank you :)
thanks a lot
There are a couple ways to control the style transfer strength. The easiest way is with KJNodes' Apply Style Model Advanced node. The other is to use ConditioningSetTimestepRange or ConditioningSetAreaStrength and combine the conditionings.
Does it work with KSampler? Or does it need the other workflow, like the one using the full dev?
@@pixaroma It should work fine with the regular ksampler. I also just found the Advanced Reflux control nodes that look like they may be even better.
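For anyone wondering what "strength" roughly means in these Redux/style nodes: the style model turns the reference image into a set of tokens that get appended to the text conditioning, and one simple way to think about a strength control is scaling those image tokens before they are appended. This is only a conceptual sketch in plain Python/NumPy, not the actual ComfyUI node code, and the shapes are made up for illustration:

```python
import numpy as np

def apply_style_with_strength(text_cond, style_tokens, strength=0.5):
    """Conceptual illustration: scale the style-model tokens before
    appending them to the text conditioning. Shapes are hypothetical;
    real ComfyUI conditioning objects are more involved."""
    # Scaling the style tokens toward zero weakens their influence,
    # which reads as a weaker style transfer.
    scaled = style_tokens * strength
    # Redux-style conditioning is the text tokens with the image
    # tokens concatenated along the sequence dimension.
    return np.concatenate([text_cond, scaled], axis=0)

# Hypothetical shapes: 256 text tokens and 729 image tokens, 4096-dim.
text_cond = np.random.randn(256, 4096).astype(np.float32)
style_tokens = np.random.randn(729, 4096).astype(np.float32)

weak = apply_style_with_strength(text_cond, style_tokens, strength=0.2)
strong = apply_style_with_strength(text_cond, style_tokens, strength=1.0)
print(weak.shape, strong.shape)
```

The timestep-range approach mentioned above works differently: instead of scaling tokens, it limits the styled conditioning to only part of the denoising schedule and combines it with the plain conditioning for the rest.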
Are you planning on making a similar setup walkthrough for SD3.5?
SD outpainting is the bane of my existence - the generations never blend in well with the original image, and ComfyUI is so messy with file directories that I will run out of space long before figuring out the right combination from the nearly infinite models out there 🤕
Not sure, SD3.5 still doesn't get me better images than Flux; I was hoping for a fine-tuned version to come along, like what happened with SDXL, to fix some anatomy mistakes.
@@pixaroma I see, it might take them the better part of a year judging by the intervals between major releases, but I see your point.
It would still be nice to have the option to switch to SD3.5, since its imperfections have their own charm that leaves some room for creative freedom in concept art.
so cool!!!! thanks a lot sir, u are the best
Thank you ☺️
Does the Flux inpaint model work with the turbo LoRA?
Well, it didn't give me an error when I tried Turbo Alpha, but the result was not so great; it looked like when I generate without the LoRA at 8 steps. With or without the LoRA at 8 steps I got those slightly pixelated artifacts on the mask, so I'm not sure if it has an effect. You can just reduce the steps of the normal model and it's faster, so instead of 20 try 16 or something to be a little faster; at 8 steps the image is degrading. But maybe I didn't combine some nodes right, though I would have gotten an error, I guess.
Hi, do you plan to create a video about the Pulid Flux workflow on Flux Dev using ComfyUI? Thanks for your reply!
I am not doing tutorials for any tools that use InsightFace, so no PuLID, Roop, FaceID, etc. Some YouTubers got copyright strikes, it is not available for commercial use, and it gives a lot of problems and dependency issues when you try to install it. I am sure new technology will appear that doesn't use InsightFace, or maybe the new desktop ComfyUI can fix that somehow to avoid any problems.
is fp16 required? Can we download the t5xxl fp8 one ?
I think it works, but didn't test it
@@pixaroma sure I'll test
How much VRAM do you think you need for the first inpaint node?
I don't know, maybe 16; I think it has similar requirements to the full dev, the original one, so if you can run that you can probably run this also.
Hey pixaroma, I hope you had a great vacation. I wanted to ask one more thing) Is there a way in the Fill model to control what exactly to infill? Example - I want to change the clothing on a model using an exact example; is this possible to do? Maybe connect IPAdapter or something like that?
I didn't try it yet, and I am not using IPAdapter, but if I find a way I will do a tutorial.
Hey pixaroma, I think you're the best when it comes to new workflows and reviews of new tools. I have a couple of questions but wasn't sure where to ask them.
1. I have an interior scene, and I’d like to change the lighting to different times of day like night, morning, etc. Is that possible to do?
2. I have a cream tube, and I want to place it against a beautiful background in a way that doesn’t look photoshopped but keeps all the labels intact.
Do you have any reviews or workflows that cover something like this?
You can try with a ControlNet, but it will not be identical; you will have some differences, so you get similar interiors but some things will be different, like maybe a vase in one will be a jar in the other, and so on. As for the cream tube, you can use Flux Fill and inpaint everything else, just not the tube, so you change the background without touching the tube. But I have to do some experiments when I get some time, maybe using the node that removes the background to get a clean mask so we can inpaint only the background more accurately; I need more time to test it and it wasn't a priority.
@pixaroma Thank you for the answer. The thing with the tube is that I want the lighting on the tube to also change, like shadows casting onto it. I think this is a little too difficult. But I will join your Discord channel, I see there is so much useful information!
@AndreyJulpa Inpaint the background first, then run it through image to image to get a variation of it, but that will probably change the text and whatever other things you have; maybe a combination of Photoshop with AI, not sure.
Do a search on these words, it is something new and might work for what you need; search: "In Context Lora"
Can you modify the Inpaint one to use "Inpaint Crop and Stitch" Nodes?
I didn't try it, not sure if they work well together, and I can't test right now; only if you try.
@@pixaroma 🥲
Thanks again for this useful guide. I noticed that the models provided by Black Forest are so large; why should we switch to those when there are some alternatives, like Flux IPAdapter?
Depends on the PC configuration; I test them all, keep only the ones I am happy with, and delete the rest, so for some systems it isn't worth it. I use for example dev Q8 because it works OK for me, and probably in a few days or weeks smaller models will appear, so we can use those if they work OK. So far I like Flux Fill, so I will use that; the Canny LoRA also works nicely, and the Redux model is small.
what hardware do you have?
My PC:
- CPU Intel Core i9-13900KF (3.0GHz, 36MB, LGA1700) box
- GPU GIGABYTE AORUS GeForce RTX 4090 MASTER 24GB GDDR6X 384-bit
- Motherboard GIGABYTE Z790 UD LGA 1700 Intel Socket LGA 1700
- 128 GB RAM Corsair Vengeance, DIMM, DDR5, 64GB (4x32gb), CL40, 5200Mhz
- SSD Samsung 980 PRO, 2TB, M.2
- SSD WD Blue, 2TB, M2 2280
- Case ASUS TUF Gaming GT501 White Edition, Mid-Tower, White
- CPU Cooler Corsair iCUE H150i ELITE CAPELLIX Liquid
- PSU Gigabyte AORUS P1200W 80+ PLATINUM MODULAR, 1200W
- Microsoft Windows 11 Pro 32-bit/64-bit English USB P2, Retail
- Wacom Intuos Pro M
Hello! I'm using Flux tools inpaint in ComfyUI and it works perfectly! But it has some shift in saturation -> the final image saves with a little less saturation, I think a 5-7% saturation drop. It would be nice to have the final result untouched (upd: I used a GGUF Flux for the original).
I guess in some cases it still doesn't work perfectly; it is only the first version, let's hope they improve it and we get better inpainting.
Could this inpaint workflow work with a ControlNet to guide what is generated?
Let's say you have a specific rocket toy in mind, adding a line drawing or image reference (canny, depth etc.)?
I didn't try; that complicates the workflow a little, and I'm not sure if it will work or how to connect the right nodes, but give it a try and let me know if you can make it work.
So cool! Thank you! Just tested it, and you really need a GPU with 16 gigs to run it (4070 Ti Super or 4080)
Yeah, they are quite big, not sure what the minimum is, but I think it's similar to the full Flux model.
Thank you! So if I understand it right, the only Flux model I need is the 23 GB Fill model? For the sake of storage saving?
If you want inpainting only, then just the Fill model, and of course the CLIP models if you don't have them. Or wait, maybe someone will make them smaller.
@@pixaroma So the fill one is not the "regular + inpaint" option in one file?
@@Scitcat1 It doesn't include the CLIP models, so you need those separately; you can see the nodes in the video or download the workflows from Discord.
@@pixaroma Ok Thank you very much!
nice!
One more question) While using Fill, items are a little bit blurry. Is there a way to make them sharper?
Make sure the image is not bigger than 2 megapixels, sometimes that helps; test with 1024x1024 px images and see if it is still blurry (a small resize sketch is below this thread).
@@pixaroma It helps a bit; how do you think raising the sampling steps affects the quality of the infilled object?
@@AndreyJulpa Only if you play around with it; I didn't have much time to test since it's only a few days old, and I am on vacation now, so maybe more tests when I'm back.
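If anyone wants to automate the 2-megapixel check mentioned above before sending an image into the Fill workflow, a small Pillow sketch like this can do it. The 2 MP figure is just the rough limit discussed here, and the file names are hypothetical:

```python
from PIL import Image

MAX_PIXELS = 2_000_000  # ~2 megapixels, the rough limit discussed above

def shrink_to_max_pixels(path_in, path_out, max_pixels=MAX_PIXELS):
    """Downscale an image so width*height <= max_pixels, keeping the aspect ratio."""
    img = Image.open(path_in)
    w, h = img.size
    if w * h > max_pixels:
        scale = (max_pixels / (w * h)) ** 0.5
        new_size = (max(1, int(w * scale)), max(1, int(h * scale)))
        img = img.resize(new_size, Image.LANCZOS)
    img.save(path_out)
    return img.size

# Example usage (hypothetical file names):
# print(shrink_to_max_pixels("render.png", "render_2mp.png"))
```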
12:55 I think they meant that the restyling workflow with 1 image + 1 prompt is available through their API, but it still only uses Redux.
Yeah, I think so, but I saw some Advanced Reflux nodes that give a little more control over the prompt.
how much vram is needed?
Probably the same as the full dev model since it is the same size; it is quite new, so not many people have had time to test it, and I have 24 GB of VRAM.
I downloaded the Flux1 Fill file and put it in my UNET folder, but I don't see it as a selectable option after I restart. I only see the Flux1 gguf file. Do you know why this might be?
Not sure, do you maybe have other nodes that could influence it? Some people had a problem with GGUF models not showing after installing the flow control node; maybe it's the case that some node is in conflict, not sure.
Also check this post maybe someone will post an update, seems to be a recent problem github.com/comfyanonymous/ComfyUI/issues/6165
Thank you so much sensei. Please do a tutorial on archviz where we can enhance the realism of our renders using Flux.
I will see what I can do, but usually with ControlNet you can already do that with canny or depth.
@@pixaroma I seem to struggle with this; with ControlNet I cannot keep the texture unchanged. I could not find a good tutorial that is not complex to understand. Please help us architects!
Does ComfyUI work on Mac? It's kind of difficult...
I saw people using it, but some had problems with it; it needs to be installed in a certain way I think, similar to Linux, but I can't help there.
what's the new resource monitor?
You go to Manager, Custom Nodes Manager, and install the node called Crystools; restart ComfyUI and it will appear.
@pixaroma I have crystools but it won't show up after the new ui changes
@@nekola203 Go to Settings, that gear wheel, look on the left for Crystools, and then on the right where it says Position (floating not implemented yet) make sure it says Top, and check that the other toggles there are not deactivated.
@@pixaroma tried all that it's not working. thanks anyways
@@nekola203 I have ComfyUI on 2 PCs and it works on both; maybe try a clean install of ComfyUI.
Great series so far! I've watched all the videos and caught up to this one. I was wondering if it's possible to set up user accounts with a username and password. I'm trying to configure it for my kids to use, but I want to restrict their ability to install or delete anything. Is this feature available?
I didn't see something like that; maybe you can find a custom node that does that, since there are hundreds of nodes. But I am not aware of any.
I've spent more time experimenting with Flux Fill and discovered a significant issue. If you want to modify a small detail in a large image, like replacing 3D people in an exterior visualization, the results often lack quality. Invoke solves this problem by allowing you to isolate and inpaint only the specific area of the image, preventing unnecessary generation on the entire scene. Is there a way to address a similar issue in ComfyUI?
Maybe you can combine it with the Crop and Stitch node like I did in episode 19, but I didn't try yet; that takes a crop, modifies it, and puts it back into the big image (a rough sketch of the idea is below). Also make sure your image is not too large, because Flux can do 2 MP images at most.
@@pixaroma Crop and Stitch works, thank you!
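For reference, the idea behind crop-and-stitch inpainting is simple enough to sketch outside ComfyUI: crop a padded box around the masked region, inpaint only that crop at a comfortable resolution, then paste the result back into the full image. A rough Pillow illustration, not the actual custom node code, where run_inpaint is a placeholder for whatever inpainting call you use (e.g. a Flux Fill workflow):

```python
from PIL import Image

def crop_inpaint_stitch(image, mask_box, run_inpaint, pad=64):
    """Conceptual crop-and-stitch: inpaint only a padded crop around the
    masked region, then paste it back into the full-size image.
    mask_box is (left, top, right, bottom); run_inpaint is a placeholder
    callable that takes and returns a PIL image."""
    left, top, right, bottom = mask_box
    box = (max(0, left - pad), max(0, top - pad),
           min(image.width, right + pad), min(image.height, bottom + pad))
    crop = image.crop(box)
    fixed = run_inpaint(crop)          # inpaint only the small region
    fixed = fixed.resize(crop.size)    # guard against any size drift
    out = image.copy()
    out.paste(fixed, box[:2])          # stitch back at the original spot
    return out
```

This is also why it helps with large images: only the crop has to fit the model's preferred resolution, and the rest of the scene is never regenerated.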
Hey my friend, can you tell me which AI tool you use for your voice on the videos?!
ElevenLabs dot io
In your opinion, what is the best way to remove something unwanted in an image? E.g. an object, etc., using these kinds of tools and without Photoshop?
I still use the Photoshop Remove tool :D You can use inpainting in ComfyUI and prompt for what should be in the image: if it's a bird in the sky and you want to remove the bird, prompt for sky, or put a cloud in its place, or prompt for another bird, maybe it looks better. You just replace what you don't like with something else; it's never empty, there must be something there, a white background or something, since we don't generate with transparency, so prompt for that. If it still doesn't work, paint over the object with a color similar to the background and then try inpainting again.
@@pixaroma Nice idea. Thanks for sharing.
Hi, how do I train jewellery as a Flux LoRA, and then use that LoRA (like a necklace) to inpaint with?
I think you need photos of that necklace from different angles on different backgrounds. I used, for example, Tensor Art to train a person or a style, but I didn't try with an object yet. I saw somewhere that someone trained some sneakers, so it should work theoretically; I was able to inpaint a face onto a different photo.
@pixaroma I just have the hi-res pics of the products on a bust from different angles. With SDXL it was never accurate, but using FluxGym I trained it to good accuracy. It works as a LoRA, but since there are no reference pics of models wearing it, size mismatches can happen. Hence I was wondering if I can use the trained LoRA and inpaint over an accurate mask. Also, most pics it generates are from the nose down, since there are no people in the training images.
@@MrDebranjandutta I've never done something like that, so unless you try different things I'm not sure what will work or not, since with AI everything is random :)
Yeah, this keeps crashing, only 12 GB of VRAM :( Is there any way to make it work? For Flux Fill.
Only if you find a smaller version; I saw online some Flux Fill fp8, but it might need a different workflow.
Ah fair enough, yeah, I found the fp8 version, no worries, thanks for replying. I've got a workflow set up for it and am testing it.
Does flux1-dev-fp8 also work with LoRAs? Or must it be the full fp16 version? Did anyone try that? Thank you
Theoretically it should work, just make sure it has the right nodes; for example, the loader is different from the GGUF one.
@@pixaroma ohh sure, thank you :)
@@pixaroma Also, do you think that after discovering Flux tools I should also try ControlNet Union Pro? I somehow thought that it replaces Union Pro.
@@aysenkocakabak7703 You can try both and see what works best; maybe some are faster depending on your PC, and stick with what works. None is perfect, but we use what we have.
Great tutorial... But unfortunately Flux is not for commercial use... RealVisXL V5.0 can be used commercially. Can you please make a tutorial for it, especially nature, animal, and human images? Thank you
You can use the images you generate with the model for commercial work, the output; what you can't do is use the model itself commercially, like asking people for money to use the model on your server.
Uses a mask, to generate a mask - lol :D
Who, where, when? 😂
This guy.
present 🙂
This one didn't work for me, but I think it's because of my machine. It's still a great tutorial, though
It needs a lot of VRAM, just like the full dev model, so it's possible it won't work; maybe try the LoRA versions or Redux, those are smaller.
@@pixaroma Thx for the reply! For now I'm going to stop, wait a bit. Keep an eye on the channel for possible updates
Please, a tutorial on installing MagicQuill.
From what I saw on Reddit, people say it is not for commercial use; I will check it out, but it looks like an inpainting method.
Artificial stupidity needs a lot of space
They are getting faster over time and will need less space, or cheaper hard drives will appear :) but they are big; the smarter it is, the more it needs. Imagine the size of the ChatGPT model 😀
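To put rough numbers on "the smarter it is, the more it needs": Flux dev is on the order of 12 billion parameters, so just storing the weights scales with bytes per parameter. A quick back-of-the-envelope calculation (weights only, ignoring the text encoders, VAE and inference overhead):

```python
# Rough weight-storage math for a ~12B-parameter model such as Flux dev.
params = 12e9
for name, bytes_per_param in [("fp16/bf16", 2), ("fp8", 1), ("Q4 (4-bit)", 0.5)]:
    gb = params * bytes_per_param / 1024**3
    print(f"{name}: ~{gb:.0f} GB for the weights alone")
# fp16/bf16: ~22 GB, fp8: ~11 GB, Q4 (4-bit): ~6 GB
# Actual file sizes and VRAM use differ (text encoders, VAE,
# activations, quantization overhead), but the scaling is the point.
```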
My PC has: Total VRAM 8192 MB, total RAM 32637 MB
pytorch version: 2.5.1+cu124
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 : cudaMallocAsync
Even flux1-schnell-fp8.safetensors based workflows are not working on my PC; ComfyUI keeps reconnecting and pausing. Any suggestions on how to fix this issue?
You don't have enough VRAM to run those models; they are too big for your video card. If they make smaller GGUF models like Q4, maybe then, but even those need like 12-16 GB of VRAM; Flux needs a lot of VRAM, unfortunately.