Thanks to our sponsor Thoughtly. Get 50% OFF with code HALFOFFTHEFIRST
thought.ly/?ref=ref
How do I save the created image?
Thank you so much for creating this video! It came at the perfect time as I’m preparing for a hackathon tomorrow, where I’ll present a solution to address how easily AI tools can manipulate photos and videos. My idea is to embed metadata or codes in photos to block AI tools from altering them. This issue is especially important in cases like civil violence, Your video has been incredibly helpful and inspiring-thank you so much for your amazing content!!!!
@@salahsalem4348 Right click and then press save image or save image as.
'CUDA_VISIBLE_DEVICES' is not recognized as an internal or external command,
operable program or batch file.
Damn error
Honestly so irresponsible how you are advertising the use of AI. Zero regard to how this is negatively impacting women globally. Clearly we see why you are so passionate about it. Perverse.
One thing I love about this channel is that they promote free apps(and local installation) over paid ones.
YES!
@@wwk279 we love men promoting the digital violation of women, we love that
this isn't local installation though
@darrenlewis8273 some can be though
@@darrenlewis8273 ummm .... did you not watch the video? It's a LOCAL INSTALL.
This is my new fav AI channel. I love that you go through the tools and show the app actually in use instead of reading white papers or going on and on about what it "should" do. I can't wait to try this tool out for myself :)
It takes a brave man to casually scroll around on Civitai like that
lol
Haha; thankfully not being logged in enables content filters.
would have missed that gem if you didn't mention it. got any others?
@@makers_lab ? You can filter content logged in as well. But yeah, so much weird shit on there if you don't.
Bro knew EXACTLY what he was doing 😂
“ANYTHING”
These are the men women refer to when they say "men are trash"
@@Emily-rk4ps A man can dream
@ you don’t have to dream. IRL you are all socialized to become perverts and victimize women. As long as we don’t have to interact with you IRL i think i can make peace with it. Enjoy living a life led by your external sex organs. I hope it fulfills you.
@@Ez-se2dl my comments keep getting deleted, AI search is a coward who doesn’t like being called out for enabling men to program themselves into becoming perverse machines. Keep on victimizing women then, hope that fulfills you (it won’t)
dopepics AI fixes this. MagicQuill AI editor: installation review.
I still don't understand why these people don't do a simple batch installer instead of having to spend half an hour debugging python issues....
Yeah that’s why I use Pinokio for a lot of my stuff…
This usually happens because many developers prioritize flexibility and control over ease of use. They assume their target audience has the technical knowledge to handle dependencies and manual setups. Additionally, creating a "simple" installer can require more development time and testing, especially in complex environments like Python, where library versions and operating systems vary greatly.
Another reason might be a lack of resources or focus on user-friendliness, as many projects rely on volunteer contributions and don’t have dedicated teams for designing user-friendly installers
when you feel the pain and establish a hate/love relationship to your tool, that's a bond that is to remember for eternity...
@@hqcart1 LOL so true.
@@hqcart1me & adobe😂
It's a cool, clean interface, though you can do all this with Fooocus inpainting, plus a hell of a lot more! (faceswap, outpaint, etc.)
Plus it's much easier to install!
You saved me a lot of hassle 🙂
Which exact website address did you go to? I've found several websites that use Fooocus AI. The one using a GitHub account, or the Google email/Outlook ones?
Request: when you are demoing something, or showing an image or anything else, in case someone visually impaired watches, you can explain the image and or the text that has been written or you are writing or anything else like an edited image or current thing displayed on screen if needed.: thank you.
that would make it really slow for regular users. i personally hate my time being wasted and that would be worse for 99.9% of watchers. why would a visually impaired person watch this anyways? its not like they can see how impressive the tool is
@BlackMamba-ey9vm they don't see how impressive the tool is? they don't need to see how impressive the freaking tool is! I regularly daily use image generators for presentations / other things! They not seeing images doesn't mean they don't use AI or ask sighted people to explain the image for them. And taking 3 seconds to explain a simple image doesn't take 1 million years!
I can't live without you.
awww thanks
I need frame interpolation!!!😊
Oxygen : am i a joke?
@@im-the-nibba567🤣🤣
as someone whos been looking for a LEGIT ai gen channel yours is the best thanks so much for your work
thanks!
Nice just in time!
Thank you bro for your explaining i needed that :)
Hope they add autosegmentation soon for quick selection.
This is so much better than Firefly
It's really great having a dedicated YouTuber fully on AI, now that it's going to almost revamp our society in a few years
I tried MagicQuill out a bit online and I got a Runtime error (Memory limit exceeded 30G) after about 10 generations. Doesn't seem usable again at this moment.
It seemed cool, but it didn't remove items as simply as I thought it would. I used an image of a woman wearing a backpack in a country setting. The first thing I attempted to do was erase and remove her backpack with the negative - quill. Instead of removing the backpack, it replaced it with a different backpack. In my second attempt I entered backpack in the negative prompt and that time it did remove her backpack and in the new image I could not tell there was a backpack previously.
Amazing video as always. Downloaded it on my PC just now. Took 40 min to download the models zip file
What size? I'm running low 😢
Is it going to run on my RTX 3070 TI or not worth it? 🥺
@@buckshot.522 Yeah, pretty easily at that. I have a rtx 4060, and it worked like a charm. Though be ready for it to use 8 GB ram for it to run.
@@armondtanz 27 GB bro.
@hrshlgunjal-1627 Got it. thanks😊
I use Stability Matrix and Forge; I would love to see this in there. Or, as someone said, just an installer would be very appreciated. Not that people can't do this, it just reminds me of the ancient times of Linux nerds who had to type commands to do anything.
This is what people used to imagine when we Photoshop an image
It reminds me a bit of what was once freely achievable with the old Playground and also with Alpaca Chroma... nowadays, there's something similar in Alibaba's ACE
Who is this absolutely beautiful woman in the MagicQuill scene??
It's a free inpaint tool. If you use a free image generator (or even Mage with Flux) it could be very useful.
Now just imagine a dedicated application instead of web based and with a bunch of photoshop features built in
I think it will happen as Krita or photoshop plugin soon
From MagicQuill: "Currently, our system automatically resizes images to 512 pixels on the shorter side to optimize model performance, which inevitably reduces the resolution of high-quality images. You may manually resize the edited image, or try some super-resolution method to upscale the image. Thanks."
NICE.....
Great video as Always!
But how do you actually download the edited images in the Hugging Face space?
wondering that too
Right click>"Save image as..."
Sweet. They also plan a ComfyUI node! :-)
I love your content, soon enough every body here will be enjoying time with their own creations.
(That start might get you troubled ngl)
thanks!
(🔥🔥🔥)
hopefully it means perverts can stay locked in their bedrooms with AI so real women don't have to interact with y'all anymore
At last, a free decent tool for editing photos that doesn't require a PhD to know how to use it (I'm exaggerating ofc, but I've always found learning Photoshop quite tedious)
3:22
I wish we could see the most replayed area moments on the video player 😭
finally I've installed the models
Bro knows what he was doing at the start
Sis XDDDDDDDDDDDDDDDDDD
the most important thing to test 😉
The real question is can we remove the bikini 😏
@@theAIsearch you're disgusting my guy
@@theAIsearch he's a pervert
LOL at the note under the run button 🤭 they know!
so what will happen if i wrote naughty stuff in it?
@@selvinminj i bet it work, go ahead buddy!
bro knew what we want
@@xandragonist bro knew most men are trash. Real porn wasn’t enough for you. Now you have to undress your colleagues and family friends to get off. I hope you find fulfillment in your life.
ADOBE 100% will fork this and apply it in their photoshop
so this is basically standalone inpainting? but better?
If you could also add an image and paint over something to add it there it would be amazing
Impressive but it currently looks like it requires 15 GB of VRAM so it won't work for most people locally right now.
rent an online PC... some 60 cents per hour.
I have an rtx 3070 with 8gb vram, this won't run at all?
@@taavetmalkov3295 Or use the site in the video, not sure if it has limits or content restrictions. Just figured it was worth pointing out for people who want to run truly locally, since it's glossed over in the video. For me it's just a hassle to go through the process of renting a GPU unless I really have to. I prefer to just use models I can run for free right on my own hardware.
@@nevill2947 I've got an 8GB 4060. Regretting not buying a 16GB 4060 so much lol
HA! This 25GB graphics card found its use. Making bikini ladies.
This is how Inpainting & Outpainting should have been from day 1.
Sometimes it works, I can change somebody to Mona Lisa. But many times the screen gets black or too many people use this tool. I wonder if we should invest in the Python version, it's probably not cheap for my customers.
Thank you Bob ❤
The process of installing the program is torture 🤣
Well, it is a prototype fresh out of the lab
*Even works on GPU 4070 laptop 8GB VRAM*
I don't know when you made this video, because 95% of your installation process is outdated and doesn't work at all, since there are new step-by-step installation instructions for Windows users on the project's page. Keep in mind that projects on GitHub can change every hour, so you should, or rather MUST, mention this in your next videos.
To correctly install this on Windows, you guys need to read that part:
_"Setup: If you are a Linux user, follow the following guide to set up the environment. If you are a Windows user, you may find this helpful."_
If you click on "this", you now need to read every step carefully. Be careful to copy and paste every line; don't copy everything at once or your Python script modification won't work. Also, read the comments from users below, because there are some errors even in this Windows step-by-step installation!
IMPORTANT NOTE: For the "forward slashes" and "Copy-Item" steps, you need to use a PowerShell prompt, not a DOS prompt (cmd), otherwise the commands won't work.
And use the "set CUDA_VISIBLE_DEVICES=0 && python gradio_run.py" command, not the "$env:CUDA_VISIBLE_DEVICES=0; python gradio_run.py" command, or it won't work.
I hope this helps!
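If the shell syntax keeps tripping people up, a tiny Python launcher sidesteps the cmd-vs-PowerShell difference entirely. This is just a sketch: `gradio_run.py` and `CUDA_VISIBLE_DEVICES` come from the project's instructions, while `run_on_gpu0` is an illustrative name.

```python
import os
import subprocess
import sys

def run_on_gpu0(cmd):
    """Launch `cmd` with CUDA_VISIBLE_DEVICES=0, regardless of which
    shell you're in (behaves the same from cmd.exe, PowerShell, or bash)."""
    env = dict(os.environ, CUDA_VISIBLE_DEVICES="0")
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The real launch would be: run_on_gpu0(["python", "gradio_run.py"])
# Demo with a stand-in command that just echoes the variable back:
result = run_on_gpu0([sys.executable, "-c",
                      "import os; print(os.environ['CUDA_VISIBLE_DEVICES'])"])
print(result.stdout.strip())
```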
thanks !
You had me at "red bikini".
Waoh amazing bro
Thanks
If it had been created not by the Chinese but by Americans, it would work differently. First of all, it would add a watermark warning that this is an AI-altered image. And perhaps add a URL to some generator-related resource. And insert metadata. And perhaps some magic pixels tied to your installation of the tool. And if you don't have a subscription for a mere $8.99/month, add a second watermark as well and cap the resolution at 480px.
Of course, everything side A is good, and everything side B is bad. This is such a nuanced and intelligent view of the world. Let's ignore Mistral, Meta, Stability AI, BFL, HuggingFace, etc.
as an american i agree, we have so many sue happy people here and corporations running our government essentially that yes, here there are too many regulations ruining everything to protect special interests
I assure you that any tagging, watermarking and surveillance addition to the tools done by USA is done twice by China.
@@4.0.4 and you really think public models are totally clean and don't track you? 😂
Why do you think you have to provide contact info before downloading Llama and others?
Free AI is a business (and now a national security matter). Corporations protect their money.
@ronilevarez901 I have not provided any info for downloading llama, since I never used the basic un-tuned model and it's open source.
Also I did have to provide info to try out some Chinese AIs, so this isn't one-sided.
Thanks a lot, so how do you download the image after transformation?
And there is NO Save button, nice!
Screenshot, but you're right, and that sucks.
@christiancarter255 we can right click on the result to save, but the size is not the original one
@@Anna-tl8xh I did that twice and it saved... "nothing"? :/
In Chrome, right click and select Inspect, then find the image link; right click it again and select Open in new window. But max resolution is 850x512…
@@christiancarter255 that works for the right image, not the left one; for the left you need to use Inspect to hide the mask overlay if you want to save it
Very impressive, but that's what any AI image generator does as long as it lets you inpaint.
EDIT : I see it's using LLAVA, so it could be more than just inpaint!
Men Scroll
Men See
Men Focus
Men Happy
Men click
Men praise.
How is this different from regular inpainting in other platforms like comfy and a1111? Not trying to be rude here, it's just a question. Good content as always btw.
What model does it run under the hood? Flux dev?
SD1.5
I thought it was OmniGen... pretty similar 😮
yes! we're seeing a lot of these types of image editors recently
AssertionError: Torch not compiled with CUDA enabled
what to do ?
For everyone asking for the specs to RUN this: you can see at 20:39 that it specifies about 15GB of VRAM
Thank you man. My 4GB VRAM can't even imagine running this
i have 500 mb VRAM, is that enough?
@@h33e What do you think? 🤔 The requirement is 15GB VRAM, yours has 500 MB. I wonder if you can run it, based on these obvious numbers?
@@h33e yes its enough....
enough reason to buy another gpu😜🖕
well I will try with 12GB VRAM
AssertionError: Torch not compiled with CUDA enabled
Tried the online version and it only worked once out of 12 tries. Servers must be overloaded. Great video, though. Too bad the program didn't work for me.
Great tutorial, unfortunately I keep getting a "Torch not compiled with CUDA enabled" error, any suggestions?
Ask a gpt. ;)
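For anyone hitting this: the error usually means the CPU-only PyTorch wheel got installed into the environment. A quick diagnostic you can paste into Python (the function name is just for illustration, and the reinstall URL in the comment is one example CUDA wheel index):

```python
def cuda_status():
    """Report why 'Torch not compiled with CUDA enabled' might appear."""
    try:
        import torch  # imported lazily so the check itself never crashes
    except ImportError:
        return "torch is not installed"
    if not torch.cuda.is_available():
        # Typical cause: the CPU-only wheel. Reinstall a CUDA build, e.g.:
        #   pip install torch --index-url https://download.pytorch.org/whl/cu121
        return "torch is installed but CUDA is unavailable (CPU-only build or no NVIDIA driver)"
    return f"CUDA OK: {torch.cuda.get_device_name(0)}"

print(cuda_status())
```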
Waiting for the automatic extension
I got an error because I don't have much VRAM... I only have 4GB worth of VRAM, is there a way for me to still run it?
I got the same error. I don't think it's the RAM; I've got 32GB of RAM and it didn't work very well for me either.
@@cristiandiazbasualto VRAM is not RAM, bro. Check your GPU again. VRAM is what your GPU has, not the RAM. I tested it with an RTX 4060 (8GB VRAM) and it works. My RAM is 32GB (it doesn't matter)
Bro can you make a updated video on the Colab x Diffusers tutorial? The old one has a few sets of errors and stuff. Would really appreciate it!
How do you save picture to computer after editing?
if it uses SD1.5, what is the difference compared to automatic1111/forge inpainting?
Ease of use, automation and inpainting optimization. The tool integrates additional masking optimization, automatic constraints via ControlNet (position and recoloring), as well as a vision LLM for image understanding, result optimizers like T2I-Adapter, and a model for annotations.
@gremibarnou8146 Nice 👍🏻 although the installation is a bit more complicated. Can we use an SDXL model?
@@kodoxlucu I've just tested it; it doesn't work (BrushNet model error), but in theory it's possible. You'd probably have to swap out all the associated models (BrushNet, ControlNet, etc.) for SDXL-compatible versions and adapt all the config files. Better to wait for the developer to do it for us :-)
Looks like I'll be "forced" to get an RTX 5090 to start "playing" with AI 🥵
15GB of VRAM is crazy.
No, but you've just got an excuse to get one.
@@Adam2Yeshua It will be 32GB VRAM
Nice 👌🏻 what's the max resolution it can output?
@@comxed SD1.5 size
Nowadays GitHub developers don't care about low-VRAM GPU users; they just build programs to make money and for those who have high-end GPUs.
How do you save images?
Great video, great Tool, scary future for us visual artists
I don't think it will be a scary future. Just need to accept and use the potential to create own concepts.
Use your creativity bro, it's very helpful.
Can you give us an update, when the comfyUI Version is out?
Can it switch to specific clothes,like a specific football jersey?
No cos that thumbnail was smart to get us interested😂
the model download is too slow, why don't they put it on Hugging Face or something. Btw I'm new to AI image generation
There is no download button on the Hugging Face page for the image I changed, is there supposed to be one?
15GB of VRAM is crazy
Conda environments don't work for shite, I've never once been able to install Python garbage without piles of dependency errors. I'll just wait for it to come out on Pinokio so I don't have to mess with all that
how to download the final image in high quality
Good find, but demo is just steady "ERROR" messages.
RIP to all the people that have learned professional photoshop
Generative fill is like 1% of photoshop. In this tool you can't do basic things like rotation, resize, transparency.
@JustFor-dq5wc I mean in general
This error: AssertionError: Torch not compiled with CUDA enabled
your voice changes at 1:30. did the AI glitch or something 🤣
This needs to be integrated with GIMP
looks amazing
How can websites change by so much in just one day? Everything looks completely different. I don't get where py 311 is.
Is the Nvidia 40 series compatible with the CUDA architecture?
yeah
came into this thinking it was just generative fill from photoshop
this is not generative fill from photoshop 😳
Isn't it possible to download the result when using the Hugging Face demo?
Great explanation, but doesn't work stable. Tried it once and then it worked, afterwards it didn't work anymore. Couldn't click the feather -, runtime error, ...
It says "image editing operations require ~15GB VRAM". Should it still run with a 4070 12 GB or do we need a 4060Ti 16 GB instead to make it work on a "low budget"? I can imagine the 4060Ti 16 GB being too slow as well and I don't know how exactly VRAM and GPU performance relate to each other in such tasks, that's why I'm asking.
Can you add more models to the directory.
I have CUDA GPU but VRAM is only 8GB and RAM is 32 GB. That's not enough right?
They at least need 16GB VRAM?
from here they say 15g is needed. github.com/magic-quill/MagicQuill/issues/5
GPU 2060 is minimum requirement
he lied when he said free, he duplicated the space so not free unless you have a GPU
Maybe it can run (slower, though). On Windows the GPU can use up to half of system RAM as shared GPU memory, so with 32GB RAM your GPU memory can go up to 16+8GB = 24GB. But the speed depends on how much data it needs to access in the shared memory while processing the image, because PCI-Express and CPU RAM bandwidth are very slow. Stable Diffusion 1.5 often goes to around 16-20GB VRAM when generating hi-res images on my RTX 3060 12GB, but it is quite fast because image generation is more compute-bound than memory-bandwidth-bound (unlike large language models, which go way slower when they don't entirely fit in GPU VRAM).
@@MCA0090 Have you tested this on your 3060 ? I have a 3060 too
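The arithmetic above, written out as a quick sanity check. The 8GB/32GB figures are just the example numbers from this thread, and the real headroom is lower since Windows and other apps reserve some of that memory:

```python
def max_gpu_memory_gb(dedicated_vram_gb: float, system_ram_gb: float) -> float:
    """On Windows, shared GPU memory can reach half of system RAM,
    so the theoretical ceiling is dedicated VRAM + RAM/2."""
    return dedicated_vram_gb + system_ram_gb / 2

# Example from the thread: 8GB card + 32GB RAM -> 16 + 8 = 24GB ceiling,
# above the ~15GB the tool asks for (but much slower once it spills over PCIe).
print(max_gpu_memory_gb(8, 32))  # 24.0
```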
Let's gooooooo 🗣️🗣️
🚀🚀🚀
3:52 Not only the arms that we know what they look like now 😂
how do you download? Not the MagicQuill software, but after you make the edits?
If the mouth is selected with the tool, is the AI smart enough to replace the mouth with a phoneme that I specify? For example "ju" is the IPA phoneme for "you". (Don't like wasting the time installing if this was not possible… was hoping someone could try it) 🙂
One thing bro, what sucked me in was the demo with the woman doing all her edits on an iPad with a pencil, how can we access this setup?!
I would have brushed over the dress and said, ....hmmm
when you start with the "set cuda" etc., using && may not work; instead use "=0; python" in place of "&&"
How is it different from inpaint feature of sd that we had for a long time ?
wow... I've downloaded 3/4 of the models file AND BOOM IT WAS DONE FOR SOME REASON. I tried to unpack it... it gives me an error. My download just stopped but it gave me the zip file, how is that possible? I hate my life. I waited 4h and now have to reinstall it