How to Train a Flux.1 LoRA Using Ai-Toolkit Locally for FREE!

  • Published: 18 Jan 2025

Comments • 163

  • @TransformXRED
    @TransformXRED 5 months ago +33

    Most people don't know, but the Florence2 model is very versatile and can be used for OCR, including OCR of handwritten text, which is pretty hard to do.

    • @NerdyRodent
      @NerdyRodent  5 months ago +3

      It’s pretty versatile - especially for the size!

    • @TransformXRED
      @TransformXRED 5 months ago

      @@NerdyRodent I'm curious, have you played with the optimizer? On CivitAI, the training settings offer Adafactor and Prodigy (I believe I saw the latter in Kohya in the past).

    • @NerdyRodent
      @NerdyRodent  5 months ago

      @@TransformXRED so far AdamW has been fine for me, but I’m creating a variety of datasets to test with still 🫤

    • @ayrengreber5738
      @ayrengreber5738 5 months ago +1

      Any good tutorials for OCR?

    • @TransformXRED
      @TransformXRED 5 months ago

      @@ayrengreber5738 I don't know.
      But you can pretty much use the same workflow as in this video; just change the task setting on the Florence2Run node to OCR. There are two OCR options, so try both.
      I didn't try to OCR a whole folder. I just use a Load Image node, the Florence2 load and run nodes, and a node that displays the text.
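
For readers who want the same Florence-2 OCR trick outside of ComfyUI, here is a hedged sketch using the transformers API as described on the Florence-2 model card. The two task-prompt strings are the two OCR options mentioned above; the model id and the function name are illustrative assumptions.

```python
# The two OCR modes map to these Florence-2 task prompts: plain "<OCR>"
# returns text, "<OCR_WITH_REGION>" also returns bounding boxes.
OCR_TASK = "<OCR>"
OCR_WITH_REGION_TASK = "<OCR_WITH_REGION>"

def florence2_ocr(image_path, task=OCR_TASK, model_id="microsoft/Florence-2-base"):
    """Sketch: run Florence-2 OCR on one image.
    Downloads the model on first use; a GPU is recommended but not required."""
    from PIL import Image                                # heavy imports kept local
    from transformers import AutoModelForCausalLM, AutoProcessor

    processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
    image = Image.open(image_path).convert("RGB")
    inputs = processor(text=task, images=image, return_tensors="pt")
    generated = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=1024,
    )
    raw = processor.batch_decode(generated, skip_special_tokens=False)[0]
    # post_process_generation turns the raw token string into a dict keyed by task
    return processor.post_process_generation(raw, task=task, image_size=image.size)
```

Looping this over `pathlib.Path("folder").glob("*.jpg")` would cover the whole-folder case the commenter skipped.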

  • @deastman2
    @deastman2 5 months ago +23

    It’s exciting to see how Flux support is developing so rapidly! Between Lora training and ControlNet, we’re just about to the point where we can bid farewell to SD entirely.

    • @PredictAnythingSoftware
      @PredictAnythingSoftware 5 months ago

      True, just like what happened to me. The SSD where all my SDXL and SD1.5 checkpoints and LoRAs were stored became corrupted. But instead of feeling too bad about it, I thought: it's okay, I'll be able to do much better with Flux soon anyway.

    • @xbon1
      @xbon1 4 months ago +1

      Flux is still the underlying SD engine. We won't be saying goodbye to SD any time soon; Flux is just a UNet.

    • @AkshayAradhya
      @AkshayAradhya 4 months ago

      true

    • @personmuc943
      @personmuc943 4 months ago

      But the 24 GB VRAM requirement to train a LoRA is still giving us a reason to stay with SD

  • @djbone94
    @djbone94 4 months ago +12

    24 GB of VRAM is not needed anymore. It's even possible with 12 GB if you offload part of the work to the CPU (but then you need 32 GB of system RAM)
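
For context, the reduced-VRAM path the commenter describes corresponds to a config switch in recent ai-toolkit versions. A sketch of the relevant fragment; key names and placement may drift between versions, so treat this as an assumption to check against your own config:

```yaml
model:
  name_or_path: "black-forest-labs/FLUX.1-dev"
  is_flux: true
  quantize: true      # quantize the transformer to save VRAM
  low_vram: true      # offload to CPU/system RAM; slower, but fits ~12 GB cards
```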

  • @ThoughtFission
    @ThoughtFission 5 months ago +2

    Nice! Just stepping into the Lora chasm and this will really help

  • @dog8398
    @dog8398 5 months ago +19

    I dropped a like because I love you, man. As for the content, I can barely work out a Sky remote. My son literally showed me a button I can press on my Sky remote so I can ask for what I want to watch. I wish I'd kept up with tech as you have. I started out well because I was a proud owner of a ZX81 and a 16K ZX Spectrum. I bought a chip from Radio Shack to upgrade it to a 128K Spectrum. It broke. Gave up after my Commodore 64.

    • @NerdyRodent
      @NerdyRodent  5 months ago +2

      😉

    • @southcoastinventors6583
      @southcoastinventors6583 5 months ago +3

      Gave up 40 years ago? It's like those movies where people go into comas and wake up in the future. Too bad for you that robot butlers aren't out yet; you should have waited 10 more years.

    • @dog8398
      @dog8398 5 months ago

      @@southcoastinventors6583 Creature from the Black Lagoon on Channel 4. I remember coming home to it when I was about 18. Got me totally into B movies.

  • @sicshop
    @sicshop 4 months ago +1

    OMG. That thumbnail is on fleek. That’s beautiful.

  • @tetsuooshima832
    @tetsuooshima832 4 months ago +1

    There's a package, "ComfyUI-prompt-reader-node" (self-explanatory); it gives tons of outputs along with the extracted FILENAME (without the extension), which you could use for naming or put into metadata, why not
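
The FILENAME-without-extension output the commenter mentions amounts to the path stem; a minimal stand-alone illustration (the example path is made up):

```python
from pathlib import Path

# .stem drops only the final extension, keeping the rest of the file name
name = Path("outputs/ComfyUI_00042_.png").stem
print(name)  # → ComfyUI_00042_
```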

  • @JustinSgalio
    @JustinSgalio 3 months ago

    You are doing a splendid Job!

  • @cleverestx
    @cleverestx 5 months ago +2

    Thanks for the video, but @5:16 it would be nice if you could expand on this entire section: "run it the first time" (run what?), and how to make that .env file (or is it a folder?). I'm using WSL Ubuntu on Windows 11 (installing the Linux version).
    Under accepting the model license it says, "Make a file named .env in the root on this folder"
    What root? On Hugging Face itself, is that "this folder", or the ai-toolkit root folder we just made using the instructions above? The instructions on the GitHub for this part need to be less vague, or am I just that thick?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      To create a new text file, you will need to use a text editor, much like when editing the config file
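
To demystify the .env question: it is a plain text file of KEY=VALUE lines in the ai-toolkit folder you cloned (not anything on Hugging Face), which the script reads at startup. Below is a stdlib-only illustration of what that loading amounts to; ai-toolkit itself uses the python-dotenv package, and the token value here is a placeholder.

```python
import os
import pathlib
import tempfile

def load_env_file(path):
    """Miniature version of what load_dotenv() does: read KEY=VALUE lines
    from a .env file into os.environ, skipping blanks and # comments."""
    for line in pathlib.Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo in a throwaway folder; in practice the file lives at ai-toolkit/.env
with tempfile.TemporaryDirectory() as root:
    env_file = pathlib.Path(root) / ".env"
    env_file.write_text("HF_TOKEN=hf_placeholder_token\n")
    load_env_file(env_file)
    print(os.environ.get("HF_TOKEN"))
```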

  • @swannschilling474
    @swannschilling474 5 months ago +2

    Yaaaaay going Flux is the only way!! 🎉😊

  • @tekkdesign
    @tekkdesign 5 months ago +4

    This is incredible! I've learned more in 15 minutes than I did in a month of watching YouTube videos.

    • @taucalm
      @taucalm 5 months ago +1

      This is a YouTube video?

    • @evilchucky69
      @evilchucky69 5 months ago

      @@taucalm LMFAO! 🤣🤣🤣

    • @Huang-uj9rt
      @Huang-uj9rt 5 months ago

      Yes, I think what you said is great. I am using mimicpc, which can also achieve this effect; you can try it for free. In comparison, I think the mimicpc workflow is more streamlined and friendly.

    • @wakegary
      @wakegary 5 months ago

      @@Huang-uj9rt bruv, no. say less

  • @gilgamesh.....
    @gilgamesh..... 5 months ago +1

    I had trouble with ukiyo-e art as well. Flux is great at specific things, but I think a lot of LoRAs are going to be needed with it.

  • @ai-aniverse
    @ai-aniverse 5 months ago +1

    you should have more subs!

  • @niccolon8095
    @niccolon8095 4 months ago +1

    I followed the steps, but I'm only getting one .txt file even though I have many images. In the CMD prompt it describes all my images, but it only saves one .txt file describing one random image... any idea?

  • @epicbengali9109
    @epicbengali9109 4 months ago +2

    I have an RTX 4080 with 16 GB VRAM; is it possible for me to run the Flux LoRA training locally?

  • @Vyviel
    @Vyviel 5 months ago +3

    Thanks for this always love your videos! Hardest part looks to be the dataset so really appreciate the auto tagging workflow part. Do you have any tips for datasets for subjects rather than styles? Should I mix close up face shots and full body shots? Have you tried any other training tools like kohya-ss?

    • @unknownuser3000
      @unknownuser3000 5 months ago +1

      Try onetrainer for auto tagging since it also masks it

    • @NerdyRodent
      @NerdyRodent  5 months ago

      That would depend whether you want the subject’s face alone, or their body as well…

  • @PimentelES
    @PimentelES 5 months ago +4

    Nerdy, would you say that Ai-Toolkit is better than simpletuner for flux lora training?

    • @NerdyRodent
      @NerdyRodent  5 months ago +2

      It depends what you mean by better 😉

    • @Rimston
      @Rimston 5 months ago

      A few criteria would be ease of training, speed, VRAM requirements, and visual fidelity/quality. I would also like to know.

  • @jonathaningram8157
    @jonathaningram8157 5 months ago +1

    I have an issue with \venv\lib\site-packages\torch\lib\fbgemm.dll: Module not found.

    • @jonathaningram8157
      @jonathaningram8157 5 months ago

      I downloaded a DLL called libomp140.x86_64.dll, put it in C:\Windows\System32\, and it did the trick.

  • @dan323609
    @dan323609 5 months ago

    Is it mergeable? With the model?

  • @krnondemand89
    @krnondemand89 5 months ago

    Great stuff! Could you share the workflow for Florence2?

  • @Villa-Games
    @Villa-Games 5 months ago +1

    Thank you!

  • @zerovivid
    @zerovivid 4 months ago

    Great video! I have a couple of questions though: What is the Hugging Face token used for, is it only used to download the flux-dev model? Is any data sent back to HF? And lastly, if I already have the flux-dev model, can I skip this step somehow by placing the model in a folder somewhere?
    Thanks for the tutorial!

    • @NerdyRodent
      @NerdyRodent  4 months ago +1

      Yes, the token is just to download the model. If you already have the files (not just the single .safetensors one) you can specify the directory in the configuration.
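
Concretely, the switch Nerdy describes is the model path in the training config; a sketch (the local path is illustrative, and the directory must be the full diffusers-layout download, not a single .safetensors file):

```yaml
model:
  # Hugging Face repo id (downloads with your token)...
  name_or_path: "black-forest-labs/FLUX.1-dev"
  # ...or point at an existing local copy instead:
  # name_or_path: "/models/FLUX.1-dev"
  is_flux: true
```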

    • @zerovivid
      @zerovivid 4 months ago

      @@NerdyRodent Thanks for the clarification!

  • @twalling
    @twalling 3 months ago

    "For the images, you can just use the relative path" -- relative to what? Sorry for the newbish question--it looks like your images and text files both showed up in the same folder, but I don't understand where to point the paths for the images and the text files.

    • @twalling
      @twalling 3 months ago

      EDIT: relative, evidently, to the comfyui output folder.

  • @contrarian8870
    @contrarian8870 4 months ago

    @Nerdy Rodent Maybe a stupid question: can Loras trained for SD1.5/XL be applied to Flux models?

  • @neonlost
    @neonlost 5 months ago +3

    Awesome video! Is there a way to use multiple GPUs though? I have 3 3090s

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      Yup, apparently that is possible

    • @neonlost
      @neonlost 5 months ago

      @@NerdyRodent great! I found stuff on SimpleTuner about it but didn't notice anything under ai-toolkit; glad it's possible though, will try it with SimpleTuner.

    • @martinwang7098
      @martinwang7098 4 months ago +1

      you're a rich man, I only have an old 2070

  • @andrefilipe8647
    @andrefilipe8647 4 months ago

    Hi there, can I generate images with my LoRA model without using ComfyUI?
    I just want to be able to load the checkpoint and generate images

  • @TransformXRED
    @TransformXRED 5 months ago

    I'm not sure if this is right, but for those reading the comments: in the config yaml file you can uncomment line 15 and use a trigger word. I imagine that's for when you want to train a subject/person, right?
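
For reference, the commented-out line in question looks roughly like this in the sample config (the exact line number varies between versions; the token value is the sample's placeholder):

```yaml
config:
  name: my_first_flux_lora_v1
  # uncomment and pick a rare token to train a specific subject/person
  # trigger_word: "p3r5on"
```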

  • @supernielsen1223
    @supernielsen1223 3 months ago

    Is there a CPU mode for any LoRA training with Flux out there..?

    • @NerdyRodent
      @NerdyRodent  3 months ago

      I can’t even guess how many days it would take on a CPU… best to use the website for a quick lora

    • @supernielsen1223
      @supernielsen1223 3 months ago

      Haha okay, I didn't think it would actually be that slow, but luckily I can run it on my GPU; it was more a curiosity question ☺️☺️ @@NerdyRodent

  • @mertcobanov
    @mertcobanov 5 months ago +1

    where is the workflow?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      You can find links for everything in the video description 😃

  • @Enigmo1
    @Enigmo1 4 months ago

    Is there a way to do this without all the Hugging Face connection? I already have the model downloaded to a separate location on my PC... don't wanna waste time downloading it again

    • @NerdyRodent
      @NerdyRodent  4 months ago +1

      If you’ve already got the files (i.e. not just the single dev .safetensors file for Comfy), specify the directory path in the config file

  • @JurassicJordan
    @JurassicJordan 5 months ago +7

    Please make a video on training with lower VRAM when that comes out

    • @Elwaves2925
      @Elwaves2925 5 months ago +2

      It's already available but it's with paid services, not locally.

    • @quercus3290
      @quercus3290 4 months ago +3

      you can train locally, but on 16 GB of VRAM it will take around a day

    • @JurassicJordan
      @JurassicJordan 4 months ago

      @@quercus3290 I only have 12 😔

    • @monke_music
      @monke_music 4 months ago

      @@quercus3290 with this particular setup? Cause on github it says it's only possible with at least 24gb

  • @moezgholami
    @moezgholami 5 months ago

    Hi, Thanks for the video. Can you tell us how many images you used to train the style LoRA for Flux? And whether you used any augmentation?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      20-50 is typically good!

    • @moezgholami
      @moezgholami 5 months ago

      @@NerdyRodent thank you sir

  • @EvilNando
    @EvilNando 5 months ago +1

    Hello, whenever I try to launch the AI script I get the following: ModuleNotFoundError: No module named 'dotenv'
    Any ideas? (I have Python and git already installed)

    • @mond000
      @mond000 4 months ago

      This error went away after running pip3 install -r requirements.txt from within the ai-toolkit folder
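
In other words, the 'dotenv' error just means the Python dependencies were never installed. A sketch of the usual fix, assuming you are inside the cloned ai-toolkit folder (a virtual environment keeps the packages isolated):

```shell
python3 -m venv venv                 # one-time: create the virtual environment
. venv/bin/activate                  # activate it for this shell session
pip3 install -r requirements.txt     # installs python-dotenv and the rest
python3 -c "import dotenv; print('dotenv ok')"
```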

  • @JieTie
    @JieTie 3 months ago

    Those LoRAs don't work with WebUI Forge :/

  • @pranitmane
    @pranitmane 4 months ago

    Hey, can I train it with 16 GB of RAM on an M1 Pro Mac?
    What alternatives do I have?

  • @vitalis
    @vitalis 5 months ago

    How about sophisticated 3D style renderings? Is it possible?

  • @RyanGuthrie0
    @RyanGuthrie0 5 months ago +2

    There any way to do this with like Google Colab or a gpu on demand type service?

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Like the fal website?

    • @Elwaves2925
      @Elwaves2925 5 months ago

      He mentions the Fal service near the start of the video. Search for 'Fal ai'.
      There's also Replicate which I found a little cheaper than Fal. I trained a Flux lora with that and it turned out really well, took about 40mins with the default settings. Also, if you head over to Matt Wolfe's tutorial on it (read the comments because he misses some important bits) you can get $10 free credits, which should be enough for 2 or 3 Flux loras.

  • @Noonan37
    @Noonan37 5 months ago

    Where can we get reference images that aren't copyrighted and free to use for training?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      Lots of museums have open access images

  • @Duckers_McQuack
    @Duckers_McQuack 4 months ago

    The .env file way did not work at all for me for some reason. But the cli login was literally paste command, paste key, done :P

  • @cemguney2
    @cemguney2 5 months ago

    is there a way to use flux with animatediff for creating videos?

  • @digidope
    @digidope 5 months ago

    Why do people only talk about step count? It used to be epochs: once the model has seen your whole dataset, that's one epoch. That's why the step count should be very different for a dataset of 5 images vs 25 images. Has something changed?

  • @j_shelby_damnwird
    @j_shelby_damnwird 4 months ago

    So... 16 GB VRAM won't cut it?
    Bummer

  • @diego102292
    @diego102292 5 months ago

    God damn, I finally mastered Kohya and now I have to use another trainer.
    (That's the one thing that's so frustrating with AI: with each update half of the stuff breaks.) Hope it will come to Kohya as well

  • @unknownuser3000
    @unknownuser3000 5 months ago +1

    Looking forward to when I can make these with a 3080, but I'm still using SD 1.5, since SDXL and Flux have still not given me results as good as SD 1.5.

  • @davidmartinrius
    @davidmartinrius 5 months ago

    Hi there! I was wondering if it's possible to train a single LoRA model to recognize and generate multiple specific faces or bodies of specific people. For example, could one LoRA model be used to generate both my own face and the faces of others based on their names? How do I manage this with the trigger words?
    I have a single dataset with all people tagged by their names plus a short caption in the .txt files

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      Yup, you can do it like that!
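
A sketch of what such a multi-person dataset can look like, pairing each image with a caption .txt that names the person; the file names and tokens here are invented for illustration:

```
dataset/
  alice_001.jpg
  alice_001.txt    # "photo of 4l1ce woman, smiling, outdoor portrait"
  alice_002.jpg
  alice_002.txt    # "photo of 4l1ce woman, full body, city street"
  bob_001.jpg
  bob_001.txt      # "photo of b0bby man, close-up face, studio light"
```

Each rare token then acts as that person's trigger word at generation time.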

  • @RDUBTutorial
    @RDUBTutorial 18 days ago

    Has anyone tried the Linux instructions on Mac to see if it works? Looking for local Lora training solution that won’t break my current comfyui

  • @jonathaningram8157
    @jonathaningram8157 5 months ago

    It's really weird. I managed to train a LoRA and it works great, but only for a few generations; then the generations get extremely noisy and unusable, and I have to reload the whole model again before it's fine. I don't know what's messed up.

  • @FranckSitbon
    @FranckSitbon 5 months ago

    I'm trying your workflow with other Flux LoRA models from Civitai, but rendering is still too slow: 50s per iteration.

    • @NerdyRodent
      @NerdyRodent  5 months ago

      50s is about right for the lower end cards with Flux. It’s around 20s on a 3090

  • @juanjesusligero391
    @juanjesusligero391 5 months ago +5

    Oh, Nerdy Rodent, 🐭💻
    he really makes my day; 😊✨
    showing us AI, 🤖📊
    in a really British way. ☕🎩

  • @jonjoni518
    @jonjoni518 5 months ago

    I find it impossible to download the models. It starts downloading at 30 MB/s, then drops to just a few KB/s and stays at 99%. I have tried different Hugging Face tokens (write, read, fine-grained...). I also leave the .yaml at its defaults, except for the path where I point to my dataset directory. By the way, I have a 14900K, a 4090, 128 GB of RAM, and Windows 11.

  • @AyuK-jm1qo
    @AyuK-jm1qo 4 months ago

    Is this better or worse than Kohya? I want to train a line-art style of human body poses for reference and I'm really having issues.

    • @NerdyRodent
      @NerdyRodent  4 months ago

      Give it a go and see! 😉

  • @thecorgisbobandgeorge
    @thecorgisbobandgeorge 5 months ago

    Have they made it any quicker? My 3060 takes about 20 minutes for a basic image using flux in comfyui

    • @NerdyRodent
      @NerdyRodent  5 months ago

      20s on a 3090, so that sounds pretty slow!

  • @AyuK-jm1qo
    @AyuK-jm1qo 4 months ago

    I paid for the Patreon; can you share this LoRA there? It looks great

    • @NerdyRodent
      @NerdyRodent  4 months ago

      As it’s flux dev I can’t put the Lora itself up on patreon, but I’ll maybe look at putting it up on GitHub if there is interest in that test file!

    • @AyuK-jm1qo
      @AyuK-jm1qo 4 months ago

      @@NerdyRodent I don't understand why you can't put it up on Patreon. Otherwise, can you just upload it to WeTransfer or something and send it?

  • @DrMacabre
    @DrMacabre 5 months ago

    No luck here. During training, even at step 500, the samples looked amazing, but I can't load the LoRA in Comfy, neither with the LoRA loader nor the Flux LoRA loader.

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Could be an old version of ComfyUI, as the LoRA support was only added hours ago 😉

    • @DrMacabre
      @DrMacabre 5 months ago

      @@NerdyRodent i was on a different branch (facepalm)

  • @HojeNaIA
    @HojeNaIA 5 months ago

    I have a workstation with 2 GPUs: A4500 20 GB in cuda0 and RTX 3060 12 GB in cuda1. Is training possible in this condition? Can I train in 20 GB using A4500? Or multi GPU using both? Or do I need 24 GB in a single GPU?

    • @NerdyRodent
      @NerdyRodent  5 months ago

      You may be able to squeeze it into the 20 gig in low vram mode

    • @HojeNaIA
      @HojeNaIA 5 months ago

      @@NerdyRodent Thanks. I will try that tonight and will let you know the results

  • @kallamamran
    @kallamamran 5 months ago

    The captioning works if I run it without the save image node. If I run both, it never captions any images; it just loops through the same image node indefinitely, saving multiple copies of the images until I cancel the queue :(

    • @jonathaningram8157
      @jonathaningram8157 5 months ago +1

      By any chance, do you save your images to the same directory you load them from?

  • @Markgen2024
    @Markgen2024 5 months ago

    What if I add one more 12 GB GPU to my PC, would it be detected in ComfyUI? Because SD1.5 did not recognize it, but I'm pretty sure my GPU is good.

    • @Instant_Nerf
      @Instant_Nerf 4 months ago

      You can't add more VRAM without buying a new card. It's not system RAM, it's GPU memory.

  • @sunnyfunnysf8576
    @sunnyfunnysf8576 4 months ago

    A very good tutorial. But I'm not sure if ai-toolkit really works on my computer. How long does it take until something happens here?
    Generating baseline samples before training
    Generating Images: 0%|

    • @NerdyRodent
      @NerdyRodent  4 months ago +1

      Flux usually takes about 20 seconds per image

  • @PunxTV123
    @PunxTV123 5 months ago

    Can i train on my rtx 3060 12gb?

  • @RiiahTV
    @RiiahTV 5 months ago

    is there a way you can do this with 16 GB of VRAM?

    • @Elwaves2925
      @Elwaves2925 5 months ago +1

      You can through paid online services. That's the only way right now.

    • @RiiahTV
      @RiiahTV 5 months ago +1

      @@Elwaves2925 I'm just gonna buy a new GPU. I'll get another RTX A4000, then I'll have 32 GB of VRAM

    • @Elwaves2925
      @Elwaves2925 5 months ago

      @@RiiahTV Nice, I hope you get it. The higher end cards are way out of my range and I have other things I want too. I'll stick with my RTX 3060 12Gb VRAM for now.

  • @igorzimenco7773
    @igorzimenco7773 5 months ago

    Hello! 16GB VRAM won't be enough? 😢

    • @NerdyRodent
      @NerdyRodent  5 months ago

      You can use fal 😀

    • @mssalomander
      @mssalomander 5 months ago

      What is fal?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      @@mssalomander links are in the video description!

  •  5 months ago

    Hi, great tutorial, thank you. I have 11 GB of VRAM on my GPU; is it impossible to train with these specs?

    • @NerdyRodent
      @NerdyRodent  5 months ago

      You can use fal 😀

    •  5 months ago

      Can you kindly please define what "fal" is?

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      You can find the link to the fal website shown in the video in the video description!

    •  5 months ago

      Thank you🎉

  • @Phagocytosis
    @Phagocytosis 5 months ago

    I'm very sad, because I got a nice new GPU just last year, but it's an AMD, and now I've become very interested in AI and I can't do almost anything with it on my local machine. I've found that at least on Linux, you can use ROCm for Pytorch and get some things working that way, so that's my plan now, to install Linux alongside my Windows installation.
    However, the requirements in this video suggest that you just straight up need NVIDIA, it doesn't even mention the option of AMD+Linux. So am I basically SOL for this one?

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Whilst AMD does indeed have the best support on Linux, a lot of things will still require Nvidia software. And though I know it says you need Nvidia, one can only be sure if it gives an error 😉 There is always the fal site too!

    • @Phagocytosis
      @Phagocytosis 5 months ago

      ​@@NerdyRodent I'll probably try it out on Linux, then, and see if it might work still. I wasn't able to get Pytorch to work before; apparently it's now supported on Windows for some AMD GPUs, but not the one I have (RX 7800 XT), but supposedly that one does work on Linux. Sounds like it might be worth a try, at least! Thank you for your reply, and for the video :)

  • @drawmaster77
    @drawmaster77 5 months ago

    how do you get 24 GB of VRAM?

    • @jibcot8541
      @jibcot8541 5 months ago +4

      Buy an Nvidia RTX 3090 or RTX 4090.

    • @slashernunes
      @slashernunes 5 months ago

      @@jibcot8541 in other words, be less poor.

    • @Instant_Nerf
      @Instant_Nerf 5 months ago +2

      Download the remaining GB 😂

  • @深圳美食
    @深圳美食 4 months ago

    After watching for a long time: so you trained this on Linux?

    • @NerdyRodent
      @NerdyRodent  4 months ago

      Yup! Anything AI is best supported on Linux 😀

  • @GerardMenvussa
    @GerardMenvussa 5 months ago +6

    24GB of VRAM lol
    Might as well hire a freelance artist to draw things for you at that price :o)

    • @jibcot8541
      @jibcot8541 5 months ago +4

      You would probably get 2 commissioned images for that price, I have made over 500K for the £700 I paid for my 3090 2 years ago.

    • @brianmonarchcomedy
      @brianmonarchcomedy 5 months ago

      @@jibcot8541 Congrats... What did you do with your card to make so much?

    • @Emonk2010
      @Emonk2010 5 months ago

      @@jibcot8541 How?

    • @adams546
      @adams546 5 months ago +11

      @@jibcot8541 lol liar

  • @MrDebranjandutta
    @MrDebranjandutta 4 months ago

    so I guess there's no way to make this work on half the recommended VRAM (like 12 or 16 GB)

  • @deadlymarmoset2074
    @deadlymarmoset2074 5 months ago +1

    This guy always has the hottest stuff.

  • @ThePinkOne
    @ThePinkOne 5 months ago

    Would you make a tutorial for Civitai's LoRA training feature? I have no idea what the best settings are looool

    • @NerdyRodent
      @NerdyRodent  5 months ago

      Have a go on the fal one!

  • @VaibhavShewale
    @VaibhavShewale 5 months ago +1

    here i have 40mb vram XD

  • @lockos
    @lockos 5 months ago

    24 GB of VRAM... Oh man, in other words only the lucky owners of an RTX 4090 can train LoRAs locally.
    Us peasants will have to wait, I guess.

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      Or use the website shown 😉

    • @lockos
      @lockos 5 months ago

      @@NerdyRodent ...unless you don't want any trace of your own face all over the internet, which is my case. Hence the fact that I always prioritized local training over online services.
      Whether it is RunPod or another web service to train Flux LoRAs, how can I be 100% sure they don't keep track of my datasets?

    • @jonathaningram8157
      @jonathaningram8157 5 months ago +1

      or a 3090

    • @lockos
      @lockos 5 months ago

      @@jonathaningram8157 The 3090 is 16 GB of VRAM, so no. The video says you need 24 GB of VRAM.

  • @taucalm
    @taucalm 5 months ago

    Flux is a great model; the bad thing is we don't have (affordable) consumer-class GPUs for it yet.

    • @NerdyRodent
      @NerdyRodent  5 months ago

      We have things like the 3090 & 4090, which is great! Fal is nice and cheap too - especially if you know you’re never going to need a GPU again

    • @taucalm
      @taucalm 5 months ago

      @@NerdyRodent which have 16gb and 24gb of vram and we would need at least 36gb maybe 48gb. we need chinese modded 4090 48gb.

    • @NerdyRodent
      @NerdyRodent  5 months ago +1

      @@taucalm they have 24gb VRAM 😉

  • @playthisnote
    @playthisnote 4 months ago

    Yeah, I get annoyed that nearly every AI tutorial on the net is just a website interface with a form and a "run" button. The person teaches almost nothing but "go here". I'm looking for real info about installing it and running it myself, or even coding it from scratch. Actually, it's like this for everything beyond AI: people these days just want a button for a technical skill and then a certificate that says they are something lol 😂
    Additionally, I don't like ComfyUI because it's too hands-on with the GUI. I code automated programs, so there wouldn't be a GUI. However, ComfyUI has an export feature for straight code, which I don't think a lot of people are aware of.

    • @NerdyRodent
      @NerdyRodent  4 months ago

      The websites are great for people who don’t have the GPU power of course. Glad you got this one installed locally using my tutorial!