#### Link from my Video ####
Join my Discord: discord.gg/XKAk7GUzAW
Bulk Resize: bulkresizephotos.com/en
Kohya-ss Install: github.com/bmaltais/kohya_ss
Booru Dataset Tag Manager: github.com/starik222/BooruDatasetTagManager/releases/tag/v1.6.3
SD 1.5 Training Model: huggingface.co/runwayml/stable-diffusion-v1-5/tree/main
Please make a tutorial on fine-tuning in Kohya.
Great video, but aren't HyperNetworks also an option? Is there any reason why we'd use LoRAs over HyperNetworks? So far I've gotten better results training HyperNetworks, but admittedly LoRAs had only just arrived on the scene when I was experimenting.
Every time I paste the command I get "fatal: could not create leading directories of 'kohya_ss.\setup.bat': Invalid argument". Can anyone help?
Do you know how feasible it is to install Stable Diffusion on a Mac, for example a Mac Mini M2 Pro? Will it work by running Windows in a virtual machine, would it be better via Boot Camp, or is there a native Mac version?
This is the most comprehensive model training tutorial I've found on YouTube. Thank you for making this video!
Your guide on LORA and Checkpoint Model Training is a game-changer. The merging trick you shared is pure gold. Subscribed! 🙌
I've been waiting on a video like this for ages! Thank you so much
Really love your content, thank you for taking the time to produce and share it!
I like this guy, he's like a nerdy hugging bear man you wana hold and thank for making such cool helpful channel.
Best comment of the day 🥇
Sometimes i think he is AI
"Hugging face guy" lol
Down bad
Dude I watch all your videos even if I already know what it's about just to bump your numbers. Same reason for this comment. You're killing it.
Thank you very much. Your detailed introduction is very rare, and many of the details have been very helpful to me.
I may have missed the Lora training, I followed the Kohya installation and the checkpoint training. Thanks for all the excellent content, Olivio!
One of your best videos so far!!! And currently my favorite YouTuber for SD.
Greetings to Vienna from Stuttgart, Germany.
Sascha
Of everything I learned, you blew my mind with the f2 trick
Thanks Olivio, I was just thinking I needed a guide like this after getting tired of google colab. This will be invaluable - your're a super star!
Super cool! Please continue describe this topic for us in your next videos, I am waiting for those
Fantastic job Olivio! I really appreciate the work you do!
Lol I was just looking for an updated LoRA guide, great timing thank you!
I am so grateful for your detailed tutorials, step by step instructions. A big thank you.
Thanks for this informative video. I have been trying to figure this out for awhile, so this was right on time
So cool, thanks for the indepth explanation 👍
Thank you for guiding me with this excellent video!
A really helpful and informative video. In addition, and I find this particularly important, your pronunciation is really super. For me, as a non-native speaker, this is very important. In many videos, people just mumble to themselves, or you can't understand them well for technical reasons. With your videos, for the first time, I found someone I can understand excellently, crystal clear. 😊
Thank you very very much! This lecture has answered a lot of my questions.
This video was really needed. Thanks for that
Thank you for this awesome guide! Anything about regularization images would also be fantastic.
That tag manager is really cool, thanks for that.
You take your time, talk with no rush but also go straight to the point. Thanks a lot, really helping me now that most tutorials are outdated
Agreed, he gets the pace just right. Not easy to do: some people take too much time with no benefit, some go too fast and are hard to follow. The balance here is just right 👌
nice ive been waiting for this video !
Took me forever to record this ;)
The trouble with all these tutorials is that when I spend an entire day following them, the software I see never seems to match the software in the video. When I reach an error screen, I never know why, and I have no idea what to do about it. It's quite daunting for an artist who thought the only thing he would have to master in life was using a pencil and a brush. :(
Excellent Tutorial!!!!
lovely vid, keep on these good works!
Thanks so much for this video! really appreciate it
The tagging changes a fair bit depending on whether you're trying to train a topic, a style, or a specific person, and it would be nice to show the differences there. I saw in another video they explained how you should tag everything you DON'T want to be part of your LoRA, unless you are training for a single person or thing, and I would like to see a more in-depth dive into tagging and its effects.
The tagging strategy is pretty confusing. I've also seen tutorials that say you should tag only the things you DO want to be part of your LoRA/model. And some tutorials say not to use tags at all and let the AI figure it all out. And should you use a unique "keyword" tag to tie the training to that particular keyword for use in prompts? I've tried the different approaches but didn't get any conclusive results. They were all pretty bad and didn't work as expected. 🤨
A video from Olivio covering just the tagging aspect with a demo for each approach would be awesome.
@@Beltalowda55 Captions are what a model uses to try to predict what the picture should look like during training. The difference between the predicted and actual image is used to adjust the model in the right direction. So it should be obvious that you want to describe everything that is not part of the concept you're trying to learn, to help the model make a correct prediction and narrow the direction of the learning. The keyword is added so the model associates the concept with that keyword, which helps preserve sanity.
It should be noted, though, that the training process does not actually differentiate what you're trying to learn in any way. All it does is try to match the predicted image with the actual image. That means it can associate your concept with any other tag (or multiple tags) you used, in unpredictable ways, basically splitting the concept between them. To mitigate that, no tag except the keyword should be used too frequently (in my experience, appearing in 30-50% of captions is the limit).
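The frequency limit this reply describes is easy to check with a short script. A minimal sketch, assuming your captions are comma-separated tags in `.txt` files next to the images (the folder and trigger names below are just examples):

```python
import os
from collections import Counter

def tag_frequencies(caption_dir):
    """Return, for each tag, the fraction of caption files it appears in."""
    counts = Counter()
    files = [f for f in os.listdir(caption_dir) if f.endswith(".txt")]
    for name in files:
        with open(os.path.join(caption_dir, name), encoding="utf-8") as fh:
            # one caption file = comma-separated tags
            tags = {t.strip() for t in fh.read().split(",") if t.strip()}
        counts.update(tags)
    total = len(files) or 1
    return {tag: n / total for tag, n in counts.items()}

# Example usage: flag tags (other than your keyword) used in over half the captions
# freqs = tag_frequencies("dataset/100_neon")
# overused = [t for t, f in freqs.items() if f > 0.5 and t != "neon"]
```

Running this over a dataset folder before training gives a quick read on whether any non-keyword tag is creeping past the 30-50% range mentioned above.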
I don't even bother tagging anymore when training a person.
@@fritt_wastaken I wish you had some videos up to talk about this. I think I understand what you're saying, but some of it is a little fuzzy. Thank you for the info nonetheless.
Thank you so much for this ❤
THX! I learned so much! You are amazing
was waiting on this update !
Thanks! :)
Thanks Olivio for your many great explanation videos and tutorials and that you made the effort to really go through every step of the setup and first usage of kohya.
I needed exactly this to get to a starting point of the mysterious journey of training my own models/loras and now I can finally experiment on my own.
Thanks a lot, I have more information about training now 😊
This video helped me understand a lot for another project I'm working on, using LoRAs to beef up and nudge a language model.
Wow wow wow what a man. I was waiting for this video
Hope you enjoy it and it help you a lot :)
@@OlivioSarikas thx oli
My hero ! many thanks
Great explanation!
Thank you for explaining
Thank you!!
Video is super simple compared to others :)
You are my hero today! ❤
I was searching for exactly this topic weekly for the last 2 months 😂 The last time was this morning. Not sure if I should go to sleep now or give it a try 😅
Go to sleep happy if you're tired. You'll be fresh the next day to daydream.
Thank you for explaining very clearly all the concepts that apply to creating LORAs. I really appreciate the info in the video.
Great stuff, thanks
6:33 Lora and Models
17:50 Lora Training: files and folders
18:53 Kohya_ss: the software to train the models
Thank you so much !
Thank you so much sir, I love your YT channel ! =)
Thank you :)
Thanks for sharing. 😁😁
Thanks for the F2 rename files trick... I didn't know about it, you just saved me hours of renaming 😄
Thank you Olivio.
Would love to see something like this but from the perspective of objects, not portraits.
Same
24:08 Same as when you said "512x512" is in the past, the same goes for the folder structure: it's not necessary to create 3 folders, only the image folder with your dataset folder inside it. For model and log you can leave the main "Neon" folder, and the files will be saved there with no problems. I discovered that by accident, and no issues since.
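For reference, the minimal layout this comment describes can be set up from a terminal. A sketch, where "Neon" is the project folder from the video and `100_neon` is an example image subfolder (Kohya expects it to be named `<repeats>_<name>`):

```shell
# Minimal Kohya folder layout: only the image folder is strictly needed.
# The dataset subfolder name encodes the repeat count (here 100) and the
# trigger word (here "neon" -- example values, adjust to your project).
mkdir -p Neon/image/100_neon
# Copy your training images (and caption .txt files) into that subfolder, e.g.:
# cp ~/dataset/*.png Neon/image/100_neon/
ls Neon/image
```

If you skip the separate model/log folders, pointing both outputs at the main "Neon" folder, as suggested above, the files simply land there.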
18:41....👀I didn't know that trick 👍🏻 Thanks
I've watched so many of your videos that I thought I'd already subscribed to you. Until today, when I found out I hadn't 😅😅 Subbed anyway
Thanks!
Thank you very much 🥰👍
Mostly installation instructions, very clear, but not what I was expecting.
The Kohya page and download isn't the same as in this video and the setup.bat file has been updated. It doesn't ask me the same questions as it does him in the video. How did you get it to work?
You're awesome
Would definitely like an online workflow video
Just once I would like one of these tutorials to actually work and properly correspond to what I see when following the steps.
This is awesome, thanks. I think you did a great job on disseminating the critical information concisely, I wish more tutorials were done like this.
I have a question, how would training for an illustration style, using real photos for face reference differ, if at all?
Thank you! Can you create the video with the colab as well? don't know if the old ones around still work, this things change so fast!
Fantastic tutorial Olivio! Love all your AI videos. Can you use LoRA to train other objects and keep them consistent? Using your technique, I've been trying to create a LoRA of a specific baseball glove and a specific helmet, and apply it to an image of a baseball player in a stadium. Most of the outputs I get are quite monstrous, like 10-finger alien gloves. I've given it a ton of data: close-up/medium shots of the glove from multiple angles, medium and wide shots of someone wearing the glove. Still nothing works. Any thoughts on how to train a LoRA to keep something like a glove consistent? I'm using SD v1.5 pruned, Photon v1, and Realistic Vision v4.0 as the checkpoints in A1111.
I'd like a guide to do this on RunPod. Thanks!
Perfect tutorial. Thanks for your work 😊🙏 I have noticed there is now also LoHa, have you tested, compared? Otherwise I’m very happy to use this workflow
Awesome video! Big thanks. I am starting to think that my hardware might not be up to this... whenever I run the Kohya gui.bat it just closes itself.
THANK YOU!
LOOK INTO THE LoRA MERGE FEATURE!!
A little tip: when installing Python, make sure to check the box that adds pip to the environment variables; for some reason mine did not by default.
Mine also. I just uninstalled Python and reinstalled it from the Microsoft Store. That installs pip correctly.
how would I know if I installed it without checking the box?
@@plattepus Try installing and running SD, and watch for the error in the console. It will give you a pretty noticeable verbose error saying pip is not installed. If so, just run the Python installer again and make sure to tick the box to install pip. Hope that helps.
@@addermoth Appreciate it, man. Also, I'm only seeing "gui.bat" in the "kohya_ss" folder, and not "user gui.bat", but it still seems to launch fine. And I can't tell if it installed cuDNN; I put the cudnn_windows folder into kohya_ss and ran the command to install it, but it never said "done" or anything.
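For anyone hitting the missing-pip problem from this thread, you can check and repair pip from a terminal without a full reinstall. A sketch (interpreter name varies by system: `python`, `python3`, or `py` on Windows):

```shell
# Find whichever Python launcher exists on this machine
PY=$(command -v python || command -v python3)
# Check whether pip is available for that interpreter;
# if not, bootstrap it with the standard-library ensurepip module
"$PY" -m pip --version || "$PY" -m ensurepip --upgrade
```

If `ensurepip` also fails, re-running the Python installer with the pip checkbox ticked (as suggested above) is the fallback.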
Yes, please make a video on training checkpoint models using online servers
Very nice one, it seems to work on my test computer now.
What I am still missing is how to create a LoRA with this tool.
Good evening Olivio,
So I just followed your guide to create my first LORA, but I find myself with 22 hours of training!
I must have made a mistake somewhere ;-) However, I have an RTX 4070 TI and a Core i9 13900K.
120 photos of the same model in high quality (FHD); half are faces and the other half is the full-length model.
I chose 10 steps; I think my problem comes from there. I'm still going to let it finish just out of curiosity.
Thank you for the guide which will have nevertheless been very useful to me.
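For what it's worth, the training time in this comment is mostly driven by total step count, roughly images x repeats x epochs divided by batch size. A quick sanity check, using this commenter's 120 photos and some assumed repeat/epoch values:

```python
def total_steps(num_images, repeats, epochs, batch_size=1):
    """Rough Kohya step estimate: each image is seen `repeats` times per epoch."""
    return num_images * repeats * epochs // batch_size

# 120 photos adds up fast: with 10 repeats and 10 epochs (assumed values)
# that is already 12,000 steps at batch size 1.
print(total_steps(120, 10, 10))  # -> 12000
```

Cutting the repeat count in the dataset folder name, or the epoch count, reduces the step total (and the hours) proportionally.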
YOU MAKE THE BEST AI VIDEOS!!!!!!!!!
Awesome video, please make video for Google colab also
Thanks! Very helpful
24:15 When Olli said "The images are not in the images folder" I got really confused. Eventually I figured out that you still have to copy your source images into the new folder, otherwise it won't have anything to tag.
Me too
Would very much appreciate a tutorial on how to run Koyha-ss on a RunPod
Thank you for the video! You mentioned you could do one with an online server, please, pleeeeeeease would you be so kind as to show us how to train a LORA step by step with RunPod ? 😭😭😭
Hey Olivio! I love this tutorial! Do you have any plans to update it? Have a great day!
❤
Nice video as usual. You mentioned in the video that you would cover both Dreambooth & Dreambooth LoRA; however, Dreambooth LoRA was not covered. I am not sure whether Dreambooth & Dreambooth LoRA use the same process & pretrained model or not?
For art/painting style training using Dreambooth LoRA, could you please suggest an appropriate pretrained model and a minimum total step count per epoch? I tried 100 steps per sample for 20 images (2,000 in total), but when I tested with the text prompts used in training, the results were still far from the original images. I also tried LyCORIS/LoCon, but with no success.
I would very much appreciate it if you could demonstrate how to train a painting style like Van Gogh's (though it is already available in some models), showing that prompts used in the training process can reproduce the images used in training.
Dreambooth doesn't make LORAs. It trains a model, then you can extract a LORA from that by comparing the original base model and the trained model. You can see the tabs for that in the video when he's looking over the Dreambooth GUI.
NICE tutorial! Could you tell me how to activate the dark theme?
I watched the video and it was very helpful. Could you provide additional explanation about training in DreamBooth? There is no detailed explanation about additional training in DreamBooth on YouTube. I tried adding clothing styles after making the model myself, but the model's face gets distorted by the additional training. I don't know what the problem is. It would be very helpful if there were video lectures about additional training in DreamBooth. By the way, I'm using the PC version. Thank you.
Wonderful! 😀
Please do the online version of the training
Great job Daddy :) I would like to thank you for being someone who can explain such a difficult-to-understand subject so well.
Please make one for online but with the same idea, like the idea of full resolution
I would suggest using Lightroom to create keywords and export with filenames containing comma-separated keywords, then creating the captioning .txt files from those filenames with a simple Python script 👍🏻 Much better keywording and organization (smart albums, etc). Thanks
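The simple script this comment suggests might look like the sketch below. It assumes Lightroom exported files with names like `portrait, studio light, neon.png` (the directory name and extensions are example assumptions):

```python
import os

def captions_from_filenames(image_dir, exts=(".png", ".jpg", ".jpeg")):
    """For each image, write a sidecar .txt caption file containing the
    comma-separated keywords taken from the image's own filename."""
    for name in os.listdir(image_dir):
        stem, ext = os.path.splitext(name)
        if ext.lower() not in exts:
            continue
        # Normalize spacing around the commas from the exported filename
        tags = ", ".join(t.strip() for t in stem.split(",") if t.strip())
        path = os.path.join(image_dir, stem + ".txt")
        with open(path, "w", encoding="utf-8") as fh:
            fh.write(tags)

# captions_from_filenames("dataset/100_neon")
```

One caveat with this workflow: filesystems forbid some characters in names, so very long or punctuation-heavy tag lists may not survive as filenames.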
you are awesome. internet is getting better bcs of u 🎉
Hey Olivio, great video. It really goes deep and covers the surface all in one. I have a question. The style model I want to use is Dreamlike Photorealistic, from the same people as Realistic Vision. In your case you trained with SD 1.5 in Kohya-Dreambooth (with captions for the training dataset), but your video is entitled LoRA. With LoRA I usually use a class dataset and a training dataset. Not sure if that was intentional. But more to the point: should I train with SD 1.5 and then merge into Dreamlike to apply that style? Should I also use captions? I am doing LoRA, though. Thanks
If I understand correctly, the merge is just a weighted average of the model weights, so the shapes of the models must be the same. So you should merge your LoRA with another LoRA that fits the style you like
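The weighted average this reply describes can be illustrated without any ML library. A toy sketch where each "state dict" maps parameter names to scalar weights; real merging works the same way tensor-by-tensor, which is why the shapes must match:

```python
def merge_weighted(state_a, state_b, alpha=0.5):
    """Blend two models' weights as alpha * A + (1 - alpha) * B.
    Both dicts must share the same keys (i.e. the same architecture)."""
    if state_a.keys() != state_b.keys():
        raise ValueError("models have different shapes; cannot merge")
    return {k: alpha * state_a[k] + (1 - alpha) * state_b[k] for k in state_a}

# Toy example with scalar "weights" standing in for tensors:
a = {"layer.weight": 1.0, "layer.bias": 0.0}
b = {"layer.weight": 3.0, "layer.bias": 2.0}
print(merge_weighted(a, b, alpha=0.5))  # -> {'layer.weight': 2.0, 'layer.bias': 1.0}
```

This also shows why a LoRA merges cleanly with another LoRA of the same rank and base, but not with a full checkpoint: the key sets simply don't line up.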
Great video Olivio! As I am watching this, I wonder what goes into the "log" folder and "model" folder? I didn't see what those are for. Also, do we need regularization images? If so, how many?
Would love to see this process using Google colab.
Hi Olivio, thanks for this guide, it's a bundle of useful information. I have a question: at 30:56, when your Kohya is full screen, I read "Learning rate: 0.00001", but shouldn't the default be 0.0001? Is this correct?
When I press train, it shows this error: "Not all images folders have proper name patterns"
Hi Olivio! I would like to know how many regularization images are needed when training a game character (in my case Rogue from Cyberpunk 2077), as she has a slightly off-beat appearance: punk clothes, hairstyle and cyberimplants. And I don't understand if I need to use regularization images when training the model, and if so, what token to use to classify this game character... It's very hard to understand. I would also like to know: if I'm training a LoRA at 768x768 resolution on an SD 1.5 model, and I do need regularization images, what resolution should they be? Should I use 512x512, or do I need 768x768?
Thanks. This video helped a lot. However, I do have a problem with Kohya. None of the buttons are working, so I cannot really do anything. There is a warning about LD_LIBRARY in the terminal. Am I missing some files, or what is going on?
Thank you so much, Olivio! Is the training feature on Automatic1111 still broken?
Thank you Olivio for this tutorial as well as for all your others. Great help. When I run my LORA models on my Automatic 1111, I get “ValueError: not enough values to unpack (expected 2, got 1)”. How can I fix that? Thanks for your help.
I would suggest to always select PNG over JPG since PNG will not lose image information, and the file is small enough anyway.
Correct, but don't convert RGBA to RGB by flattening transparent images onto a white background, because that can tie your trigger to the background: your prompts will then come out blank or just white, or just reproduce the template from the sample data.
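For context, flattening a transparent (RGBA) pixel onto white is standard alpha compositing, which is exactly how large transparent regions become solid white in the training data. A per-pixel sketch in pure Python (no imaging library):

```python
def flatten_on_white(r, g, b, a):
    """Composite one RGBA pixel (channels 0-255) over a white background."""
    def mix(c):
        # alpha-weighted blend of the pixel color with white (255)
        return (c * a + 255 * (255 - a)) // 255
    return mix(r), mix(g), mix(b)

# Fully transparent pixels become pure white, which is why a dataset full
# of flattened transparency can teach the model "blank/white" for a trigger.
print(flatten_on_white(200, 100, 50, 0))    # -> (255, 255, 255)
print(flatten_on_white(200, 100, 50, 255))  # -> (200, 100, 50)
```

If transparency has to be removed, compositing onto a varied or neutral background per image avoids teaching the model a single uniform backdrop.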
Great, but you skipped some confusing stuff... Do I choose "no distributed training" with a 4090 card? Do I choose to optimize the script with Dynamo, and if so, which Dynamo backend? Do I use DeepSpeed, etc.? I have no idea about these cryptic choices and what would be optimal... 😞 Help?
...also, you have a 4090, so when you get to choose, pick bf16. fp16 is for older cards, like my 1070.