❤
Fantastic job, well done!
Excellent. You made everything that I would have wanted from a first generative AI image course, everything in order of importance and possibility. This is the starting point most people crave when they get into AI. I've been exploring image generation since October 2022 and have kept myself updated since then, so it is sometimes hard to remain grounded.
Thanks for the feedback - that is my mission statement, entirely!
Interesting presentation, with the imitation of a lecture hall
Yes, I only had the audio; the first image you see was from the actual talk I gave this week.
Respect you as always ❤
Top work, mate, keep it up
Thanks 👍
It's all good, but a video on how to train models on your own dataset is really needed, because huge categories of data are missing from most models. For example architectural photos, ornamental works, engraving prints - no model can generate these, especially in perfect quality. To train a model, the dataset must be prepared and thoroughly captioned by another AI, which is not easy with libraries of thousands of photos.
I think you meant billions of images. However, the models we are using no longer contain copyrighted images - only SD1.5 and a small amount of SD2.1, and an even smaller portion of the SDXL dataset, contained recent works by living artists, IIRC. When I speak about "training on your own work", I mean "fine-tuning checkpoints" or "fine-tuning LoRA patch models" for various diffusion models. You are not wrong, but in practice even a good fine-tune will only produce approximations of your work when prompted for that purpose. FLUX.1 is the leading base model, and if you have enough data (your work) and you caption it accurately (tutorials on this are on this channel), it is totally a viable route that artists are using right now!
Personally, if I had the budget I would train a truly clean base model on only public domain works; this has been the goal for some time now, although we cannot guarantee it because of the time it would take to audit 5 billion images by human eyes (~20 years with no sleep)!
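The "~20 years with no sleep" figure is easy to sanity-check: assuming exactly 20 nonstop years, it works out to a sustained review rate of roughly 8 images every second.

```python
# Sanity check for the audit estimate: 5 billion images over 20 years
# of nonstop, sleepless review.
images = 5_000_000_000
seconds = 20 * 365 * 24 * 3600   # 20 years, ignoring leap days
rate = images / seconds
print(round(rate, 1))            # roughly 8 images per second, every second
```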
@FiveBelowFiveUK wow, you're right. I'm interested in that fine-tuning technique for sure. Do you know Eric Hartford, the famous publisher of uncensored LLMs in the UK?