I tested this specifically with some LoRA versions. We trained the same dataset with a character-based vs. a token-based caption limit. Prompts can apparently be longer than the training captions. While 1500 characters is on the edge of the limit, it can work depending on how those words are tokenized. In the end, although longer captions let you train more detail, staying brief keeps the model's attention from wandering and increases transformation flexibility.
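To see why 1500 characters is "on the edge," here's a rough back-of-the-envelope sketch. It assumes the common heuristic of ~4 characters per token for English text, and a 512-token cap (the default for Flux's T5 encoder in most pipelines) — actual counts depend on the tokenizer and the specific wording:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate: English averages ~4 characters per token.
    Real tokenizers (e.g. T5's SentencePiece) will vary per caption."""
    return round(len(text) / chars_per_token)

TOKEN_LIMIT = 512  # assumed default max_sequence_length for Flux's T5 encoder

caption = "x" * 1500  # stand-in for a 1500-character caption
tokens = estimate_tokens(caption)
print(tokens, tokens <= TOKEN_LIMIT)
```

So 1500 characters lands around ~375 tokens by this estimate — under the cap, but with enough unusual words (which tokenize into more pieces) a caption that long can still spill over.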
Very interesting and creative approach. Nicely done amigo!!❤️🇲🇽❤️
I’m captioning artwork and noticed the captions come out around 1500 characters. Does Flux actually accept that much for captioning?