Epoch → pronounced "ipok" in proper English, for next time 😉 Thanks for the tutorial 👍
Thank you. Your video helped me a lot. Before, I tried to train a language from scratch and was not successful, so I'll try your guide.
Thank you very much. It works.
Thank you for this video - very informative! I laughed so hard at the mistake: "stupid female voice" 🤣 but I think it's probably safe from the "Internet police" 🚔
I will use your tutorial to see if I can train a new language with this tech 👍
Great video and great explanation! I hope you do more tutorials like these in the future :) Would you say F5 is the best open-source TTS on the market?
I think so, as it is a bit more flexible than XTTS for adding different tones. BTW, I'm not associated with the F5-TTS team, just a random guy trying to find a good TTS.
Hello. Thank you for this very instructive video, not to mention that accent of ours. ;-)
Link updated. In the latest version of F5-TTS, in the web interface, select "custom" and enter these paths:
MODEL_CKPT: hf://RASPIAUDIO/F5-French-MixedSpeakers-reduced/model_last_reduced.pt
VOCAB_FILE: hf://RASPIAUDIO/F5-French-MixedSpeakers-reduced/vocab.txt
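The `hf://` strings above encode a Hugging Face repo id plus a file path inside that repo. A minimal sketch of splitting such a URI (the helper name is mine, not part of F5-TTS), e.g. in order to fetch the files with `huggingface_hub`:

```python
def split_hf_uri(uri: str) -> tuple[str, str]:
    """Split an hf://<user>/<repo>/<path> URI into (repo_id, filename)."""
    if not uri.startswith("hf://"):
        raise ValueError(f"not an hf:// URI: {uri}")
    parts = uri[len("hf://"):].split("/")
    repo_id = "/".join(parts[:2])    # e.g. RASPIAUDIO/F5-French-MixedSpeakers-reduced
    filename = "/".join(parts[2:])   # e.g. model_last_reduced.pt
    return repo_id, filename

repo, fname = split_hf_uri(
    "hf://RASPIAUDIO/F5-French-MixedSpeakers-reduced/model_last_reduced.pt")
print(repo)   # RASPIAUDIO/F5-French-MixedSpeakers-reduced
print(fname)  # model_last_reduced.pt

# With huggingface_hub installed, the checkpoint could then be fetched with:
#   from huggingface_hub import hf_hub_download
#   local_path = hf_hub_download(repo_id=repo, filename=fname)
```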
@@RaspiAudio Thank you very much.
Can you help me? I get a device/cuda (empty) error when testing the model at 19:00.
I like your video, thank you.
Good video, thanks for the contribution. One quick question: this is used when you want to add a new language, but suppose you want to use it for voice cloning, how would that work?
Thank you for this work. It seems your result is closer to zero-shot voice cloning than the one Jarod trained in his video tutorial (he used ~10 hours, single speaker). Just to get it right: the 80k samples you used were all from the same reader (single speaker)?
This would mean:
1) few hours, single speaker --> model speaks new language, but only for reference speaker from training data
2) many hours, single speaker --> model generalizes new language (zero shot capability)
3) many hours, multi-speaker, multi-language (as for the base model) --> proper voice cloning, code-switching within a single text
@@TheMame82 It's hard to draw conclusions at this point, as there is not enough data. After training with one speaker for 80k samples for consistent learning, I'm now fine-tuning with 90k samples from multiple speakers, hoping that it will help with zero-shot flexibility. I will publish the results.
Is the quality better if the larger .pt model file is used instead of last_reduced? Both seem to work, I'm just wondering.
Same quality, but only the larger one allows you to continue training on it.
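A full training checkpoint carries the optimizer state alongside the model weights; the "reduced" file presumably drops it, which is why it is smaller but cannot be used to resume training. A rough sketch of the idea (the key names like `optimizer_state_dict` are assumptions, not verified against F5-TTS's checkpoint layout, and plain `pickle` stands in for `torch.save`/`torch.load`):

```python
import pickle

# A full training checkpoint keeps model weights plus optimizer state;
# the optimizer state is only needed to *resume* training, not for inference.
full_ckpt = {
    "model_state_dict": {"layer.weight": [0.1, 0.2, 0.3]},  # toy stand-in for tensors
    "optimizer_state_dict": {"step": 80000, "exp_avg": [0.0, 0.0, 0.0]},
    "step": 80000,
}

# Keep only what inference needs -> roughly the "reduced" file.
reduced_ckpt = {k: v for k, v in full_ckpt.items() if k != "optimizer_state_dict"}

full_size = len(pickle.dumps(full_ckpt))
reduced_size = len(pickle.dumps(reduced_ckpt))
print(reduced_size < full_size)  # True: the reduced checkpoint is smaller
```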
@ Got it. The quality seems fine, I've tried several voices. Thank you
Hello, I would really like to train the model on a very, very large dataset (LibriSpeech, which is over 100 GB). How could I do that in the cloud? I think streaming would be complicated; I understood nothing of the original training code...
If your sound files are already transcribed to text, you just need to put them in the right format; otherwise, run Whisper on them.
I was thinking of making a video on how to do this in the cloud, but training on 100 GB will cost a lot!
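A minimal sketch of the "right format" step: building a `metadata.csv` that maps each audio clip to its transcript. The pipe-separated `audio_file|text` layout is an assumption here, so check what your F5-TTS version actually expects; the Whisper call shown in the comment is how missing transcripts could be produced.

```python
import csv
from pathlib import Path

# Toy transcripts; in practice these come from existing text files,
# or from Whisper, roughly:
#   import whisper
#   model = whisper.load_model("medium")
#   text = model.transcribe("wavs/clip_0001.wav", language="fr")["text"]
transcripts = {
    "clip_0001.wav": "Bonjour tout le monde.",
    "clip_0002.wav": "Ceci est un exemple.",
}

# Write one `audio_file|text` line per clip (pipe-separated layout is an
# assumption; verify against the dataset format your F5-TTS version uses).
out = Path("metadata.csv")
with out.open("w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="|")
    writer.writerow(["audio_file", "text"])
    for name, text in transcripts.items():
        writer.writerow([f"wavs/{name}", text])

print(out.read_text(encoding="utf-8"))
```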
@@RaspiAudio I would like to fund that. Would you be willing to train a multilingual, multi-speaker model (with a language token; I noticed the model struggles with cross-lingual...)?
Do you have a contact?
@@lullu3467 Yes, you can use info@raspiaudio.com
Can I use or add the Bengali language to train this?
What computer specs are required to train the model?
I'm using an RTX 4090, but I would like to make a Google Colab so anyone could train in the cloud on a pay-per-use basis.
@RaspiAudio OK thanks
How long do I need to record my voice? What do you think is the minimum amount of training data?
@@cyberbol The reference recording (the voice to clone) can be very short, around 10 s.
But if you need to train a new language, I think you will need at least 20 hours of audio.
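To check whether a corpus reaches that ~20-hour mark, you can sum the durations of the wav files with nothing but the standard library. A small sketch (the directory layout is illustrative):

```python
import wave
from pathlib import Path

def total_hours(wav_dir: str) -> float:
    """Sum the duration of every .wav under wav_dir, in hours."""
    seconds = 0.0
    for path in Path(wav_dir).glob("*.wav"):
        with wave.open(str(path), "rb") as w:
            seconds += w.getnframes() / w.getframerate()
    return seconds / 3600.0

# Demo: write a one-second silent clip and measure it.
Path("wavs").mkdir(exist_ok=True)
with wave.open("wavs/silence.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)       # 16-bit samples
    w.setframerate(24000)
    w.writeframes(b"\x00\x00" * 24000)  # 1 s of silence

print(round(total_hours("wavs") * 3600))  # 1 (second)
```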
@@RaspiAudio Ohh. Yes, I'd like to train one, thank you. The problem with cloning is that it doesn't work for languages other than EN and Chinese. I want to use Polish, so I have no option; I think I need to train a model.
Hi! I'm running a French training with the Mozilla corpus of 800k files. I've completed 20 epochs out of 40. I'll keep you posted.
However, F5-TTS contains some bugs. I had to create folders like "french" when I already had "french_char" created.
I also have a sample of 8k files in Québécois, to be more regional!
@@Pacifier1222 It would be really cool if you could train starting from my checkpoint; that way we could combine efforts instead of starting from scratch every time.
@@RaspiAudio Actually, I had already done 20 epochs in the end. I decided to do 20 more, as I found the tone was incorrect on certain words.
I already have one week of training done on it, so I certainly would not want to start over.
What hardware are you using?
@@RaspiAudio Nvidia 3090, AMD 5950x and 64 GB of RAM
I get in my info: transcribe complete samples: 0
path: C:\F5-TTS\F5-TTS\src\f5_tts\..\..\data\my_speak_char\wavs
error files: 5
Your path is wrong.
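"transcribe complete samples: 0" with a nonzero error count usually means the tool found no readable wavs at the path it printed. A quick sanity check is to list the directory yourself before launching the transcription step (the helper is mine; the path in the comment is the one from the error above):

```python
from pathlib import Path

def check_wav_dir(path: str) -> int:
    """Print diagnostics for a dataset wav directory; return the wav count."""
    p = Path(path).resolve()
    if not p.is_dir():
        print(f"missing directory: {p}")
        return 0
    wavs = sorted(p.glob("*.wav"))
    print(f"{p}: {len(wavs)} wav file(s)")
    for w in wavs[:5]:          # show the first few names as a spot check
        print("  ", w.name)
    return len(wavs)

# e.g. check_wav_dir(r"C:\F5-TTS\F5-TTS\data\my_speak_char\wavs")
```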
Original French?
@@normioffi Yes, yes.
That's great!
Unfortunately, it does not support the Indonesian language.
Find large audiobooks or audio files totaling a minimum of 10 h in your language and train on that.
Don't waste your time, F5-TTS is horrible, I'm sorry.
@@mulagraphics it's not, what else do you recommend?
Actually, I trained a new language with under 2 hours of data. It's very good 👍. I don't know which script made that possible.
@@bomar920 Which language did you use? Is it single-speaker or multi-speaker?