Very helpful, thanks!
Do I have to provide context (input) when asking a question at the inference stage?
It will work well on questions from the trained dataset, but if you feed it a new question with context concatenated, it will run but likely throw garbage. It's too small a transformer to have strong reasoning.
Thanks, this video is so helpful. Just one question: what part are we fine-tuning? Are we fine-tuning all layers?
Yup!
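If you want to verify that yourself, here's a quick sanity check (a minimal sketch; it assumes the same T5 model class used in the video):

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
# By default every parameter has requires_grad=True, so the whole model trains
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / total: {total:,}")  # equal unless you freeze layers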
Thanks for the video. Can we download the fine-tuned tokenizer and model from Google Colab for later use? If yes, how?
You can use os.getcwd() with os.path.join, or simply "./model", to save it in the directory you want. Then you can download it from the Colab file browser.
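For example, a minimal sketch (it assumes the model and tokenizer variables from the video):

import os

save_dir = os.path.join(os.getcwd(), "model")  # or simply "./model"
model.save_pretrained(save_dir)       # writes the config and model weights
tokenizer.save_pretrained(save_dir)   # writes the tokenizer files alongside

The "model" folder then shows up in Colab's file browser, where you can download it (zip it first if you want a single file).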
Can you make a video on document question answering?
Sure, covering it in an upcoming video!
Hey, it's a very nice video.
Can you guide me if I want to create a project for natural-language-to-SQL query conversion? How can I build this using Hugging Face and NLTK? I'm not getting how to make it.
Use an LLM to convert English to a SQL query.
@@DevelopersHutt Can you please guide me through the flow, like how I can do it?
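One possible flow is to load a text-to-SQL checkpoint from the Hub and generate from a prefixed question. A minimal sketch (the checkpoint name below is just an example from the Hub, not from the video; verify it before use):

from transformers import T5TokenizerFast, T5ForConditionalGeneration

model_name = "mrm8488/t5-base-finetuned-wikiSQL"  # example checkpoint; check it on the Hub
tokenizer = T5TokenizerFast.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# This checkpoint expects a "translate English to SQL:" prefix
question = "translate English to SQL: How many users signed up in 2023?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))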
Great video! I'm kind of new to PyTorch. I've already trained and saved the model; how do I load it again for inference?
You can use:

from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("path/to/saved/tokenizer")
model = T5ForConditionalGeneration.from_pretrained("path/to/saved/model")

And the rest of the inference code is the same as in the video.
@@DevelopersHutt Thanks a lot! Great video, love the way you write and explain every line of code.
Can we fine-tune this model to generate interview questions from a job description (as context), or is there any other model that can do such a thing?
With the right dataset, absolutely yes.
There are many other great models out there, but you may find them a bit large and computationally expensive, so T5 might be the optimal choice.
@@DevelopersHutt Can you please guide me on how to do so?
@@umaisimran4383 First you'll need to prepare a basic dataset.
The dataset should contain the following:
1. Popular questions for every skill. For example, for the keyword "python" there should be some 4 to 5 questions in the dataset.
2. You can also pair a difficulty level with each skill and pick questions accordingly. For example, for python you can have multiple rows containing different questions based on experience level.
Similarly, you can add more attributes to make your dataset diverse.
3. Then convert your dataset into the format T5 requires (see the sketch after this list).
This is a very high-level overview, and of course you can break it down into sub-problems according to your use case.
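For step 3, the rows might look something like this (a hypothetical example of T5's text-to-text format; the column names and the "generate question:" prefix are assumptions, not from the video):

# Each row pairs a prefixed input with the target text T5 should learn to produce
rows = [
    {
        "input_text": "generate question: skill: python | level: junior",
        "target_text": "What is the difference between a list and a tuple?",
    },
    {
        "input_text": "generate question: skill: python | level: senior",
        "target_text": "How does the GIL affect multithreaded Python programs?",
    },
]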
@@DevelopersHutt Thank you so much.
Hi there,
Thank you! I need your help with translating from Urdu to English using T5 small. Could you please guide me? I'm willing to tip $50 for your assistance. I'm quite new to this. I also have a dataset ready.
Thanks!
Can I fine-tune the model to generate a sentence from a set of keywords?
T5 will not be an optimal approach if you want these kinds of results.
You can pick models like phi-2 or llama-7b and fine-tune them using adapters.
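For example, a minimal adapter (LoRA) sketch with the peft library (the model name, target_modules, and config values are illustrative assumptions; adjust them to the model you pick):

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")
# Wrap the frozen base model with small trainable low-rank adapters
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"],  # assumed attention projections
                    task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter weights are trained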
@@DevelopersHutt Can it run on a local CPU?
@@arf_atelier1819 It can, but the speed will be terribly slow.
With another dataset, do I follow the same steps, or do I need to customize the dataset?
If you're using the same process as in the video, then yes, you'll need the data in the same format.