Custom Training Question Answer Model Using Transformer BERT
- Published: Oct 4, 2024
- The Simple Transformers library is built on top of the Transformers library by Hugging Face. Simple Transformers lets you quickly train and evaluate Transformer models: only three lines of code are needed to initialize a model, train it, and evaluate it.
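The three-line workflow can be sketched as follows. This is a minimal sketch, not the video's exact code: the model lines are commented out because they require `pip install simpletransformers` plus a model download, and the model type and name shown are illustrative. The training-data shape, however, is the flat context/qas format the library expects:

```python
# Training data in the SQuAD-style format Simple Transformers expects:
# a list of documents, each with a context and a list of question/answer dicts.
context = "Simple Transformers is built on the Transformers library by Hugging Face."
answer = "the Transformers library by Hugging Face"
train_data = [{
    "context": context,
    "qas": [{
        "id": "00001",
        "question": "What is Simple Transformers built on?",
        "is_impossible": False,
        "answers": [{"text": answer, "answer_start": context.find(answer)}],
    }],
}]

# The three lines themselves (commented out; needs simpletransformers installed):
# from simpletransformers.question_answering import QuestionAnsweringModel
# model = QuestionAnsweringModel("bert", "bert-base-cased", use_cuda=False)  # 1. initialize
# model.train_model(train_data)                                              # 2. train
# result, texts = model.eval_model(train_data)                               # 3. evaluate
```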
github: github.com/kri...
simple transformer: simpletransfor...
simple transformer github: github.com/Thi...
⭐ Kite is a free AI-powered coding assistant that will help you code faster and smarter. The Kite plugin integrates with all the top editors and IDEs to give you smart completions and documentation while you’re typing. I've been using Kite for a few months and I love it! www.kite.com/g...
Subscribe to my vlogging channel
/ @krishnaikhindi
If you want to support the channel, please donate through the GPay UPI ID below.
Gpay: krishnaik06@okicici
Telegram link: t.me/joinchat/...
Please join my channel as a member to get additional benefits like Data Science materials, members-only live streams, and more
/ @krishnaik06
Connect with me here:
Twitter: / krishnaik06
Facebook: / krishnaik06
instagram: / krishnaik06
You're so passionate about teaching, and it shows through all your tutorials. Thanks for the effort you put into this and for helping others. The best ML channel I know so far on YouTube.
Thank you Krish for covering this Topic, you are a saviour as always.
There are annotation tools like Haystack Annotation where you can annotate the training and test datasets manually. It's a great tool for someone who is looking to train on a huge corpus of data. Btw, fantastic video Krish!! Thank you :)
Hi Krunal! Thanks for the insight!
Have you tried this with simpletransformers?
I tried using haystack annotation and exported the annotated documents in the SQuAD format, but that doesn't seem to work while training the model!
Am I doing something wrong?
@@hridaymehta893 Hi, may I ask for the solution to the issues you mentioned?
@@avartarstar6744 Hey, as far as I remember, there is just some formatting that needs to be done in the JSON file, after you export the annotations.
You just need to remove some fields in the JSON file, if I am not wrong.
Check the difference in the format of the file required for simpletransformers vs what you are getting using Haystack.
Or you can use the Haystack Model itself for the QnA training. I worked with the Haystack Model as well.
@@hridaymehta893 i see! Much appreciated!
@@avartarstar6744 No worries :)
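The reformatting described in the replies above can be sketched like this. It assumes the Haystack annotation tool exports standard SQuAD JSON, which nests paragraphs under articles with a "title" field, while simpletransformers wants a flat list of context/qas dicts; the exact export shape may vary by version:

```python
def squad_to_simpletransformers(squad):
    """Flatten a SQuAD-style export into the flat list simpletransformers expects."""
    flat = []
    for article in squad["data"]:            # drop the article level (and its "title")
        for para in article["paragraphs"]:
            flat.append({"context": para["context"], "qas": para["qas"]})
    return flat
```

Comparing one flattened entry against a working simpletransformers example, field by field, is a quick way to spot any remaining mismatch.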
Thank you Krish. We really appreciate your effort to create the video lectures. Your tutorials are really informative. Thanks for covering the Question Answer Generation BERT model topic.
Thanks for all the effort you've put on this Krishnaik. It's super well-made and helpful!
@3:52, that's a wrong explanation of the is_impossible flag. It essentially means that if it is set to false, the answer can be obtained directly from the context, and if it is true, the question cannot be answered from the context.
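That corrected meaning, as a minimal made-up data example (context and questions are invented for illustration):

```python
context = "Krish uploaded the tutorial in October."

qas = [
    {   # the answer appears in the context, so is_impossible is False
        "id": "q1",
        "question": "When did Krish upload the tutorial?",
        "is_impossible": False,
        "answers": [{"text": "October", "answer_start": context.find("October")}],
    },
    {   # the context says nothing about duration, so is_impossible is True
        "id": "q2",
        "question": "How long is the tutorial?",
        "is_impossible": True,
        "answers": [],
    },
]
```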
Krish sir, your videos are more informative than others. Would you please share how you created the dataset for QA model training?
Thank you Krish.
Could you please make a video for "Text Summarization with Custom Data"
Thank you Krish. This was really helpful. Keep up the good work :)
Sir, I am not able to find more videos from this playlist. This is an amazing playlist. I want to learn more about transformers.
This QA setup seems like nonsense. I mean, if the user has to provide the context along with the question, then he already knows the answer, so why would he need the model? 😂
If the document is too long and the person doesn't wanna read it all
It's called extractive question answering for a reason 🤦 what did you expect
@@nicolasnicolas5238 But the extraction process will take a lot of time if the context is a long document.
@@SmartTech-m1u
Actually, it doesn't take too much time. If you know beforehand that the task is QA with context, you can fine-tune a small language model, which should run faster than an LLM like GPT-3; it should even run faster than GPT-3.5-turbo or GPT-4o-mini by orders of magnitude, at no cost. This is possible because extractive QA can be formulated as a prediction of indices over the context. The training data looks something like input=(question, context) -> output=(position where the answer appears in the context, span of the answer). Notice that the output is just a tuple of integers, which means this task is easier than text generation (generative QA).
Now, you can filter out unnecessary context even further: just follow the RAG process but swap the final call to the generative model (the G in RAG) for the extractive QA model, and there you go. That simple trick should make the whole process even faster.
And if you use an open model trained on the SQuAD dataset, the implementation should be really easy; no more than 50 lines, I'd say.
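The index-prediction formulation described above can be sketched as follows, with toy score lists standing in for a model's start/end logits over the context tokens:

```python
def extract_span(tokens, start_scores, end_scores, max_answer_len=30):
    """Pick the span whose start score + end score is highest, with start <= end."""
    best_score, best = float("-inf"), (0, 0)
    for s in range(len(tokens)):
        for e in range(s, min(s + max_answer_len, len(tokens))):
            score = start_scores[s] + end_scores[e]
            if score > best_score:
                best_score, best = score, (s, e)
    s, e = best
    return " ".join(tokens[s:e + 1])
```

A real model produces the two score vectors in a single forward pass, so the expensive part is one encoder run over the context, not generation token by token.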
Great topic Krish, please add videos on other NLP tasks also.
GREAT video! very informative!
Please create a video on how to train squad dataset for question generation. And thanks for this video.
Thank you for your video. It was so helpful. One question: in a real implementation, how do you use a metric (e.g. F1 score) to evaluate the model?
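For reference, the token-overlap F1 commonly used for extractive QA (as in SQuAD evaluation) can be computed like this. This is a sketch: the official SQuAD script additionally strips punctuation and articles before comparing, which is omitted here:

```python
from collections import Counter

def qa_f1(prediction, ground_truth):
    """Token-overlap F1 between a predicted answer string and the gold answer."""
    pred = prediction.lower().split()
    gold = ground_truth.lower().split()
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```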
Context, a can-answer flag, the index at which the answer is available... isn't feeding this level of detail to an ML model already pretrained on natural language too much spoon-feeding? Think of the effort it takes to create a Q&A dataset covering diverse topics. In the end, is the model just doing a lookup or search? Where is the intelligence?
Change train_batch_size to a lower value like 6 or 2. It will give you the correct result, because the number of training examples is very small.
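In simpletransformers, settings like this are passed as an args dict when the model is created; the values below are illustrative, not recommendations from the video:

```python
# Hyperparameters for a tiny dataset (illustrative values)
train_args = {
    "train_batch_size": 2,       # small batch because there are few examples
    "num_train_epochs": 10,
    "overwrite_output_dir": True,
}
# model = QuestionAnsweringModel("bert", "bert-base-cased", args=train_args)
```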
Thank you so much, Krish! How can you get the list of training accuracy and evaluation accuracy?
Thank you, your videos are of great help. Can you please guide me on how you created your custom data? Like if there are any labelling tools for question answering tasks.
Thank you so much, it was a great tutorial. It helped a lot.
Hey, can you make a video on automatically creating questions and answers from PDF and text files?
@krishnaik Hi, when creating a custom dataset, is it best to keep the context as short or as long as possible? Additionally, can a context have multiple questions, and can each question have multiple variations of the answer?
I would request you to please make an updated video on this same topic. This is need of the hour.
Thank you Krish
Hi Krish, can you please make a video on Feature Engineering on numerical variables? Thank you
Hi! I have a question... How can I use Simple Transformers to generate smarter answers?
impressive tutorial
The dataset building process will take too long, and creating custom datasets from scratch this way is not feasible. Is there any workaround for this? Mostly looking for an answer that will automate this task.
Hi Krish, thank you very much for the great video. My dataset is in CSV format with one column of descriptions and another column of labels, both text. Can I do QA on this such that the Q is the description and the answer is the label? If yes, how can I prepare the data in the format you mentioned?
Have you got the process?
how to prepare that?
@@manasmanuu5430 No, what do you mean?
@saharyarmohamadi9176 Hi, I have the same data as you and tried converting it to JSON, but when I call train_model I get a message that it cannot be found... (I don't know if the default setting answer_start = 0 affects the results or not.)
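A conversion along the lines asked about in this thread can be sketched like this. The column names and the template question are assumptions; note that extractive QA only works when the label text appears verbatim in the description, and computing answer_start with find avoids hard-coding 0, which points at the wrong span whenever the answer does not start the context:

```python
import csv
import io

def rows_to_squad(csv_text, question="What is the label?"):
    """Convert a two-column CSV (description, label) into the flat QA format."""
    data = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        context, answer = row["description"], row["label"]
        start = context.find(answer)           # real position, not a hard-coded 0
        data.append({
            "context": context,
            "qas": [{
                "id": str(i),
                "question": question,
                "is_impossible": start == -1,  # label text absent from description
                "answers": [] if start == -1 else
                           [{"text": answer, "answer_start": start}],
            }],
        })
    return data
```

If most rows come out with is_impossible set to True, the labels don't occur inside the descriptions, and the task is really text classification rather than extractive QA.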
How can I achieve the same question answering model without Transformers, using RNN/LSTM and an attention mechanism? Please help me with that, sir.
How to finetune Dense Passage Retriever using your own excel file for Question-Answer Model using DPR?
can I use a one large context with many questions, instead of using separate contexts?
Sir, my dataset has only two columns, question and answer. I want to train my model on this. Can I train my model using this method or not?
thank you krish for the great video, can you please make a video about deploying custom object detection model on android? thanks in advance!
What if I have a CSV file with 2 columns, "Questions" and "Answers"? How will I build the chatbot then?
Sir make video on batch normalisation please
Sir is this based on same paper
Could you please make a similar video for Jira sample data?
Krish sir, please, please reply to one thing: if I want advice from you on what subscription I have to take, please tell me. You are everything for me, sir.
I want this to work offline. What should I do, sir? Where should I download the BERT files from?
What is the difference between a Transformer and Simple Transformers? You are using Simple Transformers while implementing the QA. Can anyone who knows the answer explain?
model.train_model(train, eval_data=test)

NameError Traceback (most recent call last)
----> 1 model.train_model(train, eval_data=test)

NameError: name 'test' is not defined
I have the same issue.
Krish sir, I want to work under you for my whole life. What do I have to do? Please tell me, because of you I am learning so many things.
Why is it asking for an API key? Can't we train offline? If so, what is required, ji?
Sir can you add a video to extract the output from the lstm model
Great tutorial
Is multi-label classification using BERT possible? Any good neural network project to refer to?
Why is the eval_loss negative? Is it possible that there might be a bug somewhere?
Great tutorial. Could anybody share the link to the source code?
How can I use this for more than 512 tokens?
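The usual workaround for BERT's 512-token limit is a sliding window with a stride: split the long context into overlapping chunks, run QA on each, and keep the best-scoring span. A plain-Python sketch of the chunking idea (libraries typically handle this internally via stride-style settings):

```python
def sliding_windows(tokens, max_len=512, stride=128):
    """Split a long token list into overlapping windows of at most max_len tokens."""
    step = max_len - stride          # how far each window advances
    windows = []
    for start in range(0, len(tokens), step):
        windows.append(tokens[start:start + max_len])
        if start + max_len >= len(tokens):
            break                    # last window already covers the tail
    return windows
```

The overlap (stride) exists so an answer that straddles a chunk boundary still appears whole in at least one window.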
Can't we use a CSV file for the BERT model?
Is this task done with PyTorch or TensorFlow?
Please can you make more videos on transformers
Can we use it for fake news detection? And will Excel format work?
model.train_model(train, eval_data = test) Hey Krish, got confused at this point. What is train_model?
can i do it for mistral or llama2?
Can I use this simpletransformer for bengaliQA dataset?
Sir, is there a fast and efficient way to create the dataset in this format from a csv file?
@krishnaik The video is amazing... I am looking for FAQ-based QnA with no context... how can I use your code?
Please share some input..
Thanks for the illustrative video. I have transformed my data from csv to json format required by simple transformers. I checked the format line by line with yours as well. As soon as I try to train my model, it says list index out of range. Can you please help me why is it throwing that error?
Hey I faced the same error. Were you able to solve it?
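One possible cause of a "list index out of range" during training is an entry whose answers list is empty while is_impossible is False. A quick check like this (a hypothetical helper, assuming the flat context/qas format) can locate such entries:

```python
def find_bad_entries(data):
    """Return ids of answerable questions that have no answers listed."""
    bad = []
    for doc in data:
        for qa in doc["qas"]:
            if not qa.get("is_impossible", False) and not qa.get("answers"):
                bad.append(qa["id"])
    return bad
```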
@krish What is the use of this if we have to provide the context every single time we make a prediction? That makes this whole framework garbage, if there is no other way. If there is, please share.
Amazing Video! Can you make similar video on Conversational AI with End to End Pipeline :)
Can you tell me how to install cdQA in Colab?
How to create a custom dataset?
no offence but you don't have any idea how to improve accuracy
gem💎
#Thanks #krish
How do I become an expert in data science? How?