Training a Sentiment Model Using BERT and Serving It with a Flask API

  • Published: 28 Dec 2024

Comments • 139

  • @abhishekkrthakur · 4 years ago · +27

    Full code is available here: github.com/abhishekkrthakur/bert-sentiment/
    Please NOTE: As Chirag Jain pointed out, at 18:14 it should be "self.review[item]" instead of "self.review". This is fixed in the GitHub repo.

    • @amilapathirana4030 · 4 years ago

      self.target --> self.target[item]

    • @samkomo4289 · 3 years ago

      @@amilapathirana4030 That's taken care of on line 35.

    • @harissaeed5811 · 2 years ago

      @@samkomo4289 Sir, how can a Roman Urdu dataset be trained using BERT? Can you please help me with this?

  • @chiranshuadik · 3 years ago · +11

    This is the most organized and neat implementation of NN code I've seen. Thanks for sharing!

  • @abhishekkrthakur · 4 years ago · +5

    As Chirag Jain pointed out, at 18:14 it is self.review[item] instead of self.review.

  • @not_a_human_being · 4 years ago · +4

    Amazing stuff! Truly end-to-end with no external code copy-pastes; this is how every coding tutorial should be!

  • @shaheerzaman620 · 4 years ago · +4

    Great stuff as usual. Complexity simplified. Thanks, Abhishek. What makes your videos so great is that you teach practical, real-world advanced concepts. Please continue the great work.

  • @user-or7ji5hv8y · 4 years ago

    I really like the way you architected the project. It makes a lot of sense and is easy to follow. Thanks for sharing the GitHub.

  • @amilapathirana8822 · 4 years ago

    I like the way you code. I started following each and every step of your video, trying to fully understand each and every step in depth. It took me 12 hours to get to the 25:00 mark of the video. Hopefully I can complete your tutorial next week :)

  • @mankaransingh2599 · 4 years ago · +4

    Great video as always, good to see you are covering things that actually require experience and not some basic videos.

  • @akhilsingh7917 · 4 years ago · +8

    It's always a pleasure to see you, 4x GM.

  • @architgarg5678 · 4 years ago · +1

    Hey,
    Thanks for being there; you are a boon to all data science aspirants.

  • @bernardogarciadelrio3630 · 3 years ago

    Very helpful tutorial! Many thanks. Looking forward to more videos about NLP.

  • @manishbhatia2724 · 3 years ago

    Simplicity along with Awesomeness

  • @bk100bk · 4 years ago

    This video helped me a lot! I'm a complete beginner in NLP and BERT. An impressive basic BERT model using PyTorch. Thank you!

  • @sriramvenkatasubrahmanyamv1527 · 4 years ago

    Thank you so much. It really helped me get a head start on using BERT in my other projects. Looking forward to seeing your future videos.

  • @2107mann · 4 years ago · +1

    Awesome... Thanks a lot, Abhishek!

  • @deepakkumarsuresh1921 · 4 years ago

    Thank you for the session on BERT and deployment, Abhishek. Looking forward to learning more advanced ML stuff from you.

  • @arjungoalset8442 · 4 years ago · +1

    Thanks a lot! Your way of doing the project has taught me a lot. 🙏

    • @abhishekkrthakur · 4 years ago

      Thank you! I consider it an honour if I have been helpful :)

    • @arjungoalset8442 · 4 years ago

      @@abhishekkrthakur Love your hat :)

  • @ZolTheSuci · 4 years ago · +3

    Instead of padding the ids, attention_masks, and token_type_ids manually, there's also a `pad_to_max_length` param in `encode_plus()` that automatically pads according to the length you give it.

    • @abhishekkrthakur · 4 years ago · +3

      Yes, a PR has updated the code on GitHub :)

    • @prasad_yt · 4 years ago

      @@abhishekkrthakur I guess this will need another change, as the `pad_to_max_length` argument is deprecated and will be removed in a future version. Change required: padding='max_length'.
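
A minimal sketch of the padded encoding discussed in this thread, assuming bert-base-uncased and an illustrative MAX_LEN (not necessarily the video's values):

    import transformers

    tokenizer = transformers.BertTokenizer.from_pretrained("bert-base-uncased", do_lower_case=True)
    MAX_LEN = 64

    inputs = tokenizer.encode_plus(
        "this movie was great",
        None,
        add_special_tokens=True,
        max_length=MAX_LEN,
        padding="max_length",   # replaces the deprecated pad_to_max_length=True
        truncation=True,
    )
    ids = inputs["input_ids"]             # already padded to MAX_LEN
    mask = inputs["attention_mask"]       # 1 for real tokens, 0 for padding
    token_type_ids = inputs["token_type_ids"]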

  • @harimohan810 · 4 years ago · +1

    Thanks a lot for posting these videos; they are very helpful.

  • @VladimirMheidze · 4 years ago · +1

    Thank you for the great lesson!

  • @chiragjn101 · 4 years ago · +2

    At 18:14 shouldn't it be self.review[item]?

    • @abhishekkrthakur · 4 years ago · +1

      YES! It's a big mistake that I made in the video. I fixed it later but forgot to show it. It's fixed in the GitHub repo. Thank you so much for pointing it out! :)

    • @chiragjn101 · 4 years ago · +1

      @@abhishekkrthakur That str cast! Easy to make mistakes when everything works so seamlessly in Python 😆

    • @tanulsingh4797 · 4 years ago

      Can you please tell me why we do review[item]? What is item here?
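
For context, a minimal sketch of the dataset's __getitem__ in the spirit of the video's code (names follow the repo, but treat this as illustrative): item is the integer index that PyTorch's DataLoader passes in, so self.review[item] selects one review rather than the whole column.

    import torch

    class BERTDataset:
        def __init__(self, review, target, tokenizer, max_len):
            self.review = review        # list/array of review strings
            self.target = target        # list/array of 0/1 labels
            self.tokenizer = tokenizer
            self.max_len = max_len

        def __len__(self):
            return len(self.review)

        def __getitem__(self, item):
            # item is the integer index supplied by the DataLoader
            review = str(self.review[item])   # one review, not self.review
            inputs = self.tokenizer.encode_plus(
                review,
                None,
                add_special_tokens=True,
                max_length=self.max_len,
                padding="max_length",
                truncation=True,
            )
            return {
                "ids": torch.tensor(inputs["input_ids"], dtype=torch.long),
                "mask": torch.tensor(inputs["attention_mask"], dtype=torch.long),
                "token_type_ids": torch.tensor(inputs["token_type_ids"], dtype=torch.long),
                "targets": torch.tensor(self.target[item], dtype=torch.float),
            }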

  • @shikharsaxena9989 · 1 year ago

    Amazing video.
    The code was throwing the error "TypeError: dropout(): argument 'input' (position 1) must be Tensor, not str".
    I added return_dict=False to the model parameters and it worked fine after that.

  • @smakarevich · 4 years ago · +1

    The part about copy/pasting 20 lines of code was heartbreaking, as was the import structure :)
    Plus one for the line splits, though. I love it.

    • @abhishekkrthakur · 4 years ago

      Which part? Did I copy-paste from somewhere I shouldn't have? 🤔

    • @smakarevich · 4 years ago

      Abhishek Thakur, I would create a function if I had to copy even one line of code. And when imports are in line with PEP recommendations, it is easier to understand the structure of dependencies.

    • @abhishekkrthakur · 4 years ago · +6

      You are right. When we have the same code, we should create a function instead. Probably I was just being lazy, haha. I'll take care of it in the next videos. Currently I'm ignoring import order, but I don't do that in real life. It's good that you point it out :) I hope I don't have it like that in future videos :) Thank you!

  • @varuntandon4465 · 4 years ago

    How can this binary BERT classification be extended to multi-class classification problems? Does it depend on the 'out' attribute in the BERTBaseUncased class? Could you tell us more about the loss function you were talking about at 9:37?

  • @shubheshswain5480 · 3 years ago

    Easily understandable by a beginner. Could you make a video on toxic comment classification with Flask?

  • @krantikumar2886 · 4 years ago

    Thanks a lot, Abhishek!

  • @prachinagpal3112 · 4 years ago

    Thanks for sharing.

  • @FREELEARNING · 4 years ago

    Hi Abhishek, you make great videos; thank you for sharing your knowledge.
    Usually, after developing an application and before deployment, a testing phase is done. I want to learn more about how you do this phase, and whether it's possible to make a video about it.

  • @siddharthjain3945 · 3 years ago · +2

    Getting the error "TypeError: dropout(): argument 'input' (position 1) must be Tensor, not str".
    Help me with this.

    • @lorenzof4787 · 2 years ago

      I got the same error and solved it. I think this is because he's writing code for an older version of the transformers package.
      Simply go to model.py and change self.bert by adding the param 'return_dict=False'. The line should look like this:
      self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH, return_dict=False)

  • @Prasad-MachineLearningInTelugu · 4 years ago · +1

    Thank you sir 🤩🤩🤩🤩

  • @canernm · 3 years ago

    Hi, thanks for the video. Quick question: are there any benefits, when creating the model class, to inheriting from huggingface's PreTrainedModel instead of torch.nn.Module?

  • @akshayraj4627 · 4 years ago

    Hey Abhishek, grateful for this content. A question though: why was line 78 used in train.py? Why not use all of the outputs to calculate the accuracy score?

  • @ayuumi7926 · 4 years ago

    Thanks for the great video. You mentioned the practice of using a tokenizer dispatcher to compare different models. May I know in which video you demonstrated that?

  • @jyoti.m7 · 1 year ago

    Got an error: TypeError: dropout(): argument 'input' (position 1) must be Tensor, not str.
    Solution: adding return_dict=False to the model parameters resolves the problem with certain versions of the Transformers library.
    In recent versions of the Transformers library, the default behavior of BertModel is to return a dictionary with various outputs (last_hidden_state, pooler_output, etc.) when the model is called. However, in some cases, such as custom model architectures or specific usage scenarios, you might need to set return_dict=False to get the raw output tensors directly instead of a dictionary.
    By setting return_dict=False, you instruct the model to return the raw output tensors, which may be more compatible with your specific model architecture or downstream tasks.

    class BERTBaseUncased(nn.Module):
        def __init__(self):
            super(BERTBaseUncased, self).__init__()
            self.bert = transformers.BertModel.from_pretrained(config.BERT_PATH, return_dict=False)
            self.bert_drop = nn.Dropout(0.3)
            self.out = nn.Linear(768, 1)

        def forward(self, ids, mask, token_type_ids):
            # with return_dict=False the model returns a tuple; unpack the pooled output
            _, o2 = self.bert(ids, attention_mask=mask, token_type_ids=token_type_ids)
            bo = self.bert_drop(o2)
            output = self.out(bo)
            return output

    With this change, the model works correctly with the train_fn and eval_fn functions without raising the TypeError in the dropout operation.

  • @user-or7ji5hv8y · 4 years ago

    Why doesn't the engine.train_fn function have to return anything? I see: because the instantiated model object retains the optimized weights even without returning the model object from train_fn.
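
A minimal sketch of that point (names illustrative, not the repo's code): the function receives a reference to the same model object, and optimizer.step() mutates its parameters in place, so nothing needs to be returned.

    import torch
    import torch.nn as nn

    def train_fn(model, optimizer):
        # model is a reference to the caller's object, not a copy;
        # optimizer.step() updates its parameters in place
        loss = model(torch.randn(4, 3)).sum()
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    model = nn.Linear(3, 1)
    before = model.weight.detach().clone()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    train_fn(model, optimizer)                 # returns nothing
    print(torch.equal(before, model.weight))   # False: the caller's model changed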

  • @dalchemistt7 · 4 years ago · +1

    Thanks a lot for the video. It provided a structured overview of applying multiple things together.
    As a request, could you do a tutorial on using transformers for a sequence labeling task and on using/customizing various attention layers?
    TIA.

  • @KamalChhirang · 4 years ago · +1

    Thanks, great video. I am wondering what changes we need to make to your wonderful code to train it using DistilBERT.
    I tried replacing the "BERT_PATH" variable with the DistilBERT model (uploaded by you on Kaggle), but the accuracy is stuck at 0.5, while BERT gives 85% accuracy in the first epoch.
    Thank you.

    • @abhishekkrthakur · 4 years ago

      As replied in the email, if you do it the same way as I did, BERT should be 90+ in the 1st epoch. Can you check that nothing else is wrong?

    • @KamalChhirang · 4 years ago

      @@abhishekkrthakur Hey, yeah, I got 90% accuracy on the IMDB dataset with BERT, but my accuracy is stuck at 0.5 if I use DistilBERT. I just changed the "BERT_PATH" variable to the DistilBERT model.

  • @tanb13 · 4 years ago · +1

    If anyone has managed to run the code on a Google Colab GPU, can they please share how much time it took to train the model?

  • @hieungotrung5411 · 4 years ago

    Thank you for your extremely helpful and informative videos. I'm a student with some basic background in ML, just getting started on NLP. Can you do a video about which NLP techniques or algorithms people are using in production (like BERT, if I'm correct)? Also looking forward to your book.

  • @ximingdong503 · 3 years ago

    Thanks. I have a question: is it necessary for BERT to have "mask" and "token_type_ids" as inputs?

  • @vidyap8229 · 4 years ago

    Great video!!
    Is it possible for you to do a video where you build a UI as well to display the results? Or maybe you can provide some material on that so I can get started on my own?
    If so, looking forward to it 😃

    • @abhishekkrthakur · 4 years ago · +1

      Is this ok: ruclips.net/video/BUh76-xD5qU/видео.html ?

    • @vidyap8229 · 4 years ago

      @@abhishekkrthakur Yes! This is what I was looking for... thanks a lot for sharing 😄😄

  • @romananalytics2182 · 3 years ago

    Great video! How do you remember all this? Is it experience from building the same code again and again, or clarity of concepts? It's difficult to remember this many steps!!

    • @abhishekkrthakur · 3 years ago · +1

      Some I remember; for some I have references to take a look at :)

    • @romananalytics2182 · 2 years ago

      @@abhishekkrthakur Amazing! It would be interesting to know your references :-) Huge inspiration to see you coding with confidence and clarity!

  • @michellebelgrave1436 · 4 years ago

    Hi, thank you for a very helpful video! I have a rather basic question - is there a limitation on the size of the input text for inference? Can I pass 10 pages worth of text for sentiment analysis? I am sure some more experienced users here will also have an answer. Thanks again!

  • @michaelringer5644 · 4 years ago

    Hey Abhishek,
    What do I need to change in the code if I want a float value (0-10.0) as the training data label instead of 1/0, for example to predict the IMDB rating from 0 to 10.0?
    Best regards,
    Michael
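
One hedged way to adapt the head for that (an assumption, not something shown in the video): keep the single output unit, drop the sigmoid used for binary sentiment, and train with a regression loss.

    import torch.nn as nn

    # head stays nn.Linear(768, 1); targets become floats in [0, 10]
    def loss_fn(outputs, targets):
        # MSE instead of BCEWithLogitsLoss; no sigmoid on the output
        return nn.MSELoss()(outputs.view(-1), targets.view(-1))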

  • @catadanna4679 · 4 years ago

    Why does super() get the same class at 7:50?

  • @eddiesec · 4 years ago

    I am curious about something code-related. I noticed you pass `model` to `train_fn` but return nothing. Can a function alter a global variable? As far as I understood it, it would copy that variable inside its scope and alter that version, not the global one. I'd appreciate it if you could clarify.

  • @sampreethachitagubbi1778 · 4 years ago

    Why do we need the config.json file? What is it used for?

  • @malazalbawarshi3959 · 3 years ago

    Is it better to use TFBertForSequenceClassification, to save time and work?
    Thanks for the video; it's very nice.

    • @machinelearning3518 · 2 years ago

      Yes, but when you are asked to make changes to the layers of the NN, you need to understand this.

  • @2311passion · 4 years ago

    If my problem has 4 labels,
    I tried changing the loss_function code to targets.view(-1, 4) and, in the BERTBaseUncased code, self.out = nn.Linear(768, 4),
    but I got the error "shape '[-1, 4]' is invalid for input of size 8". What should I modify?
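
A hedged sketch of the usual 4-class setup (an assumption: labels are integer class indices 0-3): with nn.CrossEntropyLoss the targets keep shape (batch,) and are not reshaped with view(-1, 4); only the output layer grows to 4 logits.

    import torch.nn as nn

    # in BERTBaseUncased: self.out = nn.Linear(768, 4)
    def loss_fn(outputs, targets):
        # outputs: (batch, 4) logits; targets: (batch,) int64 class indices
        return nn.CrossEntropyLoss()(outputs, targets.view(-1).long())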

  • @slimenbouras3184 · 4 years ago

    Impressive

  • @vishnuvardhan-md1ux · 4 years ago

    Thanks a lot, Abhishek sir. Can you tell us about named entity recognition as well? That would be helpful.

  • @soumyadrip · 4 years ago · +1

    🔥

  • @raamav.4837 · 4 years ago

    This is SUCH a nice video. Thanks, Abhishek.
    Since I primarily code in Keras, I am wondering how to write this exact same code in Keras. Any suggestions, please?
    To Abhishek and everyone else:
    On a separate note, I tried learning the TF 2.0 low-level API, but I found it incredibly confusing/complex. Do you think I should move to PyTorch instead? PyTorch seems more systematic than TF 2.0.
    Thanks for your views and pointers.

  • @pavelpeskov1895 · 4 years ago

    Thanks for this video! Which BERT model with a head on top should I use for an article title generation task?

    • @PrasenjeetRoyMPAI · 3 years ago · +1

      Search for the topic modeling task; it is quite similar to that.

  • @hanman5195 · 4 years ago

    Which libraries need to be installed to run this code?
    Please share the list of library names.

  • @nishantkumar3997 · 4 years ago · +1

    Hi Abhishek, I keep running into CUDA out of memory errors. If I reduce the batch size, the model runs but takes quite a lot of time.
    Can you please share the GPU config that you use?

    • @nishantkumar3997 · 4 years ago

      Hi @abhishek, can you please help out?
      Error: RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 3.95 GiB total capacity; 2.97 GiB already allocated; 93.75 MiB free; 37.90 MiB cached)
      I couldn't find where the 2.97 GB is allocated.
      Can anybody please suggest?

    • @renatoviolin · 4 years ago

      @@nishantkumar3997 My GPU has 8 GB; the max batch_size it can fit is 6. You can reduce MAX_LEN and increase the batch size.
      In my case, I set MAX_LEN = 280 and BATCH_SIZE = 16. This config fit in 8 GB and achieved ~92% accuracy.
      You need to play around with those numbers to fit your GPU.

    • @stanleydukor · 4 years ago

      @@renatoviolin Hi, I just followed this tutorial, and my BERT encoding layer produces the same output for all inputs during evaluation. It's really confusing. Thanks.

    • @michaelringer5644 · 4 years ago

      @@renatoviolin Will it actually run on a GTX 750? Because I always get an out of memory error, even with MAX_LEN = 2, TRAIN_BATCH_SIZE = 2048, VALID_BATCH_SIZE = 1024.
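
A hedged config.py sketch of the knobs being traded off in this thread (values illustrative, not the repo's exact ones): activation memory scales with both MAX_LEN and the batch sizes, and a very large batch multiplies memory even when sequences are short, so shrink the batch size as well as MAX_LEN until training fits.

    # config.py (illustrative values)
    MAX_LEN = 280           # tokens per review; activation memory grows with this
    TRAIN_BATCH_SIZE = 16   # reduce this first on "CUDA out of memory"
    VALID_BATCH_SIZE = 8
    EPOCHS = 10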

  • @ximingdong503 · 3 years ago

    Hi brother, I ran the code and found that the best-validation-loss epoch is 1. I am confused by this, since I think the validation loss should decrease over the first few epochs.

  • @PrasenjeetRoyMPAI · 3 years ago

    Sir, can you post a tutorial on how to use BERT for abstractive text summarization?

  • @riteshramanaryan2745 · 4 years ago

    Thank you so much for such an awesome tutorial; it really helped me a lot in understanding BERT. I have a request: could you please cover quantization in a future video if possible? That would be really great. Thank you!

    • @abhishekkrthakur · 4 years ago

      I can try. Can you provide some references for me?

    • @riteshramanaryan2745 · 4 years ago

      @@abhishekkrthakur Yes, I was going through the PyTorch documentation on quantization here: pytorch.org/docs/stable/quantization.html
      and I also referred to medium.com/@joel_34050/quantization-in-deep-learning-478417eab72b
      One use case I can think of is performing quantization for object detection.
      It would be helpful if you made a video on this, if possible, and walked through an example like you did for the above sentiment model. Thank you!

  • @tech4028 · 4 years ago

    I'm working on macOS; do I need a GPU to code along (to execute the code)?

  • @siwarjbeli1400 · 4 years ago

    Does anyone know how to start this project on Dataiku Data Science Studio?

  • @ba-en1io · 4 years ago

    Thank you for this tutorial, sir, but I'm getting this error: RuntimeError: The size of tensor a (64) must match the size of tensor b (65) at non-singleton dimension 1.

  • @user-or7ji5hv8y · 4 years ago

    It's not clear what out1 and out2 are. Can you recommend any resource for that?

  • @kyomdonalddogo5775 · 4 years ago

    Thank you, sir, for this video. Sir, I want to ask how to deploy a seq2seq model using Flask? Thank you.
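
For reference, a minimal hedged sketch of a Flask prediction endpoint in the spirit of the video's app.py (the route, the dummy sentence_prediction helper, and the response fields are illustrative, not the repo's exact code); a seq2seq model can be served the same way, with generation happening inside the handler.

    import flask
    import torch

    app = flask.Flask(__name__)

    def sentence_prediction(sentence):
        # placeholder: tokenize and run the trained model under torch.no_grad();
        # a dummy score is returned here so the route runs end to end
        with torch.no_grad():
            return 0.5

    @app.route("/predict")
    def predict():
        sentence = flask.request.args.get("sentence")
        positive = sentence_prediction(sentence)
        return flask.jsonify({
            "sentence": sentence,
            "positive": positive,
            "negative": 1 - positive,
        })

    if __name__ == "__main__":
        app.run()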

  • @SeyyedMohammadLoghmanDastgheyb · 4 years ago

    Thanks for the great tutorial. I have run the code on Colab, but I got 50% accuracy for all the epochs. What am I missing?

    • @G4RYLeL · 4 years ago

      I've got the same problem. I am rerunning it to check, but I just saw in the comments that at 18:14 it should be "self.review[item]".
      If it gets past 50%, I'll let you know.

  • @anaghajose691 · 4 years ago

    Sir, which platform is used to run the code?

  • @tanulsingh4797 · 4 years ago

    Hello Abhishek sir, can you please tell me what the train_data_loader code is doing? We have already processed the data into the inputs BERT wants in train_dataset, so why do we do this?

    • @abhishekkrthakur · 4 years ago

      I didn't understand. Can you explain a bit more, please?

  • @gauravgupta0125 · 4 years ago · +1

    Could you please let me know which IDE or editor this is? Is it Sublime?

    • @abhishekkrthakur · 4 years ago · +1

      It's VS Code server. Refer to the intro video: ruclips.net/video/ArygUBY0QXw/видео.html :)

    • @gauravgupta0125 · 4 years ago

      @@abhishekkrthakur Thanks

  • @sdhilip · 4 years ago

    Another great video, thanks. Can BERT handle sarcastic comments?

    • @abhishekkrthakur · 4 years ago

      Probably it can. I have not tried it myself, but if you do, please let me know the results in the comments :)

    • @sdhilip · 4 years ago

      @@abhishekkrthakur S

  • @anishjain3663 · 4 years ago

    Hey Abhishek sir, you look cute with the cap 🤗. Sir, what should be the approach to learning from your videos: write the code you show myself, or copy it and try to improve it? I am quite confused because there are a lot of resources and no one explains how to learn from them. Maybe this is a silly question; sorry for that.

  • @stanleydukor · 4 years ago

    Hi Abhishek, I just followed this tutorial, and my BERT encoding layer produces the same output for all inputs during evaluation. It's really confusing. Thanks.

    • @jbm5195 · 3 years ago

      Did you get this fixed? I am having the same problem. Kindly help me.

  • @varuntandon4465 · 4 years ago

    Could someone explain what "self.bert_drop = nn.Dropout(0.3)" means in model.py?

    • @h.hanithavarsini1704 · 4 years ago

      nn.Dropout(0.3): the argument is the fraction of neurons you want to drop out. It's a regularization technique to avoid overfitting.
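
A minimal sketch of that behavior (illustrative, not the repo's code): during training, dropout zeroes a random ~30% of the activations and scales the survivors by 1/0.7; at evaluation time it is the identity.

    import torch
    import torch.nn as nn

    drop = nn.Dropout(0.3)
    x = torch.ones(10)
    drop.train()
    print(drop(x))   # roughly 3 of 10 entries zeroed, the rest scaled to ~1.43
    drop.eval()
    print(drop(x))   # unchanged: dropout is disabled in eval mode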

  • @prasannakumar7035 · 4 years ago

    "Unable to set proper padding strategy as the tokenizer does not have a padding token."
    ValueError: Unable to set proper padding strategy as the tokenizer does not have a padding token. In this case please set the `pad_token` `(tokenizer.pad_token = tokenizer.eos_token e.g.)` or add a new pad token via the function add_special_tokens if you want to use a padding strategy.
    I tried to fix it by adding the line tokenizer.pad_token = tokenizer.eos_token, but it still shows this error. @Abhishek Thakur, can you help me with this? Thanks in advance.
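
One hedged possibility (an assumption: this error usually comes from a tokenizer without a built-in pad token, such as a GPT-2-style one, whereas BertTokenizer ships with [PAD]; the pad token also has to be set before encode_plus is called):

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative tokenizer with no pad token
    if tokenizer.pad_token is None:
        # either reuse the EOS token as padding...
        tokenizer.pad_token = tokenizer.eos_token
        # ...or register a new pad token, then resize the model's embeddings:
        # tokenizer.add_special_tokens({"pad_token": "[PAD]"})
        # model.resize_token_embeddings(len(tokenizer))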

  • @ancient_living · 2 years ago

    Maybe the tutorial is a little old already. It throws HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '../input/'. Use `repo_type` argument if needed. This is because one has to use the actual name of the model that Abhishek is using.

  • @JuJu-fy1mk · 4 years ago

    Thank you so much for the tutorial! When I try to run the code, I somehow get the following error; could you please advise?
    /bert-sentiment/src/app.py", line 90, in
        MODEL.load_state_dict(torch.load(config.MODEL_PATH))
    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/serialization.py", line 526, in load
        if _is_zipfile(opened_file):
    File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/torch/serialization.py", line 76, in _is_zipfile
        if ord(magic_byte) != ord(read_byte):
    TypeError: ord() expected a character, but string of length 0 found

  • @missnewton4247 · 4 years ago

    I need help; could you please help me? :(

  • @rahulkrishnan529 · 4 years ago

    93% in one epoch?

  • @secretsuperstar2313 · 4 years ago

    Keep getting the CUDA out of memory error :(

    • @abhishekkrthakur · 4 years ago

      Ohh, see the comments; you need to use with torch.no_grad() in the validation part.

    • @secretsuperstar2313 · 4 years ago · +1

      @@abhishekkrthakur Actually, I did that and even tried reducing the max len, but this keeps happening. Is there any other way? I have 4 GB of GPU.

    • @abhishekkrthakur · 4 years ago · +2

      @@secretsuperstar2313 4 GB is quite low. Reduce the batch size for training and validation, use fp16, and use a small max len. Let me know how it goes.

    • @secretsuperstar2313 · 4 years ago

      @@abhishekkrthakur Thank you so much. I used fp16 and a max len of 128 and it worked; the accuracy might take a hit, though. You took time out to help me; it means a lot!
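
A minimal sketch of the torch.no_grad() advice from this thread (function and key names are illustrative): wrapping the validation loop stops PyTorch from building the autograd graph, which is often the difference between fitting in GPU memory and an OOM error.

    import torch

    def eval_fn(data_loader, model, device):
        model.eval()
        outputs = []
        with torch.no_grad():   # no autograd graph -> far less GPU memory
            for batch in data_loader:
                ids = batch["ids"].to(device)
                mask = batch["mask"].to(device)
                token_type_ids = batch["token_type_ids"].to(device)
                out = model(ids=ids, mask=mask, token_type_ids=token_type_ids)
                outputs.extend(torch.sigmoid(out).cpu().numpy().tolist())
        return outputs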

  • @amandarash135 · 4 years ago

    I have just started watching your videos and still can't follow. Please tell me where I should start on your channel. I have a little knowledge of ML and deep learning.

    • @abhishekkrthakur · 4 years ago

      Start from episode 1 :)

    • @amandarash135 · 4 years ago

      @@abhishekkrthakur Introduction to machine learning?

    • @abhishekkrthakur · 4 years ago

      @@amandarash135 No: ruclips.net/video/ArygUBY0QXw/видео.html

  • @toofrellik · 4 years ago

    Can someone explain to me the difference between this video and the 'sentiment-analysis' pipeline: github.com/huggingface/transformers#quick-tour-of-pipelines
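
For comparison, the pipeline gives you an off-the-shelf pretrained classifier in a couple of lines, whereas the video fine-tunes BERT on your own data and serves the resulting custom model:

    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")  # downloads a ready-made model
    print(classifier("this movie was great"))
    # e.g. [{'label': 'POSITIVE', 'score': 0.99...}]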

  • @truonggianga2tk42 · 4 years ago

    Thank you for sharing. Could you please tell me what IDE you use to make this video (an IDE open in the web browser at 127.0.0.1)?