Text Classification | Sentiment Analysis with BERT using huggingface, PyTorch and Python Tutorial

  • Published: 15 Sep 2024

Comments • 129

  • @venelin_valkov
    @venelin_valkov  4 years ago +13

    I was wrongly feeding the cross-entropy loss function with the output of a softmax function. This is fixed in the text tutorial: www.curiousily.com/posts/sentiment-analysis-with-bert-and-hugging-face-using-pytorch-and-python/
    89.3% validation accuracy after 10 epochs. Thanks to @copley web for finding it!
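    For reference, a minimal sketch of the corrected pattern (assuming the notebook's model, input, and target names): nn.CrossEntropyLoss applies log-softmax internally, so the raw logits go into the loss unchanged, and softmax is used only when probabilities are needed:
    import torch.nn as nn
    import torch.nn.functional as F

    loss_fn = nn.CrossEntropyLoss()
    logits = model(input_ids, attention_mask)  # raw, unnormalized scores
    loss = loss_fn(logits, targets)            # no softmax before the loss
    probs = F.softmax(logits, dim=1)           # softmax only for reporting probabilities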

    • @d3v487
      @d3v487 3 years ago

      Hello Venelin, I just ran the entire notebook with everything the same, except I scraped my own data from Google Play. When I run the model, the loss keeps increasing (15 epochs, loss 2.15). What's the issue, can you tell me? Is any extra preprocessing required?

    • @Nilesh773
      @Nilesh773 2 years ago

      When I move from CPU to GPU and rerun all cells, I get the following error for this line:
      "F.softmax(model(input_ids, attention_mask), dim=1)"
      error:
      RuntimeError                              Traceback (most recent call last)
      in
      ----> 1 F.softmax(model(input_ids, attention_mask), dim=1)
      8 frames
      /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
         2197     # remove once script supports set_grad_enabled
         2198     _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
      -> 2199     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
      RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
      Please help me out with this.
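      A likely fix (a sketch, assuming the notebook's variable names): the model was moved to the GPU but the input tensors were not, so move them to the same device before the call:
      import torch
      import torch.nn.functional as F

      device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
      model = model.to(device)
      input_ids = input_ids.to(device)            # inputs must live on the same device as the model
      attention_mask = attention_mask.to(device)
      F.softmax(model(input_ids, attention_mask), dim=1)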

  • @emiralkan4359
    @emiralkan4359 3 years ago +40

    To anyone who is doing this tutorial on the latest version of transformers/huggingface: you need to change the line
    self.bert = BertModel.from_pretrained('bert-base-cased')
    to
    self.bert = BertModel.from_pretrained('bert-base-cased', return_dict=False)

    • @ZcUHJETblgnLlfzH
      @ZcUHJETblgnLlfzH 2 years ago

      Triple likes to this!

    • @waliurrahman5363
      @waliurrahman5363 2 years ago

      I get an error saying return_dict is not defined

    • @karannaidu3495
      @karannaidu3495 2 years ago +1

      Yup, there is an error if you try to just type out the code; it fails at one point.
      Just copy the code from the GitHub link and change the step above, wherever self.bert is used, in two or more places.

    • @ggm4857
      @ggm4857 2 years ago

      Thank you for the golden suggestion. I was struggling with errors due to this line of code. Now it's fixed.

    • @itayatelis2898
      @itayatelis2898 2 years ago

      You saved me! i LOVE U

  • @dingusagar
    @dingusagar 3 years ago +30

    Great video. As of the latest version of the transformers library, bert_model outputs an object, not a tuple, so I had to google and fix it like this to run the notebook:
    output = bert_model(
        input_ids=data['input_ids'],
        attention_mask=data['attention_mask']
    )
    output.last_hidden_state
    output.pooler_output

    • @adsgfsgd
      @adsgfsgd 3 years ago +1

      Thanks

    • @zhirazzi
      @zhirazzi 2 years ago

      Hi, thanks for your information. I just ran into the same problem; can you mention where I have to change the code?

    • @dingusagar
      @dingusagar 2 years ago

      @@zhirazzi I mentioned it in the comment. Could you try that?

    • @DanielTobi00
      @DanielTobi00 10 months ago +1

      Thanks man.

  • @tanb13
    @tanb13  4 years ago +14

    Can you please make a video on how to perform 'Aspect Based Sentiment Analysis'?

  • @asmadjaidri1219
    @asmadjaidri1219 3 years ago +1

    one of the best tutorials I have seen on BERT

  • @palashkamble2325
    @palashkamble2325 2 years ago +1

    Freeze BERT layers:
    for name, param in model.named_parameters():
        if 'classifier' in name:
            param.requires_grad = True
        else:
            param.requires_grad = False
    Here, I have used 'self.classifier' in the constructor instead of 'self.out',
    i.e., self.classifier = nn.Linear(self.bert.config.hidden_size, n_classes)
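    A quick way to verify the freeze worked (a sketch, assuming the model above):
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    print(f'trainable: {trainable:,} / {total:,}')  # only the classifier head should remain trainable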

  • @girayableful
    @girayableful 3 years ago

    Great video for newcomers to get to know training models with PyTorch, and specifically using huggingface transformers!!
    Many thanks

  • @TheAnna1101
    @TheAnna1101 4 years ago +2

    great video, very clear. love the preprocessing part, it really helps. thanks, keep up the good work.

    • @capper3360
      @capper3360 4 years ago

      Yeah that previous video was a pretty neat explanation. Subbed after watching it.

  • @sumarah85
    @sumarah85 3 years ago +14

    To anyone getting the error:
    'dropout(): argument 'input' (position 1) must be Tensor, not str'
    Following an update, adding return_dict=False in the SentimentClassifier class's 'forward' function, like this:
    def forward(self, input_ids, attention_mask):
        _, pooled_output = self.bert(
            input_ids=input_ids,
            attention_mask=attention_mask,
            return_dict=False
        )
    removes the error.

    • @boxofspace
      @boxofspace 3 years ago

      Getting the same thing but I'm not sure how to fix it

    • @zhirazzi
      @zhirazzi 2 years ago

      Thank you!

    • @magicworld6233
      @magicworld6233 1 year ago

      I'm getting the same error too. How did you resolve it? I'm stuck; any suggestions would help.

    • @kakkikage
      @kakkikage 1 year ago

      it works! thank you so much

  • @tawfik1546
    @tawfik1546 4 years ago +7

    Thanks for your awesome presentation!
    Much love and respect for what you're doing.
    I have a little question:
    are you fine-tuning all layers of the BERT model, or only the last layers (dropout and output layer)?
    And what do you advise us to do?
    Cheers

  • @SY-jh3tg
    @SY-jh3tg 3 years ago +2

    I did exactly as you did, but for some reason my training is stuck at the 1st epoch.
    Is anyone having the same problem? If so, did you figure it out?

  • @yofootball746
    @yofootball746 4 years ago +3

    Hi Venelin, thanks for the tutorial!
    I was trying it on Colab, but in the training step I am getting the error:
    RuntimeError: size mismatch, m1: [16 x 3], m2: [768 x 3] at /pytorch/aten/src/THC/generic/THCTensorMathBlas.cu:283
    Can you please tell me what it is?
    I have just replicated the same steps from your website.

  • @BarbarooTheKangaroo
    @BarbarooTheKangaroo 1 year ago +1

    Great video! One question: why did you have to change the dimension of the inputs (that is, drop one dimension)? Why did we get an extra dimension in the first place?

  • @tauhidzaman2826
    @tauhidzaman2826 4 years ago +2

    Thanks for making such a great video! I'm going to teach your notebook in my social media analytics class next spring.

  • @biku1998
    @biku1998 4 years ago +1

    Awesome balance of theory and practice 👍. Waiting for seq2seq translation with transformers in PyTorch, if possible.

  • @nithinreddy2299
    @nithinreddy2299 3 years ago +1

    How do I add one more class and run the code? I tried, but I'm getting an error at 'last_hidden_state.shape'.

  • @justinhuang8034
    @justinhuang8034 4 years ago +1

    How long did it take for the training to begin? My cell is still running after 30 minutes and still hasn't shown progress on the first epoch.

  • @AjaySharma-me1sy
    @AjaySharma-me1sy 4 years ago +10

    Cool to have references from "The Office" :D

    • @Senshiii99
      @Senshiii99 3 years ago

      That's what she said

  • @tash-jq
    @tash-jq 3 years ago +3

    I am getting the following error: "dropout(): argument 'input' (position 1) must be Tensor, not str". Any way of fixing this?

    • @SuperOnlyP
      @SuperOnlyP 3 years ago +2

      please try this in def forward:
      ---------------------------------------------------------------------------
      class SentimentClassifier(nn.Module):
          ...
          def forward(self, input_ids, attention_mask):
              output = self.bert(
                  input_ids=input_ids,
                  attention_mask=attention_mask
              )
              pooled_output = output[1]
              output = self.drop(pooled_output)
              return self.out(output)
      ---------------------------------------------------------------------------
      Instead of: _, pooled_output = self.bert(...)
      please do: output = self.bert(...)
      The reason _, pooled_output = self.bert(...) returns a string object is that the new output object iterates over its keys, which are strings.

    • @senospearrme6118
      @senospearrme6118 3 years ago +3

      @@SuperOnlyP thank you so much! Here is mine if anyone needs a reference; I've considered 2 classes:
      class SentimentClassifier(nn.Module):
          def __init__(self, n_classes):
              super(SentimentClassifier, self).__init__()
              self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME, return_dict=False)
              self.drop = nn.Dropout(p=0.3)
              self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

          def forward(self, input_ids, attention_mask):
              output = self.bert(
                  input_ids=input_ids,
                  attention_mask=attention_mask
              )
              pooled_output = output[1]
              output = self.drop(pooled_output)
              return self.out(output)  # return raw logits; CrossEntropyLoss applies softmax internally

    • @djdjfjmgbhdj
      @djdjfjmgbhdj 2 years ago

      @@senospearrme6118 please, I need your help

  • @marwanelghitany8875
    @marwanelghitany8875 3 years ago +2

    Excellent tutorial... ♥
    One comment: if you're using the CrossEntropy loss function, you don't need a softmax layer at the output (just the linear layer), as it applies a softmax by default before computing the cross-entropy measure. :)
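    A quick check of that claim (a sketch): CrossEntropyLoss computes exactly LogSoftmax followed by NLLLoss:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    logits = torch.randn(4, 3)                  # batch of 4 samples, 3 classes
    targets = torch.tensor([0, 2, 1, 2])
    a = nn.CrossEntropyLoss()(logits, targets)
    b = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
    assert torch.allclose(a, b)                 # identical, so no softmax layer is needed before the loss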

  • @ramigaaloul734
    @ramigaaloul734 3 years ago +4

    Hello, thanks for this course, it is very interesting,
    but when I run the code with a copy of your notebook I get this error at
    "last_hidden_state.shape":
    AttributeError: 'str' object has no attribute 'shape'

    • @davidevenditti36
      @davidevenditti36 3 years ago

      same problem here; any update on that?

    • @coolmacmaniac
      @coolmacmaniac 3 years ago +3

      pass the return_dict=False argument in the call, like this:
      last_hidden_state, pooled_output = bert_model(
          input_ids=encoding['input_ids'],
          attention_mask=encoding['attention_mask'],
          return_dict=False
      )

    • @aanwar2933
      @aanwar2933 3 years ago

      @@coolmacmaniac Thank you!! Been stuck on this for so long.

  • @chinamatt
    @chinamatt 4 years ago +1

    Hi Venelin,
    I appreciate your work; a very clear and easy-to-follow tutorial! Thank you 🙏🙏
    Question: when you say we could fine-tune the parameters, do you mean changing MAX_LEN, BATCH_SIZE, and EPOCHS?
    N.B. in the text tutorial, I had to replace F.softmax with nn.functional.softmax.

  • @Tiger-Tippu
    @Tiger-Tippu 10 months ago +1

    How is this different from fine-tuning?

  • @user-ul8uy9xy4d
    @user-ul8uy9xy4d 9 months ago

    what's the order for these videos? is this the 2nd video? please label them as part 1, 2, etc. to make it easier!

  • @alteshaus3149
    @alteshaus3149 2 years ago

    Thank you very, very much for this awesome video. I learned a lot! Please keep going.

  • @paulntalo1425
    @paulntalo1425 3 years ago

    Thank you for this series of videos. I'm new to NLP projects, but I know that transformers are a far better choice than LSTM models. Your videos have been of great help.

  • @marvinprakash1612
    @marvinprakash1612 3 years ago +1

    Very nice tutorial. Just wanted to tell you that some code is missing from the website, please fix it: the SentimentClassifier class.

  • @anuragpachauri4297
    @anuragpachauri4297 3 years ago

    Hi Venelin Valkov,
    I have a doubt about the code.
    You have used the softmax function separately in the notebook, as F.softmax(model), but inside train_epoch in the same notebook you haven't used F.softmax. So which code should I take as correct, the one in the video or the one in the Colab notebook?

  • @consistentthoughts826
    @consistentthoughts826 3 years ago +1

    getting the error AttributeError: 'str' object has no attribute 'shape' for the code below:
    last_hidden_state, pooled_output = bert_model(
        input_ids=encoding['input_ids'],
        attention_mask=encoding['attention_mask']
    )
    I have tried a lot to debug but am not able to resolve it

    • @d3v487
      @d3v487 3 years ago

      Same with me, bro.

    • @serdar_altan
      @serdar_altan 3 years ago +2

      Put the additional argument `return_dict=False` as below:
      last_hidden_state, pooled_output = bert_model(
          input_ids=encoding['input_ids'],
          attention_mask=encoding['attention_mask'],
          return_dict=False
      )

    • @nidhirbhavsar3917
      @nidhirbhavsar3917 3 years ago

      @@serdar_altan yeah, I ran into the same problem, but resolved it!
      thanks

  • @singireddyabhilashreddy1565
    @singireddyabhilashreddy1565 3 years ago

    Hi Venelin! I didn't get where we are doing the sentiment analysis for the data we crawled. I see that you change the number to get the sentiment of that review, but I feel the pipeline to that is missing.

  • @abdulhakimbashir6268
    @abdulhakimbashir6268 4 years ago +1

    Great tutorial. What is the best approach to change the model to predict only 2 classes, i.e. pos and neg?
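    One straightforward approach (a sketch, assuming the tutorial's SentimentClassifier): relabel the data into two classes and shrink the output layer; nn.CrossEntropyLoss works unchanged with 2 classes:
    class_names = ['negative', 'positive']                   # hypothetical two-class labels
    model = SentimentClassifier(n_classes=len(class_names))  # output layer now has 2 logits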

  • @ParamSaraf
    @ParamSaraf 3 years ago +1

    Hi Venelin, will this work for binary classification as well, or do we have to change the loss function to binary cross-entropy? Do you have a Colab notebook for that?

  • @DrOsbert
    @DrOsbert 3 years ago

    Overall it's a good tutorial; the only regretful part is the 'break' where you come back after training the model; there's a bit of a gap there.

  • @mentefuertecaminoestoico
    @mentefuertecaminoestoico 3 years ago

    In order to use one of the models on HuggingFace for a different application, would I also have to change the tokenizer? I want to apply one of the HuggingFace models to molecular analysis, with SMILES-type data inputs.

  • @AK-ud4ur
    @AK-ud4ur 2 years ago

    class SentimentClassifier(nn.Module):
    I cannot understand the above code. Can someone please point me to a link or video where this block of code is explained in depth?

  • @mlguru3089
    @mlguru3089 4 years ago

    I tried this on the IMDB dataset but it's not working; the accuracy stays at 50% even after 3 epochs. I have also tried an LSTM and a bidirectional LSTM on that dataset, but it's still not working. So is the problem in the dataset?

  • @jorgeih
    @jorgeih 3 years ago

    I would like to know the transformers version used in this tutorial, because the new library version has some issues with the code.

  • @ebitunyan3894
    @ebitunyan3894 3 years ago

    Hello, I followed your 1:1 Consultation Session With Me link and booked a meeting. I was sent a Zoom link, which I followed, but the meeting was never started. What could have been the problem?

  • @victorthomas6844
    @victorthomas6844 3 years ago

    Can you please suggest how to use the BERT model to compare two sentences for semantic similarity and assign them to a class? Your suggestions would be appreciated. Thanks.

  • @infinity__7948
    @infinity__7948 11 months ago

    I am getting a CUDA assertion failed error; what should I do?

  • @crohno
    @crohno 3 years ago

    Is there any way to print the probability of the sentiment as well? (e.g. instead of positive, positive 80.5%)
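    A sketch of one way to do that, assuming a single review and the tutorial's model and class_names:
    import torch
    import torch.nn.functional as F

    probs = F.softmax(model(input_ids, attention_mask), dim=1)
    confidence, prediction = torch.max(probs, dim=1)
    print(f'{class_names[prediction.item()]} {confidence.item() * 100:.1f}%')  # e.g. "positive 80.5%"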

  • @Ramm165
    @Ramm165 4 years ago

    Thank you for the wonderful tutorial. If possible, can you please do a tutorial on NER using BERT?

  • @rajeshthakur-zv5lo
    @rajeshthakur-zv5lo 3 years ago

    Hi Venelin, first of all, thanks for making this amazing video. I executed your notebook but got stuck at one point:
    last_hidden_state, pooled_output = bert_model(
        input_ids=encoding['input_ids'],
        attention_mask=encoding['attention_mask']
    )
    After this, executing
    >>> last_hidden_state.shape
    throws an error:
    AttributeError: 'str' object has no attribute 'shape'
    Kindly provide me a solution; I'd highly appreciate your support.
    Thanks in advance

  • @ximingdong503
    @ximingdong503 3 years ago

    Hi Venelin,
    thank you for the BERT tutorial.
    I have a question: in your tutorial, the validation loss first becomes small, and after epoch 3 it gets bigger and bigger. Is that a reasonable result? I would think that if the loss gets bigger, accuracy should decrease.
    thanks

  • @depenz
    @depenz 4 years ago

    Nice! What about custom unfreezing of layers? I guess the weights of the network are fixed, except for the last layer.

  • @venkatesanr9455
    @venkatesanr9455 4 years ago

    Nice tutorial and thanks for sharing knowledge

  • @ninobach7456
    @ninobach7456 11 months ago

    It's difficult to follow this video without having watched the previous one. I didn't know it builds on top of that when I started watching.

  • @C-Los138
    @C-Los138 2 years ago

    Amazing tutorial. How would we feed a dataframe of new raw unlabeled text into the trained model?

  • @harisumanth
    @harisumanth 2 years ago

    I am facing this error while running the epochs:
    RuntimeError: stack expects each tensor to be equal size, but got [160] at entry 0 and [219] at entry 9
    Can anyone help me with this, please?
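    That error usually means the encoded reviews have different lengths, so they cannot be stacked into a batch. A sketch of the usual fix, assuming the tutorial's tokenizer and MAX_LEN: pad and truncate every review when encoding:
    encoding = tokenizer.encode_plus(
        review_text,
        max_length=MAX_LEN,
        truncation=True,               # cut long reviews down to MAX_LEN
        padding='max_length',          # pad short reviews up to MAX_LEN
        add_special_tokens=True,
        return_attention_mask=True,
        return_tensors='pt',
    )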

  • @shaikrasool1316
    @shaikrasool1316 4 years ago

    Please make a video on next sentence prediction and a Q&A model using BERT.

  • @tamvominh3272
    @tamvominh3272 4 years ago

    Dear Venelin,
    I have a regression task and I would like to apply BERT. My data looks like:
    input: This video is the most interesting one about BERT I have ever seen // a sentence with a maximum of 30 words
    output: 4.9 // a score in the range [0, 5]
    Is it a good idea to apply BERT to it? If so, which part should I keep, or should I just use BERT as the embedding layer for my task? And in implementation, how do I load BERT and fine-tune it for a regression task? I find it vague and difficult to code.
    I would be very grateful if you could give me some advice and instructions!
    Thank you so much!
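    A minimal sketch of one common approach (an assumption, not something shown in the tutorial): keep the tutorial's architecture but give the head a single output and train with MSE:
    import torch.nn as nn
    from transformers import BertModel

    class BertRegressor(nn.Module):  # hypothetical regression variant of SentimentClassifier
        def __init__(self):
            super().__init__()
            self.bert = BertModel.from_pretrained('bert-base-cased', return_dict=False)
            self.drop = nn.Dropout(p=0.3)
            self.out = nn.Linear(self.bert.config.hidden_size, 1)  # one real-valued score

        def forward(self, input_ids, attention_mask):
            _, pooled_output = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            return self.out(self.drop(pooled_output)).squeeze(-1)

    loss_fn = nn.MSELoss()  # replaces CrossEntropyLoss for regression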

  • @risheshgarg9990
    @risheshgarg9990 4 years ago

    Thank you very much for the great video!! I have one query, though: a transformer does not use LSTM or RNN units, as far as I know, so what do we mean by hidden states?

  • @nisalbandara
    @nisalbandara 3 years ago

    I'm doing a Twitter sentiment analysis project. At first I was going to go with an LSTM + CNN hybrid approach, but after I discovered BERT I want to use it. Can I use BERT + CNN for my project?

  • @somusharma8958
    @somusharma8958 2 years ago

    thank u........!!!!!!!!!!!!!!!!!
    brother u saved my day .......

  • @davidadu1113
    @davidadu1113 4 years ago

    Fantastic tutorial. I love it.

  • @luluray3345
    @luluray3345 2 years ago

    Has anyone run into
    Error(s) in loading state_dict for SentimentClassifier:
    Missing key(s) in state_dict: "bert.embeddings.position_ids".
    I got this error when trying to load the saved model and don't know how to solve it.
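    This typically happens when the checkpoint was saved under a different transformers version. A sketch of one workaround (strict=False is standard PyTorch; whether it is safe here is an assumption, since position_ids is a buffer the model recreates on its own):
    import torch

    state_dict = torch.load('best_model_state.bin', map_location=device)
    model.load_state_dict(state_dict, strict=False)  # tolerate the missing bert.embeddings.position_ids buffer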

  • @harissaeed5811
    @harissaeed5811 1 year ago

    my data is bilingual, half English and half Urdu, but I don't know how I can do it

  • @nidaalyas1301
    @nidaalyas1301 4 years ago

    Dear, I am getting a "Truncation was not explicitly..." error after the softmax execution; can anyone help?

  • @riasingh2558
    @riasingh2558 4 years ago

    Can you also upload such task fine-tuning using Transformer-XL and Longformer? Thanks!

  • @miguelangelsanchezramirez9748
    @miguelangelsanchezramirez9748 3 years ago

    Hello, I'm doing my master's degree in NLP and have some doubts; how much does a 1:1 cost?

  • @salimbo4577
    @salimbo4577 4 years ago

    Thank you so much for sharing your knowledge; I wish you the best in your entire career. I have a question:
    if I want to train the model for another language, do I have to train it with the two tasks, MLM and NSP?

  • @sumanmondal2152
    @sumanmondal2152 4 years ago

    I'm getting an error like "can't iterate over a 0-d tensor" in the training loop.

  • @mschannel6521
    @mschannel6521 4 years ago

    Nice video, friend...
    Thanks for sharing.
    Greetings from Indonesia

  • @ummarafatima119
    @ummarafatima119 2 years ago

    Hi,
    Thanks a lot for sharing this video.
    I am stuck at "last_hidden_state.shape": when I run that code I receive the message "AttributeError: 'str' object has no attribute 'shape'", even though I have applied the BertPooler(nn.Module).
    Can you please help me resolve this issue?

    • @hirokikoyama8496
      @hirokikoyama8496 1 year ago

      Hi, I had the same problem, so I'll leave the code that solved it.
      Ignore this if it's already resolved.
      [Code]
      outputs = bert_model(
          input_ids=encoding['input_ids'],
          attention_mask=encoding['attention_mask']
      )
      last_hidden_state = outputs.last_hidden_state
      last_hidden_state.shape
      pooled_output = outputs.pooler_output
      pooled_output.shape

  • @darullshifa4870
    @darullshifa4870 3 years ago

    Hi, nice stuff and an easy explanation. I have a question: can we use this model on other datasets? Please reply; I want to do research on the SST-5 dataset.

  • @robinjacob1931
    @robinjacob1931 3 years ago

    Hi, can you make a video implementing a rule-based factoid question answering system?

  • @RedionXhepa
    @RedionXhepa 4 years ago

    nice tutorials, keep it up!

  • @jmysabbagth8151
    @jmysabbagth8151 2 years ago

    is the pooled_output the [CLS] token?

  • @gulshanlalwani
    @gulshanlalwani 4 years ago

    Hi Venelin, could we have an example of multi-class text classification with BERT using Huggingface?

    • @darullshifa4870
      @darullshifa4870 3 years ago

      the topic of this tutorial was also multi-class text classification.

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Can you elaborate on what last_hidden_state and pooled_output are? Or can you provide some references? Thanks.

    • @magicworld6233
      @magicworld6233 1 year ago

      In my case, last_hidden_state and pooled_output are returning strings; I'm not able to figure out where it's going wrong.

  • @tusharsingh2439
    @tusharsingh2439 4 years ago

    Amazing tutorial !

  • @user-or7ji5hv8y
    @user-or7ji5hv8y 3 years ago

    Do you have a video on how schedulers work?

  • @ДуховныйРост-м8п
    @ДуховныйРост-м8п 4 years ago

    awesome tutorial !

  • @yusufaliyu9759
    @yusufaliyu9759 1 year ago

    Thank you, sir, for this wonderful tutorial; I really appreciate your effort.
    I followed your tutorials and they are perfect. How can I use the model to predict reviews in the form of a pandas dataframe?
    I have unlabelled data for which I need a prediction for every review. Your example at the end of the tutorial is only one review, so what if there are many?
    Please help
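    A sketch of batch prediction over a dataframe (the tokenizer/model usage follows the tutorial; the function name, the 'content' column, and the max_len default are hypothetical):
    import torch

    @torch.no_grad()
    def predict_dataframe(df, model, tokenizer, device, max_len=160):
        model.eval()
        preds = []
        for text in df['content']:                # hypothetical column holding the review text
            enc = tokenizer.encode_plus(
                text,
                max_length=max_len,
                truncation=True,
                padding='max_length',
                return_tensors='pt',
            )
            logits = model(enc['input_ids'].to(device), enc['attention_mask'].to(device))
            preds.append(class_names[logits.argmax(dim=1).item()])
        df['sentiment'] = preds                   # one predicted label per review
        return df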

  • @ATHIRARAJESHKUMAR
    @ATHIRARAJESHKUMAR 5 months ago

    I am performing the same project from GitHub and it seems a bit different from this.

  • @zeki7540
    @zeki7540 3 years ago

    Really, great!!

  • @amananand4092
    @amananand4092 4 years ago

    Can you please share the Colab notebook?

  • @ravivarma5703
    @ravivarma5703 3 years ago

    Getting this when trying to load the already trained model, "best_model_state.bin":
    RuntimeError                              Traceback (most recent call last)
    in ()
          1 get_ipython().system('gdown --id 1V8itWtowCYnb2Bc9KlK9SxGff9WwmogA')
          2 model = SentimentClassifier(len(class_names))
    ----> 3 model.load_state_dict(torch.load('best_model_state.bin'))
    /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in load_state_dict(self, state_dict, strict)
        828         if len(error_msgs) > 0:
        829             raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
    --> 830                 self.__class__.__name__, "\n\t".join(error_msgs)))
        831         return _IncompatibleKeys(missing_keys, unexpected_keys)
        832
    RuntimeError: Error(s) in loading state_dict for SentimentClassifier:
        Missing key(s) in state_dict: "bert.embeddings.position_ids".

  • @bobo-nt2gt
    @bobo-nt2gt 2 years ago

    Great video

  • @ygbr2997
    @ygbr2997 1 year ago

    just realized Google Colab switched to pay-as-you-go, which is really expensive

  • @nana-xf7dx
    @nana-xf7dx 2 years ago

    With return_dict=False in the forward function, there will be no error.

  • @abhishekjain7645
    @abhishekjain7645 4 years ago +1

    I have run the same model, but the accuracy is very low: Epoch 9/10
    ----------
    Train loss 1.0860568284988403 accuracy 0.0002822666008044598

  • @rushikeshbulbule8120
    @rushikeshbulbule8120 4 years ago

    Superb

  • @finnzhang1323
    @finnzhang1323 3 years ago +1

    Just a mention: better not to write the code at the bottom of the screen, because the subtitles or the progress bar can cover what you write, and it's not a good experience for the audience.

  • @hematoma7645
    @hematoma7645 3 years ago

    truncation=True, padding='max_length'

  • @Nilesh773
    @Nilesh773 2 years ago

    'dropout(): argument 'input' (position 1) must be Tensor, not str'
    Try this, it worked:
    class SentimentClassifier(nn.Module):
        def __init__(self, n_classes):
            super(SentimentClassifier, self).__init__()
            self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
            self.drop = nn.Dropout(p=0.3)
            self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

        def forward(self, input_ids, attention_mask):
            _, pooled_output = self.bert(
                input_ids=input_ids,            # use the method's arguments here, not an outer data dict
                attention_mask=attention_mask,
                return_dict=False
            )
            output = self.drop(pooled_output)
            return self.out(output)

  • @user-zm3uw5ij9r
    @user-zm3uw5ij9r 4 years ago +1

    could you please help by doing this project in a video, or please explain how to approach it using EEG epochs? github.com/CVxTz/EEG_classification