I feel much more confident going into the TF cert exam after finishing your playlist. Thanks, Patrick!
omg, your TensorFlow series is very good for beginners to understand how to begin training their models. I hope you can make some development tutorials. Thank you so much.
Many thanks! Very clear explanation, I like it.
Thanks 🙏🏻
Thanks for your awesome videos. Some GAN videos would be helpful.
Will try to do this in the future
Thanks for this video, with it I can learn NLP and English at the same time.
Hi, thanks for the video. I just have one question: what is your recommendation for fixing the overfitting in the model?
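A couple of common options are dropout inside the network and early stopping on the validation loss. A minimal sketch, not taken from the video (layer sizes and the data variable names in the commented fit call are just placeholders):

from tensorflow import keras

# dropout regularizes the LSTM and the classifier head
model = keras.models.Sequential([
    keras.layers.Embedding(10000, 32),   # placeholder vocabulary size
    keras.layers.LSTM(64, dropout=0.2, recurrent_dropout=0.2),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])

# stop training once the validation loss stops improving
early_stop = keras.callbacks.EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
# model.fit(train_padded, train_labels, validation_data=(val_padded, val_labels),
#           epochs=20, callbacks=[early_stop])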
How can I export your model to use it in another application?
How do you get the prediction and validation to those numbers? What is the formula to get those numbers?
How about categorizing a document or the title of a paragraph?
What method do we use?
Edit:
What I've seen is only 2 categories this whole time; how about 3 or more categories?
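For 3 or more categories the usual change is a softmax output layer with one unit per class and a categorical cross-entropy loss. A minimal sketch, assuming a Keras model like the one in the video (the vocabulary size and number of classes below are placeholder values):

from tensorflow import keras

num_unique_words = 10000   # placeholder vocabulary size
num_classes = 3            # placeholder number of categories

model = keras.models.Sequential([
    keras.layers.Embedding(num_unique_words, 32),
    keras.layers.LSTM(64),
    # one output unit per category instead of a single sigmoid unit
    keras.layers.Dense(num_classes, activation="softmax"),
])

# labels are given as integers 0 .. num_classes-1
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])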
I noticed different lengths in train_sentences and train_sequences (at 12:xx). The lengths of sentence 3 and sentence 5 do not match their sequence lengths. Can you please explain this?
At 7:09 this no longer works; maybe one of the functions changed. This version works for me:

from collections import Counter

def counter_word(text_col):
    # count how often each word appears across the whole text column
    count = Counter()
    text_col.str.lower().str.split().apply(count.update)
    return count

counter = counter_word(df.text)
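From there the vocabulary size for the Embedding layer can be taken straight from the counter; just a small sketch of how I use it:

num_unique_words = len(counter)    # vocabulary size for the Embedding layer
print(counter.most_common(5))      # quick sanity check of the most frequent words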
Can you please do a video on tweet sentiment analysis for suicide-risk classification using NLP?
I'll add it to my list :)
Hey man, I am getting this error (NotImplementedError: Cannot convert a symbolic Tensor (lstm_11/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported). Can anyone help me out, or do you mind sharing which versions of TensorFlow and NumPy you used while coding this exercise?
Question: why did we use padding to fix the sequence length? LSTMs/RNNs can deal with variable sequence lengths... am I missing something?
Or is the reason that the Embedding layer expects an input matrix with batch size and max input length?
====>>> model.add(layers.Embedding(num_unique_words, 32, input_length=max_length))
# The layer takes as input an integer matrix of size (batch, input_length),
# and the largest integer (i.e. word index) in the input should be no larger than num_words (vocabulary size).
We should use masking or padding for RNNs. In this case I used padding explicitly. And yes, if input_length is used, then all inputs must be of the same size.
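For reference, a minimal sketch of both options; the sequences, vocabulary size and max length below are toy placeholders, not the notebook's values:

from tensorflow import keras
from tensorflow.keras.preprocessing.sequence import pad_sequences

train_sequences = [[5, 2, 9], [3, 7]]   # toy tokenized sentences
num_unique_words = 10000                # placeholder vocabulary size
max_length = 20                         # placeholder maximum sequence length

# Option 1: pad/truncate every sequence to the same fixed length (as done in the video)
train_padded = pad_sequences(train_sequences, maxlen=max_length, padding="post", truncating="post")

model = keras.models.Sequential([
    # input_length as in the video; note that very recent Keras versions may no longer accept it
    keras.layers.Embedding(num_unique_words, 32, input_length=max_length),
    keras.layers.LSTM(64),
    keras.layers.Dense(1, activation="sigmoid"),
])

# Option 2: still pad, but set mask_zero=True so the LSTM ignores the padded zeros
# keras.layers.Embedding(num_unique_words, 32, mask_zero=True)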
just amazing
:)
Another great video. Just a question: in the real world, when processing natural language, do you always convert the training words into numbers first before feeding them to the model? Like in this example, you convert "flood bago myanmar arrived bago" into [99, 3742, 612, 1451, 3742]. Basically, we can't use real words in the model?
No, you always somehow have to map the words to numbers so that the model can understand it. There are different ways of doing this...
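One common way (and, as far as I can tell, the one used in the video) is the Keras Tokenizer, which builds a word index from the training sentences and then converts each sentence into a list of integers. A rough sketch with two toy sentences:

from tensorflow.keras.preprocessing.text import Tokenizer

train_sentences = ["flood bago myanmar arrived bago", "forest fire near la ronge"]  # toy examples

tokenizer = Tokenizer()                   # maps each word to an integer index by frequency
tokenizer.fit_on_texts(train_sentences)   # builds the word -> index vocabulary

train_sequences = tokenizer.texts_to_sequences(train_sentences)
print(train_sequences)                    # e.g. [[2, 1, 3, 4, 1], [5, 6, 7, 8, 9]]

Other approaches exist as well, e.g. one-hot encodings or pretrained embeddings such as word2vec or GloVe, but they all boil down to turning words into numbers.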
Notification Gang 🔥🔥🔥
Yeah
Thanks for your valuable content. Kindly do some NLP tasks like NER and a BERT implementation, that would be highly useful.
Yes very interesting topics
Hi, thank you for your nice work. Can I ask for the code?
Thank you, I found it through the link to GitHub.
Yep, almost all the code for my videos is on GitHub.
Why didn't we use test sentences in the tutorial to check the prediction?
my mistake. I should have used the test data in the end...
@patloeber that's okay. Just wanted to check if my understanding was correct. And thanks for your videos, they are amazing, brother.
When you use helper functions, next time please also explain how they work!
Text classification using TensorFlow
ruclips.net/p/PL-N0_7SF7nTqOQdTzLRIRvyGJW-msR3Q4&feature=shared